Improving the interpretability of deep neural networks with knowledge distillation

Xuan Liu, Xiaoguang Wang, Stan Matwin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

71 Citations (Scopus)
Original language: English
Title of host publication: Proceedings - 18th IEEE International Conference on Data Mining Workshops, ICDMW 2018
Editors: Hanghang Tong, Zhenhui Li, Feida Zhu, Jeffrey Yu
Publisher: IEEE Computer Society
Pages: 905-912
Number of pages: 8
ISBN (Electronic): 9781538692882
Publication status: Published - Jul 2 2018
Event: 18th IEEE International Conference on Data Mining Workshops, ICDMW 2018 - Singapore, Singapore
Duration: Nov 17 2018 - Nov 20 2018

Publication series

Name: IEEE International Conference on Data Mining Workshops, ICDMW
Volume: 2018-November
ISSN (Print): 2375-9232
ISSN (Electronic): 2375-9259

Conference

Conference: 18th IEEE International Conference on Data Mining Workshops, ICDMW 2018
Country/Territory: Singapore
City: Singapore
Period: 11/17/18 - 11/20/18

ASJC Scopus Subject Areas

  • Software
  • Computer Science Applications

Keywords

  • Decision Tree
  • Neural Networks
  • TensorFlow
  • dark knowledge
  • interpretation
  • knowledge distillation
