Modeling Teacher-Student Techniques in Deep Neural Networks for Knowledge Distillation
Abstract
Knowledge distillation (KD) is a method for transferring the knowledge of one
network to another network under training. In its typical form, a small model
(called the student) is trained on the soft labels produced by a complex model
(called the teacher). Owing to the novelty of this idea, KD has recently been
applied in a variety of settings, such as model compression and methods aimed
at improving model accuracy. Although many techniques have been proposed in
the area of KD, a general model that unifies them is still lacking. In this
paper, various studies in the scope of KD are investigated and analyzed in
order to build such a general model. All existing KD methods and techniques
can be summarized through the proposed model, which makes them easier to
investigate and compare: the advantages and disadvantages of different KD
approaches can be better understood, and developing new KD strategies becomes
feasible. Using the proposed model, different KD methods are represented in an
abstract view.
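As a concrete illustration of the teacher-student setup described above, the
following is a minimal sketch of soft-label distillation in the spirit of
Hinton et al.'s formulation, written in PyTorch. The network architectures,
the temperature T, the weighting alpha, and the dummy data are illustrative
assumptions, not details of the paper's proposed model.

    # Minimal soft-label distillation sketch (all sizes and hyperparameters
    # below are illustrative assumptions, not the paper's method).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        """Weighted sum of a soft-label (teacher) and hard-label (ground-truth) term."""
        # Soft targets: KL divergence between temperature-softened distributions,
        # scaled by T^2 so its gradient stays comparable across temperatures.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        # Hard targets: ordinary cross-entropy with the true labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard

    # Distill a large teacher into a small student on a dummy mini-batch.
    teacher = nn.Sequential(nn.Linear(784, 1200), nn.ReLU(), nn.Linear(1200, 10))
    student = nn.Sequential(nn.Linear(784, 30), nn.ReLU(), nn.Linear(30, 10))
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

    x = torch.randn(32, 784)          # dummy inputs
    y = torch.randint(0, 10, (32,))   # dummy ground-truth labels

    with torch.no_grad():             # the teacher stays fixed during distillation
        t_logits = teacher(x)
    optimizer.zero_grad()
    loss = distillation_loss(student(x), t_logits, y)
    loss.backward()
    optimizer.step()

The T^2 factor on the soft-target term is the standard convention for this
loss: it keeps the soft-label gradients on a scale comparable to the
hard-label cross-entropy as the temperature is varied.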