Design of Knowledge Distillation using Multiple Assistants for Large Gap between Teacher and Student
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 황원준 | - |
dc.contributor.author | 손원철 | - |
dc.date.accessioned | 2022-11-29T02:32:29Z | - |
dc.date.available | 2022-11-29T02:32:29Z | - |
dc.date.issued | 2021-02 | - |
dc.identifier.other | 30587 | - |
dc.identifier.uri | https://dspace.ajou.ac.kr/handle/2018.oak/20013 | - |
dc.description | Thesis (Master's) -- The Graduate School, Ajou University: Department of Artificial Intelligence, 2021. 2 | - |
dc.description.tableofcontents | Ⅱ. Introduction 1 Ⅲ. Related Work 5 Ⅳ. Proposed Method 7 A. Background 7 B. Proposed Method 8 1. Densely Guided Knowledge Distillation 8 2. Parallel Teacher Assistants for on-the-fly CNN 12 Ⅴ. Experimental Results and Discussion 14 A. Datasets 14 B. Networks 14 C. Implementation Details 15 D. Ablation Study: Comparison with TAKD 15 E. Ablation Study: Classifier Ensemble 17 F. Error Avalanche Problem of TAKD 18 G. Knowledge Distillation Path 20 H. Stochastic DGKD 21 I. Comparison with The State-of-the-art Methods 23 J. Parallel Teacher Assistants Results 25 Ⅵ. Conclusion 29 Ⅶ. Reference 30 | - |
dc.language.iso | eng | - |
dc.publisher | The Graduate School, Ajou University | - |
dc.rights | Ajou University theses are protected by copyright. | - |
dc.title | Design of Knowledge Distillation using Multiple Assistants for Large Gap between Teacher and Student | - |
dc.type | Thesis | - |
dc.contributor.affiliation | The Graduate School, Ajou University | - |
dc.contributor.department | Department of Artificial Intelligence, The Graduate School | - |
dc.date.awarded | 2021. 2 | - |
dc.description.degree | Master | - |
dc.identifier.localId | 1203003 | - |
dc.identifier.uci | I804:41038-000000030587 | - |
dc.identifier.url | http://dcoll.ajou.ac.kr:9080/dcollection/common/orgView/000000030587 | - |
dc.subject.keyword | computer vision | - |
dc.subject.keyword | deep learning | - |
dc.subject.keyword | model compression | - |
dc.subject.keyword | model optimization | - |
dc.subject.keyword | object classification | - |
dc.description.alternativeAbstract | With the success of deep neural networks, knowledge distillation, which guides the learning of a small student network from a large teacher network, is being actively studied for model compression and transfer learning. However, few studies have addressed the poor learning of the student network when the student and teacher model sizes differ significantly. In this paper, we propose densely guided knowledge distillation using multiple teacher assistants that gradually decrease the model size to efficiently bridge the gap between the teacher and student networks. To stimulate more efficient learning of the student network, we iteratively use each teacher assistant to guide every smaller teacher assistant. Specifically, when teaching a smaller teacher assistant at the next step, the larger teacher assistants from the previous steps are used together with the teacher network. Moreover, we design stochastic teaching in which, for each mini-batch, the teacher or some teacher assistants are randomly dropped. This acts as a regularizer that improves the teaching efficiency for the student network. Thus, the student always learns salient distilled knowledge from multiple sources. Additionally, there is a demand for on-the-fly computing systems with low power requirements, such as systems-on-chip and embedded devices. We revise parallel teacher assistant knowledge distillation as a way to run convolutional neural networks on such on-the-fly systems, where the student and teacher use 1 × N and N × N shaped filters, respectively. We verified the effectiveness of the proposed method on classification tasks using CIFAR-10, CIFAR-100, SVHN, and Tiny ImageNet. We also achieved significant performance improvements with various backbone architectures such as ResNet, WideResNet, and VGG. | - |
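As a rough illustration of the densely guided distillation with stochastic teaching described in the abstract, the PyTorch-style sketch below combines a cross-entropy term with soft-target losses from the teacher and several teacher assistants, randomly dropping some of these sources per mini-batch. It is a minimal sketch under assumed conventions, not the thesis implementation; the function name `dgkd_loss` and the parameters `temperature`, `alpha`, and `drop_prob` are illustrative assumptions.

```python
import random
import torch
import torch.nn.functional as F


def dgkd_loss(student_logits, source_logits_list, labels,
              temperature=4.0, alpha=0.5, drop_prob=0.5):
    """Sketch of a densely guided KD loss with stochastic teaching.

    `source_logits_list` holds the logits of the teacher and of all larger
    teacher assistants for the current mini-batch. Each source may be
    randomly dropped (stochastic teaching), and the surviving soft targets
    contribute a temperature-scaled KL term averaged over the kept sources.
    All names and default values here are illustrative assumptions.
    """
    # Standard cross-entropy on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)

    # Randomly drop teacher/assistant sources; keep at least one.
    kept = [t for t in source_logits_list if random.random() > drop_prob]
    if not kept:
        kept = [random.choice(source_logits_list)]

    # KL divergence between the softened student and each kept source.
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    kd = sum(
        F.kl_div(log_p_student,
                 F.softmax(t.detach() / temperature, dim=1),
                 reduction="batchmean") * temperature ** 2
        for t in kept
    ) / len(kept)

    return (1.0 - alpha) * ce + alpha * kd
```

In the densely guided scheme the abstract describes, each intermediate teacher assistant would itself be trained with a loss of this form using the teacher and all larger assistants as sources, so the final student receives supervision accumulated from every preceding network.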