A Convolution Accelerator Using a Skip Algorithm for Deep Neural Networks
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 선우명훈 | - |
dc.contributor.author | 김영호 | - |
dc.date.accessioned | 2018-11-08T08:26:33Z | - |
dc.date.available | 2018-11-08T08:26:33Z | - |
dc.date.issued | 2018-02 | - |
dc.identifier.other | 26938 | - |
dc.identifier.uri | https://dspace.ajou.ac.kr/handle/2018.oak/13757 | - |
dc.description | Master's thesis -- Graduate School of Ajou University: Department of Electronics Engineering, 2018. 2 | - |
dc.description.tableofcontents | I. Introduction II. Convolution Neural Network III. Proposed Architecture IV. Implementation and Results V. Conclusion Bibliography | - |
dc.language.iso | eng | - |
dc.publisher | The Graduate School, Ajou University | - |
dc.rights | Ajou University theses are protected by copyright. | - |
dc.title | 심층 신경망을 위한 Skip 알고리즘을 이용하는 Convolution 가속기 | - |
dc.title.alternative | A Convolution Accelerator using Skip Algorithm for Deep Neural Network | - |
dc.type | Thesis | - |
dc.contributor.affiliation | Graduate School, Ajou University | - |
dc.contributor.alternativeName | Kim YoungHo | - |
dc.contributor.department | Department of Electronics Engineering, Graduate School | - |
dc.date.awarded | 2018. 2 | - |
dc.description.degree | Master | - |
dc.identifier.localId | 800665 | - |
dc.identifier.url | http://dcoll.ajou.ac.kr:9080/dcollection/jsp/common/DcLoOrgPer.jsp?sItemId=000000026938 | - |
dc.subject.keyword | Convolution Neural Network | - |
dc.subject.keyword | Accelerator | - |
dc.subject.keyword | Skip Algorithm | - |
dc.description.alternativeAbstract | Convolutional neural networks (CNNs) are a well-known neural network architecture, widely used in computer vision, especially for image classification and object recognition. CNNs have achieved strong performance in this field with the development of graphics processing units (GPUs). However, GPUs suffer from high power consumption and poor energy efficiency, which makes CNNs difficult to apply to real-time image processing and mobile applications. In particular, CNNs spend most of their power and time on convolution operations, so the computational complexity of convolution makes CNNs hard to deploy where resources are limited. In this thesis, we propose an accelerator that efficiently performs convolution operations in the CNN inference phase. Convolution consists of multiply-and-accumulate (MAC) operations between weights and feature-map data. When a neuron's value is zero, that neuron has almost no effect on the network's output; therefore, by detecting zero-valued neurons and skipping their convolution operations, excellent performance can be obtained in terms of inference time and energy efficiency. The design was synthesized in 65 nm technology at a clock frequency of 400 MHz. The proposed accelerator reaches over 207 giga operations per second (GOp/s) with an efficiency of 144%, achieving a power efficiency of over 473 GOp/s/W in a core area of 1.2 mega gate equivalents (MGE). | - |
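The zero-skipping idea described in the abstract can be illustrated in software. The sketch below is a minimal NumPy model of the technique, not the thesis's hardware design: it performs a direct 2-D convolution but checks each activation and skips the MAC when the value is zero, counting how many MACs were actually issued. All names and the single-channel, valid-padding setup are illustrative assumptions.

```python
import numpy as np

def conv2d_zero_skip(fmap, weights):
    """Direct 2-D convolution (single channel, valid padding) that skips
    MAC operations for zero-valued activations, as a zero-skip accelerator
    would. Returns the output map and the number of MACs performed."""
    H, W = fmap.shape
    K, _ = weights.shape
    out = np.zeros((H - K + 1, W - K + 1))
    macs = 0
    for i in range(H - K + 1):
        for j in range(W - K + 1):
            acc = 0.0
            for ki in range(K):
                for kj in range(K):
                    x = fmap[i + ki, j + kj]
                    if x == 0.0:
                        # Zero activation: the product is zero, so the
                        # MAC is skipped entirely (no multiply, no add).
                        continue
                    acc += x * weights[ki, kj]
                    macs += 1
            out[i, j] = acc
    return out, macs

# Example: a sparse 3x3 feature map with a 2x2 all-ones kernel.
fmap = np.array([[1.0, 0.0, 2.0],
                 [0.0, 0.0, 0.0],
                 [3.0, 0.0, 4.0]])
weights = np.ones((2, 2))
out, macs = conv2d_zero_skip(fmap, weights)
# Dense convolution would issue 4 windows x 4 MACs = 16 MACs;
# zero-skipping issues only one MAC per window here (macs == 4).
```

In hardware, the same check is done by tagging zero activations before they enter the MAC array, so the skipped cycles translate directly into the inference-time and energy savings the abstract reports.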
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.