Active Weighted Mapping-based Residual Convolutional Neural Network for Image Classification

Author(s)
정형호
Alternative Author(s)
Hyoungho Jung
Advisor
황원준 (Wonjun Hwang)
Department
Department of Artificial Intelligence, Graduate School
Publisher
The Graduate School, Ajou University
Publication Year
2021-02
Language
eng
Keyword
Convolutional neural network; Deep learning; Object recognition; Residual convolutional network
Alternative Abstract
In visual recognition, the key to ResNet's performance improvement is its success in stacking deep sequential convolutional layers through identity mapping via shortcut connections. This creates multiple paths of data flow within the network, and these paths are merged with equal weights. However, it is questionable whether using fixed, predefined weights at the mapping units of all paths is correct. In this paper, we introduce the active weighted mapping method, which infers proper weight values on the fly based on the characteristics of the input data. The weight values of each mapping unit are not fixed but change as the input image changes, and the most proper weight values for each mapping unit are derived according to the input image. For this purpose, channel-wise information is embedded from both the shortcut connection and the convolutional block, and fully connected layers are then used to estimate the weight values for the mapping units. We train the backbone network and the proposed module alternately for more stable learning. As an extension to DenseNet, we propose a block selection method that reduces the memory burden and improves performance. We verify the superiority and generality of the proposed method on various datasets in comparison with the baseline.
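The abstract's mechanism can be illustrated with a short sketch. Below is a minimal PyTorch rendering of an active-weighted-mapping residual block, assuming a gating design of global average pooling followed by two fully connected layers and a sigmoid on the inferred weights; the class name ActiveWeightedResidualBlock, the reduction ratio, and the activation choices are hypothetical illustrations, not the thesis implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ActiveWeightedResidualBlock(nn.Module):
    """Residual block whose shortcut and convolutional paths are merged
    with input-dependent weights rather than ResNet's fixed 1:1 sum.

    Hypothetical sketch: layer sizes and the gating design are
    assumptions made for illustration, not the thesis implementation.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Standard residual convolutional block.
        self.conv_block = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        # Fully connected layers that estimate two merge weights
        # (shortcut path, convolutional path) from channel-wise statistics.
        self.fc = nn.Sequential(
            nn.Linear(2 * channels, 2 * channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(2 * channels // reduction, 2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.conv_block(x)
        # Embed channel-wise information from both the shortcut connection
        # and the convolutional block via global average pooling.
        stats = torch.cat(
            [
                F.adaptive_avg_pool2d(x, 1).flatten(1),
                F.adaptive_avg_pool2d(residual, 1).flatten(1),
            ],
            dim=1,
        )
        # Infer per-input weight values for the mapping units on the fly.
        w = torch.sigmoid(self.fc(stats))  # shape: (batch, 2)
        w_short = w[:, 0].view(-1, 1, 1, 1)
        w_conv = w[:, 1].view(-1, 1, 1, 1)
        # Weighted merge replaces the fixed identity-plus-residual sum.
        return F.relu(w_short * x + w_conv * residual)


if __name__ == "__main__":
    # Quick shape check on random input.
    block = ActiveWeightedResidualBlock(64)
    out = block(torch.randn(2, 64, 32, 32))
    print(out.shape)  # torch.Size([2, 64, 32, 32])

In the thesis, the backbone network and the proposed module are trained alternately; a training loop realizing this would freeze one set of parameters while updating the other in each phase.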
URI
https://dspace.ajou.ac.kr/handle/2018.oak/19988
Appears in Collections:
Graduate School of Ajou University > Department of Artificial Intelligence > 3. Theses(Master)