SDChannelNets: Extremely Small and Efficient Convolutional Neural Networks
JianNan ZHANG, JiJun ZHOU, JianFeng WU, ShengYing YANG
IEICE TRANSACTIONS on Information and Systems
Publication Date: 2019/12/01
Online ISSN: 1745-1361
Type of Manuscript: LETTER
Category: Biocybernetics, Neurocomputing
Keywords: convolutional neural networks, parameter sharing, convolution kernel, reducing the number of model parameters
Convolutional neural networks (CNNs) have a strong ability to understand and judge images. However, their enormous numbers of parameters and heavy computation have limited their application on resource-constrained devices. In this letter, we used the ideas of parameter sharing and dense connection to compress the parameters along the channel direction of the convolution kernel, thus greatly reducing the number of model parameters. On this basis, we designed Shared and Dense Channel-wise Convolutional Networks (SDChannelNets), composed mainly of depth-wise separable SD-Channel-wise convolution layers. The advantage of SDChannelNets is that the number of model parameters is greatly reduced with little or no loss of accuracy. We also introduced a hyperparameter that effectively balances the number of parameters against the accuracy of the model. We evaluated the proposed model on two popular image recognition tasks (CIFAR-10 and CIFAR-100). The results showed that SDChannelNets achieved accuracy similar to that of other CNNs while using far fewer parameters.
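As background for the parameter savings the letter targets, the sketch below compares the standard parameter-count formulas for a full convolution and a depth-wise separable convolution (the building block that SD-Channel-wise layers extend). These are the textbook formulas, not the paper's exact SD-Channel-wise layer; the function names are illustrative.

```python
def conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution:
    one k x k x c_in kernel per output channel (bias omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Parameters of a depth-wise separable convolution:
    a k x k depth-wise kernel per input channel,
    followed by a 1 x 1 point-wise convolution."""
    return k * k * c_in + c_in * c_out

# Example: a 3x3 layer mapping 64 channels to 128 channels.
standard = conv_params(3, 64, 128)
separable = depthwise_separable_params(3, 64, 128)
print(standard, separable)  # 73728 vs 8768, roughly an 8.4x reduction
```

Sharing parameters along the channel direction, as the SD-Channel-wise layer does, shrinks the dominant `c_in * c_out` point-wise term further, which is where the letter's additional savings come from.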