Abstract

Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with $L$ layers have $L$ connections—one between each layer and its subsequent layer—our network has $\frac {L(L+1)}{2}$ direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.

DenseNet

DenseNet (Dense Convolutional Network) concatenates the outputs of all preceding layers and feeds them as input to the current layer. The $l$-th layer therefore receives $l$ inputs, consisting of the feature-maps produced by all preceding convolutional blocks, and its own feature-maps are passed on as input to all $L-l$ subsequent layers. An $L$-layer DenseNet thus contains $\frac {L(L+1)}{2}$ direct connections.
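As a sanity check on these counts, a minimal pure-Python sketch; `k0` (the number of channels entering a dense block) and the growth rate `k` follow the paper's notation but the function names are illustrative:

```python
def num_connections(L):
    """Direct connections in an L-layer DenseNet: each layer connects
    to every subsequent layer, giving L*(L+1)/2 in total."""
    return L * (L + 1) // 2

def input_channels(l, k0, k):
    """Channels seen by layer l (1-indexed): the k0 channels entering the
    block plus the k feature-maps produced by each of the l-1 earlier layers."""
    return k0 + k * (l - 1)

print(num_connections(5))         # 15 direct connections for L = 5
print(input_channels(4, 16, 12))  # 16 + 12*3 = 52 input channels
```

Note how the input width grows only linearly in `k`, which is why DenseNets can stay narrow per layer while remaining densely connected.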

Composite function

$H_{l}(\cdot)$ consists of three consecutive operations:

1. Batch normalization (BN)
2. Rectified linear unit (ReLU)
3. $3\times 3$ convolution (Conv)
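The composite function can be sketched in NumPy as below. This is a didactic, unoptimized version assuming stride 1 and zero padding 1 (so spatial size is preserved, as the dense block requires); the parameter names `gamma`, `beta`, and `w` are illustrative:

```python
import numpy as np

def composite_fn(x, gamma, beta, w, eps=1e-5):
    """H_l: BN -> ReLU -> 3x3 Conv (stride 1, zero padding 1).
    x: (N, C, H, W); w: (C_out, C, 3, 3); gamma, beta: (1, C, 1, 1)."""
    # Batch normalization: normalize each channel over (N, H, W)
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x = gamma * (x - mean) / np.sqrt(var + eps) + beta
    # ReLU
    x = np.maximum(x, 0.0)
    # 3x3 convolution with zero padding so H and W are preserved
    xp = np.pad(x, ((0, 0), (0, 0), (1, 1), (1, 1)))
    N, C, H, W = x.shape
    out = np.zeros((N, w.shape[0], H, W))
    for i in range(H):
        for j in range(W):
            patch = xp[:, :, i:i + 3, j:j + 3]               # (N, C, 3, 3)
            out[:, :, i, j] = np.einsum('ncij,ocij->no', patch, w)
    return out

x = np.random.randn(2, 4, 8, 8)
gamma, beta = np.ones((1, 4, 1, 1)), np.zeros((1, 4, 1, 1))
w = np.random.randn(12, 4, 3, 3)   # 12 output feature-maps (growth rate k)
print(composite_fn(x, gamma, beta, w).shape)  # (2, 12, 8, 8)
```

Because the spatial size is unchanged, each layer's output can be concatenated channel-wise with all earlier outputs inside the block.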

Dense blocks and transition layers

A DenseNet consists of multiple dense blocks separated by transition layers. Dense connectivity is applied within each dense block, while the transition layers halve the spatial size of the feature-maps. Each transition layer consists of:

• Batch normalization (BN)
• $1\times 1$ convolution
• $2\times 2$ average pooling
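A NumPy sketch of the transition layer under the same illustrative naming; a $1\times 1$ convolution reduces to per-pixel channel mixing, and $2\times 2$ average pooling with stride 2 halves both spatial dimensions:

```python
import numpy as np

def transition(x, gamma, beta, w, eps=1e-5):
    """Transition layer sketch: BN -> 1x1 Conv -> 2x2 average pooling.
    x: (N, C, H, W) with even H and W; w: (C_out, C)."""
    # Batch normalization per channel
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x = gamma * (x - mean) / np.sqrt(var + eps) + beta
    # 1x1 convolution: mix channels independently at each spatial position
    x = np.einsum('nchw,oc->nohw', x, w)
    # 2x2 average pooling, stride 2: halves H and W
    N, C, H, W = x.shape
    return x.reshape(N, C, H // 2, 2, W // 2, 2).mean(axis=(3, 5))

x = np.random.randn(2, 8, 8, 8)
gamma, beta = np.ones((1, 8, 1, 1)), np.zeros((1, 8, 1, 1))
w = np.random.randn(4, 8)  # compress 8 channels down to 4
print(transition(x, gamma, beta, w).shape)  # (2, 4, 4, 4)
```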

Implementation details

• The input size is $224\times 224$ (ImageNet)
• The first convolutional layer uses $2k$ filters of size $7\times 7$ with stride $2$
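Using the standard convolution output-size formula, this first layer maps the $224\times 224$ input down to $112\times 112$. The padding of 3 and the growth rate $k=32$ below are assumptions for concreteness (not stated in this note):

```python
def conv_out_size(n, kernel, stride, pad):
    """Output spatial size: floor((n + 2*pad - kernel) / stride) + 1."""
    return (n + 2 * pad - kernel) // stride + 1

k = 32  # assumed growth rate for the ImageNet configuration
print(conv_out_size(224, 7, 2, 3))  # 112: spatial size after the first conv
print(2 * k)                        # 64: number of filters in the first layer
```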

Datasets

CIFAR

• $32\times 32$ color images
• CIFAR-10 (C10) has 10 classes; CIFAR-100 (C100) has 100 classes
• 50,000 training images and 10,000 test images

SVHN

• $32\times 32$ color images
• 73,257 training images, 26,032 test images, and 531,131 additional images for extra training

ImageNet

The ILSVRC 2012 classification dataset:

• 1.2 million training images
• 50,000 validation images
• 1,000 classes

Training

Training parameters

CIFAR/SVHN:

• Batch size: 64
• Epochs: 300 (CIFAR), 40 (SVHN)
• Initial learning rate 0.1, divided by 10 at 50% and 75% of the total number of epochs

ImageNet:

• Batch size: 256
• Epochs: 90
• Initial learning rate 0.1, divided by 10 at epochs 30 and 60

Both settings:

• Weight decay: $10^{-4}$
• Nesterov momentum: 0.9
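Both schedules are simple step decays and can be sketched with one helper (the function name is illustrative):

```python
def lr_at(epoch, milestones, base_lr=0.1):
    """Divide the learning rate by 10 at each milestone epoch reached."""
    return base_lr * 0.1 ** sum(epoch >= m for m in milestones)

# CIFAR: 300 epochs with drops at 50% and 75%, i.e. epochs 150 and 225
print(lr_at(100, [150, 225]))  # 0.1
print(lr_at(160, [150, 225]))
# ImageNet: 90 epochs with drops at epochs 30 and 60
print(lr_at(75, [30, 60]))
```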

Results

By combining dense blocks with transition layers, DenseNet achieves accuracy comparable to ResNet with substantially fewer parameters. The experimental results are shown in the table below.