Summary: nn.Module's children() and modules() methods, and how to get specific layers of a network
Published: 2019-05-20


1. The difference between nn.Module's children() and modules() methods

Both children() and modules() return the components that make up a network model, but children() returns only the outermost (top-level) elements, while modules() returns every element, including submodules at all levels of nesting.

First, define the following fully connected network:

```python
import torch
from torch import nn

class SimpleNet(nn.Module):
    def __init__(self, in_dim, n_hidden_1, n_hidden_2, out_dim):
        super().__init__()
        self.layer1 = nn.Sequential(
            nn.Linear(in_dim, n_hidden_1),
            nn.ReLU(),
        )
        self.layer2 = nn.Sequential(
            nn.Linear(n_hidden_1, n_hidden_2),
            nn.ReLU(),
        )
        self.layer3 = nn.Linear(n_hidden_2, out_dim)

    def forward(self, x):
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        return x

if __name__ == "__main__":
    net = SimpleNet(2, 3, 3, 2)
    print(net)
```

Run it, and the printed result shows the structure of this network:
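With net = SimpleNet(2, 3, 3, 2), print(net) produces the standard nn.Module repr, which should look like this:

```
SimpleNet(
  (layer1): Sequential(
    (0): Linear(in_features=2, out_features=3, bias=True)
    (1): ReLU()
  )
  (layer2): Sequential(
    (0): Linear(in_features=3, out_features=3, bias=True)
    (1): ReLU()
  )
  (layer3): Linear(in_features=3, out_features=2, bias=True)
)
```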

1.1 The Module class's children() method

children() returns only the outermost level, i.e. the three top-level modules layer1, layer2, and layer3.

Module.children() is a generator, and a generator is a kind of iterator. An iterator implements the __iter__() and __next__() methods. Every iterator is also an iterable, and an iterable can be placed after `for x in ...` and traversed in a loop.
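A minimal sketch of this (the small nn.Sequential here is just a stand-in module for illustration):

```python
from torch import nn

seq = nn.Sequential(nn.Linear(4, 8), nn.ReLU())

gen = seq.children()   # children() returns a generator
print(next(gen))       # Linear(in_features=4, out_features=8, bias=True)
print(next(gen))       # ReLU()
```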

Example:

```python
import torch
from torch import nn

class SimpleNet(nn.Module):
    def __init__(self, in_dim, n_hidden_1, n_hidden_2, out_dim):
        super().__init__()
        self.layer1 = nn.Sequential(
            nn.Linear(in_dim, n_hidden_1),
            nn.ReLU(),
        )
        self.layer2 = nn.Sequential(
            nn.Linear(n_hidden_1, n_hidden_2),
            nn.ReLU(),
        )
        self.layer3 = nn.Linear(n_hidden_2, out_dim)

    def forward(self, x):
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        return x

if __name__ == "__main__":
    net = SimpleNet(2, 3, 3, 2)
    print(net.children())   # net.children() is a generator (a kind of iterator)
    for i, e in enumerate(net.children()):
        print("Element {}:\n {}".format(i, e))
```

Result:
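The output should look roughly like this (the generator object's memory address will differ from run to run):

```
<generator object Module.children at 0x...>
Element 0:
 Sequential(
  (0): Linear(in_features=2, out_features=3, bias=True)
  (1): ReLU()
)
Element 1:
 Sequential(
  (0): Linear(in_features=3, out_features=3, bias=True)
  (1): ReLU()
)
Element 2:
 Linear(in_features=3, out_features=2, bias=True)
```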

In other words, only the three first-level elements were output: layer1, layer2, and layer3.

1.2 The Module class's modules() method

modules() behaves like a depth-first traversal: it is not limited to the outermost level, but yields the network itself and every submodule at every depth.

Module.modules() is also a generator.

```python
import torch
from torch import nn

class SimpleNet(nn.Module):
    def __init__(self, in_dim, n_hidden_1, n_hidden_2, out_dim):
        super().__init__()
        self.layer1 = nn.Sequential(
            nn.Linear(in_dim, n_hidden_1),
            nn.ReLU(),
        )
        self.layer2 = nn.Sequential(
            nn.Linear(n_hidden_1, n_hidden_2),
            nn.ReLU(),
        )
        self.layer3 = nn.Linear(n_hidden_2, out_dim)

    def forward(self, x):
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        return x

if __name__ == "__main__":
    net = SimpleNet(2, 3, 3, 2)
    print(net.modules())   # net.modules() is a generator (a kind of iterator)
    for i, e in enumerate(net.modules()):
        print("Element {}:\n {}".format(i, e))
```

Result:
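The output should look roughly like this (again, the memory address will vary):

```
<generator object Module.modules at 0x...>
Element 0:
 SimpleNet(
  (layer1): Sequential(
    (0): Linear(in_features=2, out_features=3, bias=True)
    (1): ReLU()
  )
  (layer2): Sequential(
    (0): Linear(in_features=3, out_features=3, bias=True)
    (1): ReLU()
  )
  (layer3): Linear(in_features=3, out_features=2, bias=True)
)
Element 1:
 Sequential(
  (0): Linear(in_features=2, out_features=3, bias=True)
  (1): ReLU()
)
Element 2:
 Linear(in_features=2, out_features=3, bias=True)
Element 3:
 ReLU()
Element 4:
 Sequential(
  (0): Linear(in_features=3, out_features=3, bias=True)
  (1): ReLU()
)
Element 5:
 Linear(in_features=3, out_features=3, bias=True)
Element 6:
 ReLU()
Element 7:
 Linear(in_features=3, out_features=2, bias=True)
```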

That is, the modules are returned in the order shown above: the whole network first, then each child, with each child immediately followed by its own submodules, in depth-first order.

2. How to get specific layers of a network

The children() method can be used to pull out specific layers of a network, for example keeping only the first few layers of a classic network and discarding the rest.

For example, resnet18:

```python
import torchvision.models as models

Resnet = models.resnet18(pretrained=False)
print(Resnet)
```

Result:

```
ResNet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer2): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer3): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer4): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=512, out_features=1000, bias=True)
)
```

Suppose we want everything except the last two layers (avgpool and fc).

Approach:

```python
self.base = nn.Sequential(*list(Resnet.children())[:-2])   # keep only the leading layers
```

Explanation:

`list(Resnet.children())` — as explained above, Resnet.children() yields the network's top-level layers; list(...) collects those layers into a list. [Example 1]

`list(Resnet.children())[:-2]` — removes the last two items; the remaining layers form a list. [Example 2]

`nn.Sequential(*list(Resnet.children())[:-2])` — a `*` in front of a list variable unpacks it into separate positional arguments for the call [Example 3]. The whole line therefore assembles the network's leading layers into a new network. [Example 4]

Example 1:

```python
import torchvision.models as models

Resnet = models.resnet18(pretrained=False)
layers = list(Resnet.children())   # collect Resnet's top-level layers into a list
print(layers)
```

Result:

```
[Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False), BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), ReLU(inplace=True), MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False), Sequential(
  (0): BasicBlock(
    (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (1): BasicBlock(
    (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
), Sequential(
  (0): BasicBlock(
    (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (downsample): Sequential(
      (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
      (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (1): BasicBlock(
    (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
), Sequential(
  (0): BasicBlock(
    (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (downsample): Sequential(
      (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
      (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (1): BasicBlock(
    (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
), Sequential(
  (0): BasicBlock(
    (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (downsample): Sequential(
      (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
      (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (1): BasicBlock(
    (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
), AdaptiveAvgPool2d(output_size=(1, 1)), Linear(in_features=512, out_features=1000, bias=True)]
```

As you can see, all of the layers have been placed into a single list.

Example 2:

```python
import torchvision.models as models

Resnet = models.resnet18(pretrained=False)
layers = list(Resnet.children())[:-2]   # drop the last two items; the rest form a list
print(layers)
```

Result:

```
[Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False), BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), ReLU(inplace=True), MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False), Sequential(
  (0): BasicBlock(
    (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (1): BasicBlock(
    (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
), Sequential(
  (0): BasicBlock(
    (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (downsample): Sequential(
      (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
      (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (1): BasicBlock(
    (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
), Sequential(
  (0): BasicBlock(
    (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (downsample): Sequential(
      (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
      (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (1): BasicBlock(
    (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
), Sequential(
  (0): BasicBlock(
    (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (downsample): Sequential(
      (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
      (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (1): BasicBlock(
    (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
)]
```

Example 3:

```python
def fun(a, b, c):
    return a + b + c

list1 = [1, 2, 3]
res = fun(*list1)   # unpack list1 into separate positional arguments
print(res)
```

Result:
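Here fun(*list1) is equivalent to fun(1, 2, 3), so it prints:

```
6
```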

Example 4:

```python
import torchvision.models as models
from torch import nn

Resnet = models.resnet18(pretrained=False)
net = nn.Sequential(*list(Resnet.children())[:-2])
print(net)
```

Result:

```
Sequential(
  (0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (2): ReLU(inplace=True)
  (3): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (4): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (5): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (6): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (7): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
)
```

This yields a new network with 8 top-level modules (indices 0 to 7 above), i.e. resnet18 without its avgpool and fc layers.
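Putting it all together, this pattern typically lives inside a custom module's __init__, as in the self.base line earlier. Below is a minimal sketch (the class name and the 224×224 dummy input are just illustrative assumptions) that also sanity-checks the truncated backbone, which now outputs a spatial feature map instead of class logits:

```python
import torch
import torchvision.models as models
from torch import nn

class ResnetBackbone(nn.Module):
    """resnet18 without its avgpool and fc layers."""
    def __init__(self):
        super().__init__()
        resnet = models.resnet18(pretrained=False)
        # Keep all top-level children except the last two (avgpool, fc).
        self.base = nn.Sequential(*list(resnet.children())[:-2])

    def forward(self, x):
        return self.base(x)

if __name__ == "__main__":
    net = ResnetBackbone()
    x = torch.randn(1, 3, 224, 224)   # dummy image batch
    print(net(x).shape)               # expected: torch.Size([1, 512, 7, 7])
```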

 
