
[PyTorch Notes] Building a Network in Practice and Using Sequential

Preface

Background

Let's get started!

Importing the Libraries

import torch
import torch.nn as nn

Defining the Neural Network: Using the Sequential Class

  • Without the Sequential class
class toy_model(nn.Module):

    def __init__(self):
        super().__init__()
        # Omitting the line above raises:
        # AttributeError: cannot assign module before Module.__init__() call

        self.conv1 = nn.Conv2d(3, 32, 5, padding=2)
        # The padding here must be computed from the output-size formula.
        # In practice, for an odd kernel size, padding = kernel_size // 2
        # preserves the spatial size.
        # Note that the formula applies to H and W, not to the channel count.

        self.maxpool1 = nn.MaxPool2d(2)
        self.conv2 = nn.Conv2d(32, 32, 5, padding=2)
        self.maxpool2 = nn.MaxPool2d(2)
        self.conv3 = nn.Conv2d(32, 64, 5, padding=2)
        self.maxpool3 = nn.MaxPool2d(2)
        self.flatten = nn.Flatten()
        # After flattening, each sample becomes a 1-D vector of 64*4*4 = 1024 values

        self.linear1 = nn.Linear(1024, 64)
        self.linear2 = nn.Linear(64, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = self.maxpool2(x)
        x = self.conv3(x)
        x = self.maxpool3(x)
        x = self.flatten(x)
        x = self.linear1(x)
        x = self.linear2(x)

        return x
  • With the Sequential class
class toy_model(nn.Module):

    def __init__(self):
        super().__init__()
        # Omitting the line above raises:
        # AttributeError: cannot assign module before Module.__init__() call

        self.model = nn.Sequential(
            nn.Conv2d(3, 32, 5, padding=2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 5, padding=2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, padding=2),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(1024, 64),
            nn.Linear(64, 10)
        )

    def forward(self, x):
        x = self.model(x)
        return x
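One convenience of nn.Sequential worth noting here (a sketch added for illustration, not part of the original post): the wrapped layers are indexable and sliceable like a list, which is handy for inspecting or modifying individual layers later.

```python
import torch.nn as nn

# A minimal standalone rebuild of the first layers from the model above.
model = nn.Sequential(
    nn.Conv2d(3, 32, 5, padding=2),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 32, 5, padding=2),
)

# Sequential submodules are indexable like a list ...
print(type(model[0]).__name__)  # Conv2d
# ... and slicing returns a new Sequential containing those layers.
print(len(model[:2]))           # 2
```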

You must call super().__init__(); otherwise you get the baffling error AttributeError: cannot assign module before Module.__init__() call.
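The padding and flatten-size comments in the model definition can be checked with quick arithmetic. The sketch below applies the standard Conv2d output-size formula (assuming dilation=1):

```python
def conv2d_out(h, kernel_size, stride=1, padding=0):
    # H_out = floor((H + 2*padding - kernel_size) / stride) + 1
    return (h + 2 * padding - kernel_size) // stride + 1

# CIFAR-10-sized input: 32x32. With kernel 5 and padding = 5 // 2 = 2,
# each conv preserves H and W, and each 2x2 max pool halves them.
h = 32
for _ in range(3):                   # three conv + pool stages
    h = conv2d_out(h, 5, padding=2)  # conv keeps the size: 32, 16, 8
    h = h // 2                       # pool halves it:      16,  8, 4
print(h)           # 4
print(64 * h * h)  # 1024 features going into the first Linear layer
```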

Testing the Results

  • Test code
if __name__ == '__main__':
    mymodel = toy_model()
    input = torch.ones((64, 3, 32, 32))
    output = mymodel(input)
    print(mymodel)
    print(output.shape)
  • Output without the Sequential class
toy_model(
  (conv1): Conv2d(3, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
  (maxpool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv2): Conv2d(32, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
  (maxpool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv3): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
  (maxpool3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear1): Linear(in_features=1024, out_features=64, bias=True)
  (linear2): Linear(in_features=64, out_features=10, bias=True)
)
torch.Size([64, 10])
  • Output with the Sequential class
toy_model(
  (model): Sequential(
    (0): Conv2d(3, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (2): Conv2d(32, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (4): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (6): Flatten(start_dim=1, end_dim=-1)
    (7): Linear(in_features=1024, out_features=64, bias=True)
    (8): Linear(in_features=64, out_features=10, bias=True)
  )
)
torch.Size([64, 10])
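To see where shapes like these come from layer by layer, forward hooks can print every intermediate output. This is a sketch added for illustration; it rebuilds only the first few layers standalone rather than using the full toy_model:

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 32, 5, padding=2),
    nn.MaxPool2d(2),
    nn.Flatten(),
)

# Register a forward hook on each child that prints its output shape.
for name, layer in net.named_children():
    layer.register_forward_hook(
        lambda mod, inp, out, n=name: print(n, tuple(out.shape))
    )

x = torch.ones(64, 3, 32, 32)
_ = net(x)
# 0 (64, 32, 32, 32)
# 1 (64, 32, 16, 16)
# 2 (64, 8192)
```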
This post is licensed under CC BY 4.0 by the author.