PyTorch Tutorial Notes: Xiaotudui (小土堆)

PyTorch Code Notes

Code notes written while following the Xiaotudui PyTorch tutorial on Bilibili.
Full code:
Link: https://pan.baidu.com/s/1-ZePujIeVjeZeCdq7Xek0Q
Extraction code: 4vse

1_tensorboard

TensorBoard is a visualization tool that can display the internal organization and structure of a neural network as well as its training process.

from torch.utils.tensorboard import SummaryWriter
from PIL import Image
import numpy as np

#tensorboard --logdir=logs/xxxlogs
writer = SummaryWriter("logs")

img_path = "dataset/antbee/train/ants/0013035.jpg"
img_PIL = Image.open(img_path)
img_array = np.array(img_PIL)
#img_PIL.show()
print(type(img_array))
print(img_array.shape)

writer.add_image("ant", img_array, 2, dataformats='HWC')

for i in range(100):
    writer.add_scalar(tag="y=2x", scalar_value=2*i, global_step=i)

writer.flush()
writer.close()

2_transform

transforms is torchvision's image-processing toolkit, a collection of classes under the torchvision module that can convert image formats and crop, resize, rotate, or otherwise transform images and data.

from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms
from PIL import Image
import cv2

img_path = "dataset/antbee/train/ants/0013035.jpg"
img = Image.open(img_path)

writer = SummaryWriter("logs")

tensor_trans = transforms.ToTensor()
tensor_img = tensor_trans(img)
#print(tensor_img)

writer.add_image("tensor_img", tensor_img)

writer.close()

#cv_img = cv2.imread(img_path)
#print(cv_img)

3_useful_transform

Commonly used transforms:
ToTensor: converts a PIL.Image or ndarray from shape (H x W x C) into a (C x H x W) tensor
Normalize: normalizes an image channel by channel with the given means and standard deviations
Resize: resizes a PIL Image; note that it does not take images read with io.imread or cv2.imread, since those return ndarrays
Compose: chains several image transforms into one operation
RandomCrop: randomly crops the image

from PIL import Image
from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms

img = Image.open("dataset/antbee/train/ants/0013035.jpg")
print(img)
writer = SummaryWriter("logs")

#ToTensor
tensor_trans = transforms.ToTensor()
tensor_img = tensor_trans(img)
writer.add_image("tensor_img", tensor_img)

#Normalize
print(tensor_img[0][0][0])
trans_norm = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
img_norm = trans_norm(tensor_img)
print(img_norm[0][0][0])
writer.add_image("Normalize", img_norm)

#Resize
print(img.size)
trans_resize = transforms.Resize((512, 512))
img_resize = trans_resize(img)          # resize the PIL image to 512x512
img_resize = tensor_trans(img_resize)   # convert to a tensor so add_image can log it
writer.add_image("Resize", img_resize, 0)
print(img_resize.shape)

#Compose
trans_resize_2 = transforms.Resize(512)
trans_compose = transforms.Compose([trans_resize_2, tensor_trans])
img_resize_2 = trans_compose(img)
writer.add_image("Resize", img_resize_2, 1)

#RandomCrop
trans_random = transforms.RandomCrop((500, 600))
trans_compose_2 = transforms.Compose([trans_random, tensor_trans])
for i in range(20):
    img_crop = trans_compose_2(img)
    writer.add_image("RandomCrop", img_crop, i)

writer.close()
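Normalize computes output[channel] = (input[channel] - mean[channel]) / std[channel], so with mean = std = 0.5 a tensor in [0, 1] is mapped to [-1, 1], which is why the first pixel value printed above changes. A minimal standalone check (a sketch added to these notes, not the tutorial's code):

import torch
from torchvision import transforms

t = torch.tensor([[[0.0, 0.5, 1.0]]])      # a toy 1-channel "image" of shape (1, 1, 3)
norm = transforms.Normalize([0.5], [0.5])
print(norm(t))                             # tensor([[[-1., 0., 1.]]])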

4_data

Using the datasets provided in torchvision.

import torchvision
from torch.utils.tensorboard import SummaryWriter

dataset_transform = torchvision.transforms.Compose([
    torchvision.transforms.ToTensor()
])

train_set = torchvision.datasets.CIFAR10(root="dataset", train=True, transform=dataset_transform, download=False)   # set download=True on the first run
test_set = torchvision.datasets.CIFAR10(root="dataset", train=False, transform=dataset_transform, download=False)

# print(train_set[0])
# print(train_set.classes)
#
# img, target = train_set[0]
# print(img)
# print(target)
# print(train_set.classes[target])
# img.show()

#print(train_set[0])

writer = SummaryWriter("logs/datalogs")
for i in range(10):
    img, target = train_set[i]
    writer.add_image("data", img, i)

writer.close()

5_dataloader

DataLoader is the PyTorch utility class that batches and feeds data into a model.

import torchvision
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

train_data = torchvision.datasets.CIFAR10("dataset", train=True, transform=torchvision.transforms.ToTensor())
test_data = torchvision.datasets.CIFAR10("dataset", train=False, transform=torchvision.transforms.ToTensor())

train_loader = DataLoader(dataset=train_data, batch_size=64, shuffle=True, num_workers=0, drop_last=False)
test_loader = DataLoader(dataset=test_data, batch_size=64, shuffle=True, num_workers=0, drop_last=False)

img, target = train_data[0]
print(img.shape)
print(target)

writer = SummaryWriter("logs/dataloadlogs")
for epoch in range(2):
    step = 0
    for data in test_loader:
        imgs, targets = data
        #print(imgs.shape)
        #print(targets)
        writer.add_images("epoch {}".format(epoch), imgs, step)
        step = step + 1
        print(step)

writer.close()
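For reference, a minimal standalone sketch of what one batch from such a DataLoader looks like (shapes only; download=True is assumed here so the snippet also runs on a fresh machine):

import torch
import torchvision
from torch.utils.data import DataLoader

test_data = torchvision.datasets.CIFAR10("dataset", train=False, transform=torchvision.transforms.ToTensor(), download=True)
loader = DataLoader(test_data, batch_size=64, shuffle=True, drop_last=False)

imgs, targets = next(iter(loader))
print(imgs.shape)     # torch.Size([64, 3, 32, 32]) -- a batch of 64 CIFAR-10 images
print(targets.shape)  # torch.Size([64]) -- one class index per image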

6_module

The basic skeleton of a neural network: using nn.Module.

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, input):
        output = input + 1
        return output

model = Model()
x = torch.tensor(1.0)
output = model(x)  # calling the module runs forward() via nn.Module.__call__
print(output)

7_conv

Understanding convolution.

import torch
import torch.nn.functional as F

input = torch.tensor([[1, 2, 0, 3, 1],
                      [0, 1, 2, 3, 1],
                      [1, 2, 1, 0, 0],
                      [5, 2, 3, 1, 1],
                      [2, 1, 0, 1, 1]])

kernel = torch.tensor([[1, 2, 1],
                       [0, 1, 0],
                       [2, 1, 0]])

input = torch.reshape(input, (1, 1, 5, 5))    # (batch, channels, H, W)
kernel = torch.reshape(kernel, (1, 1, 3, 3))  # (out_channels, in_channels, kH, kW)

output = F.conv2d(input, kernel, stride=1)
print(output)

output2 = F.conv2d(input, kernel, stride=2)
print(output2)

output3 = F.conv2d(input, kernel, stride=1, padding=1)
print(output3)
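The three results above follow the standard Conv2d output-size formula, out = floor((in + 2*padding - kernel_size) / stride) + 1 (with dilation 1). A small helper to check the numbers (a sketch added to these notes):

def conv_out(size, kernel, stride=1, padding=0):
    # standard Conv2d output-size formula, dilation = 1
    return (size + 2 * padding - kernel) // stride + 1

print(conv_out(5, 3, stride=1))             # 3 -> output is 3x3
print(conv_out(5, 3, stride=2))             # 2 -> output is 2x2
print(conv_out(5, 3, stride=1, padding=1))  # 5 -> output is 5x5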

8_conv2d

Convolution layer (nn.Conv2d).

import torchvision
import torch
import torch.nn.functional as F
from torch import nn
from torch.nn import Conv2d
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

dataset = torchvision.datasets.CIFAR10("data", train=False, transform=torchvision.transforms.ToTensor(), download=True)

dataloader = DataLoader(dataset, batch_size=64)

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.conv1 = Conv2d(in_channels=3, out_channels=6, kernel_size=3, stride=1, padding=0)

    def forward(self, x):
        x = self.conv1(x)
        return x

tudui = Tudui()
print(tudui)

writer = SummaryWriter("logs/convlogs")
step = 0
for data in dataloader:
    imgs, target = data
    output = tudui(imgs)
    #print(imgs.shape)
    #print(output.shape)
    writer.add_images("input", imgs, step)
    output = torch.reshape(output, (-1, 3, 30, 30))  # 6 channels cannot be logged as an RGB image, so split them into extra batch entries of 3 channels
    print(step, output.shape)
    writer.add_images("output", output, step)
    step = step + 1

writer.close()
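As a quick sanity check of the layer defined above (a minimal standalone sketch): with kernel_size=3, stride=1 and padding=0 each 32x32 CIFAR-10 image shrinks to 30x30 and the channel count goes from 3 to 6, which is exactly why the reshape to (-1, 3, 30, 30) is needed before logging.

import torch
from torch.nn import Conv2d

conv = Conv2d(in_channels=3, out_channels=6, kernel_size=3, stride=1, padding=0)
dummy = torch.ones((64, 3, 32, 32))
print(conv(dummy).shape)  # torch.Size([64, 6, 30, 30])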

9_maxpool

Max pooling.

import torch
from torch import nn
from torch.nn import MaxPool2d
import torchvision
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

dataset = torchvision.datasets.CIFAR10("data", train=False, transform=torchvision.transforms.ToTensor(), download=True)
dataloader = DataLoader(dataset, batch_size=64)

# input = torch.tensor([[1, 2, 0, 3, 1],
#                       [0, 1, 2, 3, 1],
#                       [1, 2, 1, 0, 0],
#                       [5, 2, 3, 1, 1],
#                       [2, 1, 0, 1, 1]], dtype=torch.float32)
#
# input = torch.reshape(input, (-1, 1, 5, 5))
# print(input.shape)

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.maxpool1 = MaxPool2d(kernel_size=3, ceil_mode=True)  # ceil_mode=True keeps partially covered windows at the border

    def forward(self, input):
        output = self.maxpool1(input)
        return output

tudui = Tudui()
# output = tudui(input)
# print(output)

writer = SummaryWriter("logs/poollogs")
step = 0
for data in dataloader:
    imgs, target = data
    writer.add_images("input", imgs, step)
    output = tudui(imgs)
    writer.add_images("output", output, step)
    step = step + 1

writer.close()
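To see what ceil_mode changes (a minimal standalone check added to these notes): with a 5x5 input and kernel_size=3 the stride defaults to the kernel size, so ceil_mode=True keeps the partially covered windows at the right/bottom edge and gives a 2x2 output, while ceil_mode=False drops them and gives 1x1.

import torch
from torch.nn import MaxPool2d

x = torch.arange(25, dtype=torch.float32).reshape(1, 1, 5, 5)
print(MaxPool2d(kernel_size=3, ceil_mode=True)(x).shape)   # torch.Size([1, 1, 2, 2])
print(MaxPool2d(kernel_size=3, ceil_mode=False)(x).shape)  # torch.Size([1, 1, 1, 1])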

10_nolinear

Non-linear activation.

import torch
from torch import nn
from torch.nn import ReLU, Sigmoid
import torchvision
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

dataset = torchvision.datasets.CIFAR10("data", train=False, transform=torchvision.transforms.ToTensor(), download=True)
dataloader = DataLoader(dataset, batch_size=64)

#relu
# input = torch.tensor([[1, -0.5],
#                       [-1, 3]])
#
# output = torch.reshape(input, (-1, 1, 2, 2))
# print(output.shape)

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.relu1 = ReLU()
        self.sigmoid1 = Sigmoid()

    def forward(self, input):
        output = self.sigmoid1(input)  # swap in self.relu1(input) to try ReLU instead
        return output

tudui = Tudui()
# output = tudui(input)
# print(output)

writer = SummaryWriter("logs/sigmoidlogs")
step = 0
for data in dataloader:
    imgs, target = data
    writer.add_images("input", imgs, step)
    output = tudui(imgs)
    writer.add_images("output", output, step)
    step = step + 1

writer.close()
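For reference (a minimal standalone sketch): ReLU clamps negative values to 0 and keeps positives unchanged, while Sigmoid squashes every value into (0, 1), which is why the logged output images look washed out.

import torch
from torch.nn import ReLU, Sigmoid

x = torch.tensor([[1.0, -0.5],
                  [-1.0, 3.0]])
print(ReLU()(x))     # tensor([[1., 0.], [0., 3.]])
print(Sigmoid()(x))  # every value strictly between 0 and 1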

11_linear

Linear (fully connected) layer.

import torch
from torch import nn
from torch.nn import Linear
import torchvision
from torch.utils.data import DataLoader

dataset = torchvision.datasets.CIFAR10("data", train=False, transform=torchvision.transforms.ToTensor(), download=True)
dataloader = DataLoader(dataset, batch_size=64, drop_last=True)

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.linear1 = Linear(196608, 10)  # 196608 = 64 * 3 * 32 * 32, i.e. the whole flattened batch

    def forward(self, input):
        output = self.linear1(input)
        return output

tudui = Tudui()

for data in dataloader:
    imgs, target = data
    print(imgs.shape)
    #output = torch.reshape(imgs, (1, 1, 1, -1))
    output = torch.flatten(imgs)
    print(output.shape)
    output = tudui(output)
    print(output.shape)
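A side note (added to these notes): torch.flatten(imgs) collapses the whole (64, 3, 32, 32) batch into a single vector of length 196608, which is why drop_last=True is needed above; nn.Flatten() would instead keep the batch dimension and produce (64, 3072).

import torch
from torch import nn

imgs = torch.ones((64, 3, 32, 32))
print(torch.flatten(imgs).shape)  # torch.Size([196608]) -- batch folded in
print(nn.Flatten()(imgs).shape)   # torch.Size([64, 3072]) -- batch kept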

12_sequential

Building a small CIFAR-10 model and using nn.Sequential.

import torch
from torch import nn
from torch.utils.tensorboard import SummaryWriter

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.conv1 = nn.Conv2d(3, 32, 5, padding=2)
        self.maxpool1 = nn.MaxPool2d(2)
        self.conv2 = nn.Conv2d(32, 32, 5, padding=2)
        self.maxpool2 = nn.MaxPool2d(2)
        self.conv3 = nn.Conv2d(32, 64, 5, padding=2)
        self.maxpool3 = nn.MaxPool2d(2)
        self.flatten = nn.Flatten()
        self.linear1 = nn.Linear(1024, 64)
        self.linear2 = nn.Linear(64, 10)

        self.model1 = nn.Sequential(
            nn.Conv2d(3, 32, 5, padding=2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 5, padding=2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, padding=2),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(1024, 64),
            nn.Linear(64, 10)
        )

    def forward(self, x):
        # x = self.conv1(x)
        # x = self.maxpool1(x)
        # x = self.conv2(x)
        # x = self.maxpool2(x)
        # x = self.conv3(x)
        # x = self.maxpool3(x)
        # x = self.flatten(x)
        # x = self.linear1(x)
        # x = self.linear2(x)

        x = self.model1(x)

        return x

tudui = Tudui()
print(tudui)
input = torch.ones((64, 3, 32, 32))
output = tudui(input)
print(output.shape)

writer = SummaryWriter("logs/seqlogs")
writer.add_graph(tudui, input)
writer.close()
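Where does Linear(1024, 64) come from? Tracing a dummy CIFAR-10 batch through the same stack shows the spatial size halving at each pooling step until Flatten leaves 64 * 4 * 4 = 1024 features per image (a minimal standalone sketch of the layers used above):

import torch
from torch import nn

# same convolution/pooling stack as model1, up to the Flatten
stack = nn.Sequential(
    nn.Conv2d(3, 32, 5, padding=2),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 32, 5, padding=2),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 5, padding=2),
    nn.MaxPool2d(2),
    nn.Flatten(),
)
x = torch.ones((64, 3, 32, 32))
for layer in stack:
    x = layer(x)
    print(layer.__class__.__name__, tuple(x.shape))
# the Flatten output is (64, 1024), hence Linear(1024, 64)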

13_loss

Understanding loss functions.

import torch
from torch import nn

outputs = torch.tensor([1, 2, 3], dtype=torch.float32)
targets = torch.tensor([1, 2, 5], dtype=torch.float32)

outputs = torch.reshape(outputs, (1, 1, 1, 3))
targets = torch.reshape(targets, (1, 1, 1, 3))

loss = nn.L1Loss()
result = loss(outputs, targets)

loss_mse = nn.MSELoss()
result_mse = loss_mse(outputs, targets)

print(result)
print(result_mse)

x = torch.tensor([0.1, 0.2, 0.3])
y = torch.tensor([1])
x = torch.reshape(x, (1, 3))
loss_cross = nn.CrossEntropyLoss()
result_cross = loss_cross(x, y)
print(result_cross)
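For a single sample with raw scores x and target class 1, CrossEntropyLoss computes -x[1] + log(sum_j exp(x[j])). A quick manual check of result_cross (a sketch added for clarity):

import torch

x = torch.tensor([0.1, 0.2, 0.3])
manual = -x[1] + torch.log(torch.exp(x).sum())
print(manual)  # ~1.1019, matching result_cross above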

14_lossnetwork

Loss function combined with a network.

import torch
from torch import nn
import torchvision
from torch.utils.data import DataLoader

dataset = torchvision.datasets.CIFAR10("data", train=False, transform=torchvision.transforms.ToTensor(), download=True)
dataloader = DataLoader(dataset, batch_size=1)

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model1 = nn.Sequential(
            nn.Conv2d(3, 32, 5, padding=2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 5, padding=2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, padding=2),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(1024, 64),
            nn.Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

loss = nn.CrossEntropyLoss()
tudui = Tudui()
for data in dataloader:
    imgs, target = data
    outputs = tudui(imgs)
    result_loss = loss(outputs, target)
    # print(outputs)
    # print(target)
    # print(result_loss)
    result_loss.backward()  # compute gradients; an optimizer (next section) is needed to actually update the weights
    print("ok")

15_optimzer

Optimizer.

import torch
from torch import nn
import torchvision
from torch.utils.data import DataLoader

dataset = torchvision.datasets.CIFAR10("data", train=False, transform=torchvision.transforms.ToTensor(), download=True)
dataloader = DataLoader(dataset, batch_size=1)

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model1 = nn.Sequential(
            nn.Conv2d(3, 32, 5, padding=2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 5, padding=2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, padding=2),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(1024, 64),
            nn.Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

loss = nn.CrossEntropyLoss()
tudui = Tudui()
optim = torch.optim.SGD(tudui.parameters(), lr=0.01)

for epoch in range(20):
    running_loss = 0.0
    for data in dataloader:
        imgs, targets = data
        outputs = tudui(imgs)
        result_loss = loss(outputs, targets)
        optim.zero_grad()       # clear gradients left over from the previous step
        result_loss.backward()  # compute new gradients
        optim.step()            # update the parameters
        running_loss = running_loss + result_loss.item()  # .item() keeps a plain float instead of a graph-attached tensor
    print(running_loss)

16_model

Using and modifying an existing torchvision model.

import torchvision
from torch import nn

vgg16_false = torchvision.models.vgg16(progress=False)  # randomly initialized weights
vgg16_true = torchvision.models.vgg16(progress=True)    # request pretrained weights (see the note below) to get ImageNet parameters
print(vgg16_true)

train_data = torchvision.datasets.CIFAR10('data', train=True, transform=torchvision.transforms.ToTensor(), download=True)

# add a new layer at the end
vgg16_true.add_module('add_linear', nn.Linear(1000, 10))
print(vgg16_true)

# modify an existing layer
vgg16_false.classifier[6] = nn.Linear(4096, 10)
print(vgg16_false)
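A hedged note on pretrained weights: the calls above only build randomly initialized VGG16 networks. Depending on the installed torchvision version, ImageNet weights are requested with either the old pretrained flag or the newer weights argument; treat the exact spelling below as an assumption about your torchvision version.

import torchvision

# torchvision <= 0.12 style
# vgg16_pretrained = torchvision.models.vgg16(pretrained=True)

# torchvision >= 0.13 style
vgg16_pretrained = torchvision.models.vgg16(weights=torchvision.models.VGG16_Weights.DEFAULT)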

17_modelsave

Saving a model.

import torch
import torchvision
from torch import nn

vgg16 = torchvision.models.vgg16(progress=False)

# Method 1: save the whole model (structure + parameters)
torch.save(vgg16, "model/vgg16_method1.pth")


# Method 2: save only the parameters as a state_dict (the recommended way)
torch.save(vgg16.state_dict(), "model/vgg16_method2.pth")

# Pitfall: saving a custom model with method 1
class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model1 = nn.Sequential(
            nn.Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

tudui = Tudui()
torch.save(tudui, "model/tudui_method.pth")


18_modelload

Loading a model.

import torch
import torchvision
from torch import nn

# Method 1: load the whole model that was saved with structure + parameters
model = torch.load("model/vgg16_method1.pth")
#print(model)

# Method 2: rebuild the model, then load the saved state_dict
vgg16 = torchvision.models.vgg16(progress=False)
vgg16.load_state_dict(torch.load("model/vgg16_method2.pth"))
#model = torch.load("model/vgg16_method2.pth")
#print(model)

# Pitfall: loading a model saved with method 1 requires the class definition to be available (defined or imported here)
class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model1 = nn.Sequential(
            nn.Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

model = torch.load("model/tudui_method.pth")
print(model)
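A hedged sketch of the safer pattern for the custom model above: save only the state_dict and load it into a freshly constructed Tudui. The class definition still has to be importable, but only plain tensors end up in the file (the path is illustrative):

import torch

tudui = Tudui()
torch.save(tudui.state_dict(), "model/tudui_state.pth")    # save parameters only

model = Tudui()                                            # rebuild the architecture
model.load_state_dict(torch.load("model/tudui_state.pth"))
print(model)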

Full project: model.py

Model definition.

import torch
from torch import nn

# Build the network
class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model = nn.Sequential(
            nn.Conv2d(3, 32, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(1024, 64),
            nn.Linear(64, 10)
        )

    def forward(self, x):
        x = self.model(x)
        return x

# if __name__ == '__main__':
#     tudui = Tudui()
#     input = torch.ones((64, 3, 32, 32))
#     output = tudui(input)
#     print(output.shape)

Full project: train.py

Training the model on the CPU.

import torch
import torchvision
from torch import nn
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

from model import *

# Prepare the datasets
train_data = torchvision.datasets.CIFAR10('data', train=True, transform=torchvision.transforms.ToTensor(), download=True)
test_data = torchvision.datasets.CIFAR10('data', train=False, transform=torchvision.transforms.ToTensor(), download=True)

# Dataset sizes
train_data_size = len(train_data)
test_data_size = len(test_data)
print("Training set size: {}".format(train_data_size))
print("Test set size: {}".format(test_data_size))

# Wrap the datasets in DataLoaders
train_dataloader = DataLoader(train_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)

# Create the model
tudui = Tudui()

# Loss function
loss_fn = nn.CrossEntropyLoss()

# Optimizer
#learning_rate = 0.01
learning_rate = 1e-2
optimizer = torch.optim.SGD(tudui.parameters(), lr=learning_rate)

# Bookkeeping counters
total_train_step = 0
total_test_step = 0
epochs = 10

# TensorBoard writer
writer = SummaryWriter("logs/trainlogs")

for i in range(epochs):
    print("----- Training round {} -----".format(i+1))

    # Training phase
    tudui.train()  # switch to training mode
    for data in train_dataloader:
        imgs, targets = data
        outputs = tudui(imgs)
        loss = loss_fn(outputs, targets)

        # Optimizer update
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        total_train_step = total_train_step + 1
        if total_train_step % 100 == 0:
            print("训练次数:{},Loss:{}".format(total_train_step, loss.item()))
            writer.add_scalar("train_loss", loss.item(), total_train_step)

    # Evaluation
    tudui.eval()  # switch to evaluation mode
    total_test_loss = 0
    total_accuracy = 0
    with torch.no_grad():
        for data in test_dataloader:
            imgs, targets = data
            outputs = tudui(imgs)
            loss = loss_fn(outputs, targets)
            total_test_loss = total_test_loss + loss.item()
            accuracy = (outputs.argmax(1) == targets).sum()  # number of correct predictions in this batch
            total_accuracy = total_accuracy + accuracy

    print("测试集loss:{}".format(total_test_loss))
    print("测试集正确率: {}".format(total_accuracy / test_data_size))
    total_test_step = total_test_step + 1
    writer.add_scalar("test_loss", total_test_loss, total_test_step)
    writer.add_scalar("test_accuracy", total_accuracy / test_data_size, total_test_step)

    torch.save(tudui, "model/tudui_{}.pth".format(i))

writer.close()

Full project: train_gpu1.py

GPU training, approach 1: call .cuda() on the model, the loss function, and the data.

import torch
import torchvision
from torch import nn
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

from model import *

# Prepare the datasets
train_data = torchvision.datasets.CIFAR10('data', train=True, transform=torchvision.transforms.ToTensor(), download=True)
test_data = torchvision.datasets.CIFAR10('data', train=False, transform=torchvision.transforms.ToTensor(), download=True)

# Dataset sizes
train_data_size = len(train_data)
test_data_size = len(test_data)
print("Training set size: {}".format(train_data_size))
print("Test set size: {}".format(test_data_size))

# Wrap the datasets in DataLoaders
train_dataloader = DataLoader(train_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)

# Create the model
tudui = Tudui()
if torch.cuda.is_available():  # 1. move the model to the GPU
    tudui = tudui.cuda()

# Loss function
loss_fn = nn.CrossEntropyLoss()
if torch.cuda.is_available():  # 2. move the loss function to the GPU
    loss_fn = loss_fn.cuda()

# Optimizer
#learning_rate = 0.01
learning_rate = 1e-2
optimizer = torch.optim.SGD(tudui.parameters(), lr=learning_rate)

# Bookkeeping counters
total_train_step = 0
total_test_step = 0
epochs = 10

# TensorBoard writer
writer = SummaryWriter("logs/trainlogs")

for i in range(epochs):
    print("----- Training round {} -----".format(i+1))

    # Training phase
    tudui.train()  # switch to training mode
    for data in train_dataloader:
        imgs, targets = data
        if torch.cuda.is_available():  # 3. move the data to the GPU
            imgs = imgs.cuda()
            targets = targets.cuda()
        outputs = tudui(imgs)
        loss = loss_fn(outputs, targets)

        # Optimizer update
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        total_train_step = total_train_step + 1
        if total_train_step % 100 == 0:
            print("训练次数:{},Loss:{}".format(total_train_step, loss.item()))
            writer.add_scalar("train_loss", loss.item(), total_train_step)

    # Evaluation
    tudui.eval()  # switch to evaluation mode
    total_test_loss = 0
    total_accuracy = 0
    with torch.no_grad():
        for data in test_dataloader:
            imgs, targets = data
            if torch.cuda.is_available():  # 3. move the data to the GPU
                imgs = imgs.cuda()
                targets = targets.cuda()
            outputs = tudui(imgs)
            loss = loss_fn(outputs, targets)
            total_test_loss = total_test_loss + loss.item()
            accuracy = (outputs.argmax(1) == targets).sum()
            total_accuracy = total_accuracy + accuracy

    print("测试集loss:{}".format(total_test_loss))
    print("测试集正确率: {}".format(total_accuracy / test_data_size))
    total_test_step = total_test_step + 1
    writer.add_scalar("test_loss", total_test_loss, total_test_step)
    writer.add_scalar("test_accuracy", total_accuracy / test_data_size, total_test_step)

    torch.save(tudui, "model/tudui_{}.pth".format(i))

writer.close()

Full project: train_gpu2.py

GPU training, approach 2: define a torch.device and move the model, loss function, and data to it with .to(device).

import torch
import torchvision
from torch import nn
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

from model import *

# Define the training device
#device = torch.device("cpu")
device = torch.device("cuda:0")
#device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Prepare the datasets
train_data = torchvision.datasets.CIFAR10('data', train=True, transform=torchvision.transforms.ToTensor(), download=True)
test_data = torchvision.datasets.CIFAR10('data', train=False, transform=torchvision.transforms.ToTensor(), download=True)

# Dataset sizes
train_data_size = len(train_data)
test_data_size = len(test_data)
print("Training set size: {}".format(train_data_size))
print("Test set size: {}".format(test_data_size))

# Wrap the datasets in DataLoaders
train_dataloader = DataLoader(train_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)

# Create the model
tudui = Tudui()
tudui = tudui.to(device)

# Loss function
loss_fn = nn.CrossEntropyLoss()
loss_fn = loss_fn.to(device)

# Optimizer
#learning_rate = 0.01
learning_rate = 1e-2
optimizer = torch.optim.SGD(tudui.parameters(), lr=learning_rate)

# Bookkeeping counters
total_train_step = 0
total_test_step = 0
epochs = 10

# TensorBoard writer
writer = SummaryWriter("logs/trainlogs")

for i in range(epochs):
    print("----- Training round {} -----".format(i+1))

    # Training phase
    tudui.train()  # switch to training mode
    for data in train_dataloader:
        imgs, targets = data
        imgs = imgs.to(device)
        targets = targets.to(device)
        outputs = tudui(imgs)
        loss = loss_fn(outputs, targets)

        # Optimizer update
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        total_train_step = total_train_step + 1
        if total_train_step % 100 == 0:
            print("训练次数:{},Loss:{}".format(total_train_step, loss.item()))
            writer.add_scalar("train_loss", loss.item(), total_train_step)

    # Evaluation
    tudui.eval()  # switch to evaluation mode
    total_test_loss = 0
    total_accuracy = 0
    with torch.no_grad():
        for data in test_dataloader:
            imgs, targets = data
            imgs = imgs.to(device)
            targets = targets.to(device)
            outputs = tudui(imgs)
            loss = loss_fn(outputs, targets)
            total_test_loss = total_test_loss + loss.item()
            accuracy = (outputs.argmax(1) == targets).sum()
            total_accuracy = total_accuracy + accuracy

    print("测试集loss:{}".format(total_test_loss))
    print("测试集正确率: {}".format(total_accuracy / test_data_size))
    total_test_step = total_test_step + 1
    writer.add_scalar("test_loss", total_test_loss, total_test_step)
    writer.add_scalar("test_accuracy", total_accuracy / test_data_size, total_test_step)

    torch.save(tudui, "model/tudui_{}.pth".format(i))

writer.close()

Full project: test.py

Running inference on a single image with a saved checkpoint.

import torch
import torchvision
from PIL import Image

from model import *

img_path = "test_imgs/1.jpg"
image = Image.open(img_path)
print(image)
image = image.convert('RGB')  # drop the alpha channel if the image is a 4-channel PNG

transform = torchvision.transforms.Compose([torchvision.transforms.Resize((32, 32)),
                                            torchvision.transforms.ToTensor()])

image = transform(image)
print(image.shape)

model = torch.load("model/tudui_9.pth", map_location=torch.device('cpu'))
print(model)

image = torch.reshape(image, (1, 3, 32, 32))
print(image.shape)
model.eval()
with torch.no_grad():
    #image = image.cuda()
    output = model(image)
print(output)
print(output.argmax(1))

# 'airplane'=0
# 'automobile'=1
# 'bird'=2
# 'cat'=3
# 'deer'=4
# 'dog'=5
# 'frog'=6
# 'horse'=7
# 'ship'=8
# 'truck'=9
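Instead of reading the predicted index off the table above, the class name can be looked up directly (a minimal sketch; the list mirrors the CIFAR-10 class order above and relies on the output variable from the script):

classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck']
print(classes[output.argmax(1).item()])  # prints the predicted class name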
