Course video — Bilibili
PyTorch深度学习快速入门教程(绝对通俗易懂!)【小土堆】
1 PyTorch installation and configuration
Completely uninstalling Anaconda:
conda install anaconda-clean
anaconda-clean --yes
Then finish the removal from the Control Panel.
Removing an Anaconda environment: conda uninstall -n yyy --all
- Anaconda install path: D:\anaconda3
- Create an environment: conda create -n pytorch python=3.9
- Switch to the environment: conda activate pytorch
- List the currently installed packages: pip list
Q: How to install PyTorch?
- Open the PyTorch homepage and scroll down: https://pytorch.org/
- Check the GPU model: Task Manager -> Performance -> GPU
- First switch the download mirror, then install with the command from the official site; installing torch directly kept failing to download.
Tested: very slow, about 30 kB/s
conda create -n pytorch python=3.9
conda activate pytorch
conda config --remove-key channels
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/r
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/msys2
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
Copyright notice: the mirror configuration above is an original article by the CSDN blogger "StarryHuangx", licensed under CC 4.0 BY-SA; please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/weixin_46017950/article/details/123280653
pip3 install torch torchvision torchaudio
Tested: the most effective approach and the fastest download, about 3 MB/s
conda create -n pytorch python=3.9
conda activate pytorch
conda config --set show_channel_urls yes
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/
# Tried this one — did not work
conda install pytorch==1.9.1 torchvision==0.10.1 torchaudio==0.9.1 -c pytorch
# Tried this one — did not work
pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/torch_stable.html
# These did not work either
pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html
pip install torch==1.10.1+cu102 torchvision==0.11.2+cu102 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu102/torch_stable.html
# This one installs, but my GPU is too old for this CUDA version, so training on the GPU is not possible
pip install torch==1.10.2+cu113 torchvision==0.11.3+cu113 torchaudio===0.10.2+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
# CUDA 11.1
pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/torch_stable.html
pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html
Author: insightya, https://www.bilibili.com/read/cv15186754/ — source: bilibili
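After installation, a quick sanity check can be run inside the pytorch environment (a minimal sketch; the expected output depends on the build you installed):
import torch

print(torch.__version__)
print(torch.cuda.is_available())   # True means the installed build can see the GPU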
2 Installing a Python editor
PyCharm installation notes
- In Installation Options, check the .py file association
PyCharm configuration
- Open PyCharm
- While creating a new project
- Choose Existing interpreter
- Locate the interpreter manually: choose Conda Environment ->
- Enter: D:/anaconda3/envs/pytorch/python.exe
Check that everything runs
- Open the "Python Console"
import torch
torch.cuda.is_available()
By default Jupyter is installed in the base environment, so it has to be installed again in the new environment.
Install Jupyter in the new environment
# open the Anaconda Prompt
conda activate pytorch
conda list
conda install nb_conda
jupyter notebook
Jupyter usage notes
import torch
torch.cuda.is_available()
Shift + Enter: run the current cell and create a new cell below it
3 Why does it return False
Mine returns True, so I skip this part
4 Two handy helper functions for learning Python
dir(): opens a package up and lists what is inside
help(): the "manual page" for a function
dir(torch)
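A small illustration of the two functions (the exact output depends on the installed version):
import torch

print(dir(torch))                  # list everything the torch package exposes
print(dir(torch.cuda))             # drill down into a sub-package
help(torch.cuda.is_available)      # show the "manual page" for one function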
5 PyCharm vs. Jupyter: usage and comparison
6 Loading data in PyTorch
Dataset
Provides a way to get each piece of data and its label
Knows how many pieces of data there are in total
Can return any single piece of data together with its label
DataLoader
Packs the data into batches for later use (a minimal sketch follows)
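A minimal sketch of the Dataset/DataLoader contract described above (all names and values are illustrative):
import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    def __init__(self, n=10):
        self.data = torch.arange(n, dtype=torch.float32)

    def __getitem__(self, idx):            # one sample and its label
        return self.data[idx], idx % 2

    def __len__(self):                      # total number of samples
        return len(self.data)

loader = DataLoader(ToyDataset(), batch_size=4, shuffle=False)
for batch, labels in loader:                # the DataLoader packs samples into batches
    print(batch, labels)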
Open Jupyter
conda activate pytorch
jupyter notebook
7 Dataset hands-on
Install OpenCV
conda activate pytorch
pip install opencv-python
from torch.utils.data import Dataset
from PIL import Image
import os
class MyData(Dataset):
    def __init__(self, root_dir, label_dir):
        self.root_dir = root_dir
        self.label_dir = label_dir
        self.path = os.path.join(self.root_dir, self.label_dir)
        self.img_path = os.listdir(self.path)

    def __getitem__(self, idx):
        img_name = self.img_path[idx]
        img_item_path = os.path.join(self.root_dir, self.label_dir, img_name)
        img = Image.open(img_item_path)
        label = self.label_dir
        return img, label

    def __len__(self):
        return len(self.img_path)
root_dir = "hymenoptera_data/hymenoptera_data/train"
ants_label_dir = "ants"
bees_label_dir = "bees"
ants_dataset = MyData(root_dir, ants_label_dir)
bees_dataset = MyData(root_dir, bees_label_dir)
train_dataset = ants_dataset + bees_dataset
8 TensorBoard — data visualization
Using SummaryWriter
- Install tensorboard
pip install tensorboard
- If it reports an error
It is a package version mismatch; fix it as follows
pip install setuptools==59.5.0
conda install tensorboard
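A minimal SummaryWriter sketch (tag and values are illustrative); running it produces the logs folder mentioned below:
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("logs")          # event files are written into ./logs
for i in range(100):
    writer.add_scalar("y=x", i, i)      # tag, value, global step
writer.close()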
After running, an extra logs folder appears
Data visualization:
Be sure to switch to the parent directory of logs first
conda activate pytorch
D:
cd D:\pycharm-workplace\pythonProject
# switch to the parent directory of logs
tensorboard --logdir=logs
9 TensorBoard usage, part 2
Use OpenCV to read the file and convert it to a NumPy array (the code below actually uses PIL; a cv2 sketch follows it)
pip install opencv-python
from torch.utils.tensorboard import SummaryWriter
import numpy as np
from PIL import Image
writer = SummaryWriter("logs")
image_path = "hymenoptera_data/hymenoptera_data/train/ants/0013035.jpg"
img_PIL = Image.open(image_path)
img_array = np.array(img_PIL)
writer.add_image("lyy", img_array, 1, dataformats='HWC')
for i in range(100):
    writer.add_scalar("y=2x", 3*i, i)
writer.close()
print("end")
10 Transforms usage, part 1 — mainly image transformations
Using ToTensor() — converts a PIL Image or a NumPy array into a tensor
from PIL import Image
from torchvision import transforms
img_path = "hymenoptera_data/hymenoptera_data/train/ants/0013035.jpg"
img = Image.open(img_path)
# tensor_trans is a ToTensor instance (a callable object)
tensor_trans = transforms.ToTensor()  # converts a PIL Image or a NumPy array into a tensor
tensor_img = tensor_trans(img)
print(tensor_img)
11 Transforms usage, part 2 — mainly image transformations
from PIL import Image
from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms
img_path = "hymenoptera_data/hymenoptera_data/train/ants/0013035.jpg"
img = Image.open(img_path)
writer = SummaryWriter("logs")
tensor_trans = transforms.ToTensor()
tensor_img = tensor_trans(img)
# print(tensor_img)
writer.add_image("lyy", tensor_img)
writer.close()
12 Common transforms, part 1
Normalize
from PIL import Image
from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms
img_path = "hymenoptera_data/16.jpg"
img = Image.open(img_path)
writer = SummaryWriter("logs")
# Using ToTensor
tensor_trans = transforms.ToTensor()
tensor_img = tensor_trans(img)
writer.add_image("lyy", tensor_img)
# Using Normalize
trans_norm = transforms.Normalize([111,111,111],[10,10,10])
img_norm = trans_norm(tensor_img)
writer.add_image("normalize", img_norm, 2)
writer.close()
print("end")
13 Common transforms, part 2
Notes
Pay attention to the input and output types, read the official docs, and check each method's parameters
from PIL import Image
from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms
img_path = "hymenoptera_data/16.jpg"
img = Image.open(img_path)
writer = SummaryWriter("logs")
# Using ToTensor
tensor_trans = transforms.ToTensor()
tensor_img = tensor_trans(img)
writer.add_image("lyy", tensor_img)
# Using Normalize
trans_norm = transforms.Normalize([111,111,111],[10,10,10])
img_norm = trans_norm(tensor_img)
writer.add_image("normalize", img_norm, 2)
# Resize
trans_resize = transforms.Resize((512, 512))
img_resize = trans_resize(img)  # the input here is a PIL Image
tensor_img = tensor_trans(img_resize)  # convert the resized PIL Image back to a tensor
writer.add_image("resize", tensor_img, 0)
# Using Compose
trans_resize_2 =transforms.Resize(123)
trans_compose = transforms.Compose([trans_resize_2, tensor_trans])
img_resize_2 = trans_compose(img)
writer.add_image("resize2", img_resize_2,1)
# RandomCrop
trans_random = transforms.RandomCrop((20,50))
trans_compose_2 = transforms.Compose([trans_random, tensor_trans])
for i in range(10):
    img_crop = trans_compose_2(img)
    writer.add_image("randomCrop", img_crop, i)
writer.close()
print("end")
14 Using torchvision datasets
Download a dataset from the internet and use it
import torchvision
from torch.utils.tensorboard import SummaryWriter
dataset_transfrom = torchvision.transforms.Compose([
torchvision.transforms.ToTensor()
])
train_set = torchvision.datasets.CIFAR10(root="./cifar10", train=True, transform=dataset_transfrom, download=True)
test_set = torchvision.datasets.CIFAR10(root="./cifar10", train=False, transform=dataset_transfrom, download=True)
writer = SummaryWriter("p10")
for i in range(10):
    img, target = test_set[i]
    writer.add_image("test_set", img, i)
writer.close()
Using DataLoader
A Dataset holds the data and its labels
A DataLoader feeds the data into the neural network in batches
import torchvision
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
test_data = torchvision.datasets.CIFAR10("./cifar10", train=False, transform=torchvision.transforms.ToTensor())
test_loader = DataLoader(dataset=test_data, batch_size=64, shuffle=True, num_workers=0, drop_last=False)
writer = SummaryWriter("dataloader")
step = 0
for data in test_loader:
    imgs, targets = data
    writer.add_images("dataloader", imgs, step)
    step = step + 1
writer.close()
17 The convolution operation
import torch
import torch.nn.functional as F
input = torch.tensor([[1, 2, 0, 3, 1],
[0, 1, 2, 3, 1],
[1, 2, 1, 0, 0],
[5, 2, 3, 1, 1],
[2, 1, 0, 1, 1]])
kernal = torch.tensor([[1, 2, 1],
[0, 1, 0],
[2, 1, 0]])
input = torch.reshape(input, (1,1,5,5))
kernal = torch.reshape(kernal, (1,1,3,3))
print(input.shape)
print(kernal.shape)
output = F.conv2d(input, kernal, stride=2, padding=1)
print(output)
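The output size follows H_out = floor((H + 2*padding - kernel) / stride) + 1; a small self-contained check (dummy tensors, not the values above):
import torch
import torch.nn.functional as F

x = torch.ones((1, 1, 5, 5))
k = torch.ones((1, 1, 3, 3))
# H_out = floor((5 + 2*1 - 3) / 2) + 1 = 3
print(F.conv2d(x, k, stride=2, padding=1).shape)   # torch.Size([1, 1, 3, 3])
# H_out = floor((5 + 0 - 3) / 1) + 1 = 3
print(F.conv2d(x, k, stride=1).shape)              # torch.Size([1, 1, 3, 3])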
18 Neural networks — convolutional layers
import torch
import torchvision
from torch.utils.data import DataLoader
from torch.nn import Conv2d
from torch import nn
from torch.utils.tensorboard import SummaryWriter
dataset = torchvision.datasets.CIFAR10("./cifar10",train=False,transform=torchvision.transforms.ToTensor(), download=False)
dataloader = DataLoader(dataset, batch_size=64)
class Lyy(nn.Module):
    def __init__(self):
        super(Lyy, self).__init__()
        self.conv1 = Conv2d(3, 6, 3, 1)

    def forward(self, x):
        x = self.conv1(x)
        return x
lyy = Lyy()
writer = SummaryWriter("logs")
step = 0
for data in dataloader:
    imgs, targets = data
    output = lyy(imgs)
    writer.add_images("input", imgs, step)
    # the conv output has 6 channels; reshape it so add_images can show it as 3-channel images
    output = torch.reshape(output, (-1, 3, 30, 30))
    writer.add_images("output", output, step)
    step += 1
19 Max pooling layers
The default stride equals the kernel size (a quick check follows below)
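A small check of both points: the default stride equals kernel_size, and ceil_mode decides whether the partial window at the border is kept (shapes below follow from a 5x5 input and a 3x3 window):
import torch
from torch.nn import MaxPool2d

x = torch.arange(25, dtype=torch.float32).reshape(1, 1, 5, 5)
print(MaxPool2d(kernel_size=3, ceil_mode=True)(x).shape)    # torch.Size([1, 1, 2, 2]) - partial windows kept
print(MaxPool2d(kernel_size=3, ceil_mode=False)(x).shape)   # torch.Size([1, 1, 1, 1]) - partial windows dropped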
import torch
from torch.nn import Conv2d, MaxPool2d
from torch import nn
import torchvision
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
dataset = torchvision.datasets.CIFAR10('./cifar10', train=False, transform=torchvision.transforms.ToTensor(),download=False)
dataloader = DataLoader(dataset, 64, False)
input = torch.tensor([[1, 2, 0, 3, 1],
[0, 1, 2, 3, 1],
[1, 2, 1, 0, 0],
[5, 2, 3, 1, 1],
[2, 1, 0, 1, 1]], dtype=torch.float32)
kernal = torch.tensor([[1, 2, 1],
[0, 1, 0],
[2, 1, 0]])
input = torch.reshape(input, (1,1,5,5))
# kernal = torch.reshape(kernal, (1,1,3,3))
class Lyy(nn.Module):
    def __init__(self):
        super(Lyy, self).__init__()
        # self.conv1 = Conv2d(3, 3, 3, 1, padding=1)
        self.maxpool = MaxPool2d(kernel_size=3, ceil_mode=True)

    def forward(self, x):
        # x = self.conv1(x)
        x = self.maxpool(x)
        return x
lyy = Lyy()
# output = lyy(input)
writer = SummaryWriter("logs")
step = 0
for data in dataloader:
    imgs, targets = data
    writer.add_images("input", imgs, step)
    output = lyy(imgs)
    writer.add_images("output", output, step)
    step += 1
writer.close()
20 Non-linear activations
import torch
import torchvision.datasets
from torch import nn
from torch.nn import ReLU, Sigmoid
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
input = torch.tensor([[1, 2, 0, 3, 1],
[0, 1, 2, 3, 1],
[1, 2, 1, 0, 0],
[5, 2, 3, 1, 1],
[2, 1, 0, 1, 1]], dtype=torch.float32)
input = torch.reshape(input, (-1,1, 5, 5))
dataset = torchvision.datasets.CIFAR10("./cifar10", False, transform=torchvision.transforms.ToTensor())
dataloader = DataLoader(dataset, 64)
class Lyy(nn.Module):
    def __init__(self):
        super(Lyy, self).__init__()
        self.relu1 = ReLU()
        self.sigmoid = Sigmoid()

    def forward(self, input):
        return self.sigmoid(input)
lyy = Lyy()
step = 0
writer = SummaryWriter("logs")
for data in dataloader:
    imgs, targets = data
    writer.add_images("input", imgs, step)
    output = lyy(imgs)
    writer.add_images("output", output, step)
    step += 1
writer.close()
print("end")
21 Linear layers and other layers
Fixes needed for this to run correctly
import torch
import torchvision
from torch.nn import Linear
from torch.utils.data import DataLoader
from torch import nn
from torch.utils.tensorboard import SummaryWriter
dataset = torchvision.datasets.CIFAR10('./cifar10', False, transform=torchvision.transforms.ToTensor())
dataloader = DataLoader(dataset, batch_size=64, drop_last=True)  # drop the incomplete last batch so every batch flattens to 196608 values
class Lyy(nn.Module):
    def __init__(self):
        super(Lyy, self).__init__()
        self.linear1 = Linear(196608, 10)   # 196608 = 64 * 3 * 32 * 32

    def forward(self, input):
        output = self.linear1(input)
        return output
lyy = Lyy()
write = SummaryWriter("logs")
step = 0
for data in dataloader:
    imgs, targets = data
    print(imgs.shape)
    write.add_images("input", imgs, step)
    output = torch.reshape(imgs, (1, 1, 1, -1))   # flatten the whole batch into shape (1, 1, 1, 196608)
    print(output.shape)
    output = lyy(output)                          # the linear layer maps the 196608 values to 10
    print(output.shape)
    step += 1
write.close()
22 A small hands-on network build
Work out what the padding and stride have to be
Dilation defaults to 1
Why "RuntimeError: mat1 and mat2 shapes cannot be multiplied (64x1024 and 10240x64)"?
Cause: the network layer parameters are set up incorrectly! (a shape-check sketch follows below)
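One hedged way to avoid that RuntimeError is to push a dummy batch through the convolution/pooling stack first and read off the flattened size, which is what the first Linear layer must accept:
import torch
from torch.nn import Conv2d, MaxPool2d, Flatten, Sequential

conv_stack = Sequential(
    Conv2d(3, 32, 5, padding=2), MaxPool2d(2),
    Conv2d(32, 32, 5, padding=2), MaxPool2d(2),
    Conv2d(32, 64, 5, padding=2), MaxPool2d(2),
    Flatten(),
)
dummy = torch.ones((1, 3, 32, 32))
print(conv_stack(dummy).shape)    # torch.Size([1, 1024]) -> the first Linear should be Linear(1024, 64)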
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
import torch
class Lyy(nn.Module):
    def __init__(self):
        super(Lyy, self).__init__()
        self.conv1 = Conv2d(3, 32, 5, padding=2)
        self.maxpool1 = MaxPool2d(2)
        self.conv2 = Conv2d(32, 32, 5, padding=2)
        self.maxpool2 = MaxPool2d(2)
        self.conv3 = Conv2d(32, 64, 5, padding=2)
        self.maxpool3 = MaxPool2d(2)
        self.flatten = Flatten()
        self.linear1 = Linear(1024, 64)
        self.linear2 = Linear(64, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = self.maxpool2(x)
        x = self.conv3(x)
        x = self.maxpool3(x)
        x = self.flatten(x)
        x = self.linear1(x)
        x = self.linear2(x)
        return x
lyy = Lyy()
input = torch.ones((64,3,32,32))
print(input.shape)
output = lyy(input)
print(output.shape)
An equivalent version of the code above, using Sequential
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
import torch
class Lyy(nn.Module):
    def __init__(self):
        super(Lyy, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x
lyy = Lyy()
input = torch.ones((64,3,32,32))
print(input.shape)
output = lyy(input)
print(output.shape)
The model graph can also be drawn with TensorBoard:
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('logs')
writer.add_graph(lyy, input)
writer.close()
23 Loss functions and backpropagation
Classification problems usually use the cross-entropy loss
import torch
from torch.nn import L1Loss
from torch import nn
inputs = torch.tensor([1,2,3], dtype=float)
outputs = torch.tensor([1,2,5], dtype=float)
inputs = torch.reshape(inputs, [1,1,1,3])
outputs = torch.reshape(outputs, (1,1,1,3))
loss = L1Loss(reduction='sum')
result = loss(inputs, outputs)
print(result)
loss_mse = nn.MSELoss()
result = loss_mse(inputs, outputs)
print(result)
# Cross-entropy loss
x = torch.tensor([0.1, 0.2, 0.3])
y = torch.tensor([1])
x = torch.reshape(x, (1, 3))
loss_cross = nn.CrossEntropyLoss()
result_cross = loss_cross(x, y)   # compute the loss rather than just printing the module
print(result_cross)
import torchvision
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear
from torch.utils.data import DataLoader
dataset = torchvision.datasets.CIFAR10('./cifar10', False, transform=torchvision.transforms.ToTensor())
dataloader = DataLoader(dataset, batch_size=64)
class Lyy(nn.Module):
    def __init__(self):
        super(Lyy, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x
loss = nn.CrossEntropyLoss()
lyy=Lyy()
for data in dataloader:
    imgs, targets = data
    outputs = lyy(imgs)
    result_loss = loss(outputs, targets)
    result_loss.backward()
    print(result_loss)
24 Optimizers
import torchvision
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear
from torch.utils.data import DataLoader
import torch
dataset = torchvision.datasets.CIFAR10('./cifar10', False, transform=torchvision.transforms.ToTensor())
dataloader = DataLoader(dataset, batch_size=64)
class Lyy(nn.Module):
    def __init__(self):
        super(Lyy, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x
loss = nn.CrossEntropyLoss()
lyy=Lyy()
optis = torch.optim.SGD(lyy.parameters(), lr=0.1)
for epoch in range(10):
    running_loss = 0
    for data in dataloader:
        imgs, targets = data
        outputs = lyy(imgs)
        result_loss = loss(outputs, targets)
        optis.zero_grad()
        result_loss.backward()   # compute the gradients for each parameter
        optis.step()
        running_loss += result_loss
    print(running_loss)
25 Using and modifying existing network models
The VGG16 model + the ImageNet dataset
1. Install the scipy package
2. The pretrained parameter controls whether the downloaded weights have already been trained
import torchvision.models
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
import torch
dataset = torchvision.datasets.CIFAR10('./cifar10', False, transform=torchvision.transforms.ToTensor())
#dataloader = DataLoader(dataset, batch_size=64)
vgg16_true = torchvision.models.vgg16(pretrained=True)
vgg16_false = torchvision.models.vgg16(pretrained=False)
# Modify vgg16
print(vgg16_false)
# Modify vgg16: append one extra layer
vgg16_true.classifier.add_module('add_linear', nn.Linear(1000, 10))
print(vgg16_true)
print(vgg16_false)
vgg16_false.classifier[6] = nn.Linear(4096, 10)
print(vgg16_false)
26 Saving and loading network models
Method 1 and method 2 are much the same; method 2 is recommended
model_save
import torchvision
import torch
vgg16 = torchvision.models.vgg16(pretrained=False)
# Saving method 1: saves both the model structure and the parameters
torch.save(vgg16, "vgg16_model1.pth")
# Saving method 2: saves only the parameters as a state dict, not the structure (officially recommended)
torch.save(vgg16.state_dict(), "vgg16_model2.pth")
print("end")
model_load
import torch
import torchvision
# Loading method 1 — for models saved with method 1
# model = torch.load("vgg16_model1.pth")
# # print(model)
# Loading method 2
vgg16 = torchvision.models.vgg16(pretrained=False)
vgg16.load_state_dict(torch.load("vgg16_model2.pth"))
print(vgg16)
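One caveat worth noting with saving method 1 and a self-defined model: when loading, torch.load needs the class definition to be visible (defined in the file or imported), otherwise it raises an AttributeError. A small sketch (class and file names are illustrative):
import torch
from torch import nn

class Lyy(nn.Module):
    def __init__(self):
        super(Lyy, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, 3)

    def forward(self, x):
        return self.conv1(x)

lyy = Lyy()
torch.save(lyy, "lyy_method1.pth")        # saving method 1: structure + parameters

# in the loading script, class Lyy must be defined or imported before this call
model = torch.load("lyy_method1.pth")
print(model)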
27, 28, 29 The complete training recipe
argmax() is very useful for classification problems (a small example follows)
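A tiny illustration of how argmax(1) turns class scores into predictions and an accuracy count (values are made up):
import torch

outputs = torch.tensor([[0.1, 0.9],
                        [0.8, 0.2]])        # scores for 2 samples, 2 classes
targets = torch.tensor([1, 0])
preds = outputs.argmax(1)                    # index of the highest score per row -> tensor([1, 0])
print((preds == targets).sum().item())       # 2 correct predictions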
import torch
from torch.utils.tensorboard import SummaryWriter
from model_self import *
import torchvision
from torch.nn import Conv2d
from torch.optim import SGD
from torch.utils.data import DataLoader
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear
# Prepare the datasets
train_data = torchvision.datasets.CIFAR10('./cifar10', True, transform=torchvision.transforms.ToTensor(),download=False)
test_data = torchvision.datasets.CIFAR10('./cifar10', False, transform=torchvision.transforms.ToTensor(),download=False)
# Dataset sizes
train_data_size = len(train_data)
test_data_size = len(test_data)
print("训练数据及长度:{}".format(train_data_size))
print("测试数据集长度:{}".format(test_data_size))
# Wrap the datasets with DataLoader
train_dataloader = DataLoader(train_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)
# Build the network
# Create the network model (Lyy comes from model_self)
lyy = Lyy()
# Create the loss function
loss_fn = nn.CrossEntropyLoss()
# Optimizer
learning_rate = 1e-2
optimizer = torch.optim.SGD(lyy.parameters(),lr=learning_rate)
# Set up the training counters
total_train_step = 0
total_test_step = 0
epoch = 10
# Add TensorBoard
writer = SummaryWriter("logs")
for i in range(epoch):
    print("----- Training round {} starts -----".format(i+1))
    # Training steps
    for data in train_dataloader:
        imgs, targets = data
        output = lyy(imgs)
        loss = loss_fn(output, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total_train_step += 1
        if total_train_step % 100 == 0:
            print("Training step: {}, loss: {}".format(total_train_step, loss.item()))
            writer.add_scalar("train_loss", loss.item(), total_train_step)
    # Test steps
    total_test_loss = 0
    total_accuracy = 0
    with torch.no_grad():
        for data in test_dataloader:
            imgs, targets = data
            output = lyy(imgs)
            loss = loss_fn(output, targets)
            total_test_loss += loss.item()
            accuracy = (output.argmax(1) == targets).sum()
            total_accuracy += accuracy
    print("Loss on the whole test set: {}".format(total_test_loss))
    print("Accuracy on the whole test set: {}".format(total_accuracy/test_data_size))
    writer.add_scalar("test_loss", total_test_loss, total_test_step)
    writer.add_scalar("test_accuracy", total_accuracy/test_data_size, total_test_step)
    total_test_step += 1
    # torch.save(lyy, "lyy_{}.pth".format(i))
    # print("Model saved")
writer.close()
30 Training on the GPU
Find the network model, the data, and the loss function, and call .cuda() on each of them (a sketch follows)
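A hedged sketch of what that looks like inside the training loop above (reusing lyy, loss_fn, and train_dataloader from sections 27-29; .to(device) is the more portable spelling of .cuda()):
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
lyy = lyy.to(device)                      # 1. the network model
loss_fn = loss_fn.to(device)              # 2. the loss function
for data in train_dataloader:
    imgs, targets = data
    imgs = imgs.to(device)                # 3. the data
    targets = targets.to(device)
    outputs = lyy(imgs)
    loss = loss_fn(outputs, targets)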