Running a NumPy conv2d convolutional neural network for image classification on a Raspberry Pi: loading PyTorch model parameters, running MNIST handwritten digit inference, and speeding it up with multiprocessing


I've been playing with the Raspberry Pi again these past few days. After setting up an IoT project, I've been trying some simple neural networks on the Pi; this time it's a convolutional network for MNIST handwritten digit recognition.

The training code runs on a PC; training on the CPU is enough and it's quick:

import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
import numpy as np

# Set the random seed
torch.manual_seed(42)

# Define data preprocessing
transform = transforms.Compose([
    transforms.ToTensor(),
    # transforms.Normalize((0.1307,), (0.3081,))
])

# Load the training dataset
train_dataset = datasets.MNIST('data', train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True)

# Build the convolutional neural network model
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.pool = nn.MaxPool2d(2)
        self.fc = nn.Linear(10 * 12 * 12, 10)

    def forward(self, x):
        x = self.pool(torch.relu(self.conv1(x)))
        x = x.view(-1, 10 * 12 * 12)
        x = self.fc(x)
        return x

model = Net()

# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

# Train the model
def train(model, device, train_loader, optimizer, criterion, epochs):
    model.train()
    for epoch in range(epochs):
        for batch_idx, (data, target) in enumerate(train_loader):
            data, target = data.to(device), target.to(device)
            optimizer.zero_grad()
            output = model(data)
            loss = criterion(output, target)
            loss.backward()
            optimizer.step()
            if batch_idx % 100 == 0:
                print(f'Train Epoch: {epoch+1} [{batch_idx * len(data)}/{len(train_loader.dataset)} '
                      f'({100. * batch_idx / len(train_loader):.0f}%)]\tLoss: {loss.item():.6f}')

# Train on the GPU if available, otherwise on the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

# Train the model
train(model, device, train_loader, optimizer, criterion, epochs=5)

# Save the model parameters as NumPy data
model_state = model.state_dict()
numpy_model_state = {key: value.cpu().numpy() for key, value in model_state.items()}
np.savez('model.npz', **numpy_model_state)
print("Model saved as model.npz")

Next you need to export some images from the dataset yourself. I saved them in the mnist_pi folder, with the label after the "_" in each file name; the export is done on the PC and the images are then copied over to the Raspberry Pi.
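The original doesn't show the export script, so here is just one possible sketch: it saves a handful of MNIST test images as PNGs named "index_label.png", which matches the naming the Pi-side code parses (label after the underscore). The count of 20 is arbitrary.

# Hypothetical export script (PC side): save some MNIST test images as "<index>_<label>.png"
import os
from torchvision import datasets

test_dataset = datasets.MNIST('data', train=False, download=True)  # no transform -> PIL images
os.makedirs('mnist_pi', exist_ok=True)
for i in range(20):  # 20 is an arbitrary sample count
    img, label = test_dataset[i]
    img.save(os.path.join('mnist_pi', f'{i}_{label}.png'))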

[Screenshot: the exported digit images in the mnist_pi folder]

The Raspberry Pi inference code: the network is rebuilt by hand in NumPy, which means implementing conv2d and max_pool2d manually, then loading the saved parameter matrices and doing the matrix multiplications and additions yourself.

import numpy as np
import os
from PIL import Image

def conv2d(input, weight, bias, stride=1, padding=0):
    batch_size, in_channels, in_height, in_width = input.shape
    out_channels, in_channels, kernel_size, _ = weight.shape

    # Compute the output feature map size
    out_height = (in_height + 2 * padding - kernel_size) // stride + 1
    out_width = (in_width + 2 * padding - kernel_size) // stride + 1

    # Apply zero padding
    padded_input = np.pad(input, ((0, 0), (0, 0), (padding, padding), (padding, padding)), mode='constant')

    # Initialize the output feature map
    output = np.zeros((batch_size, out_channels, out_height, out_width))

    # Perform the convolution
    for b in range(batch_size):
        for c_out in range(out_channels):
            for h_out in range(out_height):
                for w_out in range(out_width):
                    h_start = h_out * stride
                    h_end = h_start + kernel_size
                    w_start = w_out * stride
                    w_end = w_start + kernel_size

                    # Extract the corresponding input region
                    input_region = padded_input[b, :, h_start:h_end, w_start:w_end]

                    # Compute the convolution result
                    x = input_region * weight[c_out]
                    bia = bias[c_out]
                    conv_result = np.sum(x, axis=(0, 1, 2)) + bia

                    # Store the result in the output feature map
                    output[b, c_out, h_out, w_out] = conv_result

    return output

def max_pool2d(input, kernel_size, stride=None, padding=0):
    batch_size, channels, in_height, in_width = input.shape

    if stride is None:
        stride = kernel_size

    out_height = (in_height - kernel_size + 2 * padding) // stride + 1
    out_width = (in_width - kernel_size + 2 * padding) // stride + 1

    padded_input = np.pad(input, ((0, 0), (0, 0), (padding, padding), (padding, padding)), mode='constant')

    output = np.zeros((batch_size, channels, out_height, out_width))

    for b in range(batch_size):
        for c in range(channels):
            for h_out in range(out_height):
                for w_out in range(out_width):
                    h_start = h_out * stride
                    h_end = h_start + kernel_size
                    w_start = w_out * stride
                    w_end = w_start + kernel_size

                    input_region = padded_input[b, c, h_start:h_end, w_start:w_end]

                    output[b, c, h_out, w_out] = np.max(input_region)

    return output


# Load the saved model parameters
model_data = np.load('model.npz')

# Extract the model parameters
conv_weight = model_data['conv1.weight']
conv_bias = model_data['conv1.bias']
fc_weight = model_data['fc.weight']
fc_bias = model_data['fc.bias']

# Run inference
def inference(images):
    # Convolution
    conv_output = conv2d(images, conv_weight, conv_bias, stride=1, padding=0)
    conv_output = np.maximum(conv_output, 0)  # ReLU activation
    # Max pooling
    pool = max_pool2d(conv_output, 2)
    # Fully connected layer (no activation on the logits, matching the trained model)
    flattened = pool.reshape(pool.shape[0], -1)
    fc_output = np.dot(flattened, fc_weight.T) + fc_bias

    # Get the predicted class
    predictions = np.argmax(fc_output, axis=1)

    return predictions


folder_path = './mnist_pi'  # Replace with the folder containing your images
def infer_images_in_folder(folder_path):
    for file_name in os.listdir(folder_path):
        file_path = os.path.join(folder_path, file_name)
        if os.path.isfile(file_path) and file_name.endswith(('.jpg', '.jpeg', '.png')):
            image = Image.open(file_path)
            label = file_name.split(".")[0].split("_")[1]
            image = np.array(image)/255.0
            image = np.expand_dims(image,axis=0)
            image = np.expand_dims(image,axis=0)
            print("file_path:",file_path,"img size:",image.shape,"label:",label)
            predicted_class = inference(image)
            print('Predicted class:', predicted_class)

infer_images_in_folder(folder_path)
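Before copying this to the Pi, the handwritten conv2d and max_pool2d can be sanity-checked against PyTorch on the PC. A minimal comparison sketch (assumes the two functions above are defined in the same file; the random shapes match this network's 1x28x28 input and 5x5 kernels):

# Optional PC-side check: compare the NumPy ops against torch.nn.functional on random data
import numpy as np
import torch
import torch.nn.functional as F

x = np.random.rand(1, 1, 28, 28).astype(np.float32)
w = np.random.rand(10, 1, 5, 5).astype(np.float32)
b = np.random.rand(10).astype(np.float32)

ref_conv = F.conv2d(torch.from_numpy(x), torch.from_numpy(w), torch.from_numpy(b)).numpy()
my_conv = conv2d(x, w, b, stride=1, padding=0)
print("conv2d max abs diff:", np.abs(ref_conv - my_conv).max())

ref_pool = F.max_pool2d(torch.from_numpy(ref_conv), 2).numpy()
my_pool = max_pool2d(my_conv, 2)
print("max_pool2d max abs diff:", np.abs(ref_pool - my_pool).max())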


This is pure NumPy inference: no PyTorch install is needed, and the Raspberry Pi can't really handle installing PyTorch anyway, it's too heavy. The inference results are below. It's much slower than the earlier MLP network, mainly because this handwritten convolution is implemented entirely with Python loops.

[Screenshot: inference output on the Raspberry Pi]
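To put a number on how slow the loop-based version is, a rough timing sketch like this can be appended to the script above (assumes inference() is in scope; the input is just a dummy array with the 1x1x28x28 MNIST shape, so the timing covers the math only, not image loading):

# Rough per-image timing for the loop-based implementation
import time
import numpy as np

dummy = np.random.rand(1, 1, 28, 28)
start = time.perf_counter()
inference(dummy)
print(f"single-image inference took {time.perf_counter() - start:.3f} s")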

The accelerated conv2d and max_pool2d implementations:

import numpy as np
import os
from PIL import Image


def conv2d(images, weight, bias, stride=1, padding=0):
    # Vectorized convolution: loop only over output positions instead of batch/channel/position
    N, C, H, W = images.shape
    F, _, HH, WW = weight.shape
    # Compute the output size after convolution
    H_out = (H - HH + 2 * padding) // stride + 1
    W_out = (W - WW + 2 * padding) // stride + 1
    # Apply zero padding to the height and width axes if requested
    if padding > 0:
        images = np.pad(images, ((0, 0), (0, 0), (padding, padding), (padding, padding)), mode='constant')
    # Initialize the convolution output
    out = np.zeros((N, F, H_out, W_out))
    # Perform the convolution
    for i in range(H_out):
        for j in range(W_out):
            # Extract the current window for the whole batch: (N, C, HH, WW)
            window = images[:, :, i * stride:i * stride + HH, j * stride:j * stride + WW]
            # Multiply against all filters at once and sum over channel and kernel axes -> (N, F)
            out[:, :, i, j] = np.sum(window[:, np.newaxis] * weight, axis=(2, 3, 4)) + bias
    # print("conv output shape:", out.shape)
    return out


def max_pool2d(input, kernel_size, stride=None, padding=0):
    # Input size
    batch_size, num_channels, input_height, input_width = input.shape

    # Default stride equals kernel_size
    if stride is None:
        stride = kernel_size

    # Output size
    output_height = (input_height - kernel_size + 2 * padding) // stride + 1
    output_width = (input_width - kernel_size + 2 * padding) // stride + 1

    # Apply zero padding (to the height and width axes of the NCHW input)
    if padding > 0:
        padded_input = np.pad(input, [(0, 0), (0, 0), (padding, padding), (padding, padding)], mode='constant')
    else:
        padded_input = input

    # Output feature map
    output = np.zeros((batch_size, num_channels, output_height, output_width))

    # Perform max pooling, looping only over output positions
    for i in range(output_height):
        for j in range(output_width):
            start_i = i * stride
            start_j = j * stride
            end_i = start_i + kernel_size
            end_j = start_j + kernel_size
            window = padded_input[:, :, start_i:end_i, start_j:end_j]
            output[:, :, i, j] = np.max(window, axis=(2, 3))

    return output


# Load the saved model parameters
model_data = np.load('model.npz')
# Extract the model parameters
conv_weight = model_data['conv1.weight']
conv_bias = model_data['conv1.bias']
fc_weight = model_data['fc.weight']
fc_bias = model_data['fc.bias']


# Run inference
def inference(images):
    # Convolution
    conv_output = conv2d(images, conv_weight, conv_bias, stride=1, padding=0)
    conv_output = np.maximum(conv_output, 0)  # ReLU activation
    # Max pooling
    pool = max_pool2d(conv_output, 2)
    # Fully connected layer (no activation on the logits, matching the trained model)
    flattened = pool.reshape(pool.shape[0], -1)
    fc_output = np.dot(flattened, fc_weight.T) + fc_bias

    # Get the predicted class
    predictions = np.argmax(fc_output, axis=1)

    return predictions



def infer_images_in_folder(folder_path):
    for file_name in os.listdir(folder_path):
        file_path = os.path.join(folder_path, file_name)
        if os.path.isfile(file_path) and file_name.endswith(('.jpg', '.jpeg', '.png')):
            image = Image.open(file_path)
            label = file_name.split(".")[0].split("_")[1]
            image = np.array(image)/255.0
            image = np.expand_dims(image,axis=0)
            image = np.expand_dims(image,axis=0)
            print("file_path:",file_path,"img size:",image.shape,"label:",label)
            predicted_class = inference(image)
            print('Predicted class:', predicted_class)




if __name__ == "__main__":

    folder_path = './mnist_pi'  # Replace with the folder containing your images
    infer_images_in_folder(folder_path)

 

Now let's speed it up. Below is a version accelerated with multiprocessing:

import numpy as np
import os
from PIL import Image
from multiprocessing import Pool

def conv2d(input, weight, bias, stride=1, padding=0):
    batch_size, in_channels, in_height, in_width = input.shape
    out_channels, in_channels, kernel_size, _ = weight.shape

    # Compute the output feature map size
    out_height = (in_height + 2 * padding - kernel_size) // stride + 1
    out_width = (in_width + 2 * padding - kernel_size) // stride + 1

    # Apply zero padding
    padded_input = np.pad(input, ((0, 0), (0, 0), (padding, padding), (padding, padding)), mode='constant')

    # Initialize the output feature map
    output = np.zeros((batch_size, out_channels, out_height, out_width))

    # Perform the convolution
    for b in range(batch_size):
        for c_out in range(out_channels):
            for h_out in range(out_height):
                for w_out in range(out_width):
                    h_start = h_out * stride
                    h_end = h_start + kernel_size
                    w_start = w_out * stride
                    w_end = w_start + kernel_size

                    # Extract the corresponding input region
                    input_region = padded_input[b, :, h_start:h_end, w_start:w_end]

                    # Compute the convolution result
                    x = input_region * weight[c_out]
                    bia = bias[c_out]
                    conv_result = np.sum(x, axis=(0, 1, 2)) + bia

                    # Store the result in the output feature map
                    output[b, c_out, h_out, w_out] = conv_result

    return output

def max_pool2d(input, kernel_size, stride=None, padding=0):
    batch_size, channels, in_height, in_width = input.shape

    if stride is None:
        stride = kernel_size

    out_height = (in_height - kernel_size + 2 * padding) // stride + 1
    out_width = (in_width - kernel_size + 2 * padding) // stride + 1

    padded_input = np.pad(input, ((0, 0), (0, 0), (padding, padding), (padding, padding)), mode='constant')

    output = np.zeros((batch_size, channels, out_height, out_width))

    for b in range(batch_size):
        for c in range(channels):
            for h_out in range(out_height):
                for w_out in range(out_width):
                    h_start = h_out * stride
                    h_end = h_start + kernel_size
                    w_start = w_out * stride
                    w_end = w_start + kernel_size

                    input_region = padded_input[b, c, h_start:h_end, w_start:w_end]

                    output[b, c, h_out, w_out] = np.max(input_region)

    return output

# Load the saved model parameters
model_data = np.load('model.npz')

# Extract the model parameters
conv_weight = model_data['conv1.weight']
conv_bias = model_data['conv1.bias']
fc_weight = model_data['fc.weight']
fc_bias = model_data['fc.bias']

# Run inference
def inference(images):
    # Convolution
    conv_output = conv2d(images, conv_weight, conv_bias, stride=1, padding=0)
    conv_output = np.maximum(conv_output, 0)  # ReLU activation
    # Max pooling
    pool = max_pool2d(conv_output, 2)
    # Fully connected layer (no activation on the logits, matching the trained model)
    flattened = pool.reshape(pool.shape[0], -1)
    fc_output = np.dot(flattened, fc_weight.T) + fc_bias

    # Get the predicted class
    predictions = np.argmax(fc_output, axis=1)

    return predictions

def infer_image(file_path):
    image = Image.open(file_path)
    label = file_path.split("/")[-1].split(".")[0].split("_")[1]
    image = np.array(image) / 255.0
    image = np.expand_dims(image, axis=0)
    image = np.expand_dims(image, axis=0)
    print("file_path:", file_path, "img size:", image.shape, "label:", label)
    predicted_class = inference(image)
    print('Predicted class:', predicted_class)


folder_path = './mnist_pi'  # Replace with the folder containing your images
pool = Pool(processes=4)  # Number of worker processes; adjust to the Pi's core count

def infer_images_in_folder(folder_path):
    for file_name in os.listdir(folder_path):
        file_path = os.path.join(folder_path, file_name)
        if os.path.isfile(file_path) and file_name.endswith(('.jpg', '.jpeg', '.png')):
            pool.apply_async(infer_image, args=(file_path,))

    pool.close()
    pool.join()


infer_images_in_folder(folder_path)

 

As the screenshot below shows, on my Raspberry Pi 3B+ the CPU is fully loaded and inference is roughly 4x faster:

[Screenshot: CPU usage on the Raspberry Pi 3B+ with all four cores fully loaded]

 

Finally, I switched to the lighter-weight conv2d and max_pool2d implementations and kept the multiprocessing acceleration on top; CPU usage is noticeably lower. Next time I'll try to bring over a quantized model.

import numpy as np
import os
from PIL import Image
from multiprocessing import Pool


def conv2d(images, weight, bias, stride=1, padding=0):
    # Vectorized convolution: loop only over output positions instead of batch/channel/position
    N, C, H, W = images.shape
    F, _, HH, WW = weight.shape
    # Compute the output size after convolution
    H_out = (H - HH + 2 * padding) // stride + 1
    W_out = (W - WW + 2 * padding) // stride + 1
    # Apply zero padding to the height and width axes if requested
    if padding > 0:
        images = np.pad(images, ((0, 0), (0, 0), (padding, padding), (padding, padding)), mode='constant')
    # Initialize the convolution output
    out = np.zeros((N, F, H_out, W_out))
    # Perform the convolution
    for i in range(H_out):
        for j in range(W_out):
            # Extract the current window for the whole batch: (N, C, HH, WW)
            window = images[:, :, i * stride:i * stride + HH, j * stride:j * stride + WW]
            # Multiply against all filters at once and sum over channel and kernel axes -> (N, F)
            out[:, :, i, j] = np.sum(window[:, np.newaxis] * weight, axis=(2, 3, 4)) + bias
    # print("conv output shape:", out.shape)
    return out


def max_pool2d(input, kernel_size, stride=None, padding=0):
    # Input size
    batch_size, num_channels, input_height, input_width = input.shape

    # Default stride equals kernel_size
    if stride is None:
        stride = kernel_size

    # Output size
    output_height = (input_height - kernel_size + 2 * padding) // stride + 1
    output_width = (input_width - kernel_size + 2 * padding) // stride + 1

    # Apply zero padding (to the height and width axes of the NCHW input)
    if padding > 0:
        padded_input = np.pad(input, [(0, 0), (0, 0), (padding, padding), (padding, padding)], mode='constant')
    else:
        padded_input = input

    # Output feature map
    output = np.zeros((batch_size, num_channels, output_height, output_width))

    # Perform max pooling, looping only over output positions
    for i in range(output_height):
        for j in range(output_width):
            start_i = i * stride
            start_j = j * stride
            end_i = start_i + kernel_size
            end_j = start_j + kernel_size
            window = padded_input[:, :, start_i:end_i, start_j:end_j]
            output[:, :, i, j] = np.max(window, axis=(2, 3))

    return output


# Load the saved model parameters
model_data = np.load('model.npz')
# Extract the model parameters
conv_weight = model_data['conv1.weight']
conv_bias = model_data['conv1.bias']
fc_weight = model_data['fc.weight']
fc_bias = model_data['fc.bias']


# Run inference
def inference(images):
    # Convolution
    conv_output = conv2d(images, conv_weight, conv_bias, stride=1, padding=0)
    conv_output = np.maximum(conv_output, 0)  # ReLU activation
    # Max pooling
    pool = max_pool2d(conv_output, 2)
    # Fully connected layer (no activation on the logits, matching the trained model)
    flattened = pool.reshape(pool.shape[0], -1)
    fc_output = np.dot(flattened, fc_weight.T) + fc_bias

    # Get the predicted class
    predictions = np.argmax(fc_output, axis=1)

    return predictions




def infer_image(file_path):
    image = Image.open(file_path)
    label = file_path.split("/")[-1].split(".")[0].split("_")[1]
    image = np.array(image) / 255.0
    image = np.expand_dims(image, axis=0)
    image = np.expand_dims(image, axis=0)
    predicted_class = inference(image)
    print("file_path:", file_path, "img size:", image.shape, "label:", label,'Predicted class:', predicted_class)


def infer_images_in_folder(folder_path):

    while True:  # repeat indefinitely; drop this loop for a single pass over the folder
        pool = Pool(processes=4)  # Number of worker processes; adjust to the Pi's core count
        for file_name in os.listdir(folder_path):
            file_path = os.path.join(folder_path, file_name)
            if os.path.isfile(file_path) and file_name.endswith(('.jpg', '.jpeg', '.png')):
                pool.apply_async(infer_image, args=(file_path,))

        pool.close()
        pool.join()



if __name__ == "__main__":

    folder_path = './mnist_pi'  # Replace with the folder containing your images

    
    infer_images_in_folder(folder_path)

 

[Screenshot: lower CPU usage with the lightweight implementation plus multiprocessing]

 
