A simple and fun lightweight network: EfficientNet (drop in your own dataset and run it directly; if the code gives you trouble, ask in the comments) — network structure explained + fully commented code + core ideas — PyTorch implementation


In this post we study EfficientNet, one of the best lightweight networks to come out of the NAS line of work: its architecture was found by neural architecture search. How were a network's depth, its width, and the input image resolution chosen before? Frankly, by experience — researchers settled on them through countless design iterations. But network design cannot advance on intuition alone, so how to obtain the best structure systematically became the question everyone was studying.

The code in this post also plots the training- and test-set loss and accuracy curves directly, which is handy when writing a paper.

EfficientNet paper: https://arxiv.org/abs/1905.11946

Before EfficientNet appeared, everyone focused on three levers for improving network performance:

1. Increasing the network depth

2. Increasing the network width

3. Increasing the input image resolution

The first two can, in theory, raise a network's learning capacity. AlexNet, VGG, and ResNet improve performance by increasing depth; the GoogLeNet family (Inception v2, Inception v3, and so on) does it by increasing width. Raising the input resolution also boosts performance, because higher resolution carries finer-grained image features and lets the network learn more information about the target.

A few examples make clear what improving each of these three dimensions means. But as research went deeper, it turned out that blindly pushing these three knobs does not keep improving performance:

1. Continually increasing depth raises the network's capacity, but brings exploding and vanishing gradients (see the motivation behind ResNet).

2. Continually increasing width lets the network learn a wider variety of features and can improve results, but a wide, shallow network cannot capture deeper, higher-level semantic features.

3. A higher input resolution gives the network finer-grained features, but it greatly increases the computation and the hardware requirements, which is clearly not what we want.

So the research question became: how can we obtain the ideal depth, width, and input resolution — and balance the three against each other — without resorting to manual trial and error?
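EfficientNet's answer is compound scaling: fix a small baseline network (B0), then scale all three dimensions together with a single compound coefficient φ. From the paper:

    depth:      d = α^φ
    width:      w = β^φ
    resolution: r = γ^φ
    subject to  α · β² · γ² ≈ 2,  with α ≥ 1, β ≥ 1, γ ≥ 1

α, β, and γ are found once by a small grid search on the baseline (the paper reports α = 1.2, β = 1.1, γ = 1.15). Since FLOPs grow roughly in proportion to d · w² · r², the constraint means each increment of φ roughly doubles the compute budget.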

Against this backdrop, EfficientNet was born; to my mind it is one of the representative, classic NAS-searched networks. The figure below is my own drawing of the MBConv block used in EfficientNet. It differs little from the Inverted Residual block of the MobileNet family; for reference, see:

Simple and fun lightweight networks MobileNets v1, v2, v3 (network structure explained + fully commented code + core ideas) — PyTorch implementation — 小馨馨的小翟's blog, CSDN

[Figure: MBConv structure]

[Figure: MBConv layer parameters]

Compared with MobileNet v2's Inverted Residual block, MBConv adds an SE (Squeeze-and-Excitation) attention module and replaces ReLU6 with the Swish (SiLU) activation; MobileNet v3's block makes similar changes, using hard variants of these functions.
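Since Swish appears throughout the code below, here is a two-line sanity check — a minimal sketch using nothing beyond stock PyTorch — that nn.SiLU, PyTorch's built-in name for Swish, really is x · sigmoid(x):

import torch
from torch import nn

x = torch.randn(4)
print(nn.SiLU()(x))          # PyTorch's built-in Swish/SiLU
print(x * torch.sigmoid(x))  # the same function written out by hand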

The authors released eight versions of the network, EfficientNet-B0 through EfficientNet-B7.

[Table: EfficientNet-B0 through B7 configurations]

The exact parameters of EfficientNet-B0 through B7 are listed in the table above, where:

1. input_size is the image size fed into the network.

2. width_coefficient is the channel multiplier; channel counts are multiplied by it and rounded to a multiple of 8.

3. depth_coefficient is the depth multiplier; each stage's repeat count is multiplied by it and rounded up.

4. drop_connect_rate is the rate used by the stochastic-depth (DropPath) layers inside the MBConv blocks.

5. dropout_rate is the rate of the dropout layer placed before the final fully connected layer.

A short sketch of how the two multipliers are applied follows below.
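As a minimal sketch of how the multipliers work (it reuses the same rounding rules as the _make_divisible and round_repeats helpers in the full code below; B4's coefficients are used purely as an example):

import math

def make_divisible(ch, divisor=8):
    # snap a channel count to a multiple of 8, never dropping more than 10% below the target
    new_ch = max(divisor, int(ch + divisor / 2) // divisor * divisor)
    if new_ch < 0.9 * ch:
        new_ch += divisor
    return new_ch

width_coefficient, depth_coefficient = 1.4, 1.8  # EfficientNet-B4
print(make_divisible(32 * width_coefficient))    # stem channels: 32 -> 48
print(make_divisible(320 * width_coefficient))   # last-stage channels: 320 -> 448
print(math.ceil(depth_coefficient * 4))          # stage-6 repeats: 4 -> 8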

The paper compares the EfficientNet family against the mainstream networks of its time; as shown below, it holds a clear advantage over ResNet-50, DenseNet, and the rest.

[Figures: ImageNet accuracy versus parameter count and FLOPs, EfficientNet versus contemporary networks]

EfficientNet code

Training code:

import torch
import math
import copy
from torch import nn, Tensor
import torchvision.models
from torch.nn import functional as F
from matplotlib import pyplot as plt
from tqdm import tqdm
from torch import nn
from torch.utils.data import DataLoader
from torchvision.transforms import transforms
from functools import partial
from collections import OrderedDict
from typing import Optional, Callable
data_transform = {
    "train": transforms.Compose([transforms.RandomResizedCrop(120),
                                 transforms.RandomHorizontalFlip(),
                                 transforms.ToTensor(),
                                 transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]),
    "val": transforms.Compose([transforms.Resize((120, 120)),  # cannot 224, must (224, 224)
                               transforms.ToTensor(),
                               transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])}
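# Expected directory layout (an assumption here, matching the cat/dog dataset linked
# at the end of the post): torchvision's ImageFolder wants one sub-folder per class:
#   data/train/cat/*.jpg   data/train/dog/*.jpg
#   data/val/cat/*.jpg     data/val/dog/*.jpg
# The folder names become the class labels, sorted alphabetically.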

train_data = torchvision.datasets.ImageFolder(root="./data/train", transform=data_transform["train"])

traindata = DataLoader(dataset=train_data, batch_size=32, shuffle=True, num_workers=0)  # draw the training data in shuffled batches of 32 images

test_data = torchvision.datasets.ImageFolder(root="./data/val", transform=data_transform["val"])

train_size = len(train_data)  # number of training images
test_size = len(test_data)  # number of test images
print(train_size)  # print the training-set size to check how many images it holds
print(test_size)  # print the test-set size to check how many images it holds
testdata = DataLoader(dataset=test_data, batch_size=32, shuffle=True, num_workers=0)  # draw the test data in batches of 32 images

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("using {} device.".format(device))


def _make_divisible(ch, divisor=8, min_ch=None):
    """
    This function is taken from the original tf repo.
    It ensures that all layers have a channel number that is divisible by 8
    It can be seen here:
    https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py
    """
    if min_ch is None:
        min_ch = divisor
    new_ch = max(min_ch, int(ch + divisor / 2) // divisor * divisor)
    # Make sure that round down does not go down by more than 10%.
    if new_ch < 0.9 * ch:
        new_ch += divisor
    return new_ch
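# Quick illustration (worked out by hand, not part of the network):
# _make_divisible(32 * 1.4) == 48, so scaled channel counts stay multiples of 8
# and never fall more than 10% below the requested value.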


def drop_path(x, drop_prob: float = 0., training: bool = False):
    """
    Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
    "Deep Networks with Stochastic Depth", https://arxiv.org/pdf/1603.09382.pdf

    This function is taken from the rwightman.
    It can be seen here:
    https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/drop.py#L140
    """
    if drop_prob == 0. or not training:
        return x
    keep_prob = 1 - drop_prob
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)  # work with diff dim tensors, not just 2D ConvNets
    random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
    random_tensor.floor_()  # binarize
    output = x.div(keep_prob) * random_tensor
    return output
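# Behaviour note: drop_path zeroes out entire samples of the batch (stochastic depth)
# with probability drop_prob and rescales the survivors by 1/keep_prob, so the
# expected value of the output matches the input.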


class DropPath(nn.Module):
    """
    Drop paths (Stochastic Depth) per sample  (when applied in main path of residual blocks).
    "Deep Networks with Stochastic Depth", https://arxiv.org/pdf/1603.09382.pdf
    """
    def __init__(self, drop_prob=None):
        super(DropPath, self).__init__()
        self.drop_prob = drop_prob

    def forward(self, x):
        return drop_path(x, self.drop_prob, self.training)


class ConvBNActivation(nn.Sequential):
    def __init__(self,
                 in_planes: int,
                 out_planes: int,
                 kernel_size: int = 3,
                 stride: int = 1,
                 groups: int = 1,
                 norm_layer: Optional[Callable[..., nn.Module]] = None,
                 activation_layer: Optional[Callable[..., nn.Module]] = None):
        padding = (kernel_size - 1) // 2
        if norm_layer is None:
            norm_layer = nn.BatchNorm2d
        if activation_layer is None:
            activation_layer = nn.SiLU  # alias Swish  (torch>=1.7)

        super(ConvBNActivation, self).__init__(nn.Conv2d(in_channels=in_planes,
                                                         out_channels=out_planes,
                                                         kernel_size=kernel_size,
                                                         stride=stride,
                                                         padding=padding,
                                                         groups=groups,
                                                         bias=False),
                                               norm_layer(out_planes),
                                               activation_layer())


class SqueezeExcitation(nn.Module):
    def __init__(self,
                 input_c: int,   # block input channel
                 expand_c: int,  # block expand channel
                 squeeze_factor: int = 4):
        super(SqueezeExcitation, self).__init__()
        squeeze_c = input_c // squeeze_factor
        self.fc1 = nn.Conv2d(expand_c, squeeze_c, 1)
        self.ac1 = nn.SiLU()  # alias Swish
        self.fc2 = nn.Conv2d(squeeze_c, expand_c, 1)
        self.ac2 = nn.Sigmoid()

    def forward(self, x: Tensor) -> Tensor:
        scale = F.adaptive_avg_pool2d(x, output_size=(1, 1))
        scale = self.fc1(scale)
        scale = self.ac1(scale)
        scale = self.fc2(scale)
        scale = self.ac2(scale)
        return scale * x
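# SE in one sentence: global-average-pool x to (N, C, 1, 1), squeeze to
# input_c // 4 channels (a quarter of the *block input* channels, not of the
# expanded channels), expand back to expand_c, and sigmoid-gate x channel-wise.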


class InvertedResidualConfig:
    # kernel_size, in_channel, out_channel, exp_ratio, strides, use_SE, drop_connect_rate
    def __init__(self,
                 kernel: int,          # 3 or 5
                 input_c: int,
                 out_c: int,
                 expanded_ratio: int,  # 1 or 6
                 stride: int,          # 1 or 2
                 use_se: bool,         # True
                 drop_rate: float,
                 index: str,           # 1a, 2a, 2b, ...
                 width_coefficient: float):
        self.input_c = self.adjust_channels(input_c, width_coefficient)
        self.kernel = kernel
        self.expanded_c = self.input_c * expanded_ratio
        self.out_c = self.adjust_channels(out_c, width_coefficient)
        self.use_se = use_se
        self.stride = stride
        self.drop_rate = drop_rate
        self.index = index

    @staticmethod
    def adjust_channels(channels: int, width_coefficient: float):
        return _make_divisible(channels * width_coefficient, 8)


class InvertedResidual(nn.Module):
    def __init__(self,
                 cnf: InvertedResidualConfig,
                 norm_layer: Callable[..., nn.Module]):
        super(InvertedResidual, self).__init__()

        if cnf.stride not in [1, 2]:
            raise ValueError("illegal stride value.")

        self.use_res_connect = (cnf.stride == 1 and cnf.input_c == cnf.out_c)

        layers = OrderedDict()
        activation_layer = nn.SiLU  # alias Swish

        # expand
        if cnf.expanded_c != cnf.input_c:
            layers.update({"expand_conv": ConvBNActivation(cnf.input_c,
                                                           cnf.expanded_c,
                                                           kernel_size=1,
                                                           norm_layer=norm_layer,
                                                           activation_layer=activation_layer)})

        # depthwise
        layers.update({"dwconv": ConvBNActivation(cnf.expanded_c,
                                                  cnf.expanded_c,
                                                  kernel_size=cnf.kernel,
                                                  stride=cnf.stride,
                                                  groups=cnf.expanded_c,
                                                  norm_layer=norm_layer,
                                                  activation_layer=activation_layer)})

        if cnf.use_se:
            layers.update({"se": SqueezeExcitation(cnf.input_c,
                                                   cnf.expanded_c)})

        # project
        layers.update({"project_conv": ConvBNActivation(cnf.expanded_c,
                                                        cnf.out_c,
                                                        kernel_size=1,
                                                        norm_layer=norm_layer,
                                                        activation_layer=nn.Identity)})

        self.block = nn.Sequential(layers)
        self.out_channels = cnf.out_c
        self.is_strided = cnf.stride > 1

        # the DropPath (stochastic depth) layer is only applied when the shortcut connection is used
        if self.use_res_connect and cnf.drop_rate > 0:
            self.dropout = DropPath(cnf.drop_rate)
        else:
            self.dropout = nn.Identity()

    def forward(self, x: Tensor) -> Tensor:
        result = self.block(x)
        result = self.dropout(result)
        if self.use_res_connect:
            result += x

        return result


class EfficientNet(nn.Module):
    def __init__(self,
                 width_coefficient: float,
                 depth_coefficient: float,
                 num_classes: int = 1000,
                 dropout_rate: float = 0.2,
                 drop_connect_rate: float = 0.2,
                 block: Optional[Callable[..., nn.Module]] = None,
                 norm_layer: Optional[Callable[..., nn.Module]] = None
                 ):
        super(EfficientNet, self).__init__()

        # kernel_size, in_channel, out_channel, exp_ratio, strides, use_SE, drop_connect_rate, repeats
        default_cnf = [[3, 32, 16, 1, 1, True, drop_connect_rate, 1],
                       [3, 16, 24, 6, 2, True, drop_connect_rate, 2],
                       [5, 24, 40, 6, 2, True, drop_connect_rate, 2],
                       [3, 40, 80, 6, 2, True, drop_connect_rate, 3],
                       [5, 80, 112, 6, 1, True, drop_connect_rate, 3],
                       [5, 112, 192, 6, 2, True, drop_connect_rate, 4],
                       [3, 192, 320, 6, 1, True, drop_connect_rate, 1]]
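        # These rows are the stage settings of EfficientNet-B0 (Table 1 of the paper);
        # width_coefficient scales the channel columns and depth_coefficient (via
        # round_repeats below) scales the final repeats column to produce B1-B7.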

        def round_repeats(repeats):
            """Round number of repeats based on depth multiplier."""
            return int(math.ceil(depth_coefficient * repeats))

        if block is None:
            block = InvertedResidual

        if norm_layer is None:
            norm_layer = partial(nn.BatchNorm2d, eps=1e-3, momentum=0.1)

        adjust_channels = partial(InvertedResidualConfig.adjust_channels,
                                  width_coefficient=width_coefficient)

        # build inverted_residual_setting
        bneck_conf = partial(InvertedResidualConfig,
                             width_coefficient=width_coefficient)

        b = 0
        num_blocks = float(sum(round_repeats(i[-1]) for i in default_cnf))
        inverted_residual_setting = []
        for stage, args in enumerate(default_cnf):
            cnf = copy.copy(args)
            for i in range(round_repeats(cnf.pop(-1))):
                if i > 0:
                    # strides equal 1 except first cnf
                    cnf[-3] = 1  # strides
                    cnf[1] = cnf[2]  # input_channel equal output_channel

                cnf[-1] = args[-2] * b / num_blocks  # update dropout ratio
                index = str(stage + 1) + chr(i + 97)  # 1a, 2a, 2b, ...
                inverted_residual_setting.append(bneck_conf(*cnf, index))
                b += 1

        # create layers
        layers = OrderedDict()

        # first conv
        layers.update({"stem_conv": ConvBNActivation(in_planes=3,
                                                     out_planes=adjust_channels(32),
                                                     kernel_size=3,
                                                     stride=2,
                                                     norm_layer=norm_layer)})

        # building inverted residual blocks
        for cnf in inverted_residual_setting:
            layers.update({cnf.index: block(cnf, norm_layer)})

        # build top
        last_conv_input_c = inverted_residual_setting[-1].out_c
        last_conv_output_c = adjust_channels(1280)
        layers.update({"top": ConvBNActivation(in_planes=last_conv_input_c,
                                               out_planes=last_conv_output_c,
                                               kernel_size=1,
                                               norm_layer=norm_layer)})

        self.features = nn.Sequential(layers)
        self.avgpool = nn.AdaptiveAvgPool2d(1)

        classifier = []
        if dropout_rate > 0:
            classifier.append(nn.Dropout(p=dropout_rate, inplace=True))
        classifier.append(nn.Linear(last_conv_output_c, num_classes))
        self.classifier = nn.Sequential(*classifier)

        # initial weights
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode="fan_out")
                if m.bias is not None:
                    nn.init.zeros_(m.bias)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.ones_(m.weight)
                nn.init.zeros_(m.bias)
            elif isinstance(m, nn.Linear):
                nn.init.normal_(m.weight, 0, 0.01)
                nn.init.zeros_(m.bias)

    def _forward_impl(self, x: Tensor) -> Tensor:
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.classifier(x)

        return x

    def forward(self, x: Tensor) -> Tensor:
        return self._forward_impl(x)


def efficientnet_b0(num_classes=1000):
    # input image size 224x224
    return EfficientNet(width_coefficient=1.0,
                        depth_coefficient=1.0,
                        dropout_rate=0.2,
                        num_classes=num_classes)


def efficientnet_b1(num_classes=1000):
    # input image size 240x240
    return EfficientNet(width_coefficient=1.0,
                        depth_coefficient=1.1,
                        dropout_rate=0.2,
                        num_classes=num_classes)


def efficientnet_b2(num_classes=1000):
    # input image size 260x260
    return EfficientNet(width_coefficient=1.1,
                        depth_coefficient=1.2,
                        dropout_rate=0.3,
                        num_classes=num_classes)


def efficientnet_b3(num_classes=1000):
    # input image size 300x300
    return EfficientNet(width_coefficient=1.2,
                        depth_coefficient=1.4,
                        dropout_rate=0.3,
                        num_classes=num_classes)


def efficientnet_b4(num_classes=1000):
    # input image size 380x380
    return EfficientNet(width_coefficient=1.4,
                        depth_coefficient=1.8,
                        dropout_rate=0.4,
                        num_classes=num_classes)


def efficientnet_b5(num_classes=1000):
    # input image size 456x456
    return EfficientNet(width_coefficient=1.6,
                        depth_coefficient=2.2,
                        dropout_rate=0.4,
                        num_classes=num_classes)


def efficientnet_b6(num_classes=1000):
    # input image size 528x528
    return EfficientNet(width_coefficient=1.8,
                        depth_coefficient=2.6,
                        dropout_rate=0.5,
                        num_classes=num_classes)


def efficientnet_b7(num_classes=1000):
    # input image size 600x600
    return EfficientNet(width_coefficient=2.0,
                        depth_coefficient=3.1,
                        dropout_rate=0.5,
                        num_classes=num_classes)




efficientnet = efficientnet_b0(num_classes=2)  # build the model; num_classes is the number of classes in your dataset — I use the two-class cat/dog dataset, so it is 2; set it to match yours
# efficientnet_b0 is used above; to use another member of the family, simply call the corresponding efficientnet_bX builder
efficientnet.to(device)

print(efficientnet)  # print the model structure


test1 = torch.ones(64, 3, 120, 120)  # sanity-check the output shape with a dummy batch of 64 RGB 120x120 images

test1 = efficientnet(test1.to(device))  # push the dummy batch through the network
print(test1.shape)  # check the output shape; expect torch.Size([64, 2])

epoch = 1  # number of training epochs
learning = 0.0001  # learning rate
optimizer = torch.optim.Adam(efficientnet.parameters(), lr=learning)  # Adam optimizer — worth reading up on if you are writing a paper
loss = nn.CrossEntropyLoss()  # loss function: cross-entropy

train_loss_all = []  # list of per-epoch training losses
train_accur_all = []  # list of per-epoch training accuracies
test_loss_all = []  # list of per-epoch test losses
test_accur_all = []  # list of per-epoch test accuracies
for i in range(epoch):  # training loop
    train_loss = 0  # running training loss for this epoch
    train_num = 0.0  # number of samples seen this epoch
    train_accuracy = 0.0  # running count of correct training predictions
    efficientnet.train()  # put the model in training mode
    train_bar = tqdm(traindata)  # tqdm just draws a progress bar
    for step, data in enumerate(train_bar):  # step is the batch index, data is the (images, labels) batch
        img, target = data  # unpack into images and labels
        optimizer.zero_grad()  # clear the gradients from the previous step
        outputs = efficientnet(img.to(device))  # forward pass; outputs are the raw class scores

        loss1 = loss(outputs, target.to(device))  # difference between the network outputs and the true labels — the loss
        outputs = torch.argmax(outputs, 1)  # the index of the largest score is the predicted class
        loss1.backward()  # backpropagate
        optimizer.step()  # update the weights with the Adam optimizer
        train_loss = train_loss + loss1.item()  # accumulate the batch losses
        accuracy = torch.sum(outputs == target.to(device))  # count how many predictions match the labels
        train_accuracy = train_accuracy + accuracy  # accumulate the correct-prediction count
        train_num += img.size(0)  # accumulate the number of samples

    print("epoch:{} , train-Loss:{} , train-accuracy:{}".format(i + 1, train_loss / train_num,  # report progress
                                                                train_accuracy / train_num))
    train_loss_all.append(train_loss / train_num)  # store the training loss for plotting later
    train_accur_all.append(train_accuracy.double().item() / train_num)  # store the training accuracy
    test_loss = 0  # as above, but for the test set
    test_accuracy = 0.0  # test accuracy
    test_num = 0
    efficientnet.eval()  # put the model in evaluation mode
    with torch.no_grad():  # disable gradient tracking — the key difference from training is that there is no backward pass
        test_bar = tqdm(testdata)
        for data in test_bar:
            img, target = data

            outputs = efficientnet(img.to(device))
            loss2 = loss(outputs, target.to(device)).cpu()
            outputs = torch.argmax(outputs, 1)
            test_loss = test_loss + loss2.item()
            accuracy = torch.sum(outputs == target.to(device))
            test_accuracy = test_accuracy + accuracy
            test_num += img.size(0)

    print("test-Loss:{} , test-accuracy:{}".format(test_loss / test_num, test_accuracy / test_num))
    test_loss_all.append(test_loss / test_num)
    test_accur_all.append(test_accuracy.double().item() / test_num)

# plotting: draw the stored lists as curves — training and test loss on the left, accuracy on the right
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(range(epoch), train_loss_all,
         "ro-", label="Train loss")
plt.plot(range(epoch), test_loss_all,
         "bs-", label="test loss")
plt.legend()
plt.xlabel("epoch")
plt.ylabel("Loss")
plt.subplot(1, 2, 2)
plt.plot(range(epoch), train_accur_all,
         "ro-", label="Train accur")
plt.plot(range(epoch), test_accur_all,
         "bs-", label="test accur")
plt.xlabel("epoch")
plt.ylabel("acc")
plt.legend()
plt.show()

torch.save(efficientnet, "efficientnet.pth")
print("model saved")

Prediction code:

import torch
import math
import copy
from PIL import Image
from torch import nn, Tensor
from torch.nn import functional as F
from torchvision.transforms import transforms
from typing import Callable, List, Optional
from functools import partial
from collections import OrderedDict

image_path = "1.jpg"  # relative path of the image to classify
trans = transforms.Compose([transforms.Resize((120, 120)),
                            transforms.ToTensor(),
                            transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])  # resize to the training input size, convert to a tensor, and normalize exactly as during training
image = Image.open(image_path)  # open the image
# print(image)  # inspect the PIL image if curious
image = image.convert("RGB")  # make sure the image has three channels
image = trans(image)  # apply the transforms defined above
# print(image)  # inspect the transformed tensor if curious
image = torch.unsqueeze(image, dim=0)  # add a batch dimension: (3, 120, 120) -> (1, 3, 120, 120)

classes = ["cat", "dog"]  # class names; I use the cat/dog dataset, so adjust these to your own classes

def _make_divisible(ch, divisor=8, min_ch=None):
    """
    This function is taken from the original tf repo.
    It ensures that all layers have a channel number that is divisible by 8
    It can be seen here:
    https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py
    """
    if min_ch is None:
        min_ch = divisor
    new_ch = max(min_ch, int(ch + divisor / 2) // divisor * divisor)
    # Make sure that round down does not go down by more than 10%.
    if new_ch < 0.9 * ch:
        new_ch += divisor
    return new_ch


def drop_path(x, drop_prob: float = 0., training: bool = False):
    """
    Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
    "Deep Networks with Stochastic Depth", https://arxiv.org/pdf/1603.09382.pdf

    This function is taken from the rwightman.
    It can be seen here:
    https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/drop.py#L140
    """
    if drop_prob == 0. or not training:
        return x
    keep_prob = 1 - drop_prob
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)  # work with diff dim tensors, not just 2D ConvNets
    random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
    random_tensor.floor_()  # binarize
    output = x.div(keep_prob) * random_tensor
    return output


class DropPath(nn.Module):
    """
    Drop paths (Stochastic Depth) per sample  (when applied in main path of residual blocks).
    "Deep Networks with Stochastic Depth", https://arxiv.org/pdf/1603.09382.pdf
    """
    def __init__(self, drop_prob=None):
        super(DropPath, self).__init__()
        self.drop_prob = drop_prob

    def forward(self, x):
        return drop_path(x, self.drop_prob, self.training)


class ConvBNActivation(nn.Sequential):
    def __init__(self,
                 in_planes: int,
                 out_planes: int,
                 kernel_size: int = 3,
                 stride: int = 1,
                 groups: int = 1,
                 norm_layer: Optional[Callable[..., nn.Module]] = None,
                 activation_layer: Optional[Callable[..., nn.Module]] = None):
        padding = (kernel_size - 1) // 2
        if norm_layer is None:
            norm_layer = nn.BatchNorm2d
        if activation_layer is None:
            activation_layer = nn.SiLU  # alias Swish  (torch>=1.7)

        super(ConvBNActivation, self).__init__(nn.Conv2d(in_channels=in_planes,
                                                         out_channels=out_planes,
                                                         kernel_size=kernel_size,
                                                         stride=stride,
                                                         padding=padding,
                                                         groups=groups,
                                                         bias=False),
                                               norm_layer(out_planes),
                                               activation_layer())


class SqueezeExcitation(nn.Module):
    def __init__(self,
                 input_c: int,   # block input channel
                 expand_c: int,  # block expand channel
                 squeeze_factor: int = 4):
        super(SqueezeExcitation, self).__init__()
        squeeze_c = input_c // squeeze_factor
        self.fc1 = nn.Conv2d(expand_c, squeeze_c, 1)
        self.ac1 = nn.SiLU()  # alias Swish
        self.fc2 = nn.Conv2d(squeeze_c, expand_c, 1)
        self.ac2 = nn.Sigmoid()

    def forward(self, x: Tensor) -> Tensor:
        scale = F.adaptive_avg_pool2d(x, output_size=(1, 1))
        scale = self.fc1(scale)
        scale = self.ac1(scale)
        scale = self.fc2(scale)
        scale = self.ac2(scale)
        return scale * x


class InvertedResidualConfig:
    # kernel_size, in_channel, out_channel, exp_ratio, strides, use_SE, drop_connect_rate
    def __init__(self,
                 kernel: int,          # 3 or 5
                 input_c: int,
                 out_c: int,
                 expanded_ratio: int,  # 1 or 6
                 stride: int,          # 1 or 2
                 use_se: bool,         # True
                 drop_rate: float,
                 index: str,           # 1a, 2a, 2b, ...
                 width_coefficient: float):
        self.input_c = self.adjust_channels(input_c, width_coefficient)
        self.kernel = kernel
        self.expanded_c = self.input_c * expanded_ratio
        self.out_c = self.adjust_channels(out_c, width_coefficient)
        self.use_se = use_se
        self.stride = stride
        self.drop_rate = drop_rate
        self.index = index

    @staticmethod
    def adjust_channels(channels: int, width_coefficient: float):
        return _make_divisible(channels * width_coefficient, 8)


class InvertedResidual(nn.Module):
    def __init__(self,
                 cnf: InvertedResidualConfig,
                 norm_layer: Callable[..., nn.Module]):
        super(InvertedResidual, self).__init__()

        if cnf.stride not in [1, 2]:
            raise ValueError("illegal stride value.")

        self.use_res_connect = (cnf.stride == 1 and cnf.input_c == cnf.out_c)

        layers = OrderedDict()
        activation_layer = nn.SiLU  # alias Swish

        # expand
        if cnf.expanded_c != cnf.input_c:
            layers.update({"expand_conv": ConvBNActivation(cnf.input_c,
                                                           cnf.expanded_c,
                                                           kernel_size=1,
                                                           norm_layer=norm_layer,
                                                           activation_layer=activation_layer)})

        # depthwise
        layers.update({"dwconv": ConvBNActivation(cnf.expanded_c,
                                                  cnf.expanded_c,
                                                  kernel_size=cnf.kernel,
                                                  stride=cnf.stride,
                                                  groups=cnf.expanded_c,
                                                  norm_layer=norm_layer,
                                                  activation_layer=activation_layer)})

        if cnf.use_se:
            layers.update({"se": SqueezeExcitation(cnf.input_c,
                                                   cnf.expanded_c)})

        # project
        layers.update({"project_conv": ConvBNActivation(cnf.expanded_c,
                                                        cnf.out_c,
                                                        kernel_size=1,
                                                        norm_layer=norm_layer,
                                                        activation_layer=nn.Identity)})

        self.block = nn.Sequential(layers)
        self.out_channels = cnf.out_c
        self.is_strided = cnf.stride > 1

        # the DropPath (stochastic depth) layer is only applied when the shortcut connection is used
        if self.use_res_connect and cnf.drop_rate > 0:
            self.dropout = DropPath(cnf.drop_rate)
        else:
            self.dropout = nn.Identity()

    def forward(self, x: Tensor) -> Tensor:
        result = self.block(x)
        result = self.dropout(result)
        if self.use_res_connect:
            result += x

        return result


class EfficientNet(nn.Module):
    def __init__(self,
                 width_coefficient: float,
                 depth_coefficient: float,
                 num_classes: int = 1000,
                 dropout_rate: float = 0.2,
                 drop_connect_rate: float = 0.2,
                 block: Optional[Callable[..., nn.Module]] = None,
                 norm_layer: Optional[Callable[..., nn.Module]] = None
                 ):
        super(EfficientNet, self).__init__()

        # kernel_size, in_channel, out_channel, exp_ratio, strides, use_SE, drop_connect_rate, repeats
        default_cnf = [[3, 32, 16, 1, 1, True, drop_connect_rate, 1],
                       [3, 16, 24, 6, 2, True, drop_connect_rate, 2],
                       [5, 24, 40, 6, 2, True, drop_connect_rate, 2],
                       [3, 40, 80, 6, 2, True, drop_connect_rate, 3],
                       [5, 80, 112, 6, 1, True, drop_connect_rate, 3],
                       [5, 112, 192, 6, 2, True, drop_connect_rate, 4],
                       [3, 192, 320, 6, 1, True, drop_connect_rate, 1]]

        def round_repeats(repeats):
            """Round number of repeats based on depth multiplier."""
            return int(math.ceil(depth_coefficient * repeats))

        if block is None:
            block = InvertedResidual

        if norm_layer is None:
            norm_layer = partial(nn.BatchNorm2d, eps=1e-3, momentum=0.1)

        adjust_channels = partial(InvertedResidualConfig.adjust_channels,
                                  width_coefficient=width_coefficient)

        # build inverted_residual_setting
        bneck_conf = partial(InvertedResidualConfig,
                             width_coefficient=width_coefficient)

        b = 0
        num_blocks = float(sum(round_repeats(i[-1]) for i in default_cnf))
        inverted_residual_setting = []
        for stage, args in enumerate(default_cnf):
            cnf = copy.copy(args)
            for i in range(round_repeats(cnf.pop(-1))):
                if i > 0:
                    # strides equal 1 except first cnf
                    cnf[-3] = 1  # strides
                    cnf[1] = cnf[2]  # input_channel equal output_channel

                cnf[-1] = args[-2] * b / num_blocks  # update dropout ratio
                index = str(stage + 1) + chr(i + 97)  # 1a, 2a, 2b, ...
                inverted_residual_setting.append(bneck_conf(*cnf, index))
                b += 1

        # create layers
        layers = OrderedDict()

        # first conv
        layers.update({"stem_conv": ConvBNActivation(in_planes=3,
                                                     out_planes=adjust_channels(32),
                                                     kernel_size=3,
                                                     stride=2,
                                                     norm_layer=norm_layer)})

        # building inverted residual blocks
        for cnf in inverted_residual_setting:
            layers.update({cnf.index: block(cnf, norm_layer)})

        # build top
        last_conv_input_c = inverted_residual_setting[-1].out_c
        last_conv_output_c = adjust_channels(1280)
        layers.update({"top": ConvBNActivation(in_planes=last_conv_input_c,
                                               out_planes=last_conv_output_c,
                                               kernel_size=1,
                                               norm_layer=norm_layer)})

        self.features = nn.Sequential(layers)
        self.avgpool = nn.AdaptiveAvgPool2d(1)

        classifier = []
        if dropout_rate > 0:
            classifier.append(nn.Dropout(p=dropout_rate, inplace=True))
        classifier.append(nn.Linear(last_conv_output_c, num_classes))
        self.classifier = nn.Sequential(*classifier)

        # initial weights
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode="fan_out")
                if m.bias is not None:
                    nn.init.zeros_(m.bias)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.ones_(m.weight)
                nn.init.zeros_(m.bias)
            elif isinstance(m, nn.Linear):
                nn.init.normal_(m.weight, 0, 0.01)
                nn.init.zeros_(m.bias)

    def _forward_impl(self, x: Tensor) -> Tensor:
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.classifier(x)

        return x

    def forward(self, x: Tensor) -> Tensor:
        return self._forward_impl(x)


def efficientnet_b0(num_classes=1000):
    # input image size 224x224
    return EfficientNet(width_coefficient=1.0,
                        depth_coefficient=1.0,
                        dropout_rate=0.2,
                        num_classes=num_classes)


def efficientnet_b1(num_classes=1000):
    # input image size 240x240
    return EfficientNet(width_coefficient=1.0,
                        depth_coefficient=1.1,
                        dropout_rate=0.2,
                        num_classes=num_classes)


def efficientnet_b2(num_classes=1000):
    # input image size 260x260
    return EfficientNet(width_coefficient=1.1,
                        depth_coefficient=1.2,
                        dropout_rate=0.3,
                        num_classes=num_classes)


def efficientnet_b3(num_classes=1000):
    # input image size 300x300
    return EfficientNet(width_coefficient=1.2,
                        depth_coefficient=1.4,
                        dropout_rate=0.3,
                        num_classes=num_classes)


def efficientnet_b4(num_classes=1000):
    # input image size 380x380
    return EfficientNet(width_coefficient=1.4,
                        depth_coefficient=1.8,
                        dropout_rate=0.4,
                        num_classes=num_classes)


def efficientnet_b5(num_classes=1000):
    # input image size 456x456
    return EfficientNet(width_coefficient=1.6,
                        depth_coefficient=2.2,
                        dropout_rate=0.4,
                        num_classes=num_classes)


def efficientnet_b6(num_classes=1000):
    # input image size 528x528
    return EfficientNet(width_coefficient=1.8,
                        depth_coefficient=2.6,
                        dropout_rate=0.5,
                        num_classes=num_classes)


def efficientnet_b7(num_classes=1000):
    # input image size 600x600
    return EfficientNet(width_coefficient=2.0,
                        depth_coefficient=3.1,
                        dropout_rate=0.5,
                        num_classes=num_classes)




# The network definition above is repeated because torch.load must have the class definitions in scope to restore the pickled model
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")  # run on the GPU if one is available
print("using {} device.".format(device))


model = torch.load("efficientnet.pth", map_location=device)  # load the saved model onto the current device
model.eval()  # switch to evaluation mode (disables dropout, freezes batch-norm statistics)
with torch.no_grad():  # disable gradient tracking for inference
    outputs = model(image.to(device))  # push the image through the network
    # print(model)  # print the model structure
    # print(outputs)  # print the raw prediction scores
    ans = (outputs.argmax(1)).item()  # the index of the largest score is the predicted class,
    # which maps straight into the classes list above
    print("predicted class:", classes[ans])
    # print(classes[ans])
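    # Optional (illustrative): convert the raw scores to probabilities first
    # probs = F.softmax(outputs, dim=1)
    # print(probs[0, ans].item())  # confidence of the predicted class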

The network-building part of the code is commented in detail; if anything is unclear, please point it out in the comments — thanks!

If you are unsure how to run my code, see my earlier and more detailed open-source ResNet post, which covers the same workflow (training and prediction code included, and you can likewise drop in your own dataset): 手撕Resnet卷积神经网络-pytorch-详细注释版(可以直接替换自己数据集) — 小馨馨的小翟's blog, CSDN

The dataset used here is the cat/dog dataset, already split into training and test sets; the download link is below.

Link: https://pan.baidu.com/s/1_gUznMQnzI0UhvsV7wPgzw
Extraction code: 3ixd

The EfficientNet code can be downloaded here:

Link: https://pan.baidu.com/s/1czqLCOBtcxkwBgIiwq83tg
Extraction code: x15v

Editable PPT of the MBConv structure diagram: https://pan.baidu.com/s/1T1FNRycPgIaBTjsHE25kzA
Extraction code: 05r2