NVIDIA's Structured Pruning Tool, Apex Automatic Sparsity (ASP), Part 1: Usage
Apex is a PyTorch extension library maintained by NVIDIA that bundles tools such as mixed-precision training and distributed training; its goal is to let users adopt these freshly minted training tools as early as possible. ASP (Automatic Sparsity) is the module in Apex for sparse model pruning.
Project: NVIDIA/apex: A PyTorch Extension: Tools for easy mixed precision and distributed training in Pytorch (https://github.com/NVIDIA/apex)
This post focuses on ASP (Automatic Sparsity), the pruning module in Apex: adding just two lines of code to a Python training script applies 2:4 structured pruning to a model, and an optional channel permutation algorithm can be enabled to keep the parameters with larger absolute values, minimizing the impact of pruning on model accuracy.
Installation
To install from source, clone the GitHub repo and check out the 23.05 tag:
git clone https://github.com/NVIDIA/apex.git
cd apex
git checkout 23.05
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--permutation_search" ./
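If the build succeeds, importing ASP should report that the permutation search kernels were found (the run log at the end of this post shows the expected messages). A minimal check:

# Minimal post-install check: this import should succeed and, when the
# permutation search extension was built, print that the CUDA kernels were found.
from apex.contrib.sparsity import ASP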
Usage
Sparsifying a model with ASP takes only two steps:
# 1. Import the sparsity module
from apex.contrib.sparsity import ASP
# 2. Use ASP to sparsify the model and the optimizer
ASP.prune_trained_model(model, optimizer)
The prune_trained_model function computes the sparse masks and applies them to the model's weights.
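Under the hood, it is essentially a convenience wrapper around ASP's lower-level API; a minimal sketch of roughly equivalent calls (based on the public API at the 23.05 tag, so defaults may differ across versions):

# Sketch: roughly what ASP.prune_trained_model(model, optimizer) does.
ASP.init_model_for_pruning(model, mask_calculator="m4n2_1d",
                           whitelist=[torch.nn.Linear, torch.nn.Conv2d])
ASP.init_optimizer_for_pruning(optimizer)
ASP.compute_sparse_masks()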
In general, the model needs to be retrained after sparsification; the whole process can be written as:
ASP.prune_trained_model(model, optimizer)

x, y = DataLoader(args)
for epoch in range(epochs):
    y_pred = model(x)
    loss = loss_function(y_pred, y)
    loss.backward()
    optimizer.step()

torch.save(...)
Non-standard usage:
ASP can also generate sparse randomly initialized parameters for a model, enabling more advanced experiments. To recompute the weight sparsity masks during training, call the ASP.recompute_sparse_masks function between training steps to regenerate the model's sparse masks, as sketched below.
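A minimal sketch of that pattern (recompute_interval is a hypothetical knob added for illustration; regenerating masks may also require initializing ASP with allow_recompute_mask=True):

recompute_interval = 100  # hypothetical: how often to refresh the masks
for step, (inputs, labels) in enumerate(trainloader):
    optimizer.zero_grad()
    loss = criterion(model(inputs), labels)
    loss.backward()
    optimizer.step()
    if (step + 1) % recompute_interval == 0:
        ASP.recompute_sparse_masks()  # regenerate the 2:4 masks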
Channel Permutation
The project can also enable a channel permutation algorithm to preserve as much accuracy as possible in the structurally pruned model.
As the name suggests, the algorithm permutes the weight matrix along its channel dimension and adjusts the neighboring layers accordingly.
When permutation is enabled, the final model accuracy depends heavily on the quality of the permutation search; the search can be accelerated with the Apex CUDA extension, and takes very long otherwise.
In the Installation step above, the --global-option="--permutation_search" flag is precisely what installs the permutation search CUDA extension.
If you do not want to enable channel permutation, set the allow_permutation parameter of ASP.init_model_for_pruning to False, as in the sketch below; this parameter will come up again in the follow-up source-code analysis.
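For example (a sketch using the lower-level calls shown earlier; argument names follow the 23.05 tag):

ASP.init_model_for_pruning(model, mask_calculator="m4n2_1d",
                           whitelist=[torch.nn.Linear, torch.nn.Conv2d],
                           allow_permutation=False)  # skip the permutation search
ASP.init_optimizer_for_pruning(optimizer)
ASP.compute_sparse_masks()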
Note that when using multiple GPUs, the same random seed must be set on all of them; set_identical_seed in permutation_lib.py does exactly this:
import torch
import numpy
import random
torch.manual_seed(identical_seed)
torch.cuda.manual_seed_all(identical_seed)
numpy.random.seed(identical_seed)
random.seed(identical_seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
Tips:
- When enabling structured sparsity with ASP on a new (not yet sparsified) inference model, call both init_model_for_pruning and compute_sparse_masks.
- init_model_for_pruning adds new mask buffers to the model's layers to hold the masks generated by compute_sparse_masks, so after calling compute_sparse_masks the model's state_dict contains extra entries, all with names ending in _mma_mask.
- For a model whose structured sparsity was already enabled with ASP, reloading it after saving requires first creating a new model and calling init_model_for_pruning to add the mask buffers before loading the state_dict; otherwise loading fails because the new model's state_dict does not match the saved one. A sketch of this reload pattern follows the list.
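A minimal sketch of that reload pattern (model, optimizer, and checkpoint name are taken from the example below):

# Recreate the model, add the mask buffers, then load the sparse checkpoint.
model = ConvNet().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
ASP.init_model_for_pruning(model, mask_calculator="m4n2_1d",
                           whitelist=[torch.nn.Linear, torch.nn.Conv2d])
ASP.init_optimizer_for_pruning(optimizer)
model.load_state_dict(torch.load('model_weights_sparse.pth'))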
Example:
Below is a simple Conv-FC network that is trained, pruned with ASP, and then trained again:
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
from apex.contrib.sparsity import ASP
# Define the convolutional neural network
class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
        self.relu1 = nn.ReLU()
        self.pool1 = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.relu2 = nn.ReLU()
        self.pool2 = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(32 * 7 * 7, 128)
        self.relu3 = nn.ReLU()
        self.fc2 = nn.Linear(128, 10)
        self.sig = nn.Sigmoid()

    def forward(self, x):
        x = self.pool1(self.relu1(self.conv1(x)))
        x = self.pool2(self.relu2(self.conv2(x)))
        x = x.view(-1, 32 * 7 * 7)
        x = self.relu3(self.fc1(x))
        x = self.fc2(x)
        x = self.sig(x)
        return x
def train_loop(model, optimizer, criterion):
    num_epochs = 1
    for epoch in range(num_epochs):
        running_loss = 0.0
        for i, data in enumerate(trainloader, 0):
            inputs, labels = data[0].to(device), data[1].to(device)
            optimizer.zero_grad()
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
            if i % 100 == 99:
                print(f'Epoch [{epoch+1}/{num_epochs}], Batch [{i+1}/{len(trainloader)}], Loss: {running_loss/100:.4f}')
                running_loss = 0.0
def val(model):
    correct = 0
    total = 0
    model.eval()
    with torch.no_grad():
        for images, labels in testloader:
            images, labels = images.to(device), labels.to(device)
            outputs = model(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    accuracy = correct / total * 100
    print("Test Accuracy :{}%".format(accuracy))
    return accuracy
def main():
    # Train the dense network
    print('Begin to train the dense network!')
    train_loop(model, optimizer, criterion)
    print('Finish training the dense network!')
    accuracy_dense = val(model)
    print('The accuracy of the trained dense network is : {}'.format(accuracy_dense))
    torch.save(model.state_dict(), 'model_weights.pth')

    ASP.prune_trained_model(model, optimizer)
    accuracy_sparse = val(model)
    print('The accuracy of the pruned network is : {}'.format(accuracy_sparse))

    print('Begin to train the sparse network!')
    train_loop(model, optimizer, criterion)
    print('Finish training the sparse network!')
    accuracy_sparse = val(model)
    print('The accuracy of the trained sparse network is : {}'.format(accuracy_sparse))
    torch.save(model.state_dict(), 'model_weights_sparse.pth')
    print('Training finished!')
if __name__ == '__main__':
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.5,), (0.5,))
    ])
    trainset = torchvision.datasets.MNIST(root='./data', train=True, download=True, transform=transform)
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
    testset = torchvision.datasets.MNIST(root='./data', train=False, download=True, transform=transform)
    testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=False)

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model = ConvNet().to(device)
    print('original weights has been saved!')
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
    main()
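After pruning, the 2:4 pattern can be sanity-checked on a weight matrix, assuming the m4n2_1d mask groups four consecutive weights along the input dimension (a quick check added here for illustration, not part of the original script):

# Every group of 4 consecutive fc1 weights should have at most 2 nonzeros.
w = model.fc1.weight.detach().cpu().reshape(-1, 4)
print('2:4 pattern holds:', bool(((w != 0).sum(dim=1) <= 2).all()))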
Output:
root:/home/shanlin/cnn_demo# python train.py
Found permutation search CUDA kernels
[ASP][Info] permutation_search_kernels can be imported.
original weights has been saved!
Begin to train the dense network!
The accuracy of the trained dense network is : 94.77
...
The accuracy of the pruned network is : 94.15
...
The accuracy of the trained sparse network is : 96.6
Training finished!
root:/home/shanlin/cnn_demo#
As you can see, accuracy reached 94.77 after the first training run, dropped to 94.15 after pruning, and climbed back to 96.6 after retraining, even higher than the first run; this is probably because the model was thrown together casually and the dataset is too simple.