Simple Usage of Torch Script in PyTorch


I. References

TorchScript 简介
Torch Script
Loading a TorchScript Model in C++
TorchScript 解读(一):初识 TorchScript
libtorch教程(一)开发环境搭建:VS+libtorch和Qt+libtorch

II. The Torch Script Model Format

1. Introduction to Torch Script

Torch Script is a format for serializing and optimizing PyTorch models. During the conversion, a torch.nn.Module model is turned into a torch.jit.ScriptModule model. TorchScript is typically used as an intermediate representation (IR).

The main use of Torch Script is model deployment: it records an IR that is convenient for inference optimization. Edits to the computation graph are usually aimed at goals such as performance and do not add new functionality to the model itself.

Model format  | Supported language | Typical use
PyTorch model | Python             | Model training
Torch Script  | C++                | Model inference, model deployment

2. Generating a Torch Script Model

There are two ways to convert a PyTorch model to Torch Script: torch.jit.trace and torch.jit.script.

As its name suggests, the primary interface to PyTorch is the Python programming language. While Python is a suitable and preferred language for many scenarios requiring dynamism and ease of iteration, there are equally many situations where precisely these properties of Python are unfavorable. One environment in which the latter often applies is production – the land of low latencies and strict deployment requirements. For production scenarios, C++ is very often the language of choice, even if only to bind it into another language like Java, Rust or Go. The following paragraphs will outline the path PyTorch provides to go from an existing Python model to a serialized representation that can be loaded and executed purely from C++, with no dependency on Python.

A PyTorch model’s journey from Python to C++ is enabled by Torch Script, a representation of a PyTorch model that can be understood, compiled and serialized by the Torch Script compiler. If you are starting out from an existing PyTorch model written in the vanilla “eager” API, you must first convert your model to Torch Script.

There exist two ways of converting a PyTorch model to Torch Script. The first is known as tracing, a mechanism in which the structure of the model is captured by evaluating it once using example inputs, and recording the flow of those inputs through the model. This is suitable for models that make limited use of control flow. The second approach is to add explicit annotations to your model that inform the Torch Script compiler that it may directly parse and compile your model code, subject to the constraints imposed by the Torch Script language.

2.1 Tracing mode (trace)

Converting to Torch Script via Tracing
How to convert your PyTorch model to TorchScript

Purpose: convert a model without control flow to Torch Script and produce a ScriptModule object.

Function prototype: torch.jit.trace

Tracing means running the model once for inference, recording every computation executed along the way, and assembling those records into a computation graph, i.e. a static graph of the model.

The drawback of tracing is that it cannot recognize control flow in the model (such as loops).

To convert a PyTorch model to Torch Script via tracing, you must pass an instance of your model along with an example input to the torch.jit.trace function. This will produce a torch.jit.ScriptModule object with the trace of your model evaluation embedded in the module’s forward method.

import torch
import torchvision


# An instance of your model.
model = torchvision.models.resnet18(pretrained=True)

# Switch the model to eval mode
model.eval()

# An example input you would normally provide to your model's forward() method.
dummy_input = torch.rand(1, 3, 224, 224)

# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, dummy_input)

# Save the TorchScript model
traced_script_module.save("traced_resnet_model.pt")

output = traced_script_module(torch.ones(1, 3, 224, 224))

# Inspect the intermediate representation (IR) and the generated TorchScript code
print(traced_script_module.graph)
print(traced_script_module.code)

# Calling the traced module produces the same result as the original Python module
print(model(dummy_input))
print(traced_script_module(dummy_input))
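The ResNet example above contains no data-dependent control flow, so tracing captures it faithfully. As a minimal sketch of the limitation mentioned earlier (the Gate module below is a made-up example, not part of the original tutorial), tracing a model whose forward() branches on its input only records the branch taken for the example input, and PyTorch emits a TracerWarning:

import torch


class Gate(torch.nn.Module):
    # Hypothetical module: the branch taken depends on the input values.
    def forward(self, x):
        if x.sum() > 0:
            return x * 2
        return x + 10


# Traced with a positive input, so only the `x * 2` branch is recorded.
traced_gate = torch.jit.trace(Gate(), torch.ones(3))

print(Gate()(-torch.ones(3)))       # eager:  tensor([9., 9., 9.])
print(traced_gate(-torch.ones(3)))  # traced: tensor([-2., -2., -2.]) -- wrong branch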

2.2 Scripting mode (with control flow)

Purpose: convert a model with control flow to Torch Script and produce a ScriptModule object.

Function prototype: torch.jit.script

Scripting mode records all control flow correctly by analyzing the model itself: it directly parses the Python code that defines the network and generates an abstract syntax tree (AST).

Because the forward method of this module uses control flow that is dependent on the input, it is not suitable for tracing. Instead, we can convert it to a ScriptModule. In order to convert the module to the ScriptModule, one needs to compile the module with torch.jit.script as follows.

import torch


class MyModule(torch.nn.Module):
    def __init__(self, N, M):
        super(MyModule, self).__init__()
        self.weight = torch.nn.Parameter(torch.rand(N, M))

    def forward(self, input):
        # Data-dependent control flow: the branch taken depends on the input values.
        if input.sum() > 0:
            output = self.weight.mv(input)
        else:
            output = self.weight + input
        return output


my_module = MyModule(10, 20)
sm = torch.jit.script(my_module)

# Save the ScriptModule
sm.save("my_module_model.pt")
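To confirm that scripting preserved the branch, it helps to inspect the generated TorchScript code and compare outputs; a quick check (a sketch using the my_module and sm objects defined above):

# Print the compiled TorchScript code; the if/else from forward() is preserved.
print(sm.code)

# The scripted module behaves like the original eager module on the same input.
x = torch.ones(20)
print(my_module(x))
print(sm(x))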

2.3 Converting a custom model via tracing

import torch
from unet import UNet


# Build the model and load the trained weights
model = UNet(3, 2)
model.load_state_dict(torch.load("best_weights.pth"))
model.eval()

# An example input with the shape the model expects (N, C, H, W)
example = torch.rand(1, 3, 320, 480)
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save("model.pt")
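As a quick sanity check (a sketch, using the objects from the snippet above), the saved model.pt can be reloaded with torch.jit.load and compared against the eager model:

# Reload the serialized ScriptModule and compare it with the eager model.
loaded = torch.jit.load("model.pt")
with torch.no_grad():
    eager_out = model(example)
    traced_out = loaded(example)
print(torch.allclose(eager_out, traced_out, atol=1e-5))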

III. Loading a TorchScript Model in C++

1. Loading a ScriptModule in C++

Create a simple application with the following directory structure:

example-app/
  CMakeLists.txt
  example-app.cpp

1.1 example-app.cpp

#include <torch/script.h> // One-stop header.

#include <iostream>
#include <memory>

int main(int argc, const char* argv[]) {
  if (argc != 2) {
    std::cerr << "usage: example-app <path-to-exported-script-module>\n";
    return -1;
  }


  torch::jit::script::Module module;
  try {
    // Deserialize the ScriptModule from a file using torch::jit::load().
    module = torch::jit::load(argv[1]);
  }
  catch (const c10::Error& e) {
    std::cerr << "error loading the model\n";
    return -1;
  }

  std::cout << "ok\n";
}

The <torch/script.h> header encompasses all relevant includes from the LibTorch library necessary to run the example. Our application accepts the file path to a serialized PyTorch ScriptModule as its only command line argument and then proceeds to deserialize the module using the torch::jit::load() function, which takes this file path as input. In return we receive a torch::jit::script::Module object.

1.2 CMakeLists.txt

cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(custom_ops)

find_package(Torch REQUIRED)

add_executable(example-app example-app.cpp)
target_link_libraries(example-app "${TORCH_LIBRARIES}")
set_property(TARGET example-app PROPERTY CXX_STANDARD 14)

1.3 Build and run

mkdir build
cd build
cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch ..
cmake --build . --config Release
make

Output:

root@4b5a67132e81:/example-app# mkdir build
root@4b5a67132e81:/example-app# cd build
root@4b5a67132e81:/example-app/build# cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch ..
-- The C compiler identification is GNU 5.4.0
-- The CXX compiler identification is GNU 5.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Configuring done
-- Generating done
-- Build files have been written to: /example-app/build
root@4b5a67132e81:/example-app/build# make
Scanning dependencies of target example-app
[ 50%] Building CXX object CMakeFiles/example-app.dir/example-app.cpp.o
[100%] Linking CXX executable example-app
[100%] Built target example-app

1.4 Running the app

If we supply the path to the traced ResNet18 model traced_resnet_model.pt we created earlier to the resulting example-app binary, we should be rewarded with a friendly "ok". Please note, if you try to run this example with my_module_model.pt you will get an error saying that your input is of an incompatible shape: my_module_model.pt expects a 1D input instead of a 4D one.

root@4b5a67132e81:/example-app/build# ./example-app <path_to_model>/traced_resnet_model.pt
ok
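Before running the C++ app against my_module_model.pt, a quick way to see what input it expects is to load it back in Python; a minimal sketch (assuming my_module_model.pt from section 2.2 is in the working directory):

import torch

# my_module_model.pt was scripted from MyModule(10, 20): its weight is 10x20,
# so forward() expects a 1D tensor of length 20, not a 1x3x224x224 image.
m = torch.jit.load("my_module_model.pt")
print(m(torch.ones(20)).shape)  # torch.Size([10])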

2. Running inference on the ScriptModule in C++

Add the model inference code to the main() function:

// Create a vector of inputs.
std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::ones({1, 3, 224, 224}));

// Execute the model and turn its output into a tensor.
at::Tensor output = module.forward(inputs).toTensor();
std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';

Build and run:

root@4b5a67132e81:/example-app/build# make
Scanning dependencies of target example-app
[ 50%] Building CXX object CMakeFiles/example-app.dir/example-app.cpp.o
[100%] Linking CXX executable example-app
[100%] Built target example-app
root@4b5a67132e81:/example-app/build# ./example-app traced_resnet_model.pt
-0.2698 -0.0381  0.4023 -0.3010 -0.0448
[ Variable[CPUFloatType]{1,5} ]
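To cross-check the C++ output, the same five values can be computed from Python; a sketch, using the traced ResNet model saved earlier:

import torch

# Cross-check: the first five logits computed in Python should match the
# values printed by the C++ program above (same all-ones input).
m = torch.jit.load("traced_resnet_model.pt")
with torch.no_grad():
    out = m(torch.ones(1, 3, 224, 224))
print(out[0, :5])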
