Triton Server Quick Start

Official documentation

Background

  • In industrial settings, what blocks model deployment is often not the model itself but the available compute
  • Many high-accuracy models carry a large number of parameters
  • Triton Server is NVIDIA's open-source high-performance inference server, a tool for accelerating model inference on CPUs and GPUs

What It Is

  • Triton is a model inference serving tool
  • It provides dynamic batching, concurrent execution, model ensembles, and streaming inputs; model pipelines are expressed through configuration
  • A script can itself act as a model (e.g. the Python backend), so intermediate computation can stay in GPU memory
  • Triton Server exposes GRPC/HTTP APIs and exports Prometheus metrics for monitoring GPU utilization, latency, memory usage, and inference throughput
  • Inference requests can be sent with the Triton client
  • Triton supports mainstream inference backends such as ONNXRuntime, TensorFlow SavedModel, and TensorRT
  • Triton serves deep learning models as well as classic machine learning models such as logistic regression
  • Triton runs on GPUs and on x86 and ARM CPUs; it can also target domestic GCU accelerators (the GCU build of ONNXRUNTIME is required)
  • Models can be updated live in production without restarting Triton Server
  • Triton supports multi-GPU and multi-node inference for models too large to fit in a single GPU's memory
  • Performance evaluation is supported, covering GPU utilization, server throughput, and server latency

Code Example

Prepare an ONNX model

  • Normally you would export your own trained ONNX model in advance; here we use the ONNX model provided by the official tutorial
# Create model repository with placeholder for model and version 1
mkdir -p ./models/densenet_onnx/1

# Download model and place it in model repository
wget -O ./models/densenet_onnx/1/model.onnx https://contentmamluswest001.blob.core.windows.net/content/14b2744cf8d6418c87ffddc3f3127242/9502630827244d60a1214f250e3bbca7/08aed7327d694b8dbaee2c97b8d0fcba/densenet121-1.2.onnx

Create a minimal model configuration

vim ./models/densenet_onnx/config.pbtxt
name: "densenet_onnx"
backend: "onnxruntime"
max_batch_size: 0
input: [
  {
    name: "data_0",
    data_type: TYPE_FP32,
    dims: [ 1, 3, 224, 224]
  }
]
output: [
  {
    name: "fc6_1",
    data_type: TYPE_FP32,
    dims: [ 1, 1000, 1, 1 ]
  }
]

The input and output tensors defined here can be inspected visually with the netron tool; a programmatic check is sketched below.
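
If you prefer a programmatic check over netron, here is a minimal sketch using the onnx Python package (an assumption of this example; install it with pip install onnx). It prints exactly the information needed to fill in config.pbtxt:

# Print the input/output tensor names and shapes of the downloaded model.
import onnx

model = onnx.load("./models/densenet_onnx/1/model.onnx")

def shape_of(value_info):
    return [d.dim_value for d in value_info.type.tensor_type.shape.dim]

# Older ONNX exports list weight initializers as graph inputs; skip them.
initializers = {t.name for t in model.graph.initializer}

for value_info in model.graph.input:
    if value_info.name in initializers:
        continue
    print("input :", value_info.name, shape_of(value_info))
for value_info in model.graph.output:
    print("output:", value_info.name, shape_of(value_info))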

Pull the two official images

docker pull nvcr.io/nvidia/tritonserver:23.02-py3  # triton server
docker pull nvcr.io/nvidia/tritonserver:23.02-py3-sdk # triton client

Start Triton Server

  • Start the container
# Start server container in the background
docker run -itd --gpus=all --network=host -v $PWD:/mnt --name triton-server nvcr.io/nvidia/tritonserver:23.02-py3 bash
  • Enter the container and run
[~]# tritonserver --model-repository=/mnt/models --model-control-mode=poll
I0403 06:07:10.866992 1186 server.cc:522] 
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+

I0403 06:07:10.867083 1186 server.cc:549] 
+-------------+-------------------------------------------------------------------------+--------+
| Backend     | Path                                                                    | Config |
+-------------+-------------------------------------------------------------------------+--------+
| pytorch     | /opt/tritonserver/backends/pytorch/libtriton_pytorch.so                 | {}     |
| tensorflow  | /opt/tritonserver/backends/tensorflow1/libtriton_tensorflow1.so         | {}     |
| onnxruntime | /opt/tritonserver/backends/onnxruntime/libtriton_onnxruntime.so         | {}     |
| openvino    | /opt/tritonserver/backends/openvino_2021_2/libtriton_openvino_2021_2.so | {}     |
+-------------+-------------------------------------------------------------------------+--------+

I0403 06:07:10.867131 1186 server.cc:592] 
+---------------+---------+--------+
| Model         | Version | Status |
+---------------+---------+--------+
| densenet_onnx | 2       | READY  |
+---------------+---------+--------+

I0403 06:07:10.947730 1186 metrics.cc:623] Collecting metrics for GPU 0: NVIDIA GeForce RTX 3090
I0403 06:07:10.947760 1186 metrics.cc:623] Collecting metrics for GPU 1: NVIDIA GeForce RTX 3090
I0403 06:07:10.947772 1186 metrics.cc:623] Collecting metrics for GPU 2: NVIDIA GeForce RTX 3090
I0403 06:07:10.947784 1186 metrics.cc:623] Collecting metrics for GPU 3: NVIDIA GeForce RTX 3090
I0403 06:07:10.947800 1186 metrics.cc:623] Collecting metrics for GPU 4: NVIDIA GeForce RTX 3090
I0403 06:07:10.947819 1186 metrics.cc:623] Collecting metrics for GPU 5: NVIDIA GeForce RTX 3090
I0403 06:07:10.947852 1186 metrics.cc:623] Collecting metrics for GPU 6: NVIDIA GeForce RTX 3090
I0403 06:07:10.947886 1186 metrics.cc:623] Collecting metrics for GPU 7: NVIDIA GeForce RTX 3090
I0403 06:07:10.949215 1186 tritonserver.cc:1932] 
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Option                           | Value                                                                                                                                                                                        |
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| server_id                        | triton                                                                                                                                                                                       |
| server_version                   | 2.19.0                                                                                                                                                                                       |
| server_extensions                | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data statistics trace |
| model_repository_path[0]         | /mnt/models                                                                                                                                                                                  |
| model_control_mode               | MODE_POLL                                                                                                                                                                                    |
| strict_model_config              | 1                                                                                                                                                                                            |
| rate_limit                       | OFF                                                                                                                                                                                          |
| pinned_memory_pool_byte_size     | 268435456                                                                                                                                                                                    |
| cuda_memory_pool_byte_size{0}    | 67108864                                                                                                                                                                                     |
| cuda_memory_pool_byte_size{1}    | 67108864                                                                                                                                                                                     |
| cuda_memory_pool_byte_size{2}    | 67108864                                                                                                                                                                                     |
| cuda_memory_pool_byte_size{3}    | 67108864                                                                                                                                                                                     |
| cuda_memory_pool_byte_size{4}    | 67108864                                                                                                                                                                                     |
| cuda_memory_pool_byte_size{5}    | 67108864                                                                                                                                                                                     |
| cuda_memory_pool_byte_size{6}    | 67108864                                                                                                                                                                                     |
| cuda_memory_pool_byte_size{7}    | 67108864                                                                                                                                                                                     |
| response_cache_byte_size         | 0                                                                                                                                                                                            |
| min_supported_compute_capability | 6.0                                                                                                                                                                                          |
| strict_readiness                 | 1                                                                                                                                                                                            |
| exit_timeout                     | 30                                                                                                                                                                                           |
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

I0403 06:07:10.950873 1186 grpc_server.cc:4375] Started GRPCInferenceService at 0.0.0.0:8001
I0403 06:07:10.951176 1186 http_server.cc:3075] Started HTTPService at 0.0.0.0:8000
I0403 06:07:10.992539 1186 http_server.cc:178] Started Metrics Service at 0.0.0.0:8002

1. The log shows that the densenet_onnx model has been loaded and that the GRPC, HTTP, and Metrics endpoints have been started; a quick test request against the HTTP endpoint is sketched below
2. The --model-control-mode=poll flag enables hot model updates: when a model file changes or a new version is added, Triton first brings up instances of the new version and only then unloads the old version or instances
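
Once the endpoints are up, a test request can be sent from the SDK container with the tritonclient Python package; a minimal sketch, assuming the server address and the tensor names from config.pbtxt above:

# Send one dummy image through densenet_onnx over HTTP and print the top class.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="127.0.0.1:8000")

inp = httpclient.InferInput("data_0", [1, 3, 224, 224], "FP32")
inp.set_data_from_numpy(np.random.rand(1, 3, 224, 224).astype(np.float32))
out = httpclient.InferRequestedOutput("fc6_1")

result = client.infer(model_name="densenet_onnx", inputs=[inp], outputs=[out])
scores = result.as_numpy("fc6_1")
print("output shape:", scores.shape, "argmax:", int(scores.flatten().argmax()))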

  • Add new model versions
[~]# cp -rf 1 2   # run inside the densenet_onnx model directory
[~]# cp -rf 1 3

I0403 06:07:26.109494 1186 onnxruntime.cc:2400] TRITONBACKEND_ModelInitialize: densenet_onnx (version 3)
I0403 06:07:26.119616 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 0)
I0403 06:07:26.319224 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 1)
I0403 06:07:26.495285 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 2)
I0403 06:07:26.669370 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 3)
I0403 06:07:26.829762 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 4)
I0403 06:07:27.007662 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 5)
I0403 06:07:27.182506 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 6)
I0403 06:07:27.367420 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 7)
I0403 06:07:27.532531 1186 model_repository_manager.cc:1149] successfully loaded 'densenet_onnx' version 3
I0403 06:07:27.532561 1186 model_repository_manager.cc:1026] unloading: densenet_onnx:2
I0403 06:07:27.532729 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.548199 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.561028 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.573967 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.585593 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.596050 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.605498 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.614892 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.624120 1186 onnxruntime.cc:2423] TRITONBACKEND_ModelFinalize: delete model state
I0403 06:07:27.624158 1186 model_repository_manager.cc:1132] successfully unloaded 'densenet_onnx' version 2
I0403 06:14:42.551308 1186 model_repository_manager.cc:994] loading: densenet_onnx:3
I0403 06:14:42.651625 1186 onnxruntime.cc:2400] TRITONBACKEND_ModelInitialize: densenet_onnx (version 3)
I0403 06:14:42.659502 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 0)
I0403 06:14:42.851975 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 1)
I0403 06:14:43.027086 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 2)
I0403 06:14:43.203822 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 3)
I0403 06:14:43.378325 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 4)
I0403 06:14:43.552427 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 5)
I0403 06:14:43.732855 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 6)
I0403 06:14:43.903087 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 7)
I0403 06:14:44.071766 1186 model_repository_manager.cc:1149] successfully loaded 'densenet_onnx' version 3
I0403 06:14:44.071795 1186 model_repository_manager.cc:1026] unloading: densenet_onnx:3
I0403 06:14:44.071970 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.081007 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.089658 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.098768 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.107905 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.116819 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.125697 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.134503 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.143469 1186 onnxruntime.cc:2423] TRITONBACKEND_ModelFinalize: delete model state
I0403 06:14:44.143503 1186 model_repository_manager.cc:1132] successfully unloaded 'densenet_onnx' version 3

In the log above, versions 2 and 3 are added in turn, and Triton Server ends up serving version 3; the check below confirms what is currently loaded.
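
To double-check which versions are loaded after the swap, the model repository index can be queried; a small sketch with the same tritonclient package (same endpoint assumption as above):

# List every model and version Triton currently knows about, with its state.
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="127.0.0.1:8000")
for entry in client.get_model_repository_index():
    print(entry)  # e.g. {'name': 'densenet_onnx', 'version': '3', 'state': 'READY'}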

  • Screenshot of the full run (image omitted)

Client Load Testing

  • Start the client
docker run -itd --gpus=all --network=host -v $PWD:/mnt --name triton-client nvcr.io/nvidia/tritonserver:23.02-py3-sdk bash
  • Load test with perf_analyzer
[~]# perf_analyzer -m densenet_onnx -u 127.0.0.1:8000 --concurrency-range 1:6

Inferences/Second vs. Client Average Batch Latency
Concurrency: 1, throughput: 96.1522 infer/sec, latency 10396 usec
Concurrency: 2, throughput: 197.181 infer/sec, latency 10138 usec
Concurrency: 3, throughput: 305.046 infer/sec, latency 9832 usec
Concurrency: 4, throughput: 425.759 infer/sec, latency 9392 usec
Concurrency: 5, throughput: 564.87 infer/sec, latency 8850 usec
Concurrency: 6, throughput: 704.574 infer/sec, latency 8514 usec

Model Analyzer

  • Inside the triton-server container
tritonserver --model-repository=/mnt/models --model-control-mode=explicit # explicit mode is required, otherwise Model Analyzer cannot load/unload models
  • Inside the triton-client container
root@53:/mnt# cat config.yaml # remote mode is used so the same workflow can also measure throughput and latency on non-NVIDIA hardware (e.g. domestic accelerators) without needing a local GPU
model_repository: /mnt/models
#checkpoint_directory: /mnt/checkpoints/
profile_models: densenet_onnx
triton_grpc_endpoint: 127.0.0.1:9001
triton_metrics_url: 127.0.0.1:9002
triton_launch_mode: remote

root@53:/mnt# rm -rf output_model_repository/ checkpoints/ && model-analyzer profile -f config.yaml  # run the profile from the config file
[Model Analyzer] Initializing GPUDevice handles
[Model Analyzer] Using GPU 0 NVIDIA GeForce RTX 3090 with UUID GPU-b6d3bb44-b607-e9c1-c898-3977340c20a4
[Model Analyzer] Using GPU 1 NVIDIA GeForce RTX 3090 with UUID GPU-f37fdb1b-77c7-ff1f-21c0-e2db53fe0818
[Model Analyzer] Using GPU 2 NVIDIA GeForce RTX 3090 with UUID GPU-1a0e40f7-65eb-9694-f91c-253808416e71
[Model Analyzer] Using GPU 3 NVIDIA GeForce RTX 3090 with UUID GPU-c889529d-734f-8a13-f820-02597663a704
[Model Analyzer] Using GPU 4 NVIDIA GeForce RTX 3090 with UUID GPU-9f08b528-c421-bc60-2fc6-7f906e13404a
[Model Analyzer] Using GPU 5 NVIDIA GeForce RTX 3090 with UUID GPU-9c9fbba1-0558-4f8e-1534-8ff8e8b03a6c
[Model Analyzer] Using GPU 6 NVIDIA GeForce RTX 3090 with UUID GPU-55808174-5a3e-8082-8759-b248794a1e34
[Model Analyzer] Using GPU 7 NVIDIA GeForce RTX 3090 with UUID GPU-2a0fd91b-3ca8-8249-2d0c-70c7853491a6
[Model Analyzer] Using remote Triton Server
[Model Analyzer] WARNING: GPU memory metrics reported in the remote mode are not accurate. Model Analyzer uses Triton explicit model control to load/unload models. Some frameworks do not release the GPU memory even when the memory is not being used. Consider using the "local" or "docker" mode if you want to accurately monitor the GPU memory usage for different models.
[Model Analyzer] WARNING: Config sweep parameters are ignored in the "remote" mode because Model Analyzer does not have access to the model repository of the remote Triton Server.
[Model Analyzer] No checkpoint file found, starting a fresh run.
[Model Analyzer] Profiling server only metrics...
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=1
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=2
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=4
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=8
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=16
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=32
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=64
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=128
[Model Analyzer] No longer increasing concurrency as throughput has plateaued
[Model Analyzer] Saved checkpoint to /mnt/checkpoints/0.ckpt
[Model Analyzer] Profile complete. Profiled 1 configurations for models: ['densenet_onnx']
[Model Analyzer] 
[Model Analyzer] WARNING: GPU output field "gpu_used_memory", has no data
[Model Analyzer] WARNING: GPU output field "gpu_utilization", has no data
[Model Analyzer] WARNING: GPU output field "gpu_power_usage", has no data
[Model Analyzer] WARNING: Server output field "gpu_used_memory", has no data
[Model Analyzer] WARNING: Server output field "gpu_utilization", has no data
[Model Analyzer] WARNING: Server output field "gpu_power_usage", has no data
[Model Analyzer] Exporting inference metrics to /mnt/results/metrics-model-inference.csv
[Model Analyzer] WARNING: Requested top 3 configs, but found only 1. Showing all available configs for this model.
[Model Analyzer] WARNING: Requested top 3 configs, but found only 1. Showing all available configs for this model.
[Model Analyzer] WARNING: Requested top 3 configs, but found only 1. Showing all available configs for this model.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] Exporting Summary Report to /mnt/reports/summaries/densenet_onnx/result_summary.pdf
[Model Analyzer] WARNING: Requested top 3 configs, but found only 1. Showing all available configs for this model.
[Model Analyzer] WARNING: Requested top 3 configs, but found only 1. Showing all available configs for this model.
[Model Analyzer] To generate detailed reports for the 1 best configurations, run `model-analyzer report --report-model-configs densenet_onnx --export-path /mnt --config-file config.yaml

root@53:/mnt# ls
checkpoints  config.yaml  models  models23  output_model_repository  plots  reports  results
root@53:/mnt# cat results/metrics-
metrics-model-inference.csv  metrics-server-only.csv      
root@53:/mnt# cat results/metrics-model-inference.csv 
Model,Batch,Concurrency,Model Config Path,Instance Group,Max Batch Size,Satisfies Constraints,Throughput (infer/sec),p99 Latency (ms)
densenet_onnx,1,16,densenet_onnx,8:GPU,0,Yes,1394.9,13.1
densenet_onnx,1,64,densenet_onnx,8:GPU,0,Yes,1384.2,50.0
densenet_onnx,1,32,densenet_onnx,8:GPU,0,Yes,1384.0,25.3
densenet_onnx,1,128,densenet_onnx,8:GPU,0,Yes,1331.5,104.6
densenet_onnx,1,8,densenet_onnx,8:GPU,0,Yes,1215.6,7.7
densenet_onnx,1,4,densenet_onnx,8:GPU,0,Yes,472.0,12.2
densenet_onnx,1,2,densenet_onnx,8:GPU,0,Yes,172.5,17.3
densenet_onnx,1,1,densenet_onnx,8:GPU,0,Yes,95.6,15.6
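
The exported CSV can also be inspected programmatically; a quick sketch, assuming pandas is available in the client container:

# Rank the profiled configurations by throughput, breaking ties by p99 latency.
import pandas as pd

df = pd.read_csv("/mnt/results/metrics-model-inference.csv")
ranked = df.sort_values(["Throughput (infer/sec)", "p99 Latency (ms)"],
                        ascending=[False, True])
print(ranked[["Concurrency", "Throughput (infer/sec)", "p99 Latency (ms)"]].head())
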
  • Generate the visual report
root@53:/mnt# model-analyzer report --report-model-configs densenet_onnx --export-path /mnt --config-file config.yaml
[Model Analyzer] Loaded checkpoint from file /mnt/checkpoints/0.ckpt
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_power_usage' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_power_usage' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_power_usage' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_power_usage' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_power_usage' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_power_usage' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_power_usage' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_power_usage' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] Exporting Detailed Report to /mnt/reports/detailed/densenet_onnx/detailed_report.pdf


  • model-analyzer profile options
root@sse-lg-113-53:/mnt# model-analyzer profile --help
usage: model-analyzer profile [-h] [-f CONFIG_FILE] [-s CHECKPOINT_DIRECTORY] [-i MONITORING_INTERVAL] [-d DURATION_SECONDS] [--collect-cpu-metrics] [--gpus GPUS] [--skip-summary-reports]
                              [-m MODEL_REPOSITORY] [--output-model-repository-path OUTPUT_MODEL_REPOSITORY_PATH] [--override-output-model-repository] [-r CLIENT_MAX_RETRIES]
                              [--client-protocol {http,grpc}] [--profile-models PROFILE_MODELS] [-b BATCH_SIZES] [-c CONCURRENCY] [--reload-model-disable] [--perf-analyzer-timeout PERF_ANALYZER_TIMEOUT]
                              [--perf-analyzer-cpu-util PERF_ANALYZER_CPU_UTIL] [--perf-analyzer-path PERF_ANALYZER_PATH] [--perf-output] [--perf-output-path PERF_OUTPUT_PATH]
                              [--perf-analyzer-max-auto-adjusts PERF_ANALYZER_MAX_AUTO_ADJUSTS] [--triton-launch-mode {local,docker,remote,c_api}] [--triton-docker-image TRITON_DOCKER_IMAGE]
                              [--triton-http-endpoint TRITON_HTTP_ENDPOINT] [--triton-grpc-endpoint TRITON_GRPC_ENDPOINT] [--triton-metrics-url TRITON_METRICS_URL] [--triton-server-path TRITON_SERVER_PATH]
                              [--triton-output-path TRITON_OUTPUT_PATH] [--triton-docker-mounts TRITON_DOCKER_MOUNTS] [--triton-docker-shm-size TRITON_DOCKER_SHM_SIZE]
                              [--triton-install-path TRITON_INSTALL_PATH] [--early-exit-enable] [--run-config-search-max-concurrency RUN_CONFIG_SEARCH_MAX_CONCURRENCY]
                              [--run-config-search-min-concurrency RUN_CONFIG_SEARCH_MIN_CONCURRENCY] [--run-config-search-max-instance-count RUN_CONFIG_SEARCH_MAX_INSTANCE_COUNT]
                              [--run-config-search-min-instance-count RUN_CONFIG_SEARCH_MIN_INSTANCE_COUNT] [--run-config-search-max-model-batch-size RUN_CONFIG_SEARCH_MAX_MODEL_BATCH_SIZE]
                              [--run-config-search-min-model-batch-size RUN_CONFIG_SEARCH_MIN_MODEL_BATCH_SIZE] [--run-config-search-mode {brute,quick}] [--run-config-search-disable]
                              [--run-config-profile-models-concurrently-enable] [-e EXPORT_PATH] [--filename-model-inference FILENAME_MODEL_INFERENCE] [--filename-model-gpu FILENAME_MODEL_GPU]
                              [--filename-server-only FILENAME_SERVER_ONLY] [--num-configs-per-model NUM_CONFIGS_PER_MODEL] [--num-top-model-configs NUM_TOP_MODEL_CONFIGS]
                              [--inference-output-fields INFERENCE_OUTPUT_FIELDS] [--gpu-output-fields GPU_OUTPUT_FIELDS] [--server-output-fields SERVER_OUTPUT_FIELDS] [--latency-budget LATENCY_BUDGET]
                              [--min-throughput MIN_THROUGHPUT]

optional arguments:
  -h, --help            show this help message and exit
  -f CONFIG_FILE, --config-file CONFIG_FILE
                        Path to Config File for subcommand 'profile'.
  -s CHECKPOINT_DIRECTORY, --checkpoint-directory CHECKPOINT_DIRECTORY
                        Full path to directory to which to read and write checkpoints and profile data.
  -i MONITORING_INTERVAL, --monitoring-interval MONITORING_INTERVAL
                        Interval of time between metrics measurements in seconds
  -d DURATION_SECONDS, --duration-seconds DURATION_SECONDS
                        Specifies how long (seconds) to gather server-only metrics
  --collect-cpu-metrics
                        Specify whether CPU metrics are collected or not
  --gpus GPUS           List of GPU UUIDs to be used for the profiling. Use 'all' to profile all the GPUs visible by CUDA.
  --skip-summary-reports
                        Skips the generation of analysis summary reports and tables.
  -m MODEL_REPOSITORY, --model-repository MODEL_REPOSITORY
                        Triton Model repository location
  --output-model-repository-path OUTPUT_MODEL_REPOSITORY_PATH
                        Output model repository path used by Model Analyzer. This is the directory that will contain all the generated model configurations
  --override-output-model-repository
                        Will override the contents of the output model repository and replace it with the new results.
  -r CLIENT_MAX_RETRIES, --client-max-retries CLIENT_MAX_RETRIES
                        Specifies the max number of retries for any requests to Triton server.
  --client-protocol {http,grpc}
                        The protocol used to communicate with the Triton Inference Server
  --profile-models PROFILE_MODELS
                        List of the models to be profiled
  -b BATCH_SIZES, --batch-sizes BATCH_SIZES
                        Comma-delimited list of batch sizes to use for the profiling
  -c CONCURRENCY, --concurrency CONCURRENCY
                        Comma-delimited list of concurrency values or ranges <start:end:step> to be used during profiling
  --reload-model-disable
                        Flag to indicate whether or not to disable model loading and unloading in remote mode.
  --perf-analyzer-timeout PERF_ANALYZER_TIMEOUT
                        Perf analyzer timeout value in seconds.
  --perf-analyzer-cpu-util PERF_ANALYZER_CPU_UTIL
                        Maximum CPU utilization value allowed for the perf_analyzer.
  --perf-analyzer-path PERF_ANALYZER_PATH
                        The full path to the perf_analyzer binary executable
  --perf-output         Enables the output from the perf_analyzer to a file specified by perf_output_path. If perf_output_path is None, output will be written to stdout.
  --perf-output-path PERF_OUTPUT_PATH
                        Path to the file to which write perf_analyzer output, if enabled.
  --perf-analyzer-max-auto-adjusts PERF_ANALYZER_MAX_AUTO_ADJUSTS
                        Maximum number of times perf_analyzer is launched with auto adjusted parameters in an attempt to profile a model.
  --triton-launch-mode {local,docker,remote,c_api}
                        The method by which to launch Triton Server. 'local' assumes tritonserver binary is available locally. 'docker' pulls and launches a triton docker container with the specified
                        version. 'remote' connects to a running server using given http, grpc and metrics endpoints. 'c_api' allows direct benchmarking of Triton locallywithout the use of endpoints.
  --triton-docker-image TRITON_DOCKER_IMAGE
                        Triton Server Docker image tag
  --triton-http-endpoint TRITON_HTTP_ENDPOINT
                        Triton Server HTTP endpoint url used by Model Analyzer client.
  --triton-grpc-endpoint TRITON_GRPC_ENDPOINT
                        Triton Server HTTP endpoint url used by Model Analyzer client.
  --triton-metrics-url TRITON_METRICS_URL
                        Triton Server Metrics endpoint url.
  --triton-server-path TRITON_SERVER_PATH
                        The full path to the tritonserver binary executable
  --triton-output-path TRITON_OUTPUT_PATH
                        The full path to the file to which Triton server instance will append their log output. If not specified, they are not written.
  --triton-docker-mounts TRITON_DOCKER_MOUNTS
                        A list of strings representing volumes to be mounted. The strings should have the format '<host path>:<container path>:<access mode>'.
  --triton-docker-shm-size TRITON_DOCKER_SHM_SIZE
                        The size of the /dev/shm for the triton docker container
  --triton-install-path TRITON_INSTALL_PATH
                        Path to Triton install directory i.e. the parent directory of 'lib/libtritonserver.so'.Required only when using triton_launch_mode=c_api.
  --early-exit-enable   Flag to indicate if Model Analyzer can skip some configurations when manually searching concurrency or max_batch_size
  --run-config-search-max-concurrency RUN_CONFIG_SEARCH_MAX_CONCURRENCY
                        Max concurrency value that run config search should not go beyond that.
  --run-config-search-min-concurrency RUN_CONFIG_SEARCH_MIN_CONCURRENCY
                        Min concurrency value that run config search should start with.
  --run-config-search-max-instance-count RUN_CONFIG_SEARCH_MAX_INSTANCE_COUNT
                        Max instance count value that run config search should not go beyond that.
  --run-config-search-min-instance-count RUN_CONFIG_SEARCH_MIN_INSTANCE_COUNT
                        Min instance count value that run config search should start with.
  --run-config-search-max-model-batch-size RUN_CONFIG_SEARCH_MAX_MODEL_BATCH_SIZE
                        Value for the model's max_batch_size that run config search will not go beyond.
  --run-config-search-min-model-batch-size RUN_CONFIG_SEARCH_MIN_MODEL_BATCH_SIZE
                        Value for the model's max_batch_size that run config search will start from.
  --run-config-search-mode {brute,quick}
                        The search mode for Model Analyzer to find and evaluate model configurations. 'brute' will brute force all combinations of configuration options. 'quick' will attempt to find a
                        near-optimal configuration as fast as possible, but isn't guaranteed to find the best.
  --run-config-search-disable
                        Disable run config search.
  --run-config-profile-models-concurrently-enable
                        Enable the profiling of all supplied models concurrently.
  -e EXPORT_PATH, --export-path EXPORT_PATH
                        Full path to directory in which to store the results
  --filename-model-inference FILENAME_MODEL_INFERENCE
                        Specifies filename for storing model inference metrics
  --filename-model-gpu FILENAME_MODEL_GPU
                        Specifies filename for storing model GPU metrics
  --filename-server-only FILENAME_SERVER_ONLY
                        Specifies filename for server-only metrics
  --num-configs-per-model NUM_CONFIGS_PER_MODEL
                        The number of configurations to plot per model in the summary.
  --num-top-model-configs NUM_TOP_MODEL_CONFIGS
                        Model Analyzer will compare this many of the top models configs across all models.
  --inference-output-fields INFERENCE_OUTPUT_FIELDS
                        Specifies column keys for model inference metrics table
  --gpu-output-fields GPU_OUTPUT_FIELDS
                        Specifies column keys for model gpu metrics table
  --server-output-fields SERVER_OUTPUT_FIELDS
                        Specifies column keys for server-only metrics table
  --latency-budget LATENCY_BUDGET
                        Shorthand flag for specifying a maximum latency in ms.
  --min-throughput MIN_THROUGHPUT
                        Shorthand flag for specifying a minimum throughput.

Overall Architecture

(Architecture diagram image omitted.)

End-to-End Processing Flow

  • The Model Repository stores the models Triton will load; it can be local or remote (the mounted directory in the example code above);
  • Triton receives external requests over HTTP/gRPC, or directly through the C API, and passes the inference data to each model's scheduler; the scheduler batches requests as efficiently as it can and routes them to the appropriate backend according to the model type;
  • Each framework backend receives the request data, runs inference, and produces results;
  • Triton then returns all inference results to the client.

Development

  • Extensible: Triton's backend C API lets you customize pre- and post-processing and even integrate new deep learning frameworks;
  • Model management API: query and control the models Triton manages;
  • Status monitoring: check in real time whether each endpoint is ready and running, its utilization, and overall throughput and latency.

Concurrent Model Execution

  • Triton's architecture allows multiple models, or multiple instances of one or more models, to execute concurrently
  • By default, if a model managed by Triton receives several requests at the same time, Triton serializes them and executes one at a time

Triton provides a model configuration setting called instance_group that specifies how many concurrent instances of a model may run; each copy is called an instance. By default, Triton places one instance of a model on each GPU and runs one inference at a time; setting instance_group raises the number of concurrent instances. A client-side sketch that exercises this concurrency follows below.
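
To see the effect of multiple instances from the client side, several requests can be sent in parallel; a minimal sketch reusing the densenet_onnx model and endpoint from the example above:

# Fire 8 requests concurrently; with more than one instance (or several GPUs)
# Triton can execute several of them at the same time.
from concurrent.futures import ThreadPoolExecutor

import numpy as np
import tritonclient.http as httpclient


def one_request(i):
    client = httpclient.InferenceServerClient(url="127.0.0.1:8000")
    inp = httpclient.InferInput("data_0", [1, 3, 224, 224], "FP32")
    inp.set_data_from_numpy(np.random.rand(1, 3, 224, 224).astype(np.float32))
    result = client.infer(model_name="densenet_onnx", inputs=[inp])
    return int(result.as_numpy("fc6_1").flatten().argmax())


with ThreadPoolExecutor(max_workers=8) as pool:
    print(list(pool.map(one_request, range(8))))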

Three Model Scheduling Strategies

stateless

  • For stateless models, Triton uses the default scheduler (optionally with dynamic batching); every request is independent of the others

stateful

  • For stateful models, Triton uses the sequence batcher, which routes all requests of one sequence to the same model instance so that state can be kept between requests

ensemble models

  • An ensemble model chains several models into one pipeline, feeding the outputs of earlier models into later ones, e.g. image preprocessing -> inference -> image postprocessing
  • An ensemble model must use the ensemble scheduler; the schedulers of its sub-models are configured independently on each sub-model. An ensemble model is not a real model itself; it only describes how data flows between the sub-models
name: "ensemble_model"
platform: "ensemble"
max_batch_size: 1
input [
  {
    name: "IMAGE"
    data_type: TYPE_STRING
    dims: [ 1 ]
  }
]
output [
  {
    name: "CLASSIFICATION"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  },
  {
    name: "SEGMENTATION"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
ensemble_scheduling {
  step [
    {
      model_name: "image_preprocess_model"
      model_version: -1
      input_map {
        key: "RAW_IMAGE"
        value: "IMAGE"
      }
      output_map {
        key: "PREPROCESSED_OUTPUT"
        value: "preprocessed_image"
      }
    },
    {
      model_name: "classification_model"
      model_version: -1
      input_map {
        key: "FORMATTED_IMAGE"
        value: "preprocessed_image"
      }
      output_map {
        key: "CLASSIFICATION_OUTPUT"
        value: "CLASSIFICATION"
      }
    },
    {
      model_name: "segmentation_model"
      model_version: -1
      input_map {
        key: "FORMATTED_IMAGE"
        value: "preprocessed_image"
      }
      output_map {
        key: "SEGMENTATION_OUTPUT"
        value: "SEGMENTATION"
      }
    }
  ]
}
  • Flow diagram of the ensemble pipeline (image omitted)

Model Repository

  • When starting the Triton Server service, specify the path from which models are loaded
  • tritonserver --model-repository=xxx

Local path

  • tritonserver --model-repository=/path/to/model/repository

S3 object storage

  • tritonserver --model-repository=s3://bucket/path/to/model/repository

Others

  • Google Cloud Storage, Amazon S3, and Azure Storage are all supported

Model Types

  • TensorRT model: model.plan
  • ONNX model: model.onnx
  • TorchScript model: model.pt
  • TensorFlow model: model.graphdef or model.savedmodel
  • OpenVINO model: model.xml and model.bin
  • Python model: model.py (a minimal skeleton is sketched after this list)
  • DALI model: model.dali
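
For the Python model type, the model file is a model.py implementing the Python backend interface; a minimal identity-model sketch (the tensor names INPUT0/OUTPUT0 are placeholders chosen for this example and must match the model's config.pbtxt):

# models/<model_name>/1/model.py -- minimal Python-backend model that
# echoes its input back as the output.
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # args carries the model config and instance information as strings.
        pass

    def execute(self, requests):
        responses = []
        for request in requests:
            in_tensor = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            out_tensor = pb_utils.Tensor("OUTPUT0", in_tensor.as_numpy())
            responses.append(
                pb_utils.InferenceResponse(output_tensors=[out_tensor]))
        return responses

    def finalize(self):
        pass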

Model Management Modes

NONE mode

  • At startup, Triton tries to load every model in the model repository; models that fail to load are marked UNAVAILABLE and cannot be used for inference;
  • While Triton is running, changes to the model repository are ignored, and requests sent through the model control API are rejected with an error response;
  • Start Triton with --model-control-mode=none to enable this mode; NONE is also the default model management mode.

EXPLICIT mode

  • At startup, Triton only loads the models named with the --load-model command-line flag; if no --load-model is given, no model is loaded at all. Models that fail to load are marked UNAVAILABLE and cannot be used for inference;
  • While Triton is running, the model control API can load and unload models, and the response status reports whether the operation succeeded. When reloading an already loaded model, a failed reload does not affect the currently loaded model; a successful reload switches to the newly loaded model (see the sketch after this list);
  • Start Triton with --model-control-mode=explicit to enable this mode;
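
In EXPLICIT mode, loading and unloading can also be driven from the client; a small sketch with tritonclient over HTTP (same endpoint assumption as in the earlier examples):

# Load densenet_onnx on demand, verify it is ready, then unload it again.
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="127.0.0.1:8000")
client.load_model("densenet_onnx")
print("ready:", client.is_model_ready("densenet_onnx"))
client.unload_model("densenet_onnx")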

POLL mode

  • Start Triton with --model-control-mode=poll to enable this mode, and set --repository-poll-secs to a non-zero value to choose the polling interval
  • This is a hot-reload mode: file changes or newly added versions trigger loading of the new model version

Model Configuration Reference

platform

  • Selects the Triton backend, i.e. which kind of inference engine runs the model

max_batch_size

  • The largest batch size Triton will accept for this model; a value of 0 means Triton does not manage batching for the model, and the full shape (including any batch dimension) is spelled out in dims

input, output

  • Define the model's inputs and outputs
max_batch_size: 0
input: [
  {
    name: "data_0",
    data_type: TYPE_FP32,
    dims: [ 1, 3, 224, 224]
  }
]
output: [
  {
    name: "prob_1",
    data_type: TYPE_FP32,
    dims: [ 1, 1000, 1, 1 ]
  }
]

name

  • Defaults to the name of the directory that contains the model

version_policy

  • Specifies which versions of the model are available for inference
  • Default: version_policy: { latest: { num_versions: 1 } }, i.e. only the latest version is served
  • To make every version in the model repository available: version_policy: { all: { } }

instance_group

  • Sets how many instances of the model are created, so that inference requests can be served concurrently;
instance_group [
  {
    count: 2
    kind: KIND_GPU   # create 2 instances on every available GPU
  }
]

instance_group [
  {
    count: 1
    kind: KIND_GPU
    gpus: [ 0 ]
  },
  {
    count: 2
    kind: KIND_GPU
    gpus: [ 1, 2 ]
  }
]  # 1 instance on GPU 0, and 2 instances each on GPU 1 and GPU 2

instance_group [
  {
    count: 2
    kind: KIND_CPU
  }
]  # create 2 instances on the CPU



rate_limiter

  • Declares the resources a model instance must acquire before it is allowed to execute, plus a priority used to arbitrate between instances; it only takes effect when rate limiting is enabled on the server

instance_group [
    {
      count: 1
      kind: KIND_GPU
      gpus: [ 0, 1, 2 ]
      rate_limiter {
        resources [
          {
            name: "R1"
            count: 4
          },
          {
            name: "R2"
            global: True
            count: 2
          }
        ]
        priority: 2
      }
    }
  ]
