This guide is for AI hobbyists and beginners who only have a CPU and no GPU.
Question: what are AVX, AVX2, AVX_VNNI, and FMA?
AVX, AVX2, AVX_VNNI, and FMA are x86 CPU instruction set extensions used to accelerate numerical computation and vectorized operations. They provide a high degree of parallelism and vectorization support, and can speed up certain computation tasks on hardware that supports them.
- AVX (Advanced Vector Extensions): an instruction set extension introduced in Intel CPUs that provides wider SIMD (Single Instruction, Multiple Data) vector registers, allowing more data elements to be processed at once. AVX can accelerate vectorizable numerical operations such as addition, multiplication, and square roots.
- AVX2: an extension of AVX that adds more SIMD instructions, including integer operations and more complex floating-point operations. AVX2 offers a higher level of vectorization support and can accelerate a wider range of computation tasks.
- AVX_VNNI (AVX Vector Neural Network Instructions): an instruction set extension designed to accelerate neural network computation. It is optimized for operations common in neural networks, such as matrix multiplication and convolution, and can significantly speed up training and inference.
- FMA (Fused Multiply-Add): an instruction that fuses a multiplication and an addition into a single operation. FMA can accelerate computations that combine multiplications and additions, such as matrix multiplication and vector dot products.
These instruction sets are widely used in high-performance computing, deep learning, and other workloads that involve heavy numerical computation. When software such as TensorFlow is built to take advantage of them on supported hardware, computation speed can improve substantially. So if your CPU supports these instruction sets, it is worth enabling them when building TensorFlow or other numerical libraries to get better performance.
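Before rebuilding anything, you can quickly check whether your CPU actually reports these features. Below is a minimal Python sketch (not part of the original build steps) that parses /proc/cpuinfo, so it assumes a Linux host; the feature names are the lowercase flags the kernel reports, and recent kernels expose AVX_VNNI as avx_vnni:
import platform

def cpu_flags():
    """Return the set of CPU feature flags reported by the kernel (Linux only)."""
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
                break
    return flags

if __name__ == "__main__":
    if platform.system() != "Linux":
        raise SystemExit("This check reads /proc/cpuinfo and only works on Linux.")
    detected = cpu_flags()
    for feature in ("avx", "avx2", "avx_vnni", "fma"):
        print(feature, "yes" if feature in detected else "no")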
Building TensorFlow in a Docker environment
1 Install Docker (on Ubuntu the container engine package is docker.io)
sudo apt-get install docker.io
2 Pull the Docker image tensorflow/tensorflow:devel
docker pull tensorflow/tensorflow:devel
3 Run the TensorFlow development container
Here $PWD is my current directory, /home/wmx/software/tensorDocker,
which is mounted to /mnt inside the container.
docker run -it -w /tensorflow_src -v $PWD:/mnt -e HOST_PERMS="$(id -u):$(id -g)" tensorflow/tensorflow:devel bash
This command starts a TensorFlow development container. Let's go through it piece by piece:
- docker run: the command that runs a Docker container.
- -it: a combination of two options that runs the container with an interactive terminal, attaching the terminal to the container's stdin/stdout.
- -w /tensorflow_src: sets the working directory inside the container to /tensorflow_src. In other words, when the container starts it drops into /tensorflow_src, and subsequent commands run from there.
- -v $PWD:/mnt: mounts the host's current working directory ($PWD) at /mnt inside the container, so the host's files and directories are accessible through /mnt.
- -e HOST_PERMS="$(id -u):$(id -g)": sets an environment variable named HOST_PERMS to the current user ID (id -u) and group ID (id -g). This records the host user's ownership so that files created inside the container can later be handed back to the host user, avoiding permission problems.
- tensorflow/tensorflow:devel: the TensorFlow Docker image and tag to use. The devel tag is the development build of the TensorFlow image.
- bash: the command to run inside the container; here the container starts a bash shell directly.
In short, this command starts an interactive terminal in a TensorFlow development container, mounts the host's current working directory at /mnt inside the container, and records the host user's permissions. After startup the container enters /tensorflow_src and runs a bash shell there, so TensorFlow can be built and tested inside the container.
- Update the TensorFlow source code
git pull
- List all available release branches
git branch -a
- Check out the release you want; I use v2.12.0 here
git checkout v2.12.0
4 Configure the bazel build options
1 ./configure
2 Python location: press Enter to use the default
3 Python library path: press Enter to use the default
4 Do you wish to build TensorFlow with ROCm support? [y/N]: n  # no AMD GPU, so I answer n
5 Do you wish to build TensorFlow with CUDA support? [y/N]: n  # no GPU at all (only an i9-13900K), so I answer n
6 Do you wish to download a fresh release of clang? (Experimental) [y/N]: n  # clang is already installed on my Ubuntu 20.04, so I keep the default and answer n
7 Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -Wno-sign-compare]: --copt=-march=native  # bazel build flags; this is a CPU-only build, so I enter --copt=-march=native
That completes the configuration. The settings above are for a CPU-only build; if you have a GPU, adjust them for your card. The shell transcript of the configuration is shown below:
(base) wmx@wmx-ubuntu:/media/wmx/ws1/ai/tensorflow_src$ ./configure
You have bazel 5.3.0 installed.
Please specify the location of python. [Default is /bin/python3]:
Found possible Python library paths:
/home/wmx/software/tensorBuild/lib/python3.11/site-packages
/opt/ros/noetic/lib/python3/dist-packages
Please input the desired Python library path to use. Default is [/usr/lib/python3.8.10/site-packages]
Do you wish to build TensorFlow with ROCm support? [y/N]: n
No ROCm support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: n
No CUDA support will be enabled for TensorFlow.
Do you wish to download a fresh release of clang? (Experimental) [y/N]: n
Clang will not be downloaded.
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -Wno-sign-compare]: --copt=-march=native
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
Not configuring the WORKSPACE for Android builds.
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
--config=mkl # Build with MKL support.
--config=mkl_aarch64 # Build with oneDNN and Compute Library for the Arm Architecture (ACL).
--config=monolithic # Config for mostly static monolithic build.
--config=numa # Build with NUMA support.
--config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
--config=v1 # Build with TensorFlow 1 API instead of TF 2 API.
Preconfigured Bazel build configs to DISABLE default on features:
--config=nogcp # Disable GCP support.
--config=nonccl # Disable NVIDIA NCCL support.
Configuration finished
5 Build the TensorFlow source code and the tool that creates the pip package
You may need a working proxy/VPN beforehand, otherwise the build can fail because some dependencies cannot be fetched from mainland China.
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
6 Run the tool to create the pip package, specifying /mnt as the output directory
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /mnt
7 Adjust ownership of the file for use outside the container
In my case the file is tensorflow-2.12.0-cp38-cp38-linux_x86_64.whl
chown $HOST_PERMS /mnt/tensorflow-2.12.0-cp38-cp38-linux_x86_64.whl
Remember that my host directory /home/wmx/software/tensorDocker is mounted at /mnt inside the container.
8 Install the generated tensorflow-2.12.0-cp38-cp38-linux_x86_64.whl in the host environment
Note that the Python version must match: the default Python inside the Docker image is 3.8.10, so the host environment also needs Python 3.8.10 in order to install tensorflow-2.12.0-cp38-cp38-linux_x86_64.whl.
Create a conda virtual environment with the matching Python version:
conda create --prefix ./tensorBuild python=3.8.10
conda activate ./tensorBuild
# install it inside the tensorBuild virtual environment
python -m pip install /home/wmx/software/tensorDocker/tensorflow-2.12.0-cp38-cp38-linux_x86_64.whl
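After the install, a quick sanity check is to import TensorFlow in the new environment and confirm that the interpreter and wheel versions line up. A minimal sketch (not one of the original steps; the values in the comments are just what I would expect to see):
import sys
import tensorflow as tf

# The wheel is tagged cp38, so the interpreter should report Python 3.8.x.
print("Python:", sys.version.split()[0])

# Should print 2.12.0 if the locally built wheel was picked up.
print("TensorFlow:", tf.__version__)

# Run a tiny op to confirm the runtime works end to end.
print(tf.reduce_sum(tf.ones((2, 2))).numpy())  # expected: 4.0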
9 Test with main.py
import tensorflow as tf

# Load the dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

# Build the model
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))

# Evaluate the model
test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)

# Save the model
model.save('model.h5')
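The shell output below comes from running main.py as-is. If you want to measure the total wall-clock training time yourself instead of adding up the per-epoch numbers, one option is to wrap the fit call with a timer, for example by replacing the model.fit(...) line above with this sketch (import time at the top of the script):
import time

start = time.perf_counter()
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
print(f"Total training time: {time.perf_counter() - start:.1f} s")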
Shell output:
2023-07-30 12:31:03.740540: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-07-30 12:31:03.795251: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-07-30 12:31:03.809482: E tensorflow/tsl/lib/monitoring/collection_registry.cc:81] Cannot register 2 metrics with the same name: /tensorflow/core/bfc_allocator_delay
2023-07-30 12:31:04.681085: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Epoch 1/10
1563/1563 [==============================] - 5s 3ms/step - loss: 1.8800 - accuracy: 0.3663 - val_loss: 1.5088 - val_accuracy: 0.4526
Epoch 2/10
1563/1563 [==============================] - 4s 3ms/step - loss: 1.3361 - accuracy: 0.5216 - val_loss: 1.4776 - val_accuracy: 0.4846
Epoch 3/10
1563/1563 [==============================] - 4s 3ms/step - loss: 1.1816 - accuracy: 0.5824 - val_loss: 1.1282 - val_accuracy: 0.6061
Epoch 4/10
1563/1563 [==============================] - 4s 3ms/step - loss: 1.0730 - accuracy: 0.6242 - val_loss: 1.1308 - val_accuracy: 0.6108
Epoch 5/10
1563/1563 [==============================] - 5s 3ms/step - loss: 0.9949 - accuracy: 0.6540 - val_loss: 1.1160 - val_accuracy: 0.6223
Epoch 6/10
1563/1563 [==============================] - 4s 3ms/step - loss: 0.9268 - accuracy: 0.6784 - val_loss: 1.0251 - val_accuracy: 0.6576
Epoch 7/10
1563/1563 [==============================] - 4s 3ms/step - loss: 0.8666 - accuracy: 0.6949 - val_loss: 1.0190 - val_accuracy: 0.6523
Epoch 8/10
1563/1563 [==============================] - 4s 3ms/step - loss: 0.8127 - accuracy: 0.7170 - val_loss: 1.0383 - val_accuracy: 0.6534
Epoch 9/10
1563/1563 [==============================] - 5s 3ms/step - loss: 0.7579 - accuracy: 0.7362 - val_loss: 1.0633 - val_accuracy: 0.6542
Epoch 10/10
1563/1563 [==============================] - 5s 3ms/step - loss: 0.7169 - accuracy: 0.7483 - val_loss: 1.0449 - val_accuracy: 0.6719
313/313 [==============================] - 0s 1ms/step - loss: 1.0449 - accuracy: 0.6719
Test accuracy: 0.6718999743461609
You can see that CPU instruction acceleration is enabled; the ten epochs take about 44 seconds in total.
For comparison, here is the output without CPU instruction acceleration:
2023-07-30 19:40:50.104394: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE4.1 SSE4.2 AVX AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-07-30 19:40:51.399995: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
Epoch 1/10
1563/1563 [==============================] - 144s 92ms/step - loss: 1.6986 - accuracy: 0.4010 - val_loss: 1.3685 - val_accuracy: 0.5083
Epoch 2/10
1563/1563 [==============================] - 135s 87ms/step - loss: 1.2644 - accuracy: 0.5548 - val_loss: 1.3069 - val_accuracy: 0.5370
Epoch 3/10
1563/1563 [==============================] - 158s 101ms/step - loss: 1.1150 - accuracy: 0.6121 - val_loss: 1.1304 - val_accuracy: 0.5997
Epoch 4/10
1563/1563 [==============================] - 160s 103ms/step - loss: 1.0098 - accuracy: 0.6516 - val_loss: 1.0305 - val_accuracy: 0.6457
Epoch 5/10
1563/1563 [==============================] - 167s 107ms/step - loss: 0.9326 - accuracy: 0.6744 - val_loss: 1.0547 - val_accuracy: 0.6364
Epoch 6/10
1563/1563 [==============================] - 176s 113ms/step - loss: 0.8756 - accuracy: 0.6958 - val_loss: 1.0110 - val_accuracy: 0.6595
Epoch 7/10
1563/1563 [==============================] - 177s 113ms/step - loss: 0.8207 - accuracy: 0.7145 - val_loss: 1.0024 - val_accuracy: 0.6663
Epoch 8/10
1563/1563 [==============================] - 173s 111ms/step - loss: 0.7732 - accuracy: 0.7323 - val_loss: 1.0233 - val_accuracy: 0.6614
Epoch 9/10
1563/1563 [==============================] - 162s 103ms/step - loss: 0.7310 - accuracy: 0.7463 - val_loss: 0.9851 - val_accuracy: 0.6783
Epoch 10/10
1563/1563 [==============================] - 169s 108ms/step - loss: 0.6951 - accuracy: 0.7576 - val_loss: 1.0829 - val_accuracy: 0.6524
313/313 [==============================] - 2s 7ms/step - loss: 1.0829 - accuracy: 0.6524
Test accuracy: 0.652400016784668
Total time: (144+135+158+160+167+176+177+173+162+169) seconds = 1621 seconds ≈ 27 minutes.
Performance
Without CPU acceleration the run takes 1621 ÷ 44 ≈ 36.8 times as long; in other words, enabling CPU acceleration makes training roughly 36.8× faster.
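For reference, the speedup figure can be reproduced from the per-epoch times printed in the two logs above (a throwaway sketch that just redoes the arithmetic):
# Per-epoch wall times in seconds, copied from the two training logs above.
with_accel = [5, 4, 4, 4, 5, 4, 4, 4, 5, 5]                        # oneDNN / AVX-enabled build
without_accel = [144, 135, 158, 160, 167, 176, 177, 173, 162, 169]  # default build

total_fast = sum(with_accel)    # 44 s
total_slow = sum(without_accel) # 1621 s
print(total_fast, total_slow, round(total_slow / total_fast, 1))  # 44 1621 36.8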
My setup:
CPU: Intel i9-13900K
Motherboard: ASUS Z790-A WiFi ("吹雪"/Snow edition), DDR5, with auto overclocking enabled
SSD: Samsung 990 Pro 1 TB
RAM: Corsair DDR5, 2 × 32 GB = 64 GB, 6000 MHz