[AI in Practice] Building llama.cpp quantization with cuBLAS; nvcc fatal: Value 'native' is not defined for option 'gpu-architecture'


Introduction to llama.cpp quantization

When working with LLaMA models, quantization is an indispensable step, both in terms of cost and in terms of usability.

For quantizing and deploying LLaMA with llama.cpp, see this article: [AI in Practice] Quantized deployment of llama-33B with llama.cpp

Building the GPU version of llama.cpp

1. Error description

To compile with cuBLAS, run:

cd /notebooks/llama.cpp
make LLAMA_CUBLAS=1

The build fails with the following output:

# make LLAMA_CUBLAS=1
I llama.cpp build info:
I UNAME_S:  Linux
I UNAME_P:  x86_64
I UNAME_M:  x86_64
I CFLAGS:   -I.              -O3 -std=c11   -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include
I CXXFLAGS: -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include
I LDFLAGS:   -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib
I CC:       cc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
I CXX:      g++ (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0

cc  -I.              -O3 -std=c11   -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include   -c ggml.c -o ggml.o
g++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -c llama.cpp -o llama.o
g++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -c examples/common.cpp -o common.o
cc -I.              -O3 -std=c11   -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include   -c -o k_quants.o k_quants.c
nvcc --forward-unknown-to-host-compiler -arch=native -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DK_QUANTS_PER_ITERATION=2 -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -Wno-pedantic -c ggml-cuda.cu -o ggml-cuda.o
nvcc fatal   : Value 'native' is not defined for option 'gpu-architecture'
make: *** [Makefile:191: ggml-cuda.o] Error 1

The error is on the second-to-last line:

nvcc fatal   : Value 'native' is not defined for option 'gpu-architecture'

2. Troubleshooting

Run the following command to see which gpu-architecture values the local nvcc supports:

nvcc --list-gpu-arch

Output:

compute_35
compute_37
compute_50
compute_52
compute_53
compute_60
compute_61
compute_62
compute_70
compute_72
compute_75
compute_80
compute_86
compute_87

As you can see, 'native' is not in that list; my guess is that the installed CUDA toolkit is too old to understand this value.
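To verify that guess, you can check which CUDA toolkit the nvcc on the PATH belongs to; as far as I know, the native value for -arch/--gpu-architecture is only understood by newer nvcc releases, so an older toolkit rejects it exactly like this:

nvcc --version

If the reported release is an older one, that would explain why native is not accepted here.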

Solution

1. Locate 'native'

Run:

grep -nr native *

Output:

CMakeLists.txt:41:option(LLAMA_NATIVE                     "llama: enable -march=native flag"                      OFF)
CMakeLists.txt:387:        add_compile_options(-march=native)
CMakeLists.txt:460:    add_compile_options(-mcpu=native -mtune=native)
Makefile:107:   CFLAGS   += -march=native -mtune=native
Makefile:108:   CXXFLAGS += -march=native -mtune=native
Makefile:166:   NVCCFLAGS = --forward-unknown-to-host-compiler -arch=native
Makefile:222:   CFLAGS   += -mcpu=native
Makefile:223:   CXXFLAGS += -mcpu=native
README.md:658:Termux from F-Droid offers an alternative route to execute the project on an Android device. This method empowers you to construct the project right from within the terminal, negating the requirement for a rooted device or SD Card.
convert.py:283:    # conversion because numpy doesn't natively support int4s).
convert.py:577:                    "which is not yet natively supported by GGML. "
convert.py:734:# PyTorch can't do this natively as of time of writing:
examples/server/README.md:208:### Extending or building alternative Web Front End
examples/server/json.hpp:3069:// the following utilities are natively available in C++14
flake.nix:39:          nativeBuildInputs = with pkgs; [ cmake ];
ggml.h:79:// in advance how much memory you need for your computation. Alternatively, you can allocate a large enough memory
ggml.h:144:// Alternatively, there are helper functions, such as ggml_get_f32_1d() and ggml_set_f32_1d() that can be used.
Binary file models/ggml-vocab.bin matches
pocs/vdot/vdot.cpp:85:// Alternative version of the above. Faster on my Mac (~45 us vs ~55 us per dot product),
Binary file quantize-stats matches
Binary file server matches
spm-headers/ggml.h:79:// in advance how much memory you need for your computation. Alternatively, you can allocate a large enough memory
spm-headers/ggml.h:144:// Alternatively, there are helper functions, such as ggml_get_f32_1d() and ggml_set_f32_1d() that can be used.
Binary file zh-models/33B/ggml-model-q4_0.bin matches

Since we are building with make, only the occurrences of native in the Makefile matter:

Makefile:107:   CFLAGS   += -march=native -mtune=native
Makefile:108:   CXXFLAGS += -march=native -mtune=native
Makefile:166:   NVCCFLAGS = --forward-unknown-to-host-compiler -arch=native
Makefile:222:   CFLAGS   += -mcpu=native
Makefile:223:   CXXFLAGS += -mcpu=native

Given the error message, the culprit is clearly the -arch=native passed to nvcc in NVCCFLAGS (Makefile line 166), not the host-compiler -march=native flags.

2. Modify the Makefile

Open the file with vim Makefile and find the corresponding line.

After several attempts, I changed:

166         NVCCFLAGS = --forward-unknown-to-host-compiler -arch=native

to:

166         NVCCFLAGS = --forward-unknown-to-host-compiler -arch=compute_87

[Note] compute_87 is one of the values shown in the output of nvcc --list-gpu-arch above.
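Rather than picking a value from that list at random, it is better to match the compute capability of the GPU you actually have. On reasonably recent drivers, nvidia-smi can report it directly (a hedged sketch: the compute_cap query field is not available on very old drivers):

nvidia-smi --query-gpu=name,compute_cap --format=csv

A reported compute capability of 8.6, for example, corresponds to compute_86 / sm_86. As far as I understand, -arch=compute_XX embeds only PTX, which the driver JIT-compiles on first load, while -arch=sm_XX also embeds native machine code for that GPU and skips the JIT step; either form gets past the error above.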

3. Rebuild

Run:

make clean
make LLAMA_CUBLAS=1

Output:

# make LLAMA_CUBLAS=1
I llama.cpp build info:
I UNAME_S:  Linux
I UNAME_P:  x86_64
I UNAME_M:  x86_64
I CFLAGS:   -I.              -O3 -std=c11   -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include
I CXXFLAGS: -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include
I LDFLAGS:   -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib
I CC:       cc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
I CXX:      g++ (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0

cc  -I.              -O3 -std=c11   -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include   -c ggml.c -o ggml.o
g++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -c llama.cpp -o llama.o
g++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -c examples/common.cpp -o common.o
cc -I.              -O3 -std=c11   -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include   -c -o k_quants.o k_quants.c
nvcc --forward-unknown-to-host-compiler -arch=compute_87 -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DK_QUANTS_PER_ITERATION=2 -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -Wno-pedantic -c ggml-cuda.cu -o ggml-cuda.o
g++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include examples/main/main.cpp ggml.o llama.o common.o k_quants.o ggml-cuda.o -o main  -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib

====  Run ./main -h for help.  ====

g++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include examples/quantize/quantize.cpp ggml.o llama.o k_quants.o ggml-cuda.o -o quantize  -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib
g++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include examples/quantize-stats/quantize-stats.cpp ggml.o llama.o k_quants.o ggml-cuda.o -o quantize-stats  -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib
g++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include examples/perplexity/perplexity.cpp ggml.o llama.o common.o k_quants.o ggml-cuda.o -o perplexity  -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib
g++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include examples/embedding/embedding.cpp ggml.o llama.o common.o k_quants.o ggml-cuda.o -o embedding  -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib
g++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include pocs/vdot/vdot.cpp ggml.o k_quants.o ggml-cuda.o -o vdot  -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib
g++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include examples/train-text-from-scratch/train-text-from-scratch.cpp ggml.o llama.o k_quants.o ggml-cuda.o -o train-text-from-scratch  -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib
g++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include examples/simple/simple.cpp ggml.o llama.o common.o k_quants.o ggml-cuda.o -o simple  -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib
g++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -Iexamples/server examples/server/server.cpp ggml.o llama.o common.o k_quants.o ggml-cuda.o -o server  -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib
g++ --shared -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include examples/embd-input/embd-input-lib.cpp ggml.o llama.o common.o k_quants.o ggml-cuda.o -o libembdinput.so  -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib
g++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include examples/embd-input/embd-input-test.cpp ggml.o llama.o common.o k_quants.o ggml-cuda.o -o embd-input-test  -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L. -lembdinput

The build succeeded!
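As a quick sanity check (not part of the original steps), you can confirm that the resulting binary was actually linked against the CUDA libraries listed in LDFLAGS:

ldd ./main | grep -i -E "cublas|cudart"

If the cuBLAS build worked, libcublas and libcudart should appear among the resolved shared libraries.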

Test

[The original post shows a screenshot of a successful test run here.]
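For reference, a minimal smoke test might look like the following; this is a sketch assuming a build from around this time, where -m selects the model, -p is the prompt, -n limits the number of generated tokens, and -ngl offloads layers to the GPU in cuBLAS builds (the model path is the quantized file seen in the grep output above):

./main -m ./zh-models/33B/ggml-model-q4_0.bin -p "你好,请介绍一下你自己。" -n 128 -ngl 40

While it runs, nvidia-smi in another terminal should show the process occupying GPU memory if offloading is active.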

References

1. [AI in Practice] Quantized deployment of llama-33B with llama.cpp
2. https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/llama.cpp量化部署
3. https://github.com/ggerganov/whisper.cpp/issues/876
4. https://github.com/coreylowman/dfdx/pull/564
5. https://github.com/ggerganov/llama.cpp
