LLM CPU Inference with llama.cpp

This article introduces llama.cpp, a way to run large language model inference on the CPU with minimal setup. Corrections and suggestions are welcome.

llama.cpp

  • Description

    The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware - locally and in the cloud.

    • Plain C/C++ implementation without any dependencies
    • Apple silicon is a first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks
    • AVX, AVX2 and AVX512 support for x86 architectures
    • 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory use
    • Custom CUDA kernels for running LLMs on NVIDIA GPUs (support for AMD GPUs via HIP)
    • Vulkan, SYCL, and (partial) OpenCL backend support
    • CPU+GPU hybrid inference to partially accelerate models larger than the total VRAM capacity
  • Official repository
    https://github.com/ggerganov/llama.cpp

  • Supported platforms:

     Mac OS
     Linux
     Windows (via CMake)
     Docker
     FreeBSD
    
  • Supported models:

    • Typically finetunes of the base models below are supported as well.

    LLaMA 🦙
    LLaMA 2 🦙🦙
    Mistral 7B
    Mixtral MoE
    Falcon
    Chinese LLaMA / Alpaca and Chinese LLaMA-2 / Alpaca-2
    Vigogne (French)
    Koala
    Baichuan 1 & 2 + derivations
    Aquila 1 & 2
    Starcoder models
    Refact
    Persimmon 8B
    MPT
    Bloom
    Yi models
    StableLM models
    Deepseek models
    Qwen models
    PLaMo-13B
    Phi models
    GPT-2
    Orion 14B
    InternLM2
    CodeShell
    Gemma
    Mamba
    Xverse
    Command-R

    • Multimodal models:

    LLaVA 1.5 models, LLaVA 1.6 models
    BakLLaVA
    Obsidian
    ShareGPT4V
    MobileVLM 1.7B/3B models
    Yi-VL

Installing llama.cpp

  • Download the code
    git clone https://github.com/ggerganov/llama.cpp
    
    
  • Build
    On Linux or MacOS:
    cd llama.cpp
    
    make
    
    For other build methods, see the official repository: https://github.com/ggerganov/llama.cpp
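    As an alternative, a CMake build along these lines should also work (a minimal sketch based on the upstream README of this era; exact flags may differ between versions):

    cd llama.cpp
    # configure an out-of-tree build directory
    cmake -B build
    # compile in Release mode for optimized binaries
    cmake --build build --config Release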

Memory/Disk Requirements

[Image: memory/disk requirements table from the upstream llama.cpp README]
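As a rough guide from the upstream README of the time: memory and disk requirements scale with model size, and a 7B model quantized to 4 bits needs on the order of 4 GB of RAM, since the quantized weights plus the KV cache must fit in memory.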

Quantization

[Image: quantization types and model-size table from the upstream llama.cpp README]
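If you only have the original Hugging Face weights, a minimal sketch of producing a quantized GGUF yourself looks like this (paths are placeholders; the script and tool names match the repository around this build, so verify them in your checkout):

    # convert a Hugging Face model to an f16 GGUF file (input path is a placeholder)
    python3 convert-hf-to-gguf.py /path/to/hf-model --outfile model-f16.gguf
    # quantize the f16 file down to 4-bit Q4_K_M, the variant used below
    ./quantize model-f16.gguf model-q4_k_m.gguf Q4_K_M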

Test Inference

Download the model

For a fast download, see: Downloading LLM models from Hugging Face at full speed without a VPN.
Here I download qwen/Qwen1.5-1.8B-Chat-GGUF for testing.

huggingface-cli download --resume-download  qwen/Qwen1.5-1.8B-Chat-GGUF  --local-dir  qwen/Qwen1.5-1.8B-Chat-GGUF
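If direct access to huggingface.co is slow, one common workaround (an assumption on my part, not taken from the linked article) is to point huggingface-cli at a mirror endpoint first:

# route huggingface-cli through a community mirror (verify the mirror yourself)
export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download --resume-download qwen/Qwen1.5-1.8B-Chat-GGUF --local-dir qwen/Qwen1.5-1.8B-Chat-GGUF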

Test

cd ./llama.cpp

./main -m /your/path/qwen/Qwen1.5-1.8B-Chat-GGUF/qwen1_5-1_8b-chat-q4_k_m.gguf -n 512 --color -i -cml -f ./prompts/chat-with-qwen.txt
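Briefly, the flags used here (run ./main --help for the authoritative list): -m selects the GGUF model file, -n 512 caps generation at 512 tokens, --color colorizes the output, -i enables interactive mode, -cml applies the ChatML prompt format that Qwen chat models expect, and -f seeds the session with a prompt file.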

To change the system prompt, edit ./prompts/chat-with-qwen.txt.
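For reference, the stock prompt file is essentially a one-line system prompt (consistent with the "You are a helpful assistant." line visible in the log below), which you can replace with any instruction:

    You are a helpful assistant.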

Model-loading output:

llama.cpp# ./main -m /mnt/data/llm/Qwen1.5-1.8B-Chat-GGUF/qwen1_5-1_8b-chat-q4_k_m.gguf -n 512 --color -i -cml -f ./prompts/chat-with-qwen.txt
Log start
main: build = 2527 (ad3a0505)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed  = 1711760850
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from /mnt/data/llm/Qwen1.5-1.8B-Chat-GGUF/qwen1_5-1_8b-chat-q4_k_m.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.name str              = Qwen1.5-1.8B-Chat-AWQ-fp16
llama_model_loader: - kv   2:                          qwen2.block_count u32              = 24
llama_model_loader: - kv   3:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv   4:                     qwen2.embedding_length u32              = 2048
llama_model_loader: - kv   5:                  qwen2.feed_forward_length u32              = 5504
llama_model_loader: - kv   6:                 qwen2.attention.head_count u32              = 16
llama_model_loader: - kv   7:              qwen2.attention.head_count_kv u32              = 16
llama_model_loader: - kv   8:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   9:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  10:                qwen2.use_parallel_residual bool             = true
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  13:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  14:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  15:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  16:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  18:                    tokenizer.chat_template str              = {% for message in messages %}{{'<|im_...
llama_model_loader: - kv  19:               general.quantization_version u32              = 2
llama_model_loader: - kv  20:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  121 tensors
llama_model_loader: - type q5_0:   12 tensors
llama_model_loader: - type q8_0:   12 tensors
llama_model_loader: - type q4_K:  133 tensors
llama_model_loader: - type q6_K:   13 tensors
llm_load_vocab: special tokens definition check successful ( 293/151936 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 151936
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 2048
llm_load_print_meta: n_head           = 16
llm_load_print_meta: n_head_kv        = 16
llm_load_print_meta: n_layer          = 24
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 2048
llm_load_print_meta: n_embd_v_gqa     = 2048
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 5504
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 1B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 1.84 B
llm_load_print_meta: model size       = 1.13 GiB (5.28 BPW)
llm_load_print_meta: general.name     = Qwen1.5-1.8B-Chat-AWQ-fp16
llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_tensors: ggml ctx size =    0.11 MiB
llm_load_tensors:        CPU buffer size =  1155.67 MiB
...................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =    96.00 MiB
llama_new_context_with_model: KV self size  =   96.00 MiB, K (f16):   48.00 MiB, V (f16):   48.00 MiB
llama_new_context_with_model:        CPU  output buffer size =   296.75 MiB
llama_new_context_with_model:        CPU compute buffer size =   300.75 MiB
llama_new_context_with_model: graph nodes  = 868
llama_new_context_with_model: graph splits = 1

system_info: n_threads = 4 / 4 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
main: interactive mode on.
Reverse prompt: '<|im_start|>user
'
sampling:
        repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
        top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
        mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
generate: n_ctx = 512, n_batch = 2048, n_predict = 512, n_keep = 10


== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to LLaMa.
 - To return control without starting a new line, end your input with '/'.
 - If you want to submit another line, end your input with '\'.

system
You are a helpful assistant.
user

>

Input text: What’s AI?

Sample output:
[Image: the model's response, shown as a screenshot in the original post]

References

  • https://github.com/ggerganov/llama.cpp
