NLP (64): Computing LLaMA-2 Token Lengths with FastChat

Deploying the LLaMA-2 Model

  In the article NLP (59): Deploying the Baichuan LLM with FastChat, I introduced the FastChat framework and showed how to deploy the Baichuan model with it.
  This article deploys the LLaMA-2 70B model behind an OpenAI-compatible API. The Dockerfile is as follows:

FROM nvidia/cuda:11.7.1-runtime-ubuntu20.04

RUN apt-get update -y && apt-get install -y python3.9 python3.9-distutils curl
RUN curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
RUN python3.9 get-pip.py
RUN pip3 install fschat

The docker-compose.yml file is as follows:

version: "3.9"

services:
  fastchat-controller:
    build:
      context: .
      dockerfile: Dockerfile
    image: fastchat:latest
    ports:
      - "21001:21001"
    entrypoint: ["python3.9", "-m", "fastchat.serve.controller", "--host", "0.0.0.0", "--port", "21001"]

  fastchat-model-worker:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./model:/root/model
    image: fastchat:latest
    ports:
      - "21002:21002"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0', '1']
              capabilities: [gpu]
    entrypoint: ["python3.9", "-m", "fastchat.serve.model_worker", "--model-names", "llama2-70b-chat", "--model-path", "/root/model/llama2/Llama-2-70b-chat-hf", "--num-gpus", "2", "--gpus",  "0,1", "--worker-address", "http://fastchat-model-worker:21002", "--controller-address", "http://fastchat-controller:21001", "--host", "0.0.0.0", "--port", "21002"]

  fastchat-api-server:
    build:
      context: .
      dockerfile: Dockerfile
    image: fastchat:latest
    ports:
      - "8000:8000"
    entrypoint: ["python3.9", "-m", "fastchat.serve.openai_api_server", "--controller-address", "http://fastchat-controller:21001", "--host", "0.0.0.0", "--port", "8000"]
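
  With the Dockerfile and docker-compose.yml above in place, the whole stack (controller, model worker, API server) can be started with Docker Compose. The commands below are a sketch of the workflow; the service name follows the compose file:

```shell
# Build the fastchat image and start controller, model worker, and API server.
docker compose up -d --build

# Follow the worker logs while the 70B weights load onto the two GPUs.
docker compose logs -f fastchat-model-worker
```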

Once deployed, the service occupies two A100 GPUs, each using about 66 GB of VRAM.
  To check that the model is up:

curl http://localhost:8000/v1/models

The output is as follows:

{
  "object": "list",
  "data": [
    {
      "id": "llama2-70b-chat",
      "object": "model",
      "created": 1691504717,
      "owned_by": "fastchat",
      "root": "llama2-70b-chat",
      "parent": null,
      "permission": [
        {
          "id": "modelperm-3XG6nzMAqfEkwfNqQ52fdv",
          "object": "model_permission",
          "created": 1691504717,
          "allow_create_engine": false,
          "allow_sampling": true,
          "allow_logprobs": true,
          "allow_search_indices": true,
          "allow_view": true,
          "allow_fine_tuning": false,
          "organization": "*",
          "group": null,
          "is_blocking": false
        }
      ]
    }
  ]
}

The LLaMA-2 70B model has been deployed successfully.
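
  The same check can be done from Python. The sketch below fetches /v1/models with only the standard library and extracts the model ids; the host and port match the compose file above.

```python
import json
import urllib.request


def model_ids(models_payload: dict) -> list:
    """Extract the model ids from a /v1/models response body."""
    return [model["id"] for model in models_payload.get("data", [])]


def list_model_ids(base_url: str = "http://localhost:8000") -> list:
    """GET /v1/models from the FastChat API server and return the model ids."""
    with urllib.request.urlopen(f"{base_url}/v1/models") as resp:
        return model_ids(json.load(resp))
```

Against the deployment above, list_model_ids() should return ['llama2-70b-chat'].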

Computing the Prompt Token Length

  The FastChat GitHub project provides an API for computing the token length of a prompt, implemented in fastchat/serve/model_worker.py. It can be called as follows:

curl --location 'localhost:21002/count_token' \
--header 'Content-Type: application/json' \
--data '{"prompt": "What is your name?"}'

The output is as follows:

{
  "count": 6,
  "error_code": 0
}

Computing the Conversation Token Length

  Computing the token length of a conversation in FastChat takes a bit more work.
  First, fetch the conversation template of the LLaMA-2 70B model:

curl --location --request POST 'http://localhost:21002/worker_get_conv_template'

The output is as follows:

{'conv': {'messages': [],
          'name': 'llama-2',
          'offset': 0,
          'roles': ['[INST]', '[/INST]'],
          'sep': ' ',
          'sep2': ' </s><s>',
          'sep_style': 7,
          'stop_str': None,
          'stop_token_ids': [2],
          'system_message': 'You are a helpful, respectful and honest '
                            'assistant. Always answer as helpfully as '
                            'possible, while being safe. Your answers should '
                            'not include any harmful, unethical, racist, '
                            'sexist, toxic, dangerous, or illegal content. '
                            'Please ensure that your responses are socially '
                            'unbiased and positive in nature.\n'
                            '\n'
                            'If a question does not make any sense, or is not '
                            'factually coherent, explain why instead of '
                            "answering something not correct. If you don't "
                            "know the answer to a question, please don't share "
                            'false information.',
          'system_template': '[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n'}}

  FastChat's conversation module (fastchat/conversation.py) contains the code that turns a conversation into a prompt; it is not reproduced here. The file has no third-party dependencies, so you can simply copy it wholesale.
  We need to render an OpenAI-style message list into the corresponding prompt. The input conversation (messages) is:

messages = [
    {"role": "system", "content": "You are Jack, you are 20 years old, answer questions with humor."},
    {"role": "user", "content": "What is your name?"},
    {"role": "assistant", "content": " Well, well, well! Look who's asking the questions now! My name is Jack, but you can call me the king of the castle, the lord of the rings, or the prince of the pizza party. Whatever floats your boat, my friend!"},
    {"role": "user", "content": "How old are you?"},
    {"role": "assistant", "content": " Oh, you want to know my age? Well, let's just say I'm older than a bottle of wine but younger than a bottle of whiskey. I'm like a fine cheese, getting better with age, but still young enough to party like it's 1999!"},
    {"role": "user", "content": "Where is your hometown?"},
]

The Python code is as follows:

# -*- coding: utf-8 -*-
# @place: Pudong, Shanghai 
# @file: prompt.py
# @time: 2023/8/8 19:24
from conversation import Conversation, SeparatorStyle

messages = [
    {"role": "system", "content": "You are Jack, you are 20 years old, answer questions with humor."},
    {"role": "user", "content": "What is your name?"},
    {"role": "assistant", "content": " Well, well, well! Look who's asking the questions now! My name is Jack, but you can call me the king of the castle, the lord of the rings, or the prince of the pizza party. Whatever floats your boat, my friend!"},
    {"role": "user", "content": "How old are you?"},
    {"role": "assistant", "content": " Oh, you want to know my age? Well, let's just say I'm older than a bottle of wine but younger than a bottle of whiskey. I'm like a fine cheese, getting better with age, but still young enough to party like it's 1999!"},
    {"role": "user", "content": "Where is your hometown?"},
]

llama2_conv = {"conv":{"name":"llama-2","system_template":"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n","system_message":"You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.","roles":["[INST]","[/INST]"],"messages":[],"offset":0,"sep_style":7,"sep":" ","sep2":" </s><s>","stop_str":None,"stop_token_ids":[2]}}
conv = llama2_conv['conv']

conv = Conversation(
        name=conv["name"],
        system_template=conv["system_template"],
        system_message=conv["system_message"],
        roles=conv["roles"],
        messages=list(conv["messages"]),  # prevent in-place modification
        offset=conv["offset"],
        sep_style=SeparatorStyle(conv["sep_style"]),
        sep=conv["sep"],
        sep2=conv["sep2"],
        stop_str=conv["stop_str"],
        stop_token_ids=conv["stop_token_ids"],
    )

if isinstance(messages, str):
    prompt = messages
else:
    for message in messages:
        msg_role = message["role"]
        if msg_role == "system":
            conv.set_system_message(message["content"])
        elif msg_role == "user":
            conv.append_message(conv.roles[0], message["content"])
        elif msg_role == "assistant":
            conv.append_message(conv.roles[1], message["content"])
        else:
            raise ValueError(f"Unknown role: {msg_role}")

    # Add a blank message for the assistant.
    conv.append_message(conv.roles[1], None)
    prompt = conv.get_prompt()

print(repr(prompt))

The resulting prompt is:

"[INST] <<SYS>>\nYou are Jack, you are 20 years old, answer questions with humor.\n<</SYS>>\n\nWhat is your name?[/INST]  Well, well, well! Look who's asking the questions now! My name is Jack, but you can call me the king of the castle, the lord of the rings, or the prince of the pizza party. Whatever floats your boat, my friend! </s><s>[INST] How old are you? [/INST]  Oh, you want to know my age? Well, let's just say I'm older than a bottle of wine but younger than a bottle of whiskey. I'm like a fine cheese, getting better with age, but still young enough to party like it's 1999! </s><s>[INST] Where is your hometown? [/INST]"

  Finally, calling the prompt token API from the previous section on this prompt reports a token length of 199.
  We can verify this with FastChat's chat completion endpoint (v1/chat/completions):

curl --location 'http://localhost:8000/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
    "model": "llama2-70b-chat",
    "messages": [{"role": "system", "content": "You are Jack, you are 20 years old, answer questions with humor."}, {"role": "user", "content": "What is your name?"},{"role": "assistant", "content": " Well, well, well! Look who'\''s asking the questions now! My name is Jack, but you can call me the king of the castle, the lord of the rings, or the prince of the pizza party. Whatever floats your boat, my friend!"}, {"role": "user", "content": "How old are you?"}, {"role": "assistant", "content": " Oh, you want to know my age? Well, let'\''s just say I'\''m older than a bottle of wine but younger than a bottle of whiskey. I'\''m like a fine cheese, getting better with age, but still young enough to party like it'\''s 1999!"}, {"role": "user", "content": "Where is your hometown?"}]
}'

The output is:

{
    "id": "chatcmpl-mQxcaQcNSNMFahyHS7pamA",
    "object": "chat.completion",
    "created": 1691506768,
    "model": "llama2-70b-chat",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": " Ha! My hometown? Well, that's a tough one. I'm like a bird, I don't have a nest, I just fly around and land wherever the wind takes me. But if you really want to know, I'm from a place called \"The Internet\". It's a magical land where memes and cat videos roam free, and the Wi-Fi is always strong. It's a beautiful place, you should visit sometime!"
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 199,
        "total_tokens": 302,
        "completion_tokens": 103
    }
}

Note that prompt_tokens in the output is 199, which matches the conversation token length we computed above.
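
  When going through v1/chat/completions, the same numbers can be read programmatically from the response's usage block. A minimal sketch (the field names follow the OpenAI-style response shown above):

```python
def usage_summary(chat_response: dict) -> tuple:
    """Return (prompt_tokens, completion_tokens, total_tokens) from an
    OpenAI-style chat completion response."""
    usage = chat_response["usage"]
    return (
        usage["prompt_tokens"],
        usage["completion_tokens"],
        usage["total_tokens"],
    )
```

Applied to the response above, this yields (199, 103, 302), so the server-side prompt accounting can be checked against a locally computed token length.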

Summary

  This article showed how to deploy the LLaMA-2 70B model with FastChat, and walked through computing the token length of a prompt and of a conversation. I hope it is useful.
  One personal takeaway: reading the source code really matters.
  My personal blog is at https://percent4.github.io/ — you are welcome to visit.

References

  1. NLP (59): Deploying the Baichuan LLM with FastChat: https://blog.csdn.net/jclian91/article/details/131650918
  2. FastChat: https://github.com/lm-sys/FastChat
