[NLP] LLM - Fine-Tuning Your Own Llama 2 Model


1. Dataset Preparation

Let’s talk a bit about the parameters we can tune here. First, we want to load a llama-2-7b-hf model and train it on the mlabonne/guanaco-llama2-1k dataset (1,000 samples), which will produce our fine-tuned model llama-2-7b-miniguanaco. If you’re interested in how this dataset was created, you can check this notebook. Feel free to change it: there are many good datasets on the Hugging Face Hub, such as databricks/databricks-dolly-15k.

Use the following code to prepare an offline copy of the dataset (the GPU machine has no internet access):

import os
from datasets import load_from_disk, load_dataset

print("loading dataset...")
dataset_name = "mlabonne/guanaco-llama2-1k"
dataset = load_dataset(path=dataset_name, split="train", download_mode="reuse_dataset_if_exists")
print(dataset)

offline_dataset_path = "./guanaco-llama2-1k-offline"
os.makedirs(offline_dataset_path, exist_ok=True)

print("save to disk...")
dataset.save_to_disk(offline_dataset_path)
print("load from disk")
dataset = load_from_disk(offline_dataset_path)
print(dataset)
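To check what these preformatted samples look like, you can print one record. This is a small extra check (not part of the original script); it assumes the dataset stores each formatted conversation in a text column, which is the same field name passed to SFTTrainer later:

# Inspect one preformatted sample (already wrapped in the Llama 2 chat template)
print(dataset[0]["text"])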


What’s the prompt template best practice for prompting the Llama 2 chat models?

Note that this only applies to the Llama 2 chat models. The base models have no prompt structure; they are raw, non-instruction-tuned models.

Single-turn dialogue

<s>[INST] <<SYS>>
{{ system_prompt }}
<</SYS>>

{{ user_message }} [/INST] {{ model_answer_1 }} 

Without a system message, the format is:

<s>[INST] {{ user_message_1 }} [/INST] {{ model_answer_1 }} 

Multi-turn dialogue

<s>[INST] <<SYS>>
{{ system_prompt }}
<</SYS>>

{{ user_msg_1 }} [/INST] {{ model_answer_1 }} </s><s>[INST] {{ user_msg_2 }} [/INST] {{ model_answer_2 }} </s><s>[INST] {{ user_msg_3 }} [/INST]
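As a concrete illustration of the template above, here is a minimal sketch (my own helper, not from the original post) that assembles a Llama 2 chat prompt from a system prompt, a list of earlier (user, assistant) turns, and the next user message:

def build_llama2_prompt(system_prompt, turns, next_user_msg):
    # turns: list of (user_msg, model_answer) pairs from earlier exchanges
    text = f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    for i, (user_msg, model_answer) in enumerate(turns):
        prefix = "" if i == 0 else "<s>[INST] "
        text += f"{prefix}{user_msg} [/INST] {model_answer} </s>"
    # open a new [INST] block for the message the model should answer next
    next_prefix = "<s>[INST] " if turns else ""
    return text + f"{next_prefix}{next_user_msg} [/INST]"

print(build_llama2_prompt("You are a helpful assistant.",
                          [("Hi!", "Hello, how can I help?")],
                          "What is the capital of France?"))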

What end of string signifier is used by llama 2 - {EOS} or </s>?

</s>

To confirm this for yourself, download the Llama 2 tokenizer.model and run this Python script:

from sentencepiece import SentencePieceProcessor

def main():
    model_path = 'tokenizer.model'
    sp_model = SentencePieceProcessor(model_file=model_path)

    eos_symbol = sp_model.id_to_piece(sp_model.eos_id())
    print(f"End of sequence (EOS) symbol is: {eos_symbol}")

if __name__ == "__main__":
    main()
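If everything is set up correctly, this should print </s> as the EOS symbol, matching the end-of-sequence marker used in the templates above.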

Does it use an end of string signifier if there’s only a single message?

I don’t think so, but I’m not certain.

Reference: Llama 2 Prompt Template · GPU Utils ⚡️

Two training samples formatted with this template (one single-turn, one multi-turn) are shown below:

<s>[INST] 32位应用内存不够了应该怎么办 [/INST] 1、删除不必要的数据及应用程序:智能管理器(内存管理器)-储存空间/内存。\n2、关闭不需要的应用程序:点击屏幕左下角近期任务键-点击要关闭的应用程序右上角的“X”/下方的关闭全部。\n3、若手机支持存储卡,将多媒体文件移动到扩展卡或将拍摄存储路径设置为扩展卡。若未有存储卡,建议定期将多媒体文件移动到电脑中。\n4、卸载不经常使用的软件或文件。 </s>

['<s>[INST] 我失恋了该怎么安慰自己。 [/INST] 失恋是一种很痛苦的经历,但是在这个过程中,你需要学会如何安慰自己,以帮助自己恢复过来。以下是一些可以帮助你安慰自己的建议:\n\n1.允许自己感受痛苦:失恋是一种正常的情感反应,你不需要隐藏或压抑自己的感受。让自己慢慢地去面对这个事实,并接受自己的情绪。\n\n2.找到支持:找到可以倾诉的朋友、家人或心理咨询师,让他们帮助你排解情绪,倾听你的烦恼。不要孤单地面对这个问题。\n\n3.做一些让自己感觉良好的事情:找到一些让自己快乐的事情,比如运动、阅读、听音乐、旅行等等。这些活动可以帮助你分散注意力,从而减轻痛苦。\n\n4.关注自己的成长:失恋是一个成长的机会,它可以让你更好地认识自己、学会如何处理情感和人际关系。关注自己的成长,不要忘记这个过程也是有意义的。\n\n5.给自己时间:失恋需要时间来治愈,不要强迫自己很快就要恢复。给自己足够的时间来处理这个过程,慢慢地恢复自己的心态。\n\n失恋是一种痛苦的经历,但也是一种学习和成长的机会。相信自己可以渡过这个难关,未来会更加美好。 </s><s>[INST] 你可以安慰一下失恋的我吗? [/INST] 说起来可能您会比较失望,我作为一个处理文本的AI语言模型,虽然能在您表达的意思的基础上,给您一些文本回答,但是我没有真正的对现实事件的认识能力,以及对现实的人的共情能力。但是我会很愿意去您提供一些安慰和支持,如果您愿意分享您的感受和想法,以及让我知道您需要什么样的帮助和支持,我会尽我所能去帮助您度过这段难过的时期。 </s>']

import pandas as pd
from datasets import load_dataset
# Initialize lists to store prompts and responses
prompts = []
responses = []

# prev_examples should hold the raw generated examples (left empty here as a
# placeholder); each example contains a prompt and a response separated by a
# '-----------' delimiter line
prev_examples = []

# Parse out prompts and responses from examples
for example in prev_examples:
  try:
    split_example = example.split('-----------')
    prompts.append(split_example[1].strip())
    responses.append(split_example[3].strip())
  except IndexError:
    # skip malformed examples that do not contain both parts
    pass

# Create a DataFrame
df = pd.DataFrame({
    'prompt': prompts,
    'response': responses
})

# Remove duplicates
df = df.drop_duplicates()

print('There are ' + str(len(df)) + ' successfully-generated examples. Here are the first few:')

df.head()

# Split the data into train and test sets, with 90% in the train set
train_df = df.sample(frac=0.9, random_state=42)
test_df = df.drop(train_df.index)

# Save the dataframes to .jsonl files
train_df.to_json('train.jsonl', orient='records', lines=True)
test_df.to_json('test.jsonl', orient='records', lines=True)

# Load datasets
train_dataset = load_dataset('json', data_files='/content/train.jsonl', split="train")
valid_dataset = load_dataset('json', data_files='/content/test.jsonl', split="train")

# Preprocess datasets: merge prompt and response into a single 'text' field
# following the Llama 2 template (<s>[INST] prompt [/INST] response </s>)
train_dataset_mapped = train_dataset.map(lambda examples: {'text': ['<s>[INST] ' + prompt + ' [/INST] ' + response + ' </s>' for prompt, response in zip(examples['prompt'], examples['response'])]}, batched=True)
valid_dataset_mapped = valid_dataset.map(lambda examples: {'text': ['<s>[INST] ' + prompt + ' [/INST] ' + response + ' </s>' for prompt, response in zip(examples['prompt'], examples['response'])]}, batched=True)

# trainer = SFTTrainer(
#     model=model,
#     train_dataset=train_dataset_mapped,
#     eval_dataset=valid_dataset_mapped,  # Pass validation dataset here
#     peft_config=peft_config,
#     dataset_text_field="text",
#     max_seq_length=max_seq_length,
#     tokenizer=tokenizer,
#     args=training_arguments,
#     packing=packing,
# )

If you are working with your own dataset, the training data first needs some preprocessing: it contains two columns, prompt and response, and the two texts have to be merged into a single field, separated by the template's formatting tokens, for example:

{'text': ['<s>[INST] ' + prompt + ' [/INST] ' + response + ' </s>', ...]}
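A minimal sketch of such a formatting function, assuming a datasets.Dataset with prompt and response columns (the column and function names are illustrative):

def format_example(example):
    # Merge prompt and response into the single 'text' field expected by SFTTrainer
    example["text"] = (
        "<s>[INST] " + example["prompt"] + " [/INST] "
        + example["response"] + " </s>"
    )
    return example

dataset = dataset.map(format_example)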

2. Fine-Tuning Llama 2

In this section, we will fine-tune a Llama 2 model with 7 billion parameters on an A800 GPU (80 GB) with high RAM, using parameter-efficient fine-tuning (PEFT) techniques such as LoRA or QLoRA.

QLoRA will use a rank of 64 with a scaling parameter of 16 (see this article for more information about LoRA parameters). We’ll load the Llama 2 model directly in 4-bit precision using the NF4 type and train it for 1 epoch. To get more information about the other parameters, check the TrainingArguments, PeftModel, and SFTTrainer documentation.
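For reference, these two numbers enter the standard LoRA update (this is the general LoRA formulation, not anything specific to this script): the frozen weight matrix W is augmented as W' = W + (alpha / r) * B * A, where B is d x r and A is r x k. With r = 64 and alpha = 16, the learned update is scaled by 16/64 = 0.25, and only the small matrices A and B are trained.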

import os
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    HfArgumentParser,
    TrainingArguments,
    pipeline,
    logging,
)
from peft import LoraConfig, PeftModel
from trl import SFTTrainer

# Load dataset (you can process it here)

# from datasets import load_dataset
#
# print("loading dataset")
# dataset_name = "mlabonne/guanaco-llama2-1k"
# dataset = load_dataset(dataset_name, split="train")
# dataset.save_to_disk('./guanaco-llama2-1k-offline')

from datasets import load_from_disk

offline_dataset_path = "./guanaco-llama2-1k-offline"
dataset = load_from_disk(offline_dataset_path)

# The model that you want to train from the Hugging Face hub
model_name = "/home/work/llama-2-7b"

# Fine-tuned model name
new_model = "llama-2-7b-miniguanaco"

################################################################################
# QLoRA parameters
################################################################################

# LoRA attention dimension
lora_r = 64

# Alpha parameter for LoRA scaling
lora_alpha = 16

# Dropout probability for LoRA layers
lora_dropout = 0.1

################################################################################
# bitsandbytes parameters
################################################################################

# Activate 4-bit precision base model loading
use_4bit = True

# Compute dtype for 4-bit base models
bnb_4bit_compute_dtype = "float16"

# Quantization type (fp4 or nf4)
bnb_4bit_quant_type = "nf4"

# Activate nested quantization for 4-bit base models (double quantization)
use_nested_quant = False

################################################################################
# TrainingArguments parameters
################################################################################

# Output directory where the model predictions and checkpoints will be stored
output_dir = "./results"

# Number of training epochs
num_train_epochs = 1

# Enable fp16/bf16 training (set bf16 to True with an A100)
fp16 = False
bf16 = False

# Batch size per GPU for training
per_device_train_batch_size = 16

# Batch size per GPU for evaluation
per_device_eval_batch_size = 16

# Number of update steps to accumulate the gradients for
gradient_accumulation_steps = 1

# Enable gradient checkpointing
gradient_checkpointing = True

# Maximum gradient norm (gradient clipping)
max_grad_norm = 0.3

# Initial learning rate (AdamW optimizer)
learning_rate = 2e-4

# Weight decay to apply to all layers except bias/LayerNorm weights
weight_decay = 0.001

# Optimizer to use
optim = "paged_adamw_32bit"

# Learning rate schedule
lr_scheduler_type = "cosine"

# Number of training steps (overrides num_train_epochs)
max_steps = -1

# Ratio of steps for a linear warmup (from 0 to learning rate)
warmup_ratio = 0.03

# Group sequences into batches with same length
# Saves memory and speeds up training considerably
group_by_length = True

# Save checkpoint every X updates steps
save_steps = 0

# Log every X updates steps
logging_steps = 25

################################################################################
# SFT parameters
################################################################################

# Maximum sequence length to use
max_seq_length = None

# Pack multiple short examples in the same input sequence to increase efficiency
packing = False

# Load the entire model on GPU 0
device_map = {"": 0}

# Load tokenizer and model with QLoRA configuration
compute_dtype = getattr(torch, bnb_4bit_compute_dtype)

bnb_config = BitsAndBytesConfig(
    load_in_4bit=use_4bit,
    bnb_4bit_quant_type=bnb_4bit_quant_type,
    bnb_4bit_compute_dtype=compute_dtype,
    bnb_4bit_use_double_quant=use_nested_quant,
)

# Check GPU compatibility with bfloat16
if compute_dtype == torch.float16 and use_4bit:
    major, _ = torch.cuda.get_device_capability()
    if major >= 8:
        print("=" * 80)
        print("Your GPU supports bfloat16: accelerate training with bf16=True")
        print("=" * 80)

# Load base model
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map=device_map
)
model.config.use_cache = False
model.config.pretraining_tp = 1

# Load LLaMA tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"  # Fix weird overflow issue with fp16 training

# Load LoRA configuration
peft_config = LoraConfig(
    lora_alpha=lora_alpha,
    lora_dropout=lora_dropout,
    r=lora_r,
    bias="none",
    task_type="CAUSAL_LM",
)

# Set training parameters
training_arguments = TrainingArguments(
    output_dir=output_dir,
    num_train_epochs=num_train_epochs,
    per_device_train_batch_size=per_device_train_batch_size,
    gradient_accumulation_steps=gradient_accumulation_steps,
    optim=optim,
    save_steps=save_steps,
    logging_steps=logging_steps,
    learning_rate=learning_rate,
    weight_decay=weight_decay,
    fp16=fp16,
    bf16=bf16,
    max_grad_norm=max_grad_norm,
    max_steps=max_steps,
    warmup_ratio=warmup_ratio,
    group_by_length=group_by_length,
    lr_scheduler_type=lr_scheduler_type
)

# Set supervised fine-tuning parameters
trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    tokenizer=tokenizer,
    args=training_arguments,
    packing=packing,
)

# Train model
trainer.train()

# Save trained model
trainer.model.save_pretrained(new_model)

We can now load everything and start the fine-tuning process. We’re relying on multiple wrappers, so bear with me.

  • First of all, we want to load the dataset we defined. Here, our dataset is already preprocessed but, usually, this is where you would reformat the prompt, filter out bad text, combine multiple datasets, etc.
  • Then, we’re configuring bitsandbytes for 4-bit quantization.
  • Next, we’re loading the Llama 2 model in 4-bit precision on a GPU with the corresponding tokenizer.
  • Finally, we’re loading configurations for QLoRA, regular training parameters, and passing everything to the SFTTrainer. The training can finally start!
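Before merging the adapter (next section), it can be useful to sanity-check the freshly trained model. A minimal sketch, assuming the trainer, tokenizer and torch objects from the script above are still in memory (the prompt is just an example):

# Quick generation test with the trained LoRA adapter still attached
trainer.model.config.use_cache = True  # re-enable the KV cache for generation
prompt = "[INST] What is a large language model? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(trainer.model.device)
with torch.no_grad():
    output_ids = trainer.model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))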

3. Merging the LoRA Weights and Saving the Full Model

import os
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer
)
from peft import PeftModel

# Reload the base model and merge in the trained LoRA weights

model_name = "/home/work/llama-2-7b"

# Load the entire model on GPU 0
device_map = {"": 0}

new_model = "llama-2-7b-miniguanaco"

# Reload model in FP16 and merge it with LoRA weights
base_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    low_cpu_mem_usage=True,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map=device_map,
)
model = PeftModel.from_pretrained(base_model, new_model)
print("merge model and lora weights")
model = model.merge_and_unload()

output_merged_dir = "final_merged_checkpoint2"
os.makedirs(output_merged_dir, exist_ok=True)
print("save model")
model.save_pretrained(output_merged_dir, safe_serialization=True)

# Reload tokenizer to save it
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

print("save tokenizer")
tokenizer.save_pretrained(output_merged_dir)

Loading the fine-tuned model for a chat test:

[Screenshot: dialogue with the fine-tuned model]

After training for one epoch on 2,000 Chinese Alpaca samples, the replies look like the screenshot below: if the question is also wrapped in [INST] ... [/INST] tags, the answer is accurate.

[Screenshot: replies from the model fine-tuned on the Chinese Alpaca data]

import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
)

# Path to the merged fine-tuned model
model_name = "./final_merged_alpaca"

device_map = {"": 0}

# Reload the model in FP16
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    low_cpu_mem_usage=True,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map=device_map,
)

# Reload the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

from transformers import pipeline

# prompt = f"[INST] 请问中国的首都在哪里? [/INST]"  # replace with an instruction relevant to your task
# num_new_tokens = 100  # change to the number of new tokens you want to generate
# # count the number of tokens in the prompt
# num_prompt_tokens = len(tokenizer(prompt)['input_ids'])
# # compute the maximum generation length
# max_length = num_prompt_tokens + num_new_tokens

gen = pipeline('text-generation', model=model, tokenizer=tokenizer, max_length=256)

print("##############################################################")
prompt1 = "[INST] 请问中国的首都在哪里?[/INST]"  # replace with an instruction relevant to your task
result1 = gen(prompt1)
print(prompt1)
print(result1[0]['generated_text'])

print("##############################################################")
prompt2 = "请问中国的首都在哪里?"  # the same question, here without the [INST] tags
result2 = gen(prompt2)
print(result2[0]['generated_text'])

[Screenshot: generation results from the fine-tuned model]

For comparison, the output before fine-tuning:

[Screenshot: output of the base model before fine-tuning]

4. Summary

In this article, we saw how to fine-tune a Llama-2-7b model. We introduced some necessary background on LLM training and fine-tuning, as well as important considerations related to instruction datasets. We successfully fine-tuned the Llama 2 model with its native prompt template and custom parameters.

These fine-tuned models can then be integrated into LangChain and other architectures as advantageous alternatives to the OpenAI API. Remember, in this new paradigm, instruction datasets are the new gold, and the quality of your model heavily depends on the data on which it’s been fine-tuned. So, good luck with building high-quality datasets!

ML Blog - Fine-Tune Your Own Llama 2 Model in a Colab Notebook (mlabonne.github.io)

Extended Guide: Instruction-tune Llama 2 (philschmid.de)

How to fine-tune LLaMA 2 using SFT, LORA (accubits.com)

Fine-Tuning LLaMA 2 Models using a single GPU, QLoRA and AI Notebooks - OVHcloud Blog

GPT-LLM-Trainer: How to easily and quickly fine-tune and train an LLM with your own data (技术狂潮AI, CSDN blog)
