Notes on the LLM Survey Paper (Sections 1-5)


Keywords

PLMs: pre-trained language models
NLP: natural language processing
LLMs: large language models
LM: language modeling
AI: artificial intelligence
SLMs: statistical language models
NLMs: neural language models
RNNs: recurrent neural networks
ELMo: Embeddings from Language Models
AGI: artificial general intelligence
ICL: in-context learning

Introduction

https://github.com/RUCAIBox/LLMSurvey

SLM

The basic idea of SLMs is the Markov assumption: the next word is predicted from only the most recent context. SLMs with a fixed context length n are also called n-gram language models.

Bottleneck: the curse of dimensionality. Because the number of transition probabilities to estimate grows exponentially with the context length, SLMs cannot accurately estimate higher-order language models.

Derived techniques: backoff estimation and Good-Turing estimation, introduced to alleviate the data sparsity problem.
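To make the data-sparsity issue concrete, here is a minimal bigram (n = 2) language model sketch; the toy corpus, variable names, and the choice of add-one (Laplace) smoothing are illustrative assumptions, not the survey's method:

```python
from collections import Counter, defaultdict

# Toy corpus; in practice an n-gram LM is estimated from a large text collection.
corpus = ["the cat sat on the mat", "the dog sat on the rug"]

bigram_counts = defaultdict(Counter)
context_counts = Counter()
for sentence in corpus:
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    for prev, cur in zip(tokens, tokens[1:]):
        bigram_counts[prev][cur] += 1
        context_counts[prev] += 1

vocab = set(context_counts) | {w for c in bigram_counts.values() for w in c}

def bigram_prob(prev, cur, alpha=1.0):
    """P(cur | prev) with add-one (Laplace) smoothing so unseen bigrams get non-zero mass."""
    return (bigram_counts[prev][cur] + alpha) / (context_counts[prev] + alpha * len(vocab))

print(bigram_prob("the", "cat"))   # seen bigram -> relatively high probability
print(bigram_prob("cat", "dog"))   # unseen bigram -> small but non-zero probability
```

With a context length of n, the number of possible contexts grows as |V|^(n-1), which is why higher-order models quickly become impossible to estimate reliably without smoothing or backoff.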

NLM

NLMs characterize the probability of word sequences with neural networks (e.g., RNNs). They initiated the use of language models for representation learning, going beyond plain word sequence modeling.

distributed representation of words
word prediction function conditioned on distributed word vectors
word2vec
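For concreteness, a minimal sketch of learning distributed word representations with the gensim implementation of word2vec; the toy corpus and hyperparameter values are illustrative assumptions:

```python
from gensim.models import Word2Vec

# Toy tokenized corpus; real word2vec training uses billions of tokens.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
]

# Skip-gram (sg=1) with small illustrative hyperparameters.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=50)

vec = model.wv["cat"]                         # a 50-dimensional distributed representation
print(model.wv.most_similar("cat", topn=3))   # nearest neighbors in the embedding space
```

The learned vectors are static: each word gets one vector regardless of context, which is the limitation PLMs such as ELMo later address.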

PLM

ELMo captures context-aware word representations with a bidirectional LSTM (biLSTM) network, which can then be fine-tuned for specific downstream tasks.

BERT is pre-trained with specially designed pre-training tasks on large-scale unlabeled corpora.
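A minimal sketch of obtaining context-aware representations from a pre-trained BERT checkpoint with the Hugging Face transformers library; the checkpoint name and example sentence are illustrative choices:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load a pre-trained BERT checkpoint (illustrative choice).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("The bank raised interest rates.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One context-dependent vector per token; "bank" here gets a different vector
# than it would in a sentence about a river bank.
token_embeddings = outputs.last_hidden_state   # shape: (1, seq_len, 768)
print(token_embeddings.shape)
```

For a downstream task, a task-specific head is attached on top of these representations and the whole model is fine-tuned on labeled data.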

LLM

LLMs are obtained by scaling PLMs (in model size or data size).

Three differences between PLMs and LLMs:
1. LLMs display surprising emergent abilities that may not be observed in smaller PLMs.
2. LLMs are mainly accessed through a prompting interface (e.g., the GPT-4 API).
3. The development of LLMs no longer draws a clear line between research and engineering; training them requires hands-on experience with large-scale data processing and distributed parallel training.

Background for LLMs

LLMs refer to Transformer language models that contain hundreds of billions (or more) of parameters, which are trained on massive text data.

Scaling Laws for LLMs

LLMs largely adopt the same Transformer architecture and the same pre-training tasks as small language models, only at a much larger scale.

KM scaling law

The KM scaling law characterizes neural language model performance in terms of three factors: model size (N), dataset size (D), and the amount of training compute (C).

The three laws were derived by fitting model performance with varied data sizes (22M to 23B tokens), model sizes (768M to 1.5B non-embedding parameters), and training compute, under some assumptions (e.g., the analysis of one factor should not be bottlenecked by the other two factors).

Chinchilla scaling law

They (Hoffmann et al., the Chinchilla team) conducted rigorous experiments by varying a larger range of model sizes (70M to 16B parameters) and data sizes (5B to 500B tokens), and fitted a similar scaling law, yet with different coefficients.

With an increasing compute budget, the KM scaling law favors allocating more of the budget to model size than to data size, whereas the Chinchilla scaling law argues that the two should be scaled up in roughly equal proportions (see the formulas below).
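As a sketch of where the "equal scaling" conclusion comes from, the Chinchilla paper fits a joint loss of the form below and then minimizes it under a fixed compute budget C ≈ 6ND; the constants are the published estimates, quoted from memory and therefore approximate:

$$L(N,D)=E+\frac{A}{N^{\alpha}}+\frac{B}{D^{\beta}},\qquad E\approx 1.69,\; A\approx 406.4,\; B\approx 410.7,\; \alpha\approx 0.34,\; \beta\approx 0.28$$

$$N_{\mathrm{opt}}(C)\propto C^{a},\quad D_{\mathrm{opt}}(C)\propto C^{b},\qquad a=\frac{\beta}{\alpha+\beta},\; b=\frac{\alpha}{\alpha+\beta}$$

Since a and b are both close to 0.5, the compute-optimal model size and data size grow at roughly the same rate.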

Open issue:

However, some abilities (e.g., in-context learning) are
unpredictable according to the scaling law, which can be observed only
when the model size exceeds a certain level (as discussed below).

Emergent Abilities of LLMs

emergent abilities of LLMs are formally defined as “the abilities that
are not present in small models but arise in large models”

three typical emergent abilities for LLMs:

In-context learning

https://blog.csdn.net/c9Yv2cf9I06K2A9E/article/details/129311991
(linked post: a complete guide to ICL techniques)

assuming that the language model has been provided with a natural
language instruction and/or several task demonstrations, it can
generate the expected output for the test instances by completing the
word sequence of input text, without requiring additional training or
gradient update
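A minimal sketch of how an in-context learning prompt is assembled from a few demonstrations plus a test instance; the task, demonstrations, and formatting are illustrative. No gradient update is involved: the completed string is simply sent to a frozen LLM, which continues the word sequence with the expected label.

```python
demonstrations = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I regret wasting two hours on this film.", "negative"),
]
test_input = "The plot was thin, but the acting saved it."

# Build a few-shot prompt: instruction + demonstrations + the test instance.
prompt = "Classify the sentiment of each review as positive or negative.\n\n"
for text, label in demonstrations:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {test_input}\nSentiment:"

print(prompt)  # send this string to a frozen LLM; it completes it with the label
```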

Instruction following

By fine-tuning with a mixture of multi-task datasets formatted via
natural language descriptions (called instruction tuning), LLMs are
shown to perform well on unseen tasks that are also described in the
form of instructions
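A hedged sketch of how a multi-task example might be reformatted into an (instruction, input, output) record for instruction tuning; the field names mirror common open instruction datasets but are an assumption here, not the survey's specification:

```python
# A raw NLI example from some multi-task dataset (illustrative).
raw_example = {
    "task": "nli",
    "premise": "A man is playing a guitar on stage.",
    "hypothesis": "A person is performing music.",
    "label": "entailment",
}

# Reformat it with a natural-language task description, so that unseen tasks
# can later be described to the model in the same instruction format.
instruction_record = {
    "instruction": "Decide whether the hypothesis is entailed by the premise. "
                   "Answer with entailment, neutral, or contradiction.",
    "input": f"Premise: {raw_example['premise']}\nHypothesis: {raw_example['hypothesis']}",
    "output": raw_example["label"],
}

print(instruction_record["instruction"])
print(instruction_record["input"])
print(instruction_record["output"])
```

The model is then fine-tuned on many such records across many tasks, which is what lets it generalize to new tasks described only by an instruction.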

Step-by-step reasoning

with the chain-of-thought (CoT) prompting strategy, LLMs can solve
such tasks by utilizing the prompting mechanism that involves
intermediate reasoning steps for deriving the final answer
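A small sketch contrasting a standard few-shot prompt with a chain-of-thought prompt for a multi-step arithmetic question; the demonstration and its written-out reasoning are invented for illustration:

```python
question = "A farm has 15 cows. It buys 8 more and then sells 5. How many cows are left?"

# Standard few-shot prompt: the demonstration shows only the final answer.
standard_prompt = (
    "Q: Tom has 3 apples and buys 4 more. How many apples does he have?\n"
    "A: 7\n\n"
    f"Q: {question}\nA:"
)

# Chain-of-thought prompt: the demonstration includes intermediate reasoning steps,
# which encourages the LLM to derive the final answer step by step.
cot_prompt = (
    "Q: Tom has 3 apples and buys 4 more. How many apples does he have?\n"
    "A: Tom starts with 3 apples. Buying 4 more gives 3 + 4 = 7. The answer is 7.\n\n"
    f"Q: {question}\nA:"
)
```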

Key Techniques for LLMs

Scaling

larger model/data sizes and more training compute typically lead to an improved model capacity

Training

To support distributed training, several optimization frameworks have been released to facilitate the implementation and deployment of parallel algorithms, such as DeepSpeed and Megatron-LM
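As a hedged illustration of distributed training, here is a plain PyTorch DistributedDataParallel sketch; it is not the DeepSpeed or Megatron-LM API itself (those frameworks add further optimizations such as ZeRO partitioning and tensor/pipeline parallelism), and the model and data are stand-ins:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Launched with `torchrun --nproc_per_node=N train.py`; torchrun sets LOCAL_RANK.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Tiny stand-in model; an LLM would be a Transformer with billions of parameters.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])  # gradients are all-reduced across GPUs
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):  # dummy training loop with random data
        x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```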

Ability eliciting

These abilities might not be explicitly exhibited when LLMs perform some specific tasks. As a technical approach, it is useful to design suitable task instructions or specific in-context learning strategies to elicit such abilities.

Alignment tuning

Because LLMs capture the data characteristics of their pre-training corpora, they are likely to generate toxic, biased, or even harmful content, so it is necessary to align LLMs with human values.
InstructGPT designs an effective tuning approach that enables LLMs to follow expected instructions, utilizing reinforcement learning with human feedback (RLHF).
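The RL stage of RLHF, as popularized by InstructGPT, roughly optimizes a reward-maximization objective with a KL penalty that keeps the tuned policy close to the supervised fine-tuned reference model. The formulation below is the commonly cited one (omitting InstructGPT's extra pre-training loss term) and is a paraphrase rather than a quote from the survey:

$$\max_{\pi_\theta}\; \mathbb{E}_{x\sim\mathcal{D},\; y\sim\pi_\theta(\cdot\mid x)}\left[r_\phi(x,y)-\beta\log\frac{\pi_\theta(y\mid x)}{\pi_{\mathrm{ref}}(y\mid x)}\right]$$

Here $r_\phi$ is the reward model learned from human preference comparisons, $\pi_{\mathrm{ref}}$ is the supervised fine-tuned policy, and $\beta$ controls how strongly the policy is kept near the reference.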

Tools manipulation

For example, LLMs can utilize a calculator for accurate computation and employ search engines to retrieve information beyond their training data.
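A hedged sketch of the basic tool-manipulation loop: the model emits a structured tool call, the surrounding program executes it, and the result is fed back. The tool-call syntax, the `call_llm` stub, and the dispatch logic are all illustrative assumptions, not a real plugin API:

```python
import re

def calculator(expression: str) -> str:
    """A toy calculator tool; a real system would use a safe math parser."""
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError("unsupported expression")
    return str(eval(expression))  # acceptable only for this restricted character set

TOOLS = {"Calculator": calculator}

def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call; here it always requests the calculator."""
    return "To answer, I need Calculator[123 * 456]."

def run_with_tools(user_question: str) -> str:
    model_output = call_llm(user_question)
    # Look for a tool invocation of the form ToolName[arguments].
    match = re.search(r"(\w+)\[(.+?)\]", model_output)
    if match and match.group(1) in TOOLS:
        tool_result = TOOLS[match.group(1)](match.group(2))
        # In a real system the tool result would be appended to the prompt and the
        # LLM called again; here we just return it to show the data flow.
        return f"Tool result: {tool_result}"
    return model_output

print(run_with_tools("What is 123 * 456?"))
```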
