/lmg/ Model Links and Torrents

  1. Changelog (MDY)
  2. 4-bit GPU Model Requirements
  3. 4-bit CPU/llama.cpp RAM Requirements
  4. LLaMA 16-bit Weights
  5. LLaMA 4-bit Weights
  6. BluemoonRP 13B (05/07/2023)
  7. Vicuna 13B Cocktail (05/07/2023)
  8. GPT4-x-AlpacaDente2-30B (05/05/2023)
  9. Vicuna 13B Free v1.1 (05/01/2023)
  10. Pygmalion/Metharme 7B (04/30/2023)
  11. GPT4-X-Alpasta 30B (04/29/2023)
  12. OpenAssistant LLaMa 30B SFT 6 (04/23/2023)
  13. SuperCOT (04/22/2023)
  14. Previous Model List

Changelog (MDY)

[05-07-2023] - Added Vicuna 13B Cocktail, bluemoonrp-13b & AlpacaDente2
[05-05-2023] - Added CPU quantization variation links
[05-02-2023] - Initial Rentry

4-bit GPU Model Requirements

VRAM Required takes full context (2048 tokens) into account. You may be able to load the model on a GPU with slightly less VRAM, but you will not be able to run at full context. If you do not have enough RAM to load the model, it will spill into swap. Groupsize models increase VRAM usage, as does running a LoRA alongside the model.

| Model Parameters | VRAM Required | GPU Examples | RAM to Load |
|---|---|---|---|
| 7B | 8 GB | GTX 1660, RTX 2060, AMD RX 5700 XT, RTX 3050, RTX 3060, RTX 3070 | 6 GB |
| 13B | 12 GB | AMD RX 6900 XT, RTX 2060 12GB, RTX 3060 12GB, RTX 3080 12GB, A2000 | 12 GB |
| 30B | 24 GB | RTX 3090, RTX 4090, A4500, A5000, A6000, Tesla V100 | 32 GB |
| 65B | 42 GB | A100 80GB, Quadro RTX 8000, RTX A6000 | 64 GB |
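
As a rough sanity check (my own back-of-the-envelope arithmetic, not figures from the original list), 4-bit weights cost about half a byte per parameter, plus a KV cache that grows with context length:

```python
# Rough 4-bit VRAM estimate; an approximation, not a formula from the list.
def vram_gb(params_b: float, ctx: int = 2048, layers: int = 40, hidden: int = 5120) -> float:
    weights = params_b * 1e9 * 0.5            # ~0.5 bytes per parameter at 4-bit
    kv_cache = 2 * layers * hidden * ctx * 2  # K and V caches in fp16 (2 bytes each)
    return (weights + kv_cache) * 1.2 / 1e9   # ~20% headroom for activations

# LLaMA-13B has 40 layers and hidden size 5120; this prints roughly 10 GB,
# in the same ballpark as the table's 12 GB (which includes extra margin).
print(f"13B @ 2048 ctx: ~{vram_gb(13):.1f} GB")
```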

4-bit CPU/llama.cpp RAM Requirements

Quantized models at 5-bit to 8-bit are becoming more common and will obviously require more RAM. These columns will be filled in when the numbers are available.

| Model | 4-bit | 5-bit | 8-bit |
|---|---|---|---|
| 7B | 3.9 GB | — | — |
| 13B | 7.8 GB | — | — |
| 30B | 19.5 GB | — | — |
| 65B | 38.5 GB | — | — |
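
For a rough sense of where the 5-bit and 8-bit columns will land (my own estimate from the ggml block formats, not numbers from the list):

```python
# Approximate GGML file sizes from effective bits per weight. Block formats
# store fp16 scales alongside the weights, so q4_0 costs ~4.5 bits/weight
# rather than a flat 4.0 (likewise q5_0 ~5.5 and q8_0 ~8.5).
BITS_PER_WEIGHT = {"q4_0": 4.5, "q5_0": 5.5, "q8_0": 8.5}

def size_gb(params_billions: float, quant: str) -> float:
    return params_billions * 1e9 * BITS_PER_WEIGHT[quant] / 8 / 1e9

for q in BITS_PER_WEIGHT:
    print(f"7B {q}: ~{size_gb(7, q):.1f} GB")  # q4_0 comes out ≈ 3.9 GB, matching the table
```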

LLaMA 16-bit Weights

The original LLaMA weights converted to Transformers @ 16bit. A torrent is available as well, but it uses outdated configuration files that will need to be updated. Note that these aren’t for general use, as the VRAM requirements are beyond consumer scope.

Filtering Status : None

| Model | Type | Download |
|---|---|---|
| 7B | 16-bit HF Format | HuggingFace |
| 13B | 16-bit HF Format | HuggingFace |
| 30B | 16-bit HF Format | HuggingFace |
| 65B | 16-bit HF Format | HuggingFace |
| All the above | HF Format | [Torrent Magnet](magnet:?xt=urn:btih:8d634925911a03f787d9f68ac075a9b24281573a&dn=Safe-LLaMA-HF-v2 (4-04-23)&tr=http%3a%2f%2fbt2.archive.org%3a6969%2fannounce&tr=http%3a%2f%2fbt1.archive.org%3a6969%2fannounce) |
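
Loading the HF-format weights follows the standard transformers flow; a minimal sketch, assuming the converted weights sit in a local directory (the path is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "./llama-13b-hf"  # illustrative local path to the converted weights
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(
    path,
    torch_dtype=torch.float16,  # load in 16-bit, as distributed
    device_map="auto",          # requires `accelerate`; spreads layers across GPUs
)
```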

LLaMA 4-bit Weights

The original LLaMA weights quantized to 4-bit. The GPU CUDA versions have outdated tokenizer and configuration files; it is recommended to either update them with fixed files or to use the universal LLaMA tokenizer.

Filtering Status : None

| Model | Type | Download |
|---|---|---|
| 7B, 13B, 30B, 65B | CPU | Torrent Magnet |
| 7B, 13B, 30B, 65B | GPU CUDA (no groupsize) | Torrent Magnet |
| 7B, 13B, 30B, 65B | GPU CUDA (128gs) | Torrent Magnet |
| 7B, 13B, 30B, 65B | GPU Triton | Neko Institute of Science HF page |
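
The CPU weights are ggml files for llama.cpp; one way to run them is through the llama-cpp-python bindings (a sketch — the bindings are my suggestion, not from the list, and the model path is illustrative):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="./models/7B/ggml-model-q4_0.bin",  # illustrative path to a 4-bit ggml file
    n_ctx=2048,                                    # full context, per the table above
)
out = llm("Q: What does 4-bit quantization trade away? A:", max_tokens=128)
print(out["choices"][0]["text"])
```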

BluemoonRP 13B (05/07/2023)

An RP/ERP-focused finetune of LLaMA 13B trained on BluemoonRP logs. It is designed to simulate a two-person RP session. Two versions are provided: a standard 13B with 2K context and an experimental 13B with 4K context. It uses a non-standard prompt format (LEAD/ASSOCIATE), so make sure you read the model card and use the correct syntax.

Filtering Status : Very light

| Model | Type | Download |
|---|---|---|
| 13B | GPU & CPU | https://huggingface.co/reeducator/bluemoonrp-13b |
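
The model card is the authority on the exact syntax; purely as a guess at the two-speaker shape, it looks something like:

```
LEAD: The tavern door creaks open and a hooded stranger steps inside.
ASSOCIATE: The barkeep sets down her rag and eyes the newcomer warily.
```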

Vicuna 13B Cocktail (05/07/2023)

A Vicuna 1.1 13B finetune incorporating various datasets in addition to the unfiltered ShareGPT. This is an experiment attempting to enhance the creativity of Vicuna 1.1 while also reducing censorship as much as possible. All datasets have been cleaned, and only the “instruct” portion of GPTeacher has been used. It uses a non-standard prompt format (USER/ASSOCIATE), so make sure you read the model card and use the correct syntax.

Filtering Status : Very light

| Model | Type | Download |
|---|---|---|
| 13B | GPU & CPU | https://huggingface.co/reeducator/vicuna-13b-cocktail |
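
Again, defer to the model card for the exact template; a guessed illustration of the USER/ASSOCIATE turn structure:

```
USER: Write a limerick about quantization.
ASSOCIATE: There once was a model so wide...
```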

GPT4-x-AlpacaDente2-30B (05/05/2023)

ChanSung’s Alpaca-LoRA-30B-elina merged with Open Assistant’s second finetune. Testing in progress.

Filtering Status : Medium

| Model | Type | Download |
|---|---|---|
| 30B | GGML CPU | Q5 |
| 30B | GPU | Q4 CUDA (https://huggingface.co/askmyteapot/GPT4-x-AlpacaDente2-30b-4bit) |

Vicuna 13B Free v1.1 (05/01/2023)

A work-in-progress, community-driven attempt to make an unfiltered version of Vicuna. It currently has an early-stopping bug, and a partial workaround has been posted on the repo’s model card.

Filtering Status : Very light

| Model | Type | Download |
|---|---|---|
| 13B | GPU & CPU | https://huggingface.co/reeducator/vicuna-13b-free |

Pygmalion/Metharme 7B (04/30/2023)

Pygmalion 7B is a dialogue model that uses LLaMA-7B as a base. The dataset includes RP/ERP content. Metharme 7B is an experimental instruct-tuned variation, which can be guided using natural language like other instruct models.

PygmalionAI intends to use the same dataset on the higher-parameter LLaMA models. No ETA yet.

Filtering Status : None

| Model | Type | Download |
|---|---|---|
| 7B Pygmalion/Metharme | XOR | https://huggingface.co/PygmalionAI/ |
| 7B Pygmalion | GGML CPU | Q4, Q5, Q8 |
| 7B Metharme | GGML CPU | Q4, Q5 |
| 7B Pygmalion | GPU | Q4 Triton, Q4 CUDA 128gs |
| 7B Metharme | GPU | Q4 Triton, Q4 CUDA |
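
The XOR release distributes only a byte-level difference against the original LLaMA weights, so the repo contains nothing usable on its own. The repo ships its own conversion script; the sketch below is just the underlying idea (file names are illustrative, and the real tooling also verifies checksums):

```python
def apply_xor(base_path: str, xor_path: str, out_path: str, chunk: int = 1 << 20) -> None:
    """XOR a delta file against the base weights to recover the release weights.

    Assumes both files are exactly the same length.
    """
    with open(base_path, "rb") as base, open(xor_path, "rb") as delta, \
         open(out_path, "wb") as out:
        while True:
            a, b = base.read(chunk), delta.read(chunk)
            if not a:
                break
            out.write(bytes(x ^ y for x, y in zip(a, b)))

# Illustrative file names, not the repo's actual layout:
apply_xor("llama-7b.bin", "pygmalion-7b.xor", "pygmalion-7b.bin")
```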

GPT4-X-Alpasta 30B (04/29/2023)

An attempt at improving Open Assistant’s performance as an instruct model while retaining its excellent prose. The merge consists of ChanSung’s GPT4-Alpaca LoRA and Open Assistant’s native finetune.

It is an extremely coherent model for logic-based instruct outputs, and while the prose is generally very good, it does suffer from the “Assistant” personality bleed-through that plagues the OpenAssistant dataset, which can give you dry dialogue for creative writing/chatbot purposes. However, several accounts claim it is nowhere near as bad as OA’s finetunes, and that the gains in prose and coherence make up for it.

Filtering Status : Medium

| Model | Type | Download |
|---|---|---|
| 30B 4bit | CPU & GPU CUDA | https://huggingface.co/MetaIX/GPT4-X-Alpasta-30b-4bit |

OpenAssistant LLaMa 30B SFT 6 (04/23/2023)

An open-source alternative to OpenAI’s ChatGPT/GPT-3.5 Turbo. However, it seems to suffer from overfitting and is heavily filtered. Not recommended for creative writing or chatbots, given that the “assistant” personality constantly bleeds through, giving you dry dialogue.

Filtering Status : Heavy

| Model | Type | Download |
|---|---|---|
| 30B | XOR | https://huggingface.co/OpenAssistant/oasst-sft-6-llama-30b-xor |
| 30B | GGML CPU | Q4 |
| 30B | GPU | Q4 CUDA, Q4 CUDA 128gs |

SuperCOT (04/22/2023)

SuperCOT is a LoRA trained with the aim of making LLaMA follow prompts for LangChain better, by infusing chain-of-thought datasets, code explanations and instructions, snippets, logical deductions and Alpaca GPT-4 prompts.

Though designed to improve LangChain use, it is quite versatile and works very well for other tasks like creative writing and chatbots. The author also pruned a number of filters from the datasets. As of early May 2023, it is the most recommended model on /lmg/.

Filtering Status : Very light

| Model | Type | Download |
|---|---|---|
| Original LoRA | LoRA | https://huggingface.co/kaiokendev/SuperCOT-LoRA |
| 13B | GGML CPU | Q4, Q8 |
| 30B | GGML CPU | Q4, Q5, Q8 |
| 13B | GPU | Q4 CUDA 128gs |
| 30B | GPU | Q4 CUDA, Q4 CUDA 128gs |
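
To apply the original LoRA yourself rather than grab a pre-merged checkpoint, the usual route is PEFT on top of 16-bit base weights (a sketch under assumptions: the base path is illustrative, and the LoRA repo’s layout may require pointing at a subfolder):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "./llama-30b-hf",              # illustrative path to HF-format base weights
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "kaiokendev/SuperCOT-LoRA")
model = model.merge_and_unload()   # optional: bake the LoRA into the base weights
```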
