When downloading pretrained models with transformers, checkpoints like bert-base-cased load without much trouble through AutoTokenizer and AutoModel. Downloading deberta-v3-base, however, can trigger several errors.
First,
from transformers import AutoTokenizer, AutoModel, AutoConfig
checkpoint = 'microsoft/deberta-v3-base'
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
This raises the following error:
ValueError: Couldn't instantiate the backend tokenizer from one of:
(1) a `tokenizers` library serialization file,
(2) a slow tokenizer instance to convert or
(3) an equivalent slow tokenizer class to instantiate and convert.
You need to have sentencepiece installed to convert a slow tokenizer to a fast one.
The fix is to install the missing dependency:
pip install transformers sentencepiece
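Before retrying the import, you can quickly check that the required packages are importable, using only the standard library (a minimal sketch; the package names are taken from the pip command above):

```python
import importlib.util

# The DeBERTa-v3 fast tokenizer is converted from a sentencepiece-based
# slow tokenizer, so both packages must be importable.
for pkg in ("transformers", "sentencepiece"):
    spec = importlib.util.find_spec(pkg)
    print(f"{pkg}: {'installed' if spec is not None else 'MISSING'}")
```

`find_spec` returns None for packages that are not installed, so this reports missing dependencies without raising an ImportError itself.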
After installing, importing the tokenizer again raises another error:
ImportError:
DebertaV2Converter requires the protobuf library but it was not found in your environment.
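The fix follows the same pattern: install the missing dependency. (The version pin in the comment below is an assumption; some transformers/protobuf combinations have needed it, so verify it against your own installed versions.)

```shell
# Install the protobuf runtime that the slow-to-fast tokenizer
# converter (DebertaV2Converter) depends on.
pip install protobuf

# Assumption: if a protobuf 4.x "Descriptors cannot be created directly"
# error appears afterwards, pinning an older protobuf has helped:
# pip install "protobuf<4"
```

With both sentencepiece and protobuf installed, AutoTokenizer.from_pretrained(checkpoint) should succeed.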
That concludes these notes on pitfalls encountered while installing Huggingface Transformers' Deberta-v3-base.