Training a Transformer model on an AMD GPU under Windows. For the installation procedure, see znsoft's CSDN post "Windows下用amd显卡训练 : Pytorch-directml 重大升级,改为pytorch插件形式,兼容更好" (Pytorch-directml major upgrade: now shipped as a PyTorch plugin, with better compatibility).
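The short version: the DirectML backend ships as a separate package that plugs into a regular CPU build of PyTorch. A minimal install sketch (check the linked post for the exact PyTorch version pairing):

pip install torch-directml

A quick way to confirm the plugin can see your GPU, using the package's device_count() helper:

import torch_directml
print(torch_directml.device_count())  # should be >= 1 if the AMD GPU is visible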
import os

# Use the DirectML plugin if it is installed; otherwise fall back to the CPU.
# (A plain try/import replaces the deprecated imp.find_module probe.)
try:
    import torch_directml
    found_directml = True
except ImportError:
    found_directml = False
import torch
from transformers import RobertaTokenizer, RobertaForMaskedLM
DIR="E:/transformers"
MODEL_NAME="microsoft/codebert-base"
from transformers import AutoTokenizer, AutoModel
# Prefer the DirectML device (the AMD GPU) when available, else run on the CPU.
if found_directml:
    device = torch_directml.device()
else:
    device = torch.device("cpu")
# Optional example (kept commented out): extracting CodeBERT context embeddings.
# tokenizer = AutoTokenizer.from_pretrained(DIR + os.sep + MODEL_NAME)
# model = AutoModel.from_pretrained(DIR + os.sep + MODEL_NAME).to(device)
# nl_tokens = tokenizer.tokenize("return maximum value")
# code_tokens = tokenizer.tokenize("def max(a,b): if a>b: return a else return b")
# tokens = [tokenizer.cls_token] + nl_tokens + [tokenizer.sep_token] + code_tokens + [tokenizer.eos_token]
# tokens_ids = tokenizer.convert_tokens_to_ids(tokens)
# tokens_ids = torch.tensor(tokens_ids)[None, :]
# tokens_ids = tokens_ids.to(device)           # .to() is not in-place; reassign the result
# context_embeddings = model(tokens_ids)[0]    # the original called model() without any input
# print(context_embeddings)
MODEL_NAME="microsoft/codebert-base-mlm"
model = RobertaForMaskedLM.from_pretrained(DIR+os.sep+MODEL_NAME)
tokenizer = RobertaTokenizer.from_pretrained(DIR+os.sep+MODEL_NAME)
model.to(device)
CODE = "if (x is not None) <mask> (x>1)"
code=tokenizer(CODE)
#.to(device)
input_ids=torch.tensor([code["input_ids"]]).to(device)
attention_mask=torch.tensor([code["attention_mask"]]).to(device)
for i in range(1000):
out=model(input_ids=input_ids,attention_mask=attention_mask)
print(out)
Note that calling pipeline directly may fail here, most likely because pipeline is not compatible with the DirectML device. Simply write the inference code yourself, as above, and avoid pipeline; AMD GPU utilization then rises as expected.
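If you still want the fill-mask result that pipeline would normally give, a minimal hand-rolled sketch of the decoding step could look like this (reusing model, tokenizer, input_ids, and out from the code above; tensors are moved back to the CPU first, since not every op is guaranteed to be implemented on the DirectML device):

logits = out.logits[0].cpu()                      # per-position vocabulary scores
ids = input_ids[0].cpu()
mask_pos = (ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[0][0]
top_ids = torch.topk(logits[mask_pos], k=5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))   # a token like "and" should rank high

This is essentially what pipeline("fill-mask") does internally: read the logits at the <mask> position and take the top-k vocabulary entries.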