Abstract
The landscape of image generation has been forever changed by open vocabulary diffusion models.
However, at their core these models use transformers, which makes generation slow. Better implementations to increase the throughput of these transformers have emerged, but they still evaluate the entire model.
In this paper, we instead speed up diffusion models by exploiting the natural redundancy in generated images: we merge redundant tokens.
After making some diffusion-specific improvements to Token Merging (ToMe), our ToMe for Stable Diffusion can reduce the number of tokens in an existing Stable Diffusion model by up to 60% while still producing high quality images without any extra training.
In the process, we speed up image generation by up to 2× and reduce memory consumption by up to 5.6×.
Furthermore, this speed-up stacks with efficient implementations such as xFormers.
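To illustrate the core idea of merging redundant tokens, here is a minimal NumPy sketch of a ToMe-style merge step. It is a simplification of the paper's bipartite soft matching, not the authors' implementation: tokens are split into two alternating sets, each token in the first set is matched to its most cosine-similar partner in the second, and the `r` most similar pairs are averaged together, shrinking the sequence by `r` tokens. When several tokens merge into the same partner, this sketch averages them sequentially rather than computing an exact mean.

```python
import numpy as np

def merge_tokens(tokens: np.ndarray, r: int) -> np.ndarray:
    """Merge the r most similar token pairs by averaging.

    tokens: (N, C) array of token features; returns (N - r, C).
    Simplified ToMe-style bipartite matching: set A = even tokens,
    set B = odd tokens; merges only happen from A into B.
    """
    a, b = tokens[0::2], tokens[1::2]

    # Cosine similarity between every token in A and every token in B.
    an = a / np.linalg.norm(a, axis=1, keepdims=True)
    bn = b / np.linalg.norm(b, axis=1, keepdims=True)
    sim = an @ bn.T  # shape (len(a), len(b))

    # Each A-token's best match in B, and the similarity of that match.
    best = sim.argmax(axis=1)
    score = sim.max(axis=1)

    # Merge the r A-tokens with the highest-similarity matches.
    merged_idx = np.argsort(-score)[:r]
    keep_a = np.setdiff1d(np.arange(len(a)), merged_idx)

    out_b = b.copy()
    for i in merged_idx:
        # Fold the merged A-token into its B partner (approximate average).
        out_b[best[i]] = (out_b[best[i]] + a[i]) / 2

    return np.concatenate([a[keep_a], out_b], axis=0)
```

With `N = 4096` tokens (a 64×64 latent) and a 60% reduction as in the abstract, attention cost, which is quadratic in token count, drops substantially; the sketch above only shows the matching-and-averaging step, while the paper's method also unmerges tokens afterwards so the U-Net's skip connections and output resolution are preserved.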