Unifying Large Language Models and Knowledge Graphs: A Roadmap (Paper Reading Notes)


Keywords:

NLP, LLM, Generative Pre-training, KGs, Roadmap, Bidirectional Reasoning

Abstract:

LLMs are black-box models and often fail to capture and access factual knowledge. KGs are structured knowledge models that explicitly store rich factual knowledge. The roadmap covers three frameworks for combining the two:

  1. KG-enhanced LLMs: incorporate KGs at the pre-training and inference stages to provide external knowledge, and use KGs for analyzing LLMs and improving interpretability.

  2. LLM-augmented KGs: apply LLMs to KG tasks such as KG embedding, KG completion, KG construction, KG-to-text generation, and KGQA.

  3. Synergized LLMs + KGs: LLMs and KGs work in a mutually beneficial way to enhance performance in both knowledge representation and reasoning.

Background

Introduction of LLMs

Encoder-only LLMs

These models use only the encoder to encode the sentence and model the relationships between words. The pre-training objective is to predict masked words in an input sentence, which suits understanding tasks such as text classification and named entity recognition.
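A minimal sketch of masked-word prediction, assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint (illustrative choices, not prescribed by the paper):

```python
# Masked-word prediction with an encoder-only model (BERT).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The encoder reads the whole sentence bidirectionally and ranks
# candidate tokens for the [MASK] position.
for candidate in fill_mask("The capital of France is [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))
```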

Encoder-decoder LLMs

These models adopt both encoder and decoder modules: the encoder encodes the input into a hidden space, and the decoder generates the target output text. Typical tasks include summarization, translation, and question answering.
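A minimal sketch of the encode-then-decode flow, again assuming transformers and the public t5-small checkpoint:

```python
# Encoder-decoder generation with T5: the encoder maps the input into
# hidden states, and the decoder generates the target text from them.
from transformers import pipeline

translator = pipeline("translation_en_to_de", model="t5-small")
result = translator("Knowledge graphs store facts as triples.")
print(result[0]["translation_text"])
```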

Decoder-only LLMs

These models adopt only the decoder module and generate the target output text autoregressively, token by token.
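A minimal sketch of autoregressive decoding, assuming transformers and the public gpt2 checkpoint:

```python
# Decoder-only generation with GPT-2: the model continues the prompt one
# token at a time, each conditioned on everything generated so far.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
output = generator("Knowledge graphs are", max_new_tokens=20)
print(output[0]["generated_text"])
```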

Prompt Engineering

A prompt is a sequence of natural-language input to an LLM, specified for the task at hand. It typically includes:

  1. Instruction: instructs the model to do a specific task.

  2. Context: provides the context for the input text or few-shot examples.

  3. Input text: the text that needs to be processed by the model.

Prompt engineering improves the capability of LLMs on diverse, complex tasks. Chain-of-Thought (CoT) prompting enables complex reasoning capabilities through intermediate reasoning steps; a prompt-assembly sketch follows.
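A minimal sketch of assembling the three components plus a zero-shot CoT trigger; the template wording is illustrative, and the actual model call is omitted:

```python
# Compose a prompt from instruction, context, and input text, then append
# a zero-shot chain-of-thought cue to elicit intermediate reasoning steps.
instruction = "Answer the question using only the context below."
context = (
    "Context: The Eiffel Tower is located in Paris, France.\n"
    "Example: Q: Where is the Louvre? A: Paris."
)
input_text = "Q: In which country is the Eiffel Tower?"
cot_trigger = "Let's think step by step."

prompt = "\n".join([instruction, context, input_text, cot_trigger])
print(prompt)  # feed this string to any completion or chat API
```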

Introduction of KGs

KGs store structured knowledge as triples (head entity, relation, tail entity), e.g., (Paris, capital_of, France).

Roadmap

KG-enhanced LLMs

  • Pre-training stage

    • Integrating KGs into Training Objective

    • Integrating KGs into LLMs Input

    • KGs Instruction-tuning

  • Inference stage

    • Retrieval-Augmented Knowledge Fusion (a sketch follows this outline)

      • RAG

    • KGs Prompting

  • Interpretability

    • KGs for LLM Probing

    • KGs for LLM Analysis
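A minimal sketch of the Retrieval-Augmented Knowledge Fusion idea above, assuming a toy in-memory triple store and a word-overlap retriever; the names KG_TRIPLES, retrieve, and build_prompt are hypothetical, and a real system would use a KG store with a dense or sparse retriever:

```python
# RAG over a KG: retrieve relevant triples, then inject them into the
# LLM prompt as grounding facts.
KG_TRIPLES = [
    ("Eiffel Tower", "located_in", "Paris"),
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
]

def retrieve(question, k=2):
    # Toy retriever: rank triples by word overlap with the question.
    words = set(question.lower().replace("?", "").split())
    def overlap(triple):
        triple_words = " ".join(triple).replace("_", " ").lower().split()
        return len(words & set(triple_words))
    return sorted(KG_TRIPLES, key=overlap, reverse=True)[:k]

def build_prompt(question):
    facts = "\n".join(f"({h}, {r}, {t})" for h, r, t in retrieve(question))
    return f"Facts:\n{facts}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("Which country is the Eiffel Tower located in?"))
```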

LLM-augmented KGs

Knowledge Graph embedding aims to map each entity and relation into a low-dimensional vector space.
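As a minimal sketch of a structure-based KGE scoring function, the TransE model (one representative choice; the paper surveys many) treats a fact (h, r, t) as a translation h + r ≈ t. Vectors are randomly initialized here, where a real system would learn them:

```python
# TransE scoring: a lower distance ||h + r - t|| means the triple is
# more plausible under the learned embeddings.
import numpy as np

dim = 8
rng = np.random.default_rng(seed=0)
entity = {name: rng.normal(size=dim) for name in ["Paris", "France"]}
relation = {"capital_of": rng.normal(size=dim)}

def transe_score(h, r, t):
    return float(np.linalg.norm(entity[h] + relation[r] - entity[t]))

print(transe_score("Paris", "capital_of", "France"))
```

Training would shrink this distance for true triples and enlarge it for corrupted ones, typically with a margin-based ranking loss.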

  • Text encoders for KG-related tasks

  • LLMs process the original corpus and entities for KG construction.

    • End-to-End KG Construction

    • Distilling Knowledge Graphs from LLMs

  • KG prompting, KG completion, and KG reasoning.

    • PaE (LLM as Encoders)

    • PaG (LLM as Generators)

  • LLM-augmented KG-to-text Generation

    • Leveraging Knowledge from LLMs

    • Constructing a large, weakly aligned KG-text corpus

  • LLM-augmented KG Question Answering

    • LLMs as Entity/Relation Extractors

    • LLMs as Answer Reasoners
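A minimal sketch of the two LLM roles in KGQA listed above; the prompt templates are hypothetical illustrations, not the paper's formulations, and the actual LLM calls are omitted:

```python
# KGQA with an LLM in two roles: (1) extract the topic entity/relation
# from the question, (2) reason over retrieved KG facts to answer it.
def extraction_prompt(question):
    return (
        "Extract the topic entity and the relation asked about.\n"
        f"Question: {question}\n"
        "Entity and relation:"
    )

def reasoning_prompt(question, facts):
    fact_block = "\n".join(facts)
    return (
        f"Facts retrieved from the KG:\n{fact_block}\n"
        f"Question: {question}\n"
        "Answer using only the facts above:"
    )

question = "Who founded the company that makes the iPhone?"
print(extraction_prompt(question))
print(reasoning_prompt(question, ["(iPhone, made_by, Apple)",
                                  "(Apple, founded_by, Steve Jobs)"]))
```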

Synergized LLMs + KGs


Synergized Knowledge Representation

Aims to design a synergized model that can represent knowledge from both LLMs and KGs.

Synergized Reasoning

  • LLM-KG Fusion Reasoning
