
Editing Factual Knowledge and Explanatory Ability of Medical Large Language Models

Posted: 2024-03-18 11:29:21
Tags: Knowledge, Language, model, knowledge, editing, medical, LLM, Editing, adapter

This post, part of an LLM paper series, is a translation of "Editing Factual Knowledge and Explanatory Ability of Medical Large Language Models".


Abstract

Model editing aims to precisely modify the behavior of large language models (LLMs) on specific pieces of knowledge while leaving unrelated knowledge unchanged. It has proven effective at addressing hallucination and outdated-knowledge problems in LLMs, and can therefore facilitate the application of LLMs in many critical domains, such as medicine, where hallucinations are intolerable. In this paper, we propose two model-editing studies and validate them in the medical domain: (1) directly editing factual medical knowledge, and (2) editing explanations of facts. Meanwhile, we observe that current model-editing methods struggle with the specialization and complexity of medical knowledge. We therefore propose MedLaSA, a novel Layer-wise Scalable Adapter strategy for medical model editing. It employs causal tracing to identify the precise location of knowledge within neurons, and then introduces scalable adapters into the dense layers of the LLM. Based on the corresponding
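The abstract describes adapters whose influence on each dense layer is scaled according to where causal tracing locates the knowledge. As a minimal illustrative sketch only (not the paper's implementation — the function name, low-rank shape, and the idea of a per-layer scalar derived from a causal-tracing score are assumptions here), a "scalable adapter" can be pictured as a low-rank residual update whose strength is a layer-specific scale:

```python
import numpy as np

def scalable_adapter(h, A, B, scale):
    """Hypothetical low-rank adapter: h + scale * (h @ A @ B).

    h     : (batch, d) hidden states of one dense layer
    A     : (d, r) down-projection, r << d
    B     : (r, d) up-projection
    scale : layer-specific scalar, e.g. derived from a
            causal-tracing score for this layer
    """
    return h + scale * (h @ A @ B)

rng = np.random.default_rng(0)
d, r = 8, 2
h = rng.normal(size=(1, d))
A = rng.normal(size=(d, r)) * 0.1
B = rng.normal(size=(r, d)) * 0.1

# A layer judged irrelevant by causal tracing gets scale 0 and is
# left untouched; a knowledge-bearing layer gets a nonzero scale.
unchanged = scalable_adapter(h, A, B, scale=0.0)
edited = scalable_adapter(h, A, B, scale=1.0)
print(np.allclose(unchanged, h))  # zero scale preserves the layer
```

The point of the sketch is only the mechanism: the edit lives in the small `A`/`B` matrices, the frozen layer output `h` is untouched, and the per-layer `scale` decides how strongly each layer participates in the edit.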

From: https://blog.csdn.net/c_cpp_csharp/article/details/136803829
