
4. Implementing Advanced RAG with LangGraph (Corrective RAG)


Data preparation

from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import Chroma

urls = [
    "https://lilianweng.github.io/posts/2023-06-23-agent/",
    "https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/",
    "https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/",
]

docs = [WebBaseLoader(url).load() for url in urls]
docs_list = [item for sublist in docs for item in sublist]

text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=250, chunk_overlap=0
)
doc_splits = text_splitter.split_documents(docs_list)

from langchain_community.embeddings import ZhipuAIEmbeddings
embed = ZhipuAIEmbeddings(
    model="Embedding-3",
    api_key="your api key",
)
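A one-line check that the embedding endpoint is reachable (the text is illustrative; embed_query returns the embedding vector as a list of floats):

vec = embed.embed_query("hello world")
print(len(vec))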

# Add to vectorDB in batches (presumably to stay under the embedding
# API's per-request limit on the number of texts)
vectorstore = Chroma(
    collection_name="rag-chroma",
    embedding_function=embed,
    persist_directory="./chroma_db",
)

batch_size = 10
for i in range(0, len(doc_splits), batch_size):
    # Python slicing clamps to the list bounds, so no explicit min() is needed
    batch = doc_splits[i : i + batch_size]
    vectorstore.add_documents(batch)

retriever = vectorstore.as_retriever()
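A quick smoke test of the retriever. The query here is illustrative; invoke returns a list of Document objects ranked by similarity:

hits = retriever.invoke("agent memory")
print(len(hits), hits[0].page_content[:200])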

The retrieval_grader LLM

### Retrieval Grader

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

from pydantic import BaseModel, Field


# Data model
class GradeDocuments(BaseModel):
    """Binary score for relevance check on retrieved documents."""

    binary_score: str = Field(
        description="Documents are relevant to the question, 'yes' or 'no'"
    )


llm = ChatOpenAI(
    temperature=0,
    model="GLM-4-plus",
    openai_api_key="your api key",
    openai_api_base="https://open.bigmodel.cn/api/paas/v4/"
)

structured_llm_grader = llm.with_structured_output(GradeDocuments)

# Prompt
system = """You are a grader assessing relevance of a retrieved document to a user question. \n 
    If the document contains keyword(s) or semantic meaning related to the question, grade it as relevant. \n
    Give a binary score 'yes' or 'no' score to indicate whether the document is relevant to the question."""
grade_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        ("human", "Retrieved document: \n\n {document} \n\n User question: {question}"),
    ]
)

retrieval_grader = grade_prompt | structured_llm_grader

Test:

question = "agent memory"
docs = retriever.invoke(question)
doc_txt = docs[1].page_content
print(retrieval_grader.invoke({"question": question, "document": doc_txt}))
binary_score='yes'

The generate LLM

### Generate

from langchain import hub
from langchain_core.output_parsers import StrOutputParser

# Prompt
prompt = hub.pull("rlm/rag-prompt")

# LLM
llm = ChatOpenAI(
    temperature=0,
    model="GLM-4-plus",
    openai_api_key="your api key",
    openai_api_base="https://open.bigmodel.cn/api/paas/v4/"
)
# Post-processing
def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)


# Chain
rag_chain = prompt | llm | StrOutputParser()
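Note that format_docs is defined above but never wired into the chain: here, and in the generate node later, the raw Document list is passed as "context" and stringified by the prompt template. If you prefer the context as cleanly concatenated text, a small LCEL variant (a sketch, not the tutorial's code) is:

# Sketch: run format_docs over the incoming documents before filling the prompt
rag_chain_formatted = (
    {
        "context": lambda x: format_docs(x["context"]),
        "question": lambda x: x["question"],
    }
    | prompt
    | llm
    | StrOutputParser()
)

It is invoked exactly like rag_chain below.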

Test:

generation = rag_chain.invoke({"context": docs, "question": question})
print(generation)
In a LLM-powered autonomous agent system, memory is a key component, encompassing various types of memory and utilizing techniques like Maximum Inner Product Search (MIPS) to enhance the agent's functionality. This memory component complements the LLM, which acts as the agent's brain, enabling it to perform complex tasks and problem-solving. The integration of memory is crucial for the agent's ability to learn, adapt, and make informed decisions.

The question_rewriter LLM

### Question Re-writer

# LLM
llm = ChatOpenAI(
    temperature=0,
    model="GLM-4-plus",
    openai_api_key="your api key",
    openai_api_base="https://open.bigmodel.cn/api/paas/v4/"
)

# Prompt
system = """You a question re-writer that converts an input question to a better version that is optimized \n 
     for web search. Look at the input and try to reason about the underlying semantic intent / meaning."""
re_write_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        (
            "human",
            "Here is the initial question: \n\n {question} \n Formulate an improved question.",
        ),
    ]
)

class Question(BaseModel):
    """An improved question."""

    question: str = Field(description="Improved question")


question_rewriter = re_write_prompt | llm.with_structured_output(Question)

Test:

qq = question_rewriter.invoke({"question": question})
print(qq)
question='What is an agent memory in the context of artificial intelligence and machine learning?'

Web search tool

### Search
import os
from langchain_community.tools.tavily_search import TavilySearchResults
os.environ["TAVILY_API_KEY"] = "your api key"

web_search_tool = TavilySearchResults(k=3)
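A quick sanity check (the query is illustrative). Each result is a dict whose "content" field the web_search node below joins into a single Document:

results = web_search_tool.invoke({"query": "corrective RAG"})
print(results[0]["content"][:200])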

Defining the graph State

from typing import List

from typing_extensions import TypedDict


class GraphState(TypedDict):
    """
    Represents the state of our graph.

    Attributes:
        question: question
        generation: LLM generation
        web_search: whether to add search
        documents: list of documents
    """

    question: str
    generation: str
    web_search: str
    documents: List[str]

from langchain.schema import Document


def retrieve(state):
    """
    Retrieve documents

    Args:
        state (dict): The current graph state

    Returns:
        state (dict): New key added to state, documents, that contains retrieved documents
    """
    print("---RETRIEVE---")
    question = state["question"]

    # Retrieval
    documents = retriever.invoke(question)
    return {"documents": documents, "question": question}


def generate(state):
    """
    Generate answer

    Args:
        state (dict): The current graph state

    Returns:
        state (dict): New key added to state, generation, that contains LLM generation
    """
    print("---GENERATE---")
    question = state["question"]
    documents = state["documents"]

    # RAG generation
    generation = rag_chain.invoke({"context": documents, "question": question})
    return {"documents": documents, "question": question, "generation": generation}


def grade_documents(state):
    """
    Determines whether the retrieved documents are relevant to the question.

    Args:
        state (dict): The current graph state

    Returns:
        state (dict): Updates documents key with only filtered relevant documents
    """

    print("---CHECK DOCUMENT RELEVANCE TO QUESTION---")
    question = state["question"]
    documents = state["documents"]

    # Score each doc
    filtered_docs = []
    web_search = "No"
    for d in documents:
        score = retrieval_grader.invoke(
            {"question": question, "document": d.page_content}
        )
        grade = score.binary_score
        if grade == "yes":
            print("---GRADE: DOCUMENT RELEVANT---")
            filtered_docs.append(d)
        else:
            print("---GRADE: DOCUMENT NOT RELEVANT---")
            web_search = "Yes"
            continue
    return {"documents": filtered_docs, "question": question, "web_search": web_search}


def transform_query(state):
    """
    Transform the query to produce a better question.

    Args:
        state (dict): The current graph state

    Returns:
        state (dict): Updates question key with a re-phrased question
    """

    print("---TRANSFORM QUERY---")
    question = state["question"]
    documents = state["documents"]

    # Re-write question
    better_question = question_rewriter.invoke({"question": question})
    print(better_question)
    return {"documents": documents, "question": better_question.question}


def web_search(state):
    """
    Web search based on the re-phrased question.

    Args:
        state (dict): The current graph state

    Returns:
        state (dict): Updates documents key with appended web results
    """

    print("---WEB SEARCH---")
    question = state["question"]
    documents = state["documents"]
    
    # Web search
    docs = web_search_tool.invoke({"query": question})
    web_results = "\n".join([d["content"] for d in docs])
    web_results = Document(page_content=web_results)
    documents.append(web_results)
    return {"documents": documents, "question": question}


### Edges


def decide_to_generate(state):
    """
    Determines whether to generate an answer, or re-generate a question.

    Args:
        state (dict): The current graph state

    Returns:
        str: Binary decision for next node to call
    """

    print("---ASSESS GRADED DOCUMENTS---")
    state["question"]
    web_search = state["web_search"]
    state["documents"]

    if web_search == "Yes":
        # All documents were filtered out by the relevance check,
        # so we re-write the query and fall back to web search
        print(
            "---DECISION: ALL DOCUMENTS ARE NOT RELEVANT TO QUESTION, TRANSFORM QUERY---"
        )
        return "transform_query"
    else:
        # We have relevant documents, so generate answer
        print("---DECISION: GENERATE---")
        return "generate"

Building the graph

from langgraph.graph import END, StateGraph, START

workflow = StateGraph(GraphState)

# Define the nodes
workflow.add_node("retrieve", retrieve)  # retrieve
workflow.add_node("grade_documents", grade_documents)  # grade documents
workflow.add_node("generate", generate)  # generatae
workflow.add_node("transform_query", transform_query)  # transform_query
workflow.add_node("web_search_node", web_search)  # web search

# Build graph
workflow.add_edge(START, "retrieve")
workflow.add_edge("retrieve", "grade_documents")
workflow.add_conditional_edges(
    "grade_documents",
    decide_to_generate,
    {
        "transform_query": "transform_query",
        "generate": "generate",
    },
)
workflow.add_edge("transform_query", "web_search_node")
workflow.add_edge("web_search_node", "generate")
workflow.add_edge("generate", END)

# Compile
app = workflow.compile()
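Besides streaming node by node (shown below), the compiled graph can be run in a single call; a minimal sketch:

# Run the full graph and read the final state
result = app.invoke({"question": "agent memory"})
print(result["generation"])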

Graph visualization

from IPython.display import Image, display

try:
    display(Image(app.get_graph(xray=True).draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass

(Rendered graph: START → retrieve → grade_documents, then either generate directly or transform_query → web_search_node → generate, then END.)

Run 1: relevant documents retrieved

from pprint import pprint

# Run
inputs = {"question": "What is the Chain of thought?"}
for output in app.stream(inputs):
    for key, value in output.items():
        # Node
        pprint(f"Node '{key}':")

    pprint("\n---\n")

# Final generation
pprint(value["generation"])
---RETRIEVE---
"Node 'retrieve':"
'\n---\n'
---CHECK DOCUMENT RELEVANCE TO QUESTION---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---ASSESS GRADED DOCUMENTS---
---DECISION: GENERATE---
"Node 'grade_documents':"
'\n---\n'
---GENERATE---
"Node 'generate':"
'\n---\n'
('The Chain of Thought (CoT) is a reasoning method used in language models to '
 'improve their problem-solving abilities by breaking down complex tasks into '
 'a series of logical steps. It involves generating intermediate reasoning '
 'steps before arriving at a final answer, enhancing transparency and '
 'accuracy. This technique is particularly useful in tasks requiring '
 'multi-step reasoning and is often employed in conjunction with prompt '
 "engineering to guide the model's behavior effectively.")

Run 2: no relevant documents retrieved (falls back to web search)

from pprint import pprint

# Run
inputs = {"question": "How does the AlphaCodium paper work?"}
for output in app.stream(inputs):
    for key, value in output.items():
        # Node
        pprint(f"Node '{key}':")

    pprint("\n---\n")

# Final generation
pprint(value["generation"])
---RETRIEVE---
"Node 'retrieve':"
'\n---\n'
---CHECK DOCUMENT RELEVANCE TO QUESTION---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---ASSESS GRADED DOCUMENTS---
---DECISION: ALL DOCUMENTS ARE NOT RELEVANT TO QUESTION, TRANSFORM QUERY---
"Node 'grade_documents':"
'\n---\n'
---TRANSFORM QUERY---
question="What is the mechanism behind AlphaCodium's functionality as described in the research paper?"
"Node 'transform_query':"
'\n---\n'
---WEB SEARCH---
"Node 'web_search_node':"
'\n---\n'
---GENERATE---
"Node 'generate':"
'\n---\n'
('AlphaCodium functions through a test-based, multi-stage, code-oriented '
 'iterative flow, repeatedly running and fixing generated code against '
 'input-output tests. It enhances the process by generating additional data '
 'like problem reflection and test reasoning to aid iterations. This approach '
 'significantly improves the performance of LLMs on code problems, as '
 'demonstrated on the CodeContests dataset.')

Notes:

The original code in the official tutorial only has the model return a plain string, which often fails at runtime with the ChatGLM models:

question_rewriter = re_write_prompt | llm | StrOutputParser()

Changing the re-writer to return a typed, structured output fixes this:

class Question(BaseModel):
    """An improved question."""

    question: str = Field(description="Improved question")


question_rewriter = re_write_prompt | llm.with_structured_output(Question)

and, in the transform_query function, changing better_question to better_question.question:

def transform_query(state):
    """
    Transform the query to produce a better question.

    Args:
        state (dict): The current graph state

    Returns:
        state (dict): Updates question key with a re-phrased question
    """

    print("---TRANSFORM QUERY---")
    question = state["question"]
    documents = state["documents"]

    # Re-write question
    better_question = question_rewriter.invoke({"question": question})
    print(better_question)
    return {"documents": documents, "question": better_question.question}

With these changes, the graph runs successfully.
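If the model still occasionally returns malformed structured output, wrapping the structured call with a retry is a cheap extra safeguard. A sketch using LCEL's with_retry (the attempt count is arbitrary):

# Retry up to 3 times if structured-output parsing raises an exception
question_rewriter = re_write_prompt | llm.with_structured_output(Question).with_retry(
    stop_after_attempt=3
)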
Reference: https://langchain-ai.github.io/langgraph/tutorials/rag/langgraph_crag/#llms
If you run into any problems, feel free to ask in the comments.

From: https://blog.csdn.net/qq_41472205/article/details/144156315

  • 【RAG 项目实战 07】替换 ConversationalRetrievalChain(单轮问答)
    【RAG项目实战07】替换ConversationalRetrievalChain(单轮问答)NLPGithub项目:NLP项目实践:fasterai/nlp-project-practice介绍:该仓库围绕着NLP任务模型的设计、训练、优化、部署和应用,分享大模型算法工程师的日常工作和实战经验AI藏经阁:https://gitee.com/fasterai......