I'm getting the following error in my conda notebook:
```
File ~\.conda\envs\LLMS\lib\site-packages\langchain\agents\conversational\output_parser.py:26, in ConvoOutputParser.parse(self, text)
     24 match = re.search(regex, text)
     25 if not match:
---> 26     raise OutputParserException(f"Could not parse LLM output: `{text}`")
     27 action = match.group(1)
     28 action_input = match.group(2)

OutputParserException: Could not parse LLM output: `
Answer: "Hello, good morning. I am a helpful assistant.
Have a normal morning"`
```
I also checked https://python.langchain.com/docs/modules/agents/how_to/handle_parsing_errors

I am trying to use ConversationalAgent with initialize_agent, which has some limitations for my purposes.

Here is the code I tried:

```python
import os
import sqlite3
from langchain.llms.huggingface_endpoint import HuggingFaceEndpoint
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
import pandas as pd
from utils import *
llm_hf = HuggingFaceEndpoint(
endpoint_url="https://xxx",
huggingfacehub_api_token="xxx", task="text-generation"
)
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from langchain.agents import AgentExecutor
from langchain.agents.agent_types import AgentType
from langchain.chat_models import ChatOpenAI
# Connect to the SQLite database (it will create a new one if it doesn't exist)
conn = sqlite3.connect('doctors.db')
# Replace 'table_name' with the name of the table you want to create in the database
table_name = 'Doctors'
# Use the `to_sql` method to save the DataFrame to the database
clean_df.to_sql(table_name, conn, if_exists='replace', index=False)
llm = llm_hf
db = SQLDatabase.from_uri("sqlite:///doctors.db")
db.get_table_names()
toolkit = SQLDatabaseToolkit(db=db,
llm=llm,
)
sql_executor = create_sql_agent(
llm=llm,
toolkit=toolkit,
verbose=True,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
top_k = 10000,
#agent_executor_kwargs={"return_intermediate_steps": True},
)
from langchain.agents import initialize_agent, Tool
tools = [
Tool(
name="Doctors Database System",
func=sql_executor.run,
return_direct=True,
description="useful to give information about doctors names, specialities and locations. Input should be a fully formed question.",
)
]
from langchain.agents import ZeroShotAgent, Tool, AgentExecutor, ConversationalAgent
from langchain.memory import ConversationBufferWindowMemory
from langchain.llms import OpenAI
from langchain.chains import LLMChain
prefix = """You are a helpful assistant.
Have a normal conversation with a human.
You can offer to answer questions about a database with doctor information.
You have access to the following tools:"""
suffix = """Begin!"
{chat_history}
Question: {input}
{agent_scratchpad}"""
prompt = ConversationalAgent.create_prompt(
tools,
prefix=prefix,
suffix=suffix,
input_variables=["input", "chat_history", "agent_scratchpad"],
)
memory = ConversationBufferWindowMemory(memory_key="chat_history", k = 5)
llm_chain = LLMChain(llm=llm, prompt=prompt)
agent = ConversationalAgent(llm_chain=llm_chain,
#output_parser= lambda **kwargs : dict(kwargs),
verbose=True)
agent_chain = AgentExecutor.from_agent_and_tools(
agent=agent, tools=tools, verbose=True, memory=memory
)
agent_chain.run("Hello, good morning")
```
Much appreciated. Any suggestions are welcome!
The error you're encountering, "OutputParserException: Could not parse LLM output", stems from the Conversational Agent's inability to find the expected structure in the LLM's response. Let's break down why this happens and how to fix it:
Understanding the Problem
- Conversational Agent Expectations: Conversational Agents in LangChain are designed to engage in back-and-forth dialogue. They rely on a specific format in the LLM's output to determine the next action (e.g., call a tool, provide information).
- Output Parser: The `OutputParser` looks for patterns in the LLM's response in order to extract this action and any relevant data.
- Your Error: The error means the regular expression defined in the default `ConvoOutputParser` couldn't find a match in the LLM's output (`Hello, good morning. I am a helpful assistant. Have a normal morning`). This likely means your LLM isn't structuring its response in a way the agent expects.
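To see the mismatch concretely, compare the LLM's reply against the pattern the default parser looks for. The regex below is a simplified sketch of what `ConvoOutputParser` matches (the exact pattern can vary across LangChain versions, and the real parser also checks for an `AI:` prefix to detect a final answer):

```python
import re

# Simplified sketch of the pattern the default ConvoOutputParser looks for
regex = r"Action: (.*?)[\n]*Action Input: (.*)"

# What your LLM actually returned: a plain conversational answer
bad_output = 'Answer: "Hello, good morning. I am a helpful assistant."'
print(re.search(regex, bad_output))  # -> None, so OutputParserException is raised

# What the agent expects when the LLM wants to call a tool
good_output = (
    "Thought: I need to query the database.\n"
    "Action: Doctors Database System\n"
    "Action Input: Which doctors specialize in cardiology?"
)
match = re.search(regex, good_output)
print(match.group(1))  # -> Doctors Database System
print(match.group(2))  # -> Which doctors specialize in cardiology?
```

Since your LLM answered conversationally instead of emitting the `Action:` / `Action Input:` lines, the regex finds nothing and the exception is raised.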
Solution: Guiding the LLM's Output
The key is to provide clearer instructions within your prompt to guide the LLM into generating responses that the Conversational Agent can understand. Here's a modified approach:
````python
import os
import sqlite3

from langchain.llms.huggingface_endpoint import HuggingFaceEndpoint
from langchain.agents import create_sql_agent, Tool, AgentExecutor, ConversationalAgent, AgentType
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.memory import ConversationBufferWindowMemory
from langchain.chains import LLMChain

# ... (your existing code for the database setup, llm_hf, clean_df, tools, etc.)

# Modified prompt that asks for structured output
prefix = """You are a helpful assistant.
Have a normal conversation with a human.
You can offer to answer questions about a database with doctor information.
If the user asks a question that requires accessing the Doctors Database System, respond using the following format:
```tool_code
{{
    "tool": "Doctors Database System",
    "query": "YOUR_SQL_QUERY_HERE"
}}
```
Otherwise, respond naturally.
You have access to the following tools:"""

suffix = """Begin!

{chat_history}
Question: {input}
{agent_scratchpad}"""

prompt = ConversationalAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=["input", "chat_history", "agent_scratchpad"],
)
memory = ConversationBufferWindowMemory(memory_key="chat_history", k=5)
llm_chain = LLMChain(llm=llm, prompt=prompt)
agent = ConversationalAgent(llm_chain=llm_chain, verbose=True)
agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=True, memory=memory
)

agent_chain.run("Hello, good morning! Can you tell me about doctors specializing in cardiology?")
````
Explanation of Changes:
- Structured output instructions: The updated prompt explicitly tells the LLM to format its response in a specific way when the user asks a database-related question. The LLM should now output a JSON-like structure with `tool` and `query` keys.
- Tool code block: Wrapping the structure in a fenced `tool_code` block further helps the LLM distinguish the structured output from its regular responses.
How it Works:
- When the user asks a database-related question, the LLM will now, ideally, generate a response similar to:
```json
{
  "tool": "Doctors Database System",
  "query": "SELECT * FROM Doctors WHERE speciality='Cardiology'"
}
```
- An output parser can then extract the `tool` and `query` values and execute the desired action. Note that the default `ConvoOutputParser` only understands the `Action:` / `Action Input:` format, so this JSON shape needs a small custom parser.
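Because the stock parser will not understand that JSON block on its own, you would pair the prompt with a custom parsing step, e.g. a subclass of `AgentOutputParser` whose `parse` method does something like the following. This is a minimal plain-Python sketch; the helper name `parse_tool_block` is hypothetical:

````python
import json
import re

def parse_tool_block(text):
    """Return (tool, query) if the response contains a ```tool_code block,
    or None for a plain conversational reply. Hypothetical helper: in
    LangChain you would call this from a custom AgentOutputParser.parse()."""
    match = re.search(r"```tool_code\s*(\{.*?\})\s*```", text, re.DOTALL)
    if match is None:
        return None  # no structured block: treat as a normal answer
    payload = json.loads(match.group(1))
    return payload["tool"], payload["query"]

response = """Sure, let me check the database.
```tool_code
{
  "tool": "Doctors Database System",
  "query": "SELECT * FROM Doctors WHERE speciality='Cardiology'"
}
```"""
print(parse_tool_block(response))
# -> ('Doctors Database System', "SELECT * FROM Doctors WHERE speciality='Cardiology'")
print(parse_tool_block("Good morning! How can I help?"))  # -> None
````

Returning `None` for plain replies is the important part: it lets the agent treat an unstructured response as a final answer instead of raising `OutputParserException`.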
Important Notes:
- LLM behavior: LLMs can be unpredictable. While the updated prompt provides strong guidance, you may need to refine it further if the LLM doesn't consistently follow the desired output format.
- Parsing errors: As a safety net, you can pass `handle_parsing_errors=True` to `AgentExecutor.from_agent_and_tools` (the approach described in the page you linked), so that a malformed response is fed back to the LLM as an error message instead of raising `OutputParserException`.
- Debugging: Use the `verbose=True` setting on your agent and agent chain to closely examine the LLM's output and understand how it interacts with the parsing logic.
Let me know if you have any more questions or if you encounter further issues!
Tags: python, langchain From: 77190232