Requirements
Most LangChain agent tutorials are built on closed-source LLMs: the official LangGraph examples use Claude, and many other agent examples use OpenAI. But in many private-deployment scenarios, building agents on a local LLM is essential, and there were no tutorials for this online. After two days of tinkering, I got a local-LLM agent working with Ollama + LangGraph.
Model Deployment
For the model deployment itself, see my earlier article on deploying Ollama.
from langchain_openai import ChatOpenAI

# Ollama exposes an OpenAI-compatible API, so ChatOpenAI can talk to it directly
inference_server_url = "http://localhost:11434/v1"

model = ChatOpenAI(
    model="qwen2.5:14b",       # any model already pulled into Ollama works here
    openai_api_key="none",     # Ollama ignores the key, but the client requires one
    openai_api_base=inference_server_url,
    max_tokens=500,
    temperature=1,
)
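Under the hood, ChatOpenAI posts standard OpenAI chat-completion requests to Ollama's /v1 endpoint, which is why no Ollama-specific client is needed. A rough sketch of the request body, built with the stdlib only (illustration; nothing is sent over the network here):

```python
# Approximate shape of the request ChatOpenAI sends to
# http://localhost:11434/v1/chat/completions
import json

payload = {
    "model": "qwen2.5:14b",
    "messages": [{"role": "user", "content": "hello"}],
    "max_tokens": 500,
    "temperature": 1,
}
request_body = json.dumps(payload)
print(request_body)
```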
Core Code
from typing import Literal

from langchain_core.tools import tool
from langgraph.graph import END, START, StateGraph, MessagesState
from langgraph.prebuilt import ToolNode
@tool
def search(query: str):
    """Call to surf the web."""
    # This is a placeholder, but don't tell the LLM that...
    if "sf" in query.lower() or "san francisco" in query.lower():
        return "It's 60 degrees and foggy."
    return "It's 90 degrees and sunny."

tools = [search]
tool_node = ToolNode(tools)
model_with_tools = model.bind_tools(tools)
# Define the function that determines whether to continue or not
def should_continue(state: MessagesState) -> Literal["tools", END]:
    messages = state['messages']
    last_message = messages[-1]
    # If the model requested a tool call, route to the tool node
    if last_message.tool_calls:
        return "tools"
    # Otherwise, we stop (reply to the user)
    return END
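The conditional edge only inspects the last message's `tool_calls`. Here is that decision replayed with the stdlib alone, using a hypothetical `FakeMessage` stand-in (not a LangChain class) in place of the real `AIMessage`:

```python
# Minimal model of the conditional edge: route to "tools" when the last
# message carries tool calls, otherwise end the run.
from dataclasses import dataclass, field

END = "__end__"  # langgraph's END is a string sentinel much like this

@dataclass
class FakeMessage:
    content: str
    tool_calls: list = field(default_factory=list)

def route(messages):
    last = messages[-1]
    return "tools" if last.tool_calls else END

print(route([FakeMessage("final answer")]))                       # ends the run
print(route([FakeMessage("", tool_calls=[{"name": "search"}])]))  # goes to tools
```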
# Define the function that calls the model
def call_model(state):
    messages = state['messages']
    response = model_with_tools.invoke(messages)
    # We return a list, because this will get added to the existing list
    return {"messages": [response]}
# Define a new graph
workflow = StateGraph(MessagesState)

# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)

workflow.add_edge(START, "agent")

# The conditional edge routes "agent" -> "tools" or "agent" -> END
workflow.add_conditional_edges(
    "agent",
    should_continue,
)

# After a tool runs, hand its output back to the model
workflow.add_edge("tools", "agent")

app = workflow.compile()
response = app.invoke(
    {"messages": ["what is the weather in sf"]},
    # thread_id only takes effect if you compile with a checkpointer
    # (e.g. MemorySaver); it is harmless here
    config={"configurable": {"thread_id": 42}},
)
print(response['messages'][-1].content)
print(response)
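To see why the graph terminates, the agent → tools → agent cycle can be replayed in plain Python with stand-in nodes (the `fake_*` functions below are hypothetical, not the real LangGraph nodes, and no LLM is involved):

```python
# The compiled graph's control flow with fake nodes:
# START -> agent -> tools -> agent -> END
END = "__end__"

def fake_call_model(state):
    """Stand-in for the agent node: request the search tool on the first
    pass, give a final answer once a tool result is in the history."""
    msgs = state["messages"]
    if any(m["role"] == "tool" for m in msgs):
        msgs.append({"role": "ai", "content": "It's 60 degrees and foggy in SF.",
                     "tool_calls": []})
    else:
        msgs.append({"role": "ai", "content": "",
                     "tool_calls": [{"name": "search", "args": {"query": "sf"}}]})
    return state

def fake_tool_node(state):
    """Stand-in for ToolNode: run the requested tool, append its output."""
    state["messages"].append({"role": "tool",
                              "content": "It's 60 degrees and foggy.",
                              "tool_calls": []})
    return state

def fake_should_continue(state):
    return "tools" if state["messages"][-1]["tool_calls"] else END

state = {"messages": [{"role": "user", "content": "what is the weather in sf",
                       "tool_calls": []}]}
while True:
    state = fake_call_model(state)
    if fake_should_continue(state) == END:
        break
    state = fake_tool_node(state)

print(state["messages"][-1]["content"])  # It's 60 degrees and foggy in SF.
```

The loop runs the agent node twice: once to emit a tool call, once to answer after the tool result arrives, which is exactly the cycle the conditional edge creates in the real graph.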
From: https://blog.csdn.net/xdg15294969271/article/details/143342832