This is mainly a quick way to stand up a test UI for a chat API.
gr.ChatInterface() is a higher-level wrapper around gr.Chatbot().
If all you need is a plain text-chat window (see the figure below), it fully covers the requirement.
If you need more customization, such as defining a prompt or displaying images, build with gr.Chatbot() instead.
To hook up your own API, you only need to modify the predict_stream or predict function: keep the input parameters unchanged and return the reply as a str.
P.S.: I spent several hours studying gr.Chatbot(), when a single line of gr.ChatInterface() would have been enough.
For more, see: creating-a-chatbot-fast
import os
import gradio as gr
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),  # replace with your API key here if the environment variable is not set
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # base_url for the DashScope SDK
)

MODEL = "qwen-max"
# The inputs must be the user message and the chat history; the output is a string. Variable names are up to you.
def predict_stream(message, history):
    history_openai_format = []
    for human, assistant in history:
        history_openai_format.append({"role": "user", "content": human})
        history_openai_format.append({"role": "assistant", "content": assistant})
    history_openai_format.append({"role": "user", "content": message})
    response = client.chat.completions.create(
        model=MODEL, messages=history_openai_format, temperature=1.0, stream=True
    )
    partial_message = ""
    for chunk in response:
        if chunk.choices[0].delta.content is not None:
            partial_message = partial_message + chunk.choices[0].delta.content
            yield partial_message
def predict(message, history):
    history_openai_format = []
    for human, assistant in history:
        history_openai_format.append({"role": "user", "content": human})
        history_openai_format.append({"role": "assistant", "content": assistant})
    history_openai_format.append({"role": "user", "content": message})
    response = client.chat.completions.create(
        model=MODEL, messages=history_openai_format, temperature=1.0, stream=False
    )
    return response.choices[0].message.content


gr.ChatInterface(predict).launch(server_name="127.0.0.1", server_port=8000, inbrowser=True)
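The pairs-to-messages conversion duplicated in predict_stream and predict could be factored into one helper. A hypothetical refactor:

```python
def to_openai_messages(message, history):
    """Convert Gradio's (user, assistant) tuple history plus the new
    user message into the OpenAI chat-completions messages format."""
    messages = []
    for human, assistant in history:
        messages.append({"role": "user", "content": human})
        messages.append({"role": "assistant", "content": assistant})
    messages.append({"role": "user", "content": message})
    return messages
```

Both functions could then start with history_openai_format = to_openai_messages(message, history).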
From: https://blog.csdn.net/zhilaizhiwang/article/details/141269973