
Using Qwen's AI agent framework qwen_agent

Posted: 2024-06-07 09:56:02
Tags: function, content, Qwen, AI, agent, current, weather, role, location

Code:

# Reference: https://platform.openai.com/docs/guides/function-calling
import json
import os

# DASHSCOPE_API_KEY

from qwen_agent.llm import get_chat_model


# Example dummy function hard coded to return the same weather
# In production, this could be your backend API or an external API
def get_current_weather(location, unit='fahrenheit'):
    """Get the current weather in a given location"""
    if 'tokyo' in location.lower():
        return json.dumps({'location': 'Tokyo', 'temperature': '10', 'unit': 'celsius'})
    elif 'san francisco' in location.lower():
        return json.dumps({'location': 'San Francisco', 'temperature': '72', 'unit': 'fahrenheit'})
    elif 'paris' in location.lower():
        return json.dumps({'location': 'Paris', 'temperature': '22', 'unit': 'celsius'})
    else:
        return json.dumps({'location': location, 'temperature': 'unknown'})


def test():
    llm = get_chat_model({
        # Use the model service provided by DashScope:
        # 'model': 'qwen-max',
        'model': 'qwen-plus',
        'model_server': 'dashscope',
        'api_key': os.getenv('DASHSCOPE_API_KEY'),  # or paste your DashScope API key here

        # Use the model service provided by Together.AI:
        # 'model': 'Qwen/Qwen1.5-14B-Chat',
        # 'model_server': 'https://api.together.xyz',  # api_base
        # 'api_key': os.getenv('TOGETHER_API_KEY'),

        # Use your own model service compatible with OpenAI API:
        # 'model': 'Qwen/Qwen1.5-72B-Chat',
        # 'model_server': 'http://localhost:8000/v1',  # api_base
        # 'api_key': 'EMPTY',
    })

    # Step 1: send the conversation and available functions to the model
    messages = [{'role': 'user', 'content': "What's the weather like in San Francisco?"}]
    functions = [{
        'name': 'get_current_weather',
        'description': 'Get the current weather in a given location',
        'parameters': {
            'type': 'object',
            'properties': {
                'location': {
                    'type': 'string',
                    'description': 'The city and state, e.g. San Francisco, CA',
                },
                'unit': {
                    'type': 'string',
                    'enum': ['celsius', 'fahrenheit']
                },
            },
            'required': ['location'],
        },
    }]

    print('# Assistant Response 1:')
    responses = []
    # With stream=True, llm.chat() yields the accumulated reply on each step,
    # so after the loop `responses` holds the final, complete assistant message list.
    for responses in llm.chat(messages=messages, functions=functions, stream=True):
        print(responses)

    messages.extend(responses)  # extend conversation with assistant's reply

    # Step 2: check if the model wanted to call a function
    last_response = messages[-1]
    print("*"*88)
    print(last_response)
    print("*"*88)
    if last_response.get('function_call', None):

        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            'get_current_weather': get_current_weather,
        }  # only one function in this example, but you can have multiple
        function_name = last_response['function_call']['name']
        function_to_call = available_functions[function_name]
        function_args = json.loads(last_response['function_call']['arguments'])
        function_response = function_to_call(
            location=function_args.get('location'),
            unit=function_args.get('unit'),
        )
        print('# Function Response:')
        print(function_response)

        # Step 4: send the info for each function call and function response to the model
        messages.append({
            'role': 'function',
            'name': function_name,
            'content': function_response,
        })  # extend conversation with function response

        print('# Assistant Response 2:')
        for responses in llm.chat(
                messages=messages,
                functions=functions,
                stream=True,
        ):  # get a new response from the model where it can see the function response
            print(responses)


if __name__ == '__main__':
    test()
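The comment at step 3 warns that the arguments string the model produces may not always be valid JSON. A defensive dispatch helper along the following lines can guard against both malformed JSON and unknown function names; this is a minimal sketch (the helper name `dispatch_function_call` is my own, not part of qwen_agent):

```python
import json


def dispatch_function_call(function_call, available_functions):
    """Safely resolve and invoke a model-requested function call.

    Returns the function's JSON string result, or a JSON error object
    that can be sent back to the model as the 'function' message.
    """
    name = function_call.get('name')
    fn = available_functions.get(name)
    if fn is None:
        return json.dumps({'error': f'unknown function: {name}'})
    try:
        # The model returns arguments as a JSON-encoded string.
        args = json.loads(function_call.get('arguments') or '{}')
    except json.JSONDecodeError as e:
        return json.dumps({'error': f'invalid JSON arguments: {e}'})
    try:
        return fn(**args)
    except TypeError as e:  # e.g. unexpected or missing keyword arguments
        return json.dumps({'error': str(e)})
```

In the script above, step 3 could then shrink to a single `dispatch_function_call(last_response['function_call'], available_functions)` call, with the error object still usable as the `function` message content.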


Output:

# Assistant Response 1:
[{'role': 'assistant', 'content': '', 'function_call': {'name': 'get_current_weather', 'arguments': ''}}]
[{'role': 'assistant', 'content': '', 'function_call': {'name': 'get_current_weather', 'arguments': ''}}]
[{'role': 'assistant', 'content': '', 'function_call': {'name': 'get_current_weather', 'arguments': '{\n  "location'}}]
[{'role': 'assistant', 'content': '', 'function_call': {'name': 'get_current_weather', 'arguments': '{\n  "location": "San Francisco'}}]
[{'role': 'assistant', 'content': '', 'function_call': {'name': 'get_current_weather', 'arguments': '{\n  "location": "San Francisco, CA",'}}]
[{'role': 'assistant', 'content': '', 'function_call': {'name': 'get_current_weather', 'arguments': '{\n  "location": "San Francisco, CA",\n  "unit": "'}}]
[{'role': 'assistant', 'content': '', 'function_call': {'name': 'get_current_weather', 'arguments': '{\n  "location": "San Francisco, CA",\n  "unit": "celsius"\n}'}}]
[{'role': 'assistant', 'content': '', 'function_call': {'name': 'get_current_weather', 'arguments': '{\n  "location": "San Francisco, CA",\n  "unit": "celsius"\n}'}}]
****************************************************************************************
{'role': 'assistant', 'content': '', 'function_call': {'name': 'get_current_weather', 'arguments': '{\n  "location": "San Francisco, CA",\n  "unit": "celsius"\n}'}}
****************************************************************************************
# Function Response:
{"location": "San Francisco", "temperature": "72", "unit": "fahrenheit"}
# Assistant Response 2:
[{'role': 'assistant', 'content': 'The'}]
[{'role': 'assistant', 'content': 'The current'}]
[{'role': 'assistant', 'content': 'The current weather'}]
[{'role': 'assistant', 'content': 'The current weather in San Francisco,'}]
[{'role': 'assistant', 'content': 'The current weather in San Francisco, California is 7'}]
[{'role': 'assistant', 'content': 'The current weather in San Francisco, California is 72 degrees Fahrenheit ('}]
[{'role': 'assistant', 'content': 'The current weather in San Francisco, California is 72 degrees Fahrenheit (approximately 22'}]
[{'role': 'assistant', 'content': 'The current weather in San Francisco, California is 72 degrees Fahrenheit (approximately 22.2 degrees Celsius'}]
[{'role': 'assistant', 'content': 'The current weather in San Francisco, California is 72 degrees Fahrenheit (approximately 22.2 degrees Celsius).'}]
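Note that the model converts the tool's 72 °F to "approximately 22.2 degrees Celsius" on its own. The standard conversion formula confirms the arithmetic:

```python
def fahrenheit_to_celsius(f):
    """Convert a Fahrenheit temperature to Celsius: C = (F - 32) * 5/9."""
    return (f - 32) * 5 / 9


print(round(fahrenheit_to_celsius(72), 1))  # → 22.2
```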


From: https://www.cnblogs.com/bonelee/p/18236578

    创建一个AI全自动批量剪辑软件的简易程序涉及较为复杂的视频处理和机器学习技术,而且由于这是一个相当高级的任务,通常需要大量的代码以及深度学习框架支持。不过,我可以为您提供一个非常基础版本的程序示例,它会用Python的moviepy库批量剪辑一组视频,每个视频裁剪前10秒作为示例......