
crewAI-examples

https://github.com/crewAIInc/crewAI-examples/tree/main

https://docs.crewai.com/getting-started/Start-a-New-CrewAI-Project-Template-Method/#annotations-include

 

markdown_validator

https://github.com/fanqingsong/crewAI-examples/tree/main/markdown_validator

import sys
from crewai import Agent, Task
import os
from dotenv import load_dotenv
from langchain.tools import tool
from langchain_community.chat_models.openai import ChatOpenAI
from pymarkdown.api import PyMarkdownApi, PyMarkdownApiException
from MarkdownTools import markdown_validation_tool

load_dotenv()

default_llm = ChatOpenAI(openai_api_base=os.environ.get("OPENAI_API_BASE_URL", "https://api.openai.com/v1"),
                        openai_api_key=os.environ.get("OPENAI_API_KEY"),
                        temperature=0.1,                        
                        model_name=os.environ.get("MODEL_NAME", "gpt-3.5-turbo"),
                        top_p=0.3)



def process_markdown_document(filename):
    """
    Processes a markdown document by reviewing its syntax validation 
    results and providing feedback on necessary changes.

    Args:
        filename (str): The path to the markdown file to be processed.

    Returns:
        str: The list of recommended changes to make to the document.

    """

    # Define general agent
    general_agent  = Agent(role='Requirements Manager',
                    goal="""Provide a detailed list of the markdown 
                            linting results. Give a summary with actionable 
                            tasks to address the validation results. Write your 
                            response as if you were handing it to a developer 
                            to fix the issues.
                            DO NOT provide examples of how to fix the issues or
                            recommend other tools to use.""",
                    backstory="""You are an expert business analyst 
                    and software QA specialist. You provide high quality, 
                    thorough, insightful and actionable feedback via 
                    detailed list of changes and actionable tasks.""",
                    allow_delegation=False, 
                    verbose=True,
                    tools=[markdown_validation_tool],
                    llm=default_llm)


    # Define Tasks Using Crew Tools
    syntax_review_task = Task(description=f"""
            Use the markdown_validation_tool to review 
            the file(s) at this path: {filename}
            
            Be sure to pass only the file path to the markdown_validation_tool.
            Use the following format to call the markdown_validation_tool:
            Do I need to use a tool? Yes
            Action: markdown_validation_tool
            Action Input: {filename}

            Get the validation results from the tool 
            and then summarize it into a list of changes
            the developer should make to the document.
            DO NOT recommend ways to update the document.
            DO NOT change any of the content of the document or
            add content to it. It is critical to your task to
            only respond with a list of changes.
            
            If you already know the answer or if you do not need 
            to use a tool, return it as your Final Answer.""",
            agent=general_agent)
    
    updated_markdown = syntax_review_task.execute()

    return updated_markdown

# If called directly from the command line take the first argument as the filename
if __name__ == "__main__":

    if len(sys.argv) > 1:
        filename = sys.argv[1]
        processed_document = process_markdown_document(filename)
        print(processed_document)
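
The MarkdownTools module imported above is not part of this listing. Judging from the other imports (langchain's tool decorator and PyMarkdownApi), a minimal sketch of what markdown_validation_tool might look like is shown below; the field names on the scan result and the report formatting are assumptions, not the repository's actual code.

# MarkdownTools.py -- minimal sketch (assumed): lint a file and return a plain-text report
from langchain.tools import tool
from pymarkdown.api import PyMarkdownApi, PyMarkdownApiException


@tool("markdown_validation_tool")
def markdown_validation_tool(file_path: str) -> str:
    """Run pymarkdown's linter on a markdown file and report any rule violations."""
    # Agents sometimes wrap the path in quotes or whitespace; strip that before scanning.
    clean_path = file_path.strip().strip("'\"")
    try:
        scan_result = PyMarkdownApi().scan_path(clean_path)
    except PyMarkdownApiException as exc:
        return f"Markdown validation failed: {exc}"
    if not scan_result.scan_failures:
        return "No markdown lint issues were found."
    # One line per failure: file, position, rule id and description.
    report_lines = [
        f"{failure.scan_file}:{failure.line_number}:{failure.column_number} "
        f"{failure.rule_id} {failure.rule_description}"
        for failure in scan_result.scan_failures
    ]
    return "\n".join(report_lines)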

 

screenplay_writer

https://github.com/fanqingsong/crewAI-examples/tree/main/screenplay_writer

import os
import json
import yaml
from pathlib import Path
from crewai import Agent, Task, Crew, Process
from dotenv import load_dotenv
# from langchain.chat_models.openai import ChatOpenAI
from langchain_community.chat_models.openai import ChatOpenAI



load_dotenv()


default_llm = ChatOpenAI(openai_api_base=os.environ.get("OPENAI_API_BASE_URL", "https://api.openai.com/v1"),
                        openai_api_key=os.environ.get("OPENAI_API_KEY"),
                        temperature=0.1,                        
                        model_name=os.environ.get("MODEL_NAME", "gpt-3.5-turbo"),
                        top_p=0.3)


# Use Path for file locations
current_dir = Path.cwd()
agents_config_path = current_dir / "config" / "agents.yaml"
tasks_config_path = current_dir / "config" / "tasks.yaml"

# Load YAML configuration files
with open(agents_config_path, "r") as file:
    agents_config = yaml.safe_load(file)

with open(tasks_config_path, "r") as file:
    tasks_config = yaml.safe_load(file)
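
# Note: config/agents.yaml and config/tasks.yaml are not shown in this post. Based on how
# the dictionaries are accessed below, the files are assumed to look roughly like this
# (the keys are required by the code; the wording is illustrative, not the repository's actual text):
#
#   # config/agents.yaml
#   spamfilter:
#     role: "Spam filter"
#     goal: "Decide whether a text is acceptable or contains spam/vulgar language"
#     backstory: "You are an experienced content moderator."
#   analyst:
#     role: "Debate analyst"
#     ...            # scriptwriter, formatter and scorer follow the same pattern
#
#   # config/tasks.yaml
#   task0:
#     description: "Read the following discussion and decide whether it is acceptable: {discussion}"
#     expected_output: "A verdict on whether the text is spam/vulgar or acceptable"
#   task1:
#     ...            # task1-task4 follow the same description/expected_output pattern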

## Define Agents
spamfilter = Agent( 
    role=agents_config["spamfilter"]['role'], 
    goal=agents_config["spamfilter"]['goal'], 
    backstory=agents_config["spamfilter"]['backstory'], 
    allow_delegation=False, 
    verbose=True, 
    llm=default_llm
)

analyst = Agent( 
    role=agents_config["analyst"]['role'], 
    goal=agents_config["analyst"]['goal'], 
    backstory=agents_config["analyst"]['backstory'], 
    allow_delegation=False, 
    verbose=True, 
    llm=default_llm
)

scriptwriter = Agent( 
    role=agents_config["scriptwriter"]['role'], 
    goal=agents_config["scriptwriter"]['goal'], 
    backstory=agents_config["scriptwriter"]['backstory'], 
    allow_delegation=False, 
    verbose=True, 
    llm=default_llm
)

formatter = Agent( 
    role=agents_config["formatter"]['role'], 
    goal=agents_config["formatter"]['goal'], 
    backstory=agents_config["formatter"]['backstory'], 
    allow_delegation=False, 
    verbose=True, 
    llm=default_llm
)

scorer = Agent( 
    role=agents_config["scorer"]['role'], 
    goal=agents_config["scorer"]['goal'], 
    backstory=agents_config["scorer"]['backstory'], 
    allow_delegation=False, 
    verbose=True, 
    llm=default_llm
)



# this is one example of a public post in the newsgroup alt.atheism
# try it out yourself by replacing this with your own email thread or text or ...
discussion = """From: [email protected] (Keith Allan Schneider)
Subject: Re: <Political Atheists?
Organization: California Institute of Technology, Pasadena
Lines: 50
NNTP-Posting-Host: punisher.caltech.edu

[email protected] (Robert Beauchaine) writes:

>>I think that about 70% (or so) people approve of the
>>death penalty, even realizing all of its shortcomings.  Doesn't this make
>>it reasonable?  Or are *you* the sole judge of reasonability?
>Aside from revenge, what merits do you find in capital punishment?

Are we talking about me, or the majority of the people that support it?
Anyway, I think that "revenge" or "fairness" is why most people are in
favor of the punishment.  If a murderer is going to be punished, people
that think that he should "get what he deserves."  Most people wouldn't
think it would be fair for the murderer to live, while his victim died.

>Revenge?  Petty and pathetic.

Perhaps you think that it is petty and pathetic, but your views are in the
minority.


keith
"""

# Longer version of the same post, kept for reference; it is not used by the tasks below.
oo_discussion = """From: [email protected] (Keith Allan Schneider)
Subject: Re: <Political Atheists?
Organization: California Institute of Technology, Pasadena
Lines: 50
NNTP-Posting-Host: punisher.caltech.edu

[email protected] (Robert Beauchaine) writes:

>>I think that about 70% (or so) people approve of the
>>death penalty, even realizing all of its shortcomings.  Doesn't this make
>>it reasonable?  Or are *you* the sole judge of reasonability?
>Aside from revenge, what merits do you find in capital punishment?

Are we talking about me, or the majority of the people that support it?
Anyway, I think that "revenge" or "fairness" is why most people are in
favor of the punishment.  If a murderer is going to be punished, people
that think that he should "get what he deserves."  Most people wouldn't
think it would be fair for the murderer to live, while his victim died.

>Revenge?  Petty and pathetic.

Perhaps you think that it is petty and pathetic, but your views are in the
minority.

>We have a local televised hot topic talk show that very recently
>did a segment on capital punishment.  Each and every advocate of
>the use of this portion of our system of "jurisprudence" cited the
>main reason for supporting it:  "That bastard deserved it".  True
>human compassion, forgiveness, and sympathy.

Where are we required to have compassion, forgiveness, and sympathy?  If
someone wrongs me, I will take great lengths to make sure that his advantage
is removed, or a similar situation is forced upon him.  If someone kills
another, then we can apply the golden rule and kill this person in turn.
Is not our entire moral system based on such a concept?

Or, are you stating that human life is sacred, somehow, and that it should
never be violated?  This would sound like some sort of religious view.
 
>>I mean, how reasonable is imprisonment, really, when you think about it?
>>Sure, the person could be released if found innocent, but you still
>>can't undo the imiprisonment that was served.  Perhaps we shouldn't
>>imprision people if we could watch them closely instead.  The cost would
>>probably be similar, especially if we just implanted some sort of
>>electronic device.
>Would you rather be alive in prison or dead in the chair?  

Once a criminal has committed a murder, his desires are irrelevant.

And, you still have not answered my question.  If you are concerned about
the death penalty due to the possibility of the execution of an innocent,
then why isn't this same concern shared with imprisonment.  Shouldn't we,
by your logic, administer as minimum as punishment as possible, to avoid
violating the liberty or happiness of an innocent person?

keith
"""


# Filter out spam and vulgar posts
task0 = Task(
    description=tasks_config["task0"]["description"].format(discussion=discussion),
    expected_output=tasks_config["task0"]["expected_output"],
    agent=spamfilter,
)

crew = Crew(
    agents=[spamfilter],
    tasks=[task0],
    verbose=True,  # Crew verbose mode lets you see which tasks are being worked on; set it to 1 or 2 for different logging levels
    process=Process.sequential,  # Sequential process executes tasks one after another, passing each task's outcome as extra context into the next
)

inputs = {'discussion': discussion}
result = crew.kickoff(inputs)
# result = crew.kickoff()

print("===================== end result from crew ===================================")
print(result)

# Accessing the task output
task_output = task0.output

print(f"Task Description: {task_output.description}")
print(f"Task Summary: {task_output.summary}")
print(f"Raw Output: {task_output.raw}")
if task_output.json_dict:
    print(f"JSON Output: {json.dumps(task_output.json_dict, indent=2)}")
if task_output.pydantic:
    print(f"Pydantic Output: {task_output.pydantic}")


# process post with a crew of agents, ultimately delivering a well formatted dialogue
task1 = Task(
    description=tasks_config["task1"]["description"].format(discussion=discussion),
    expected_output=tasks_config["task1"]["expected_output"],
    agent=analyst,
)

task2 = Task(
    description=tasks_config["task2"]["description"],
    expected_output=tasks_config["task2"]["expected_output"],
    agent=scriptwriter,
)

task3 = Task(
    description=tasks_config["task3"]["description"],
    expected_output=tasks_config["task3"]["expected_output"],
    agent=formatter,
)
crew = Crew(
    agents=[analyst, scriptwriter, formatter],
    tasks=[task1, task2, task3],
    verbose=True,  # Crew verbose mode lets you see which tasks are being worked on; set it to 1 or 2 for different logging levels
    process=Process.sequential,  # Sequential process executes tasks one after another, passing each task's outcome as extra context into the next
)

inputs = {'discussion': discussion}
result = crew.kickoff(inputs)

print("===================== end result from crew ===================================")
print(result)



# print("===================== score ==================================================")
# task4 = Task(
#     description=tasks_config["task4"]["description"],
#     expected_output=tasks_config["task4"]["expected_output"],
#     agent=scorer,
# )

# score = task4.execute()
# score = score.split("\n")[0]  # sometimes an explanation comes after score, ignore
# print(f"Scoring the dialogue as: {score}/10")

 

 

 

LLM Agents (Part 11) | The Multi-Agent Framework CrewAI Compared with AutoGen

https://zhuanlan.zhihu.com/p/681218725

CrewAI can be used in production environments. It gives up a little flexibility and randomness in how speakers respond and are orchestrated, but gains more determinism over agent capabilities, tasks, and turn-taking. So far the only orchestration strategy is "sequential"; "consensual" and "hierarchical" are planned for future releases.

As we dig deeper into the framework and its code in the next chapter, we will find it very easy to ensure that tasks are handled by the relevant agents in the defined order. You will not see any lively interaction between agents in CrewAI, such as one agent correcting another or a single agent speaking several times in a row. Those interactions are useful for experiments or demos, but they are of little use for real LLM products that need efficient, deterministic, and cost-effective task completion. CrewAI therefore favors a streamlined and reliable approach: a robust group chat in which every AI agent knows exactly what to do and what its goal is.

In my view, the other and most critical advantage is its thriving set of tools and rich supporting resources for building agents and tasks, which comes from its agents being designed on top of LangChain. LangChain is a mature LLM framework that already provides LLM application developers with a wealth of tools and peripherals for extending the capabilities of language models.

CrewAI proves to be a good fit for LLM application developers who are familiar with LangChain or have already built applications on top of it. For them, integrating existing standalone agents into the CrewAI framework is relatively easy. By contrast, AutoGen may have a steeper learning curve and require more time to understand its usage and integrate agents effectively.
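
As a concrete illustration of that point, an off-the-shelf LangChain tool can be handed to a CrewAI agent in exactly the same way the custom markdown_validation_tool is used in the first example. The sketch below is illustrative only: it assumes the duckduckgo-search package is installed, and the agent's role, goal, and backstory are made up for the example.

from crewai import Agent
from langchain_community.tools import DuckDuckGoSearchRun  # any existing LangChain tool works the same way

search_tool = DuckDuckGoSearchRun()

researcher = Agent(
    role="Researcher",
    goal="Look up background information for a screenplay",
    backstory="You are a meticulous research assistant.",
    tools=[search_tool],      # the LangChain tool is passed straight to the CrewAI agent
    allow_delegation=False,
    verbose=True,
    llm=default_llm,          # reuse the ChatOpenAI instance defined in the examples above
)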

 

From: https://www.cnblogs.com/lightsong/p/18416751
