
An Analysis of BabyAGI's Principles and Code Implementation


BabyAGI is an AI-powered task management system that uses the OpenAI API and a vector database (originally Pinecone) to create, prioritize, and execute tasks.

The system uses OpenAI and a vector database such as Chroma or Weaviate to create, prioritize, and execute tasks. The main idea behind the system is that it creates tasks based on the results of previous tasks and a predefined objective. The script then uses OpenAI's natural language processing (NLP) capabilities to create new tasks based on the objective, and uses Chroma/Weaviate to store and retrieve task results for context. This is a pared-down version of the original Task-Driven Autonomous Agent (March 28, 2023).

 

How it works

The script works by running an infinite loop that performs the following steps:

  1. Pull the first task from the task list.
  2. Send the task to the execution agent, which uses OpenAI's API to complete the task based on the context.
  3. Enrich the result and store it in Chroma/Weaviate.
  4. Create new tasks and reprioritize the task list based on the objective and the result of the previous task (a minimal sketch of this loop follows).

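Before reading the full script, here is a minimal, self-contained sketch of that loop. The three agent functions are stand-in stubs for the real execution_agent, task_creation_agent, and prioritization_agent defined in the code below, and the objective and task names are placeholders:

from collections import deque

OBJECTIVE = "Solve world hunger"  # placeholder objective

# Stand-in stubs for the LLM-backed agents defined in the full script below.
def execution_agent(objective, task_name):
    return f"(result of: {task_name})"

def task_creation_agent(objective, result, task_name, task_names):
    return []  # the real agent asks the LLM for follow-up tasks

def prioritization_agent(task_names):
    return task_names  # the real agent re-ranks the list via the LLM

tasks = deque([{"task_name": "Develop a task list"}])
results_store = []  # stands in for the Chroma/Weaviate collection

while tasks:
    task = tasks.popleft()                                  # 1. pull the first task
    result = execution_agent(OBJECTIVE, task["task_name"])  # 2. execute it
    results_store.append((task["task_name"], result))       # 3. store the result
    names = [t["task_name"] for t in tasks]
    for name in task_creation_agent(OBJECTIVE, result, task["task_name"], names):
        tasks.append({"task_name": name})                   # 4a. add new tasks
    ranked = prioritization_agent([t["task_name"] for t in tasks])
    tasks = deque({"task_name": n} for n in ranked)         # 4b. reprioritize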

The execution_agent() function is where the OpenAI API is used. It takes two parameters: the objective and the task. It sends a prompt to OpenAI's API, which returns the result of the task. The prompt includes a description of the AI system's role, the objective, and the task itself. The result is then returned as a string.

The task_creation_agent() function is where OpenAI's API is used to create new tasks based on the objective and the result of the previous task. The function takes four parameters: the objective, the result of the previous task, the task description, and the current task list. It sends a prompt to OpenAI's API, which returns a list of new tasks as a string. The function then returns the new tasks as a list of dictionaries, where each dictionary contains the name of a task.
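To make that return format concrete, here is the parsing logic from the end of task_creation_agent(), run on a made-up model response:

import re

# A made-up model response in the required numbered-list format:
response = "1. Research existing solutions\n2. Draft an outline"

new_tasks_list = []
for task_string in response.split("\n"):
    task_parts = task_string.strip().split(".", 1)  # split "1. Task" at the first period
    if len(task_parts) == 2:
        task_id = "".join(s for s in task_parts[0] if s.isnumeric())
        task_name = re.sub(r"[^\w\s_]+", "", task_parts[1]).strip()  # strip punctuation
        if task_name and task_id.isnumeric():
            new_tasks_list.append(task_name)

out = [{"task_name": name} for name in new_tasks_list]
print(out)  # [{'task_name': 'Research existing solutions'}, {'task_name': 'Draft an outline'}]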

The prioritization_agent() function is where OpenAI's API is used to reprioritize the task list. In this version it takes no arguments: it reads the current task names from the task storage, sends a prompt to OpenAI's API, and receives the reprioritized task list back as a numbered list.

Finally, the script uses Chroma/Weaviate to store and retrieve task results for context. It creates a Chroma/Weaviate collection named after the RESULTS_STORE_NAME variable (with TABLE_NAME as a legacy fallback). Chroma/Weaviate is then used to store each task result in the collection, along with the task name and any additional metadata.
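To see the storage side in isolation, here is a minimal sketch using chromadb's default embedding function; the collection name and texts are illustrative, and the script itself plugs in OpenAI or llama embeddings instead:

import chromadb

client = chromadb.PersistentClient(path="chroma")  # local persistent store
collection = client.get_or_create_collection(name="test-table")

# Store a task result keyed by a result id, with the task name as metadata
collection.add(
    ids=["result_1"],
    documents=["Wrote a summary of the objective."],
    metadatas=[{"task": "Develop a task list"}],
)

# Retrieve the most relevant previously completed tasks for a query
results = collection.query(query_texts=["the objective"], n_results=1, include=["metadatas"])
print([item["task"] for item in results["metadatas"][0]])  # ['Develop a task list']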

 

Core code:

#!/usr/bin/env python3
from dotenv import load_dotenv

# Load default environment variables (.env)
load_dotenv()

import os
import time
import logging
from collections import deque
from typing import Dict, List
import importlib
import openai
import chromadb
import tiktoken as tiktoken
from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
from chromadb.api.types import Documents, EmbeddingFunction, Embeddings
import re

# default opt out of chromadb telemetry.
from chromadb.config import Settings

client = chromadb.Client(Settings(anonymized_telemetry=False))

# Engine configuration

# Model: GPT, LLAMA, HUMAN, etc.
LLM_MODEL = os.getenv("LLM_MODEL", os.getenv("OPENAI_API_MODEL", "gpt-3.5-turbo")).lower()

# API Keys
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", "")
if not (LLM_MODEL.startswith("llama") or LLM_MODEL.startswith("human")):
    assert OPENAI_API_KEY, "\033[91m\033[1m" + "OPENAI_API_KEY environment variable is missing from .env" + "\033[0m\033[0m"

# Table config
RESULTS_STORE_NAME = os.getenv("RESULTS_STORE_NAME", os.getenv("TABLE_NAME", ""))
assert RESULTS_STORE_NAME, "\033[91m\033[1m" + "RESULTS_STORE_NAME environment variable is missing from .env" + "\033[0m\033[0m"

# Run configuration
INSTANCE_NAME = os.getenv("INSTANCE_NAME", os.getenv("BABY_NAME", "BabyAGI"))
COOPERATIVE_MODE = "none"
JOIN_EXISTING_OBJECTIVE = False

# Goal configuration
OBJECTIVE = os.getenv("OBJECTIVE", "")
INITIAL_TASK = os.getenv("INITIAL_TASK", os.getenv("FIRST_TASK", ""))

# Model configuration
OPENAI_TEMPERATURE = float(os.getenv("OPENAI_TEMPERATURE", 0.0))


# Extensions support begin

def can_import(module_name):
    try:
        importlib.import_module(module_name)
        return True
    except ImportError:
        return False


DOTENV_EXTENSIONS = os.getenv("DOTENV_EXTENSIONS", "").split(" ")

# Command line arguments extension
# Can override any of the above environment variables
ENABLE_COMMAND_LINE_ARGS = (
        os.getenv("ENABLE_COMMAND_LINE_ARGS", "false").lower() == "true"
)
if ENABLE_COMMAND_LINE_ARGS:
    if can_import("extensions.argparseext"):
        from extensions.argparseext import parse_arguments

        OBJECTIVE, INITIAL_TASK, LLM_MODEL, DOTENV_EXTENSIONS, INSTANCE_NAME, COOPERATIVE_MODE, JOIN_EXISTING_OBJECTIVE = parse_arguments()

# Human mode extension
# Gives human input to babyagi
if LLM_MODEL.startswith("human"):
    if can_import("extensions.human_mode"):
        from extensions.human_mode import user_input_await

# Load additional environment variables for enabled extensions
# TODO: This might override the following command line arguments as well:
#    OBJECTIVE, INITIAL_TASK, LLM_MODEL, INSTANCE_NAME, COOPERATIVE_MODE, JOIN_EXISTING_OBJECTIVE
if DOTENV_EXTENSIONS:
    if can_import("extensions.dotenvext"):
        from extensions.dotenvext import load_dotenv_extensions

        load_dotenv_extensions(DOTENV_EXTENSIONS)

# TODO: There's still work to be done here to enable people to get
# defaults from dotenv extensions, but also provide command line
# arguments to override them

# Extensions support end

print("\033[95m\033[1m" + "\n*****CONFIGURATION*****\n" + "\033[0m\033[0m")
print(f"Name  : {INSTANCE_NAME}")
print(f"Mode  : {'alone' if COOPERATIVE_MODE in ['n', 'none'] else 'local' if COOPERATIVE_MODE in ['l', 'local'] else 'distributed' if COOPERATIVE_MODE in ['d', 'distributed'] else 'undefined'}")
print(f"LLM   : {LLM_MODEL}")


# Check if we know what we are doing
assert OBJECTIVE, "\033[91m\033[1m" + "OBJECTIVE environment variable is missing from .env" + "\033[0m\033[0m"
assert INITIAL_TASK, "\033[91m\033[1m" + "INITIAL_TASK environment variable is missing from .env" + "\033[0m\033[0m"

LLAMA_MODEL_PATH = os.getenv("LLAMA_MODEL_PATH", "models/llama-13B/ggml-model.bin")
if LLM_MODEL.startswith("llama"):
    if can_import("llama_cpp"):
        from llama_cpp import Llama

        print(f"LLAMA : {LLAMA_MODEL_PATH}" + "\n")
        assert os.path.exists(LLAMA_MODEL_PATH), "\033[91m\033[1m" + f"Model can't be found." + "\033[0m\033[0m"

        CTX_MAX = 1024
        LLAMA_THREADS_NUM = int(os.getenv("LLAMA_THREADS_NUM", 8))

        print('Initialize model for evaluation')
        llm = Llama(
            model_path=LLAMA_MODEL_PATH,
            n_ctx=CTX_MAX,
            n_threads=LLAMA_THREADS_NUM,
            n_batch=512,
            use_mlock=False,
        )

        print('\nInitialize model for embedding')
        llm_embed = Llama(
            model_path=LLAMA_MODEL_PATH,
            n_ctx=CTX_MAX,
            n_threads=LLAMA_THREADS_NUM,
            n_batch=512,
            embedding=True,
            use_mlock=False,
        )

        print(
            "\033[91m\033[1m"
            + "\n*****USING LLAMA.CPP. POTENTIALLY SLOW.*****"
            + "\033[0m\033[0m"
        )
    else:
        print(
            "\033[91m\033[1m"
            + "\nLlama LLM requires package llama-cpp. Falling back to GPT-3.5-turbo."
            + "\033[0m\033[0m"
        )
        LLM_MODEL = "gpt-3.5-turbo"

if LLM_MODEL.startswith("gpt-4"):
    print(
        "\033[91m\033[1m"
        + "\n*****USING GPT-4. POTENTIALLY EXPENSIVE. MONITOR YOUR COSTS*****"
        + "\033[0m\033[0m"
    )

if LLM_MODEL.startswith("human"):
    print(
        "\033[91m\033[1m"
        + "\n*****USING HUMAN INPUT*****"
        + "\033[0m\033[0m"
    )

print("\033[94m\033[1m" + "\n*****OBJECTIVE*****\n" + "\033[0m\033[0m")
print(f"{OBJECTIVE}")

if not JOIN_EXISTING_OBJECTIVE:
    print("\033[93m\033[1m" + "\nInitial task:" + "\033[0m\033[0m" + f" {INITIAL_TASK}")
else:
    print("\033[93m\033[1m" + f"\nJoining to help the objective" + "\033[0m\033[0m")

# Configure OpenAI
openai.api_key = OPENAI_API_KEY


# Llama embedding function
class LlamaEmbeddingFunction(EmbeddingFunction):
    def __init__(self):
        return


    def __call__(self, texts: Documents) -> Embeddings:
        embeddings = []
        for t in texts:
            e = llm_embed.embed(t)
            embeddings.append(e)
        return embeddings


# Results storage using local ChromaDB
class DefaultResultsStorage:
    def __init__(self):
        logging.getLogger('chromadb').setLevel(logging.ERROR)
        # Create Chroma collection
        chroma_persist_dir = "chroma"
        chroma_client = chromadb.PersistentClient(
            settings=chromadb.config.Settings(
                persist_directory=chroma_persist_dir,
            )
        )

        metric = "cosine"
        if LLM_MODEL.startswith("llama"):
            embedding_function = LlamaEmbeddingFunction()
        else:
            embedding_function = OpenAIEmbeddingFunction(api_key=OPENAI_API_KEY)
        self.collection = chroma_client.get_or_create_collection(
            name=RESULTS_STORE_NAME,
            metadata={"hnsw:space": metric},
            embedding_function=embedding_function,
        )

    def add(self, task: Dict, result: str, result_id: str):

        # Break the function if LLM_MODEL starts with "human" (case-insensitive)
        if LLM_MODEL.startswith("human"):
            return
        # Continue with the rest of the function

        embeddings = llm_embed.embed(result) if LLM_MODEL.startswith("llama") else None
        if (
                len(self.collection.get(ids=[result_id], include=[])["ids"]) > 0
        ):  # Check if the result already exists
            self.collection.update(
                ids=result_id,
                embeddings=embeddings,
                documents=result,
                metadatas={"task": task["task_name"], "result": result},
            )
        else:
            self.collection.add(
                ids=result_id,
                embeddings=embeddings,
                documents=result,
                metadatas={"task": task["task_name"], "result": result},
            )

    def query(self, query: str, top_results_num: int) -> List[dict]:
        count: int = self.collection.count()
        if count == 0:
            return []
        results = self.collection.query(
            query_texts=query,
            n_results=min(top_results_num, count),
            include=["metadatas"]
        )
        return [item["task"] for item in results["metadatas"][0]]


# Initialize results storage
def try_weaviate():
    WEAVIATE_URL = os.getenv("WEAVIATE_URL", "")
    WEAVIATE_USE_EMBEDDED = os.getenv("WEAVIATE_USE_EMBEDDED", "False").lower() == "true"
    if (WEAVIATE_URL or WEAVIATE_USE_EMBEDDED) and can_import("extensions.weaviate_storage"):
        WEAVIATE_API_KEY = os.getenv("WEAVIATE_API_KEY", "")
        from extensions.weaviate_storage import WeaviateResultsStorage
        print("\nUsing results storage: " + "\033[93m\033[1m" + "Weaviate" + "\033[0m\033[0m")
        return WeaviateResultsStorage(OPENAI_API_KEY, WEAVIATE_URL, WEAVIATE_API_KEY, WEAVIATE_USE_EMBEDDED, LLM_MODEL, LLAMA_MODEL_PATH, RESULTS_STORE_NAME, OBJECTIVE)
    return None

def try_pinecone():
    PINECONE_API_KEY = os.getenv("PINECONE_API_KEY", "")
    if PINECONE_API_KEY and can_import("extensions.pinecone_storage"):
        PINECONE_ENVIRONMENT = os.getenv("PINECONE_ENVIRONMENT", "")
        assert (
            PINECONE_ENVIRONMENT
        ), "\033[91m\033[1m" + "PINECONE_ENVIRONMENT environment variable is missing from .env" + "\033[0m\033[0m"
        from extensions.pinecone_storage import PineconeResultsStorage
        print("\nUsing results storage: " + "\033[93m\033[1m" + "Pinecone" + "\033[0m\033[0m")
        return PineconeResultsStorage(OPENAI_API_KEY, PINECONE_API_KEY, PINECONE_ENVIRONMENT, LLM_MODEL, LLAMA_MODEL_PATH, RESULTS_STORE_NAME, OBJECTIVE)
    return None

def use_chroma():
    print("\nUsing results storage: " + "\033[93m\033[1m" + "Chroma (Default)" + "\033[0m\033[0m")
    return DefaultResultsStorage()

results_storage = try_weaviate() or try_pinecone() or use_chroma()

# Task storage supporting only a single instance of BabyAGI
class SingleTaskListStorage:
    def __init__(self):
        self.tasks = deque([])
        self.task_id_counter = 0

    def append(self, task: Dict):
        self.tasks.append(task)

    def replace(self, tasks: List[Dict]):
        self.tasks = deque(tasks)

    def popleft(self):
        return self.tasks.popleft()

    def is_empty(self):
        return not self.tasks

    def next_task_id(self):
        self.task_id_counter += 1
        return self.task_id_counter

    def get_task_names(self):
        return [t["task_name"] for t in self.tasks]


# Initialize tasks storage
tasks_storage = SingleTaskListStorage()
if COOPERATIVE_MODE in ['l', 'local']:
    if can_import("extensions.ray_tasks"):
        import sys
        from pathlib import Path

        sys.path.append(str(Path(__file__).resolve().parent))
        from extensions.ray_tasks import CooperativeTaskListStorage

        tasks_storage = CooperativeTaskListStorage(OBJECTIVE)
        print("\nReplacing tasks storage: " + "\033[93m\033[1m" + "Ray" + "\033[0m\033[0m")
elif COOPERATIVE_MODE in ['d', 'distributed']:
    pass


def limit_tokens_from_string(string: str, model: str, limit: int) -> str:
    """Limits the string to a number of tokens (estimated)."""

    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        encoding = tiktoken.encoding_for_model('gpt2')  # Fallback for unknown models.

    encoded = encoding.encode(string)

    return encoding.decode(encoded[:limit])


def openai_call(
    prompt: str,
    model: str = LLM_MODEL,
    temperature: float = OPENAI_TEMPERATURE,
    max_tokens: int = 100,
):
    while True:
        try:
            if model.lower().startswith("llama"):
                result = llm(prompt[:CTX_MAX],
                             stop=["### Human"],
                             echo=False,
                             temperature=0.2,
                             top_k=40,
                             top_p=0.95,
                             repeat_penalty=1.05,
                             max_tokens=200)
                # print('\n*****RESULT JSON DUMP*****\n')
                # print(json.dumps(result))
                # print('\n')
                return result['choices'][0]['text'].strip()
            elif model.lower().startswith("human"):
                return user_input_await(prompt)
            elif not model.lower().startswith("gpt-"):
                # Use completion API
                response = openai.Completion.create(
                    engine=model,
                    prompt=prompt,
                    temperature=temperature,
                    max_tokens=max_tokens,
                    top_p=1,
                    frequency_penalty=0,
                    presence_penalty=0,
                )
                return response.choices[0].text.strip()
            else:
                # Use 4000 instead of the real limit (4097) to give a bit of wiggle room for the encoding of roles.
                # TODO: different limits for different models.

                trimmed_prompt = limit_tokens_from_string(prompt, model, 4000 - max_tokens)

                # Use chat completion API
                messages = [{"role": "system", "content": trimmed_prompt}]
                response = openai.ChatCompletion.create(
                    model=model,
                    messages=messages,
                    temperature=temperature,
                    max_tokens=max_tokens,
                    n=1,
                    stop=None,
                )
                return response.choices[0].message.content.strip()
        except openai.error.RateLimitError:
            print(
                "   *** The OpenAI API rate limit has been exceeded. Waiting 10 seconds and trying again. ***"
            )
            time.sleep(10)  # Wait 10 seconds and try again
        except openai.error.Timeout:
            print(
                "   *** OpenAI API timeout occurred. Waiting 10 seconds and trying again. ***"
            )
            time.sleep(10)  # Wait 10 seconds and try again
        except openai.error.APIError:
            print(
                "   *** OpenAI API error occurred. Waiting 10 seconds and trying again. ***"
            )
            time.sleep(10)  # Wait 10 seconds and try again
        except openai.error.APIConnectionError:
            print(
                "   *** OpenAI API connection error occurred. Check your network settings, proxy configuration, SSL certificates, or firewall rules. Waiting 10 seconds and trying again. ***"
            )
            time.sleep(10)  # Wait 10 seconds and try again
        except openai.error.InvalidRequestError:
            print(
                "   *** OpenAI API invalid request. Check the documentation for the specific API method you are calling and make sure you are sending valid and complete parameters. Waiting 10 seconds and trying again. ***"
            )
            time.sleep(10)  # Wait 10 seconds and try again
        except openai.error.ServiceUnavailableError:
            print(
                "   *** OpenAI API service unavailable. Waiting 10 seconds and trying again. ***"
            )
            time.sleep(10)  # Wait 10 seconds and try again
        else:
            break


def task_creation_agent(
        objective: str, result: Dict, task_description: str, task_list: List[str]
):
    prompt = f"""
You are to use the result from an execution agent to create new tasks with the following objective: {objective}.
The last completed task has the result: \n{result["data"]}
This result was based on this task description: {task_description}.\n"""

    if task_list:
        prompt += f"These are incomplete tasks: {', '.join(task_list)}\n"
    prompt += "Based on the result, return a list of tasks to be completed in order to meet the objective. "
    if task_list:
        prompt += "These new tasks must not overlap with incomplete tasks. "

    prompt += """
Return one task per line in your response. The result must be a numbered list in the format:

#. First task
#. Second task

The number of each entry must be followed by a period. If your list is empty, write "There are no tasks to add at this time."
Unless your list is empty, do not include any headers before your numbered list or follow your numbered list with any other output."""

    print(f'\n*****TASK CREATION AGENT PROMPT****\n{prompt}\n')
    response = openai_call(prompt, max_tokens=2000)
    print(f'\n****TASK CREATION AGENT RESPONSE****\n{response}\n')
    new_tasks = response.split('\n')
    new_tasks_list = []
    for task_string in new_tasks:
        task_parts = task_string.strip().split(".", 1)
        if len(task_parts) == 2:
            task_id = ''.join(s for s in task_parts[0] if s.isnumeric())
            task_name = re.sub(r'[^\w\s_]+', '', task_parts[1]).strip()
            if task_name.strip() and task_id.isnumeric():
                new_tasks_list.append(task_name)
            # print('New task created: ' + task_name)

    out = [{"task_name": task_name} for task_name in new_tasks_list]
    return out


def prioritization_agent():
    task_names = tasks_storage.get_task_names()
    bullet_string = '\n'

    prompt = f"""
You are tasked with prioritizing the following tasks: {bullet_string + bullet_string.join(task_names)}
Consider the ultimate objective of your team: {OBJECTIVE}.
Tasks should be sorted from highest to lowest priority, where higher-priority tasks are those that act as pre-requisites or are more essential for meeting the objective.
Do not remove any tasks. Return the ranked tasks as a numbered list in the format:

#. First task
#. Second task

The entries must be consecutively numbered, starting with 1. The number of each entry must be followed by a period.
Do not include any headers before your ranked list or follow your list with any other output."""

    print(f'\n****TASK PRIORITIZATION AGENT PROMPT****\n{prompt}\n')
    response = openai_call(prompt, max_tokens=2000)
    print(f'\n****TASK PRIORITIZATION AGENT RESPONSE****\n{response}\n')
    if not response:
        print('Received empty response from prioritization agent. Keeping task list unchanged.')
        return
    new_tasks = response.split("\n") if "\n" in response else [response]
    new_tasks_list = []
    for task_string in new_tasks:
        task_parts = task_string.strip().split(".", 1)
        if len(task_parts) == 2:
            task_id = ''.join(s for s in task_parts[0] if s.isnumeric())
            task_name = re.sub(r'[^\w\s_]+', '', task_parts[1]).strip()
            if task_name.strip():
                new_tasks_list.append({"task_id": task_id, "task_name": task_name})

    return new_tasks_list


# Execute a task based on the objective and five previous tasks
def execution_agent(objective: str, task: str) -> str:
    """
    Executes a task based on the given objective and previous context.

    Args:
        objective (str): The objective or goal for the AI to perform the task.
        task (str): The task to be executed by the AI.

    Returns:
        str: The response generated by the AI for the given task.

    """

    context = context_agent(query=objective, top_results_num=5)
    # print("\n****RELEVANT CONTEXT****\n")
    # print(context)
    # print('')
    prompt = f'Perform one task based on the following objective: {objective}.\n'
    if context:
        prompt += 'Take into account these previously completed tasks:' + '\n'.join(context)
    prompt += f'\nYour task: {task}\nResponse:'
    return openai_call(prompt, max_tokens=2000)


# Get the top n completed tasks for the objective
def context_agent(query: str, top_results_num: int):
    """
    Retrieves context for a given query from an index of tasks.

    Args:
        query (str): The query or objective for retrieving context.
        top_results_num (int): The number of top results to retrieve.

    Returns:
        list: A list of tasks as context for the given query, sorted by relevance.

    """
    results = results_storage.query(query=query, top_results_num=top_results_num)
    # print("****RESULTS****")
    # print(results)
    return results


# Add the initial task if starting new objective
if not JOIN_EXISTING_OBJECTIVE:
    initial_task = {
        "task_id": tasks_storage.next_task_id(),
        "task_name": INITIAL_TASK
    }
    tasks_storage.append(initial_task)


def main():
    loop = True
    while loop:
        # As long as there are tasks in the storage...
        if not tasks_storage.is_empty():
            # Print the task list
            print("\033[95m\033[1m" + "\n*****TASK LIST*****\n" + "\033[0m\033[0m")
            for t in tasks_storage.get_task_names():
                print(" • " + str(t))

            # Step 1: Pull the first incomplete task
            task = tasks_storage.popleft()
            print("\033[92m\033[1m" + "\n*****NEXT TASK*****\n" + "\033[0m\033[0m")
            print(str(task["task_name"]))

            # Send to execution function to complete the task based on the context
            result = execution_agent(OBJECTIVE, str(task["task_name"]))
            print("\033[93m\033[1m" + "\n*****TASK RESULT*****\n" + "\033[0m\033[0m")
            print(result)

            # Step 2: Enrich result and store in the results storage
            # This is where you should enrich the result if needed
            enriched_result = {
                "data": result
            }
            # extract the actual result from the dictionary
            # since we don't do enrichment currently
            # vector = enriched_result["data"]

            result_id = f"result_{task['task_id']}"

            results_storage.add(task, result, result_id)

            # Step 3: Create new tasks and re-prioritize task list
            # only the main instance in cooperative mode does that
            new_tasks = task_creation_agent(
                OBJECTIVE,
                enriched_result,
                task["task_name"],
                tasks_storage.get_task_names(),
            )

            print('Adding new tasks to task_storage')
            for new_task in new_tasks:
                new_task.update({"task_id": tasks_storage.next_task_id()})
                print(str(new_task))
                tasks_storage.append(new_task)

            if not JOIN_EXISTING_OBJECTIVE:
                prioritized_tasks = prioritization_agent()
                if prioritized_tasks:
                    tasks_storage.replace(prioritized_tasks)

            # Sleep a bit before checking the task list again
            time.sleep(5)
        else:
            print('Done.')
            loop = False


if __name__ == "__main__":
    main()

  

Code analysis:

This code configures and runs a task-execution and result-storage system driven by environment variables and command-line arguments. Its main features and components are:

1. Environment variable loading and configuration:

  • Uses the dotenv library to load environment variables from a .env file (an example .env appears after this list).
  • Configures several key variables, such as the model type (LLM_MODEL), the API key (OPENAI_API_KEY), and the results store name (RESULTS_STORE_NAME).

2. Conditional imports and extension support:

  • Dynamically checks for and imports modules such as extensions.argparseext and extensions.human_mode to support command-line argument parsing and human-input mode.
  • Loads additional environment-variable extensions.

3. Model configuration:

  • Selects the language model based on environment variables: GPT-3.5-turbo, a llama model, or human-input mode.
  • For llama models, also configures the model path and thread count, and initializes the model.

4. Results storage:

  • Initializes the default results store, or tries alternative backends such as Weaviate and Pinecone, depending on configuration.
  • Defines how result data is added and queried.

5. Task storage and management:

  • Initializes task storage, supporting a single instance or cooperative mode.
  • Provides methods to append, replace, pop, and list tasks (see the usage sketch at the end of this section).

6. Task execution and management:

  • Defines functions for executing tasks, creating new tasks, prioritizing the list, and managing context.
  • Uses the OpenAI API to execute tasks, then generates new tasks or reorders existing ones based on the results.

7. Main function:

  • The main loop handles task execution, result storage, task creation, and prioritization.
  • Continues or stops based on whether the task storage still holds tasks.
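
For reference (point 1 above), a minimal .env might look like this; every value here is an illustrative placeholder:

LLM_MODEL=gpt-3.5-turbo
OPENAI_API_KEY=sk-your-key-here
RESULTS_STORE_NAME=baby-agi-test-table
OBJECTIVE=Solve world hunger
INITIAL_TASK=Develop a task list
OPENAI_TEMPERATURE=0.0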

Overall, the script is an automated task-processing system built on environment-based configuration and dynamic module loading: it executes tasks according to the given configuration and inputs, and manages both task storage and execution results.
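
And, as referenced in point 5 above, a quick usage sketch of SingleTaskListStorage, assuming the class definition from the script (task names are made up):

storage = SingleTaskListStorage()
storage.append({"task_id": storage.next_task_id(), "task_name": "Develop a task list"})
storage.append({"task_id": storage.next_task_id(), "task_name": "Research the objective"})

print(storage.get_task_names())  # ['Develop a task list', 'Research the objective']
first = storage.popleft()        # removes and returns the first task dict
print(storage.is_empty())        # False: one task still queued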

 

Let's look at the agents' prompts. First, the task creation prompt from task_creation_agent():

    prompt = f"""
You are to use the result from an execution agent to create new tasks with the following objective: {objective}.
The last completed task has the result: \n{result["data"]}
This result was based on this task description: {task_description}.\n"""

    if task_list:
        prompt += f"These are incomplete tasks: {', '.join(task_list)}\n"
    prompt += "Based on the result, return a list of tasks to be completed in order to meet the objective. "
    if task_list:
        prompt += "These new tasks must not overlap with incomplete tasks. "

    prompt += """
Return one task per line in your response. The result must be a numbered list in the format:

#. First task
#. Second task

The number of each entry must be followed by a period. If your list is empty, write "There are no tasks to add at this time."
Unless your list is empty, do not include any headers before your numbered list or follow your numbered list with any other output."""

 

In plain terms: the prompt hands the model the objective, the result of the last completed task, and that task's description; it lists the still-incomplete tasks and forbids overlapping with them; and it demands the answer as a strictly formatted numbered list ("#. First task"), or the exact sentence "There are no tasks to add at this time." if there is nothing to add.

  

  

    prompt = f""" You are tasked with prioritizing the following tasks: {bullet_string + bullet_string.join(task_names)} Consider the ultimate objective of your team: {OBJECTIVE}. Tasks should be sorted from highest to lowest priority, where higher-priority tasks are those that act as pre-requisites or are more essential for meeting the objective. Do not remove any tasks. Return the ranked tasks as a numbered list in the format:
#. First task #. Second task
The entries must be consecutively numbered, starting with 1. The number of each entry must be followed by a period. Do not include any headers before your ranked list or follow your list with any other output."""   翻译下:
prompt = f"""
您被要求对以下任务进行优先排序:{bullet_string + bullet_string.join(task_names)}
考虑您团队的最终目标:{OBJECTIVE}。
任务应按优先级从高到低排序,其中优先级较高的任务是那些作为前提条件或对实现目标更为重要的任务。
请勿删除任何任务。返回按以下格式编号的排序后的任务列表:

#. 第一个任务
#. 第二个任务

条目必须按顺序编号,从1开始。每个条目的编号后必须跟着一个句点。
在您的排序列表之前请不要包含任何标题,也请不要在您的列表之后跟随任何其他输出。"""

  

 

From: https://www.cnblogs.com/bonelee/p/18208142
