microsoft / autogen

A programming framework for agentic AI. Discord: https://aka.ms/autogen-dc. Roadmap: https://aka.ms/autogen-roadmap

Home Page: https://microsoft.github.io/autogen/

License: Creative Commons Attribution 4.0 International


autogen's Introduction


AutoGen

📚 Cite paper.

🔥 Apr 17, 2024: Andrew Ng cited AutoGen in The Batch newsletter and What's next for AI agentic workflows at Sequoia Capital's AI Ascent (Mar 26).

🔥 Mar 3, 2024: What's new in AutoGen? 📰 Blog; 📺 YouTube.

🔥 Mar 1, 2024: The first AutoGen multi-agent experiment on the challenging GAIA benchmark achieved the No. 1 accuracy in all three levels.

🎉 Jan 30, 2024: AutoGen is highlighted by Peter Lee in Microsoft Research Forum Keynote.

🎉 Dec 31, 2023: AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework is selected by TheSequence: My Five Favorite AI Papers of 2023.

🎉 Nov 8, 2023: AutoGen is selected into Open100: Top 100 Open Source achievements 35 days after spinoff.

🎉 Nov 6, 2023: AutoGen is mentioned by Satya Nadella in a fireside chat.

🎉 Nov 1, 2023: AutoGen is the top trending repo on GitHub in October 2023.

🎉 Oct 03, 2023: AutoGen spins off from FLAML on GitHub and has a major paper update (first version on Aug 16).

🎉 Mar 29, 2023: AutoGen is first created in FLAML.

↑ Back to Top ↑

What is AutoGen

AutoGen is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools.

AutoGen Overview

  • AutoGen enables building next-gen LLM applications based on multi-agent conversations with minimal effort. It simplifies the orchestration, automation, and optimization of complex LLM workflows. It maximizes the performance of LLMs and overcomes their weaknesses.
  • It supports diverse conversation patterns for complex workflows. With customizable and conversable agents, developers can use AutoGen to build a wide range of conversation patterns concerning conversation autonomy, the number of agents, and agent conversation topology.
  • It provides a collection of working systems with different complexities. These systems span a wide range of applications from various domains and complexities. This demonstrates how AutoGen can easily support diverse conversation patterns.
  • AutoGen provides enhanced LLM inference. It offers utilities like API unification and caching, and advanced usage patterns, such as error handling, multi-config inference, context programming, etc.

AutoGen is powered by collaborative research studies from Microsoft, Penn State University, and the University of Washington.

↑ Back to Top ↑

Roadmaps

To see what we are working on and what we plan to work on, please check our Roadmap Issues.

↑ Back to Top ↑

Quickstart

The easiest way to start playing is

  1. Click below to use the GitHub Codespace

    Open in GitHub Codespaces

  2. Copy OAI_CONFIG_LIST_sample into the ./notebook folder, rename it to OAI_CONFIG_LIST, and set the correct configuration.

  3. Start playing with the notebooks!

NOTE: OAI_CONFIG_LIST_sample lists GPT-4 as the default model, as this represents our current recommendation, and is known to work well with AutoGen. If you use a model other than GPT-4, you may need to revise various system prompts (especially if using weaker models like GPT-3.5-turbo). Moreover, if you use models other than those hosted by OpenAI or Azure, you may incur additional risks related to alignment and safety. Proceed with caution if updating this default.
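
For reference, OAI_CONFIG_LIST is a JSON list of endpoint configurations. A minimal sketch of its content, following the structure shown in the example code further below (Azure endpoints take additional keys; the api_key value is a placeholder):

[
    {
        "model": "gpt-4",
        "api_key": "<your OpenAI API key here>"
    }
]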

↑ Back to Top ↑

Option 1. Install and Run AutoGen in Docker

Find detailed instructions for users here, and for developers here.

Option 2. Install AutoGen Locally

AutoGen requires Python version >= 3.8, < 3.13. It can be installed from pip:

pip install pyautogen

Minimal dependencies are installed without extra options. You can install extra options based on the feature you need.
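
For example, to use retrieval-augmented chat, you can install the corresponding extra (a sketch; the extra name follows the installation docs):

pip install "pyautogen[retrievechat]"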

Find more options in Installation.

Even if you are installing and running AutoGen locally outside of Docker, the recommendation and default behavior of agents is to perform code execution in Docker. Find more instructions on how to change the default behavior here.

For LLM inference configurations, check the FAQs.

↑ Back to Top ↑

Multi-Agent Conversation Framework

AutoGen enables next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans. By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.

Features of this use case include:

  • Multi-agent conversations: AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM.
  • Customization: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
  • Human participation: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.

For example,

from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
# Load LLM inference endpoints from an env variable or a file
# See https://microsoft.github.io/autogen/docs/FAQ#set-your-api-endpoints
# and OAI_CONFIG_LIST_sample
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
# You can also set config_list directly as a list, for example, config_list = [{'model': 'gpt-4', 'api_key': '<your OpenAI API key here>'},]
assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent("user_proxy", code_execution_config={"work_dir": "coding", "use_docker": False}) # IMPORTANT: set to True to run code in docker, recommended
user_proxy.initiate_chat(assistant, message="Plot a chart of NVDA and TESLA stock price change YTD.")
# This initiates an automated chat between the two agents to solve the task

After cloning the repo, this example can be run with

python test/twoagent.py

The figure below shows an example conversation flow with AutoGen.

Alternatively, the sample code here allows a user to chat with an AutoGen agent in ChatGPT style. Please find more code examples for this feature.

↑ Back to Top ↑

Enhanced LLM Inferences

AutoGen also helps maximize the utility of expensive LLMs such as ChatGPT and GPT-4. It offers enhanced LLM inference with powerful functionality like caching, error handling, multi-config inference, and templating.
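
A minimal sketch of this inference API, assuming the pyautogen 0.2 OpenAIWrapper interface and a config_list loaded as in the quickstart example:

from autogen import OpenAIWrapper

# config_list is assumed to be defined as in the quickstart example
client = OpenAIWrapper(config_list=config_list)
# cache_seed enables disk caching, so repeated identical requests reuse results
response = client.create(
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    cache_seed=42,
)
print(client.extract_text_or_completion_object(response))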

↑ Back to Top ↑

Documentation

You can find detailed documentation about AutoGen here.


↑ Back to Top ↑

Related Papers

AutoGen

@inproceedings{wu2023autogen,
      title={AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework},
      author={Qingyun Wu and Gagan Bansal and Jieyu Zhang and Yiran Wu and Beibin Li and Erkang Zhu and Li Jiang and Xiaoyun Zhang and Shaokun Zhang and Jiale Liu and Ahmed Hassan Awadallah and Ryen W White and Doug Burger and Chi Wang},
      year={2023},
      eprint={2308.08155},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}

EcoOptiGen

@inproceedings{wang2023EcoOptiGen,
    title={Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference},
    author={Chi Wang and Susan Xueqing Liu and Ahmed H. Awadallah},
    year={2023},
    booktitle={AutoML'23},
}

MathChat

@inproceedings{wu2023empirical,
    title={An Empirical Study on Challenging Math Problem Solving with GPT-4},
    author={Yiran Wu and Feiran Jia and Shaokun Zhang and Hangyu Li and Erkang Zhu and Yue Wang and Yin Tat Lee and Richard Peng and Qingyun Wu and Chi Wang},
    year={2023},
    booktitle={ArXiv preprint arXiv:2306.01337},
}

↑ Back to Top ↑

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

If you are new to GitHub, here is a detailed help source on getting involved with development on GitHub.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

↑ Back to Top ↑

Contributors Wall

↑ Back to Top ↑

Legal Notices

Microsoft and any contributors grant you a license to the Microsoft documentation and other content in this repository under the Creative Commons Attribution 4.0 International Public License, see the LICENSE file, and grant you a license to any code in the repository under the MIT License, see the LICENSE-CODE file.

Microsoft, Windows, Microsoft Azure, and/or other Microsoft products and services referenced in the documentation may be either trademarks or registered trademarks of Microsoft in the United States and/or other countries. The licenses for this project do not grant you rights to use any Microsoft names, logos, or trademarks. Microsoft's general trademark guidelines can be found at http://go.microsoft.com/fwlink/?LinkID=254653.

Privacy information can be found at https://privacy.microsoft.com/en-us/

Microsoft and any contributors reserve all other rights, whether under their respective copyrights, patents, or trademarks, whether by implication, estoppel, or otherwise.

↑ Back to Top ↑


autogen's Issues

Azure OpenAI API requires 'content' field despite documentation stating otherwise

Description:
When interacting with the Azure OpenAI API, the 'content' field is mandated even though the official API documentation mentions it as optional. This inconsistency leads to an error message: 'content' is a required property - 'messages.2'.

Steps to Reproduce:

  1. Set up Azure OpenAI API integration as per the documentation.
  2. Make a request without the 'content' field.
  3. Observe the error message.

Expected Behavior:
The API should accept requests without the 'content' field, as per the documentation.

Actual Behavior:
The API returns an error message: 'content' is a required property - 'messages.2'.

Reference:
Error can be traced back to conversable_agent.py:274.

Environment:

  • Azure OpenAI version: 2023-08-01-preview
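
A minimal workaround sketch (a hypothetical helper, not an official fix): normalize outgoing messages so the 'content' key is always a string before the request reaches Azure.

def ensure_content(messages):
    # Azure OpenAI rejects messages whose "content" is missing or None
    # (e.g., pure function_call messages), so substitute an empty string
    for m in messages:
        if m.get("content") is None:
            m["content"] = ""
    return messages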

RAG

Folks,

I am testing AutoGen for Retrieval Augmented Code Generation and Question Answering by following the notebook posted here, https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agentchat_RetrieveChat.ipynb

I have a few follow-up questions:

  1. How do I specify which embedding function, e.g., openai.Embedding or HuggingFaceEmbedding, can be used in RetrieveUserProxyAgent?
  2. How do I configure the CharacterTextSplitter/RecursiveCharacterTextSplitter in RetrieveUserProxyAgent?
  3. I encountered this error when I was going through the example notebook.
File ~/venv/GPT_venv/lib/python3.10/site-packages/autogen/retrieve_utils.py:220, in create_vector_db_from_dir(dir_path, max_tokens, client, db_path, collection_name, get_or_create, chunk_mode, must_break_at_empty_line, embedding_model)
    212     for i in range(0, len(chunks), 40000):
    213         collection.upsert(
    214             documents=chunks[
    215                 i : i + 40000
    216             ],  # we handle tokenization, embedding, and indexing automatically. You can skip that and add your own embeddings as well
    217             ids=[f"doc_{i}" for i in range(i, i + 40000)],  # unique for each doc
    218         )
    219     collection.upsert(
--> 220         documents=chunks[i : len(chunks)],
    221         ids=[f"doc_{i}" for i in range(i, len(chunks))],  # unique for each doc
    222     )
    223 except ValueError as e:
    224     logger.warning(f"{e}")

UnboundLocalError: local variable 'i' referenced before assignment
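
The error occurs because the for loop never runs when chunks is empty, leaving i unbound for the trailing upsert. A minimal fix sketch, paraphrasing the loop above (collection and chunks are assumed to be defined as in retrieve_utils.py):

batch = 40000  # upsert in batches, as in retrieve_utils.py
for i in range(0, len(chunks), batch):
    end = min(i + batch, len(chunks))
    collection.upsert(
        documents=chunks[i:end],
        ids=[f"doc_{j}" for j in range(i, end)],  # unique for each doc
    )
# no trailing upsert is needed: every batch, including the last, is handled above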

add citation information to readme?


We can have two subsections there: one for the library (AutoGen & EcoOptiGen) and one for applications (MathChat; maybe this is also part of the library?).

Integrate opensource LLMs into autogen

Currently, autogen.oai only supports OpenAI models. I am planning to integrate open-source LLMs into autogen. This would require the Hugging Face transformers library.

Some of the open-source LLMs I have in mind include LLaMA, Alpaca, Falcon, Vicuna, WizardLM, StarCoder, and Guanaco. Most of their inference code is integrated into the Hugging Face transformers library, which can be unified into something like this:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "<name-of-model>"

# Load the tokenizer and model weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Tokenize a prompt and generate a completion
inputs = tokenizer("The world is", return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))


TypeError: can only concatenate str (not "bytes") to str

Hi Team,

I tried some notebooks in Windows VS Code and got the error below. Has anyone seen this before?


TypeError Traceback (most recent call last)
c:\My Work\17. AI learning\autogen\notebook\agentchat_groupchat_vis.ipynb Cell 10 line 1
----> 1 user_proxy.initiate_chat(manager, message="download data from https://raw.githubusercontent.com/uwdata/draco/master/data/cars.csv and plot a visualization that tells us about the relationship between weight and horsepower. Save the plot to a file. Print the fields in a dataset before visualizing it.")
2 # type exit to terminate the chat

File c:\My Work\17. AI learning\autogen.conda\lib\site-packages\autogen\agentchat\conversable_agent.py:521, in ConversableAgent.initiate_chat(self, recipient, clear_history, silent, **context)
507 """Initiate a chat with the recipient agent.
508
509 Reset the consecutive auto reply counter.
(...)
518 "message" needs to be provided if the generate_init_message method is not overridden.
519 """
520 self._prepare_chat(recipient, clear_history)
--> 521 self.send(self.generate_init_message(**context), recipient, silent=silent)

File c:\My Work\17. AI learning\autogen.conda\lib\site-packages\autogen\agentchat\conversable_agent.py:324, in ConversableAgent.send(self, message, recipient, request_reply, silent)
322 valid = self._append_oai_message(message, "assistant", recipient)
323 if valid:
--> 324 recipient.receive(message, self, request_reply, silent)
325 else:
326 raise ValueError(
327 "Message can't be converted into a valid ChatCompletion message. Either content or function_call must be provided."
328 )
...
--> 903 logs_all += "\n" + logs
904 if exitcode != 0:
905 return exitcode, logs_all

TypeError: can only concatenate str (not "bytes") to str

Issues with Adding Custom APIs in Auto Generation

When using the custom ChatCompletion API (refer to https://github.com/lhenault/simpleAI), the oai.ChatCompletion.create and oai.ChatCompletion.tune functions both produce errors.
One example is:

response = oai.ChatCompletion.create(
    messages=[{"role": "user", "content": "hi"}],
    config_list=[
        {
            "model": "stablelm-open-assistant",
            "api_key": os.environ.get("OPENAI_API_KEY"),
            "api_base": "http://127.0.0.1:8080",
            "api_version": None,
        },
    ],
)

Error Details:

During handling of the above exception, another exception occurred:

APIError                                  Traceback (most recent call last)
File ~/XXX/FLAML/flaml/autogen/oai/completion.py:196, in Completion._get_response(cls, config, eval_only, use_cache)
    195     else:
--> 196         response = openai_completion.create(request_timeout=request_timeout, **config)
    197 except (
    198     ServiceUnavailableError,
    199     APIConnectionError,
    200 ):
    201     # transient error

File ~/.conda/envs/icl3.9/lib/python3.9/site-packages/openai/api_resources/completion.py:25, in Completion.create(cls, *args, **kwargs)
     24 try:
---> 25     return super().create(*args, **kwargs)
     26 except TryAgain as e:
...
--> 205     error_code = err and err.json_body and err.json_body.get("error")
    206     error_code = error_code and error_code.get("code")
    207     if error_code == "content_filter":

AttributeError: 'str' object has no attribute 'get'

To address the issue temporarily, I added the name of my custom model, i.e. "stablelm-open-assistant", to the chat_models in the completion.py file. This modification enabled the oai.ChatCompletion.create function to work without any problems. Based on this experience, I suspect that the issue may not affect the completion API.

However, when I tried to use the oai.ChatCompletion.tune, I encountered another error. The error information is

File ~/ExampleSelection/FLAML/flaml/autogen/oai/completion.py:661, in Completion.tune(cls, data, metric, mode, eval_func, log_file_name, inference_budget, optimization_budget, num_samples, logging_level, **config)
    659 logger.setLevel(logging_level)
...
--> 404 query_cost = (price_input * n_input_tokens + price_output * n_output_tokens) / 1000
    405 cls._total_cost += query_cost
    406 cost += query_cost

TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'

This error is caused by the new model not having any prices specified in the price1K dictionary in completion.py. To address this, I added my custom model's name to the price1K dictionary along with a default price (slightly > 0.0). With this modification in place, both the oai.ChatCompletion.create and oai.ChatCompletion.tune functions work without any further issues.

Add ability to force a TERMINATE from assistant agent

Why is this needed?

AutoGen's TERMINATE mode and max iterations are very useful, but sometimes the conversation gets stuck in a loop; allowing the user to use a keyboard shortcut (or some other way) to intervene and force a TERMINATE would be helpful.

Add parallel tuning for OpenAI integration

It would be useful to support parallel tuning in the OpenAI integration to accelerate the tuning process. Plus, it would be better to support multiple API keys to mitigate OpenAI's API call rate limits.

Use appropriate wait time for retry based on the error message.

[flaml.autogen.oai.completion: 05-11 00:50:35] {217} INFO - retrying in 10 seconds...
Traceback (most recent call last):
  File "[...\.venv\Lib\site-packages\flaml\autogen\oai\completion.py]", line 193, in _get_response
    response = openai_completion.create(request_timeout=request_timeout, **config)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "[...\.venv\Lib\site-packages\openai\api_resources\completion.py]", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "[...\.venv\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py]", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "[...\.venv\Lib\site-packages\openai\api_requestor.py]", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "[...\.venv\Lib\site-packages\openai\api_requestor.py]", line 620, in _interpret_response
    self._interpret_response_line(
  File "[...\.venv\Lib\site-packages\openai\api_requestor.py]", line 683, in _interpret_response_line
    raise self.handle_error_response(
openai.error.RateLimitError: Requests to the Completions_Create Operation under Azure OpenAI API version 2022-12-01 have exceeded call rate limit of your current OpenAI S0 pricing tier. Please retry after 59 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit.
[flaml.autogen.oai.completion: 05-11 00:50:45] {217} INFO - retrying in 10 seconds...

The error message says "Please retry after 59 seconds", but FLAML keeps retrying in 10-second intervals.
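
A minimal sketch of the requested behavior (not the library's implementation): parse the suggested wait out of the error message and fall back to the fixed interval.

import re

def retry_wait_seconds(error_message, default=10):
    # RateLimitError messages often include "Please retry after N seconds"
    match = re.search(r"retry after (\d+) seconds", error_message, re.IGNORECASE)
    return int(match.group(1)) if match else default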

TypeError: 'NoneType' object is not subscriptable

While working on the chess problem, I got the following error:

player_black.initiate_chat(player_white, message="Your turn.")

Player black (to Player white):

Your turn.

--------------------------------------------------------------------------------
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
[<ipython-input-15-bc9baf5cdb2b>](https://localhost:8080/#) in <cell line: 1>()
----> 1 player_black.initiate_chat(player_white, message="Your turn.")

49 frames
[<ipython-input-12-c8aaff81ea65>](https://localhost:8080/#) in _generate_board_reply(self, messages, sender, config)
     30         # extract a UCI move from player's message
     31         reply = self.generate_reply(self.correct_move_messages[sender] + [message], sender, exclude=[BoardAgent._generate_board_reply])
---> 32         uci_move = reply if isinstance(reply, str) else str(reply["content"])
     33         try:
     34             self.board.push_uci(uci_move)

TypeError: 'NoneType' object is not subscriptable

Here is the notebook for reference: https://colab.research.google.com/drive/1REr58SgqSGDLLC34CvGvQzjzGvTqsDBW?usp=sharing
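
For reference, a defensive sketch of the failing extraction (a hypothetical helper, not notebook code): generate_reply can return None, so guard before subscripting.

def extract_uci_move(reply):
    # generate_reply may return None when no reply function produced output
    if reply is None:
        return None  # caller should treat this as "no move to play"
    return reply if isinstance(reply, str) else str(reply.get("content", ""))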

Best Regards,
Andy

Group Chat Termination When Calling RAGProxyAgent via Admin Agent

Main Issue:

When posing a question to the Admin agent that involves RAGProxyAgent in a group chat context (e.g., "Provide me with some data," whether explicitly or implicitly stated), the group chat session terminates unexpectedly without any error messages.


Potential Cause:

This may be because the agent is set to terminate as soon as the code execution succeeds (suggested by @sonichi).


Additional good-to-have:

A notebook example showcasing how the RAG agent operates within a group chat.

Claude support

Hi Team,

Can you add support for Claude 1/Claude 2?

Thanks.

Sander

Repeatedly seeing 'trap not found' on Windows 11

After installing the new release, every time I run anything I see

  Microsoft Windows [Version 10.0.22621.2134]
  (c) Microsoft Corporation. All rights reserved.
  C:\Users\Chris>
  C:\Users\Chris>trap 'echo "An error occurred on line $LINENO"; exit' ERR
  'trap' is not recognized as an internal or external command,
  operable program or batch file.
  C:\Users\Chris>set -E
  Environment variable -E not defined
  C:\Users\Chris>git --version
  git version 2.40.0.windows.1


  Great, git is installed on your machine. Now, let's move to the next step.

  Next, I will clone the 'autogen' repository into the current directory. Let's execute the git clone command. This
  may take a few minutes depending on the internet connection.

Trying twoagent.py in Getting Started and it does not run on Windows. Gives WARNING:root:SIGALRM is not supported on Windows.

I cannot get the twoagent.py script to run under Windows in VSCode (via the "Run | StartDebugging" command). I followed the instructions in the Getting Started docs but get the following error:

ModuleNotFoundError: No module named 'pandas'

Also, I get the SIGALRM is not supported warning; here is the console output:

user_proxy (to assistant):

Plot a chart of NVDA and TESLA stock price change YTD.


assistant (to user_proxy):

To plot the chart of NVDA and TESLA stock price change Year-to-Date (YTD), we need to first collect the stock price data
for both NVDA and TESLA. We can fetch the data from a financial API such as Alpha Vantage. Here is an example of how you
can plot the chart using Python:

# Required Libraries
import pandas as pd
import matplotlib.pyplot as plt

<edited for brevity>

plt.show()

Please note that you need to replace <your_api_key> with your API key from Alpha Vantage. Make sure you have the required libraries installed (pandas, matplotlib, seaborn, and requests) before executing the code.

Let me know if you need any further assistance.


Provide feedback to assistant. Press enter to skip and use auto-reply, or type 'exit' to end the conversation:

NO HUMAN INPUT RECEIVED.

USING AUTO REPLY...

EXECUTING CODE BLOCK 0 (inferred language is python)...
WARNING:root:SIGALRM is not supported on Windows. No timeout will be enforced.

Note: of course, I can install pandas et al., but that defeats the auto-troubleshooting of the UserProxyAgent.
I suspect this is somewhat similar to the problems mentioned in #17, #19 and #38.
Does AutoGen support running on Windows?

pyautogen version: 0.1.6 (from 10/2/2023 build)
Windows specs: Windows 10 Home (64-bit), OS build: 19045.3448
Running in virtual python environment from VSCode.
NOT running in docker (yet).

Third-Party APIs

I have an LLM API hosted on a remote server, and this is the way it functions: I send the query "Hello" using a fetch request to the API, and in response, I receive the message "Hello! How can I assist you today?". Is it possible to use AutoGen with APIs of this nature, where you submit a question and receive a response? This is an example of a response from the API:
[{"role": "system", "content": "Knowledge cutoff: 2021-09-01 Current date: 2023-09-29"}, {"role": "user", "content": "Hello", "token": "7284194572548942422"}, {"role": "assistant", "content": "Hello! How can I assist you today?\n", "token": "7284194572548942422"}]

InvalidRequestError when giving the autogen.AssistantAgent tools via llm_config.functions

Folks,

I am following the tutorial listed in https://github.com/microsoft/autogen/blob/main/notebook/agentchat_two_users.ipynb

  1. I completed the setup for the ask_planner function without issues:
planner_user (to planner):

hello

--------------------------------------------------------------------------------
planner (to planner_user):

Hello! How can I help you today? Do you need assistance with a coding project or reasoning about a certain topic? Please provide me with some details so I can suggest the appropriate steps to accomplish the task.

--------------------------------------------------------------------------------
'Hello! How can I help you today? Do you need assistance with a coding project or reasoning about a certain topic? Please provide me with some details so I can suggest the appropriate steps to accomplish the task.'
  2. I got the following error when I tried to run
user_proxy.initiate_chat(
    assistant,
    message="""what is 1+1""",
)

InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.
If I remove llm_config['functions'] then the problem goes away; however, that prevents me from connecting the agents.
Do you know what could be causing this problem, and how I can resolve it?

Bring AutoGen to dotnet

Any insights on how to introduce AutoGen to the world of dotnet? It would be truly exciting to witness the potential of multi-agent collaboration in helping dotnet developers create highly intelligent applications.

@sonichi @luisquintanilla @JakeRadMSFT

Update on 2024/02/12

Nightly builds are available to the public.

Check out the following link on how to consume nightly build packages.

Update on 2024/02/08

Temporarily suspended publishing to GitHub Packages until the PR is merged into main.

Update on 2024/01/31

The AutoGen nightly build is also available on GitHub Packages.
feed url: https://nuget.pkg.github.com/microsoft/index.json

To consume the GitHub package, please refer to the following link:
- https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-nuget-registry#installing-a-package

Update on 2024/01/23

AutoGen for dotnet preview is available internally. We are working on the process of publishing to a public feed for external customers. I'll update the progress once it's available.

Doc (preview)

nightly feed (update for every new commit): https://devdiv.pkgs.visualstudio.com/DevDiv/_packaging/AutoGen/nuget/v3/index.json

Migrate from AgentChat

  • Replace AgentChat.Core with AutoGen.
  • If you are using AgentChat.DotnetInteractive, replace it with AutoGen.DotnetInteractive
  • Adding reference to AutoGen.SourceGenerator for type-safe function call
  • Fixing build errors. API mapping from AgentChat to AutoGen should be fairly straightforward; if you encounter any issues, please let me know.

Update on 2023/12/04

Start porting C# AutoGen

PR: #924

Update on 2023/11/13

Support OpenAI Assistant API

Update on 2023/10/22

I'm going to answer my own question here.

I have recently developed a dotnet version of AutoGen, with both code and packages available on GitHub at the following repository: https://github.com/LittleLittleCloud/AgentChat

To utilize the package, simply add the nightly build feed from the following URL:

https://www.myget.org/F/agentchat/api/v3/index.json

The features currently supported include:

  • Connection to OpenAI models
  • Two-agent chat
  • Group chat
  • Function calling

However, some functionalities are not yet available, such as:

  • Python code execution
  • Support for LLMs other than those mentioned

Moreover, AgentChat provides some unique functionality thanks to dotnet features:

  • typed FunctionCall using source generator: automatically generate FunctionDefinition based on structural comments.
  • Dotnet code execution: run dotnet script on dotnet interactive.

Here's how to add the package:

<ItemGroup>
    <PackageReference Include="AgentChat.Core" />
    <!-- Adding AgentChat.GPT will enable connection to OpenAI models -->
    <PackageReference Include="AgentChat.OpenAI" />
</ItemGroup>

Further Notes

  • Please note that this SDK is still in its nascent stage and as such, there's no stable version released yet.
  • The next steps will involve introducing additional notebook examples and expanding support for other LLMs.
  • Collaborative contributions are always welcome.

Adoption of Agent Protocol

See https://github.com/AI-Engineer-Foundation/agent-protocol, and join our Discord: https://discord.gg/cXKrtegQrZ

Curious if you'd consider using the Agent Protocol we have developed. This OSS project resides under a non-profit called the "AI Engineer Foundation". The mission is to establish industry standards so that people do not have to reinvent the wheel and so that we can collaborate more.

By implementing Agent Protocol, you'd be able to communicate with all other agents that conform to the same protocol, as well as be benchmarked.

AutoGPT is running a hackathon, and they have created a benchmarking tool built with Agent Protocol. Other projects that have adopted it besides AutoGPT include GPTEngineer, PolyAGI, Beebot, and E2B... the list is growing.

Excited to hear back from you!

AzureML Compute Instance Error: OperationalError: database is locked

When running agents on an AzureML Compute Instance, the sqlite db throws an error. This seems to be a known issue with sqlite3 on VMs; the workaround is to move the db to a different location. Please provide that capability if necessary.

File /anaconda/envs/autogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:521, in ConversableAgent.initiate_chat(self, recipient, clear_history, silent, **context)
507 """Initiate a chat with the recipient agent.
508
509 Reset the consecutive auto reply counter.
(...)
518 "message" needs to be provided if the generate_init_message method is not overridden.
519 """
520 self._prepare_chat(recipient, clear_history)
--> 521 self.send(self.generate_init_message(**context), recipient, silent=silent)

File /anaconda/envs/autogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:324, in ConversableAgent.send(self, message, recipient, request_reply, silent)
322 valid = self._append_oai_message(message, "assistant", recipient)
323 if valid:
--> 324 recipient.receive(message, self, request_reply, silent)
325 else:
326 raise ValueError(
...
-> 2438 sql('PRAGMA %s = %s' % (pragma, value)).fetchall()
2439 break
2440 except sqlite3.OperationalError as exc:

OperationalError: database is locked

Getting Error while running "autogen.UserProxyAgent"

The first response comes through fine, but I get an error during the code execution part.

ERROR:

EXECUTING CODE BLOCK 0 (inferred language is sh)...
WARNING:root: SIGALRM is not supported on Windows. No timeout will be enforced.


FileNotFoundError Traceback (most recent call last)
Cell In[14], line 22
11 user_proxy = autogen.UserProxyAgent(
12 name="user_proxy",
13 human_input_mode="NEVER",
(...)
19 },
20 )
21 # the assistant receives a message from the user_proxy, which contains the task description
---> 22 user_proxy.initiate_chat(
23 assistant,
24 message="""build a aws cloud arctechture""",
25 )

File ~\AppData\Roaming\Python\Python310\site-packages\autogen\agentchat\conversable_agent.py:521, in ConversableAgent.initiate_chat(self, recipient, clear_history, silent, **context)
507 """Initiate a chat with the recipient agent.
508
509 Reset the consecutive auto reply counter.
(...)
518 "message" needs to be provided if the generate_init_message method is not overridden.
519 """
520 self._prepare_chat(recipient, clear_history)
--> 521 self.send(self.generate_init_message(**context), recipient, silent=silent)

File ~\AppData\Roaming\Python\Python310\site-packages\autogen\agentchat\conversable_agent.py:324, in ConversableAgent.send(self, message, recipient, request_reply, silent)
322 valid = self._append_oai_message(message, "assistant", recipient)
323 if valid:
--> 324 recipient.receive(message, self, request_reply, silent)
325 else:
326 raise ValueError(
327 "Message can't be converted into a valid ChatCompletion message. Either content or function_call must be provided."
328 )

File ~\AppData\Roaming\Python\Python310\site-packages\autogen\agentchat\conversable_agent.py:454, in ConversableAgent.receive(self, message, sender, request_reply, silent)
452 reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
453 if reply is not None:
--> 454 self.send(reply, sender, silent=silent)

File ~\AppData\Roaming\Python\Python310\site-packages\autogen\agentchat\conversable_agent.py:324, in ConversableAgent.send(self, message, recipient, request_reply, silent)
322 valid = self._append_oai_message(message, "assistant", recipient)
323 if valid:
--> 324 recipient.receive(message, self, request_reply, silent)
325 else:
326 raise ValueError(
327 "Message can't be converted into a valid ChatCompletion message. Either content or function_call must be provided."
328 )

File ~\AppData\Roaming\Python\Python310\site-packages\autogen\agentchat\conversable_agent.py:452, in ConversableAgent.receive(self, message, sender, request_reply, silent)
450 if request_reply is False or request_reply is None and self.reply_at_receive[sender] is False:
451 return
--> 452 reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
453 if reply is not None:
454 self.send(reply, sender, silent=silent)

File ~\AppData\Roaming\Python\Python310\site-packages\autogen\agentchat\conversable_agent.py:764, in ConversableAgent.generate_reply(self, messages, sender, exclude)
762 continue
763 if self._match_trigger(reply_func_tuple["trigger"], sender):
--> 764 final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
765 if final:
766 return reply

File ~\AppData\Roaming\Python\Python310\site-packages\autogen\agentchat\conversable_agent.py:628, in ConversableAgent.generate_code_execution_reply(self, messages, sender, config)
623 continue
624 # code_blocks, _ = find_code(messages, sys_msg=self._oai_system_message, **self.llm_config)
625 # if len(code_blocks) == 1 and code_blocks[0][0] == UNKNOWN:
626 # return code_blocks[0][1]
627 # try to execute the code
--> 628 exitcode, logs = self.execute_code_blocks(code_blocks)
629 exitcode2str = "execution succeeded" if exitcode == 0 else "execution failed"
630 break

File ~\AppData\Roaming\Python\Python310\site-packages\autogen\agentchat\conversable_agent.py:881, in ConversableAgent.execute_code_blocks(self, code_blocks)
873 print(
874 colored(
875 f"\n>>>>>>>> EXECUTING CODE BLOCK {i} (inferred language is {lang})...",
(...)
878 flush=True,
879 )
880 if lang in ["bash", "shell", "sh"]:
--> 881 exitcode, logs, image = self.run_code(code, lang=lang, **self._code_execution_config)
882 elif lang in ["python", "Python"]:
883 if code.startswith("# filename: "):

File ~\AppData\Roaming\Python\Python310\site-packages\autogen\agentchat\conversable_agent.py:864, in ConversableAgent.run_code(self, code, **kwargs)
850 def run_code(self, code, **kwargs):
851 """Run the code and return the result.
852
853 Override this function to modify the way to run the code.
(...)
862 image (str or None): the docker image used for the code execution.
863 """
--> 864 return execute_code(code, **kwargs)

File ~\AppData\Roaming\Python\Python310\site-packages\autogen\code_utils.py:245, in execute_code(code, timeout, filename, work_dir, use_docker, lang)
243 if sys.platform == "win32":
244 logging.warning("SIGALRM is not supported on Windows. No timeout will be enforced.")
--> 245 result = subprocess.run(
246 cmd,
247 cwd=work_dir,
248 capture_output=True,
249 )
250 else:
251 signal.signal(signal.SIGALRM, timeout_handler)

File C:\ProgramData\anaconda3\lib\subprocess.py:503, in run(input, capture_output, timeout, check, *popenargs, **kwargs)
500 kwargs['stdout'] = PIPE
501 kwargs['stderr'] = PIPE
--> 503 with Popen(*popenargs, **kwargs) as process:
504 try:
505 stdout, stderr = process.communicate(input, timeout=timeout)

File C:\ProgramData\anaconda3\lib\subprocess.py:971, in Popen.init(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, user, group, extra_groups, encoding, errors, text, umask, pipesize)
967 if self.text_mode:
968 self.stderr = io.TextIOWrapper(self.stderr,
969 encoding=encoding, errors=errors)
--> 971 self._execute_child(args, executable, preexec_fn, close_fds,
972 pass_fds, cwd, env,
973 startupinfo, creationflags, shell,
974 p2cread, p2cwrite,
975 c2pread, c2pwrite,
976 errread, errwrite,
977 restore_signals,
978 gid, gids, uid, umask,
979 start_new_session)
980 except:
981 # Cleanup if the child failed starting.
982 for f in filter(None, (self.stdin, self.stdout, self.stderr)):

File C:\ProgramData\anaconda3\lib\subprocess.py:1440, in Popen._execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, unused_restore_signals, unused_gid, unused_gids, unused_uid, unused_umask, unused_start_new_session)
1438 # Start the process
1439 try:
-> 1440 hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
1441 # no special security
1442 None, None,
1443 int(not close_fds),
1444 creationflags,
1445 env,
1446 cwd,
1447 startupinfo)
1448 finally:
1449 # Child is launched. Close the parent's copy of those pipe
1450 # handles that only the child should have open. You need
(...)
1453 # pipe will not close when the child process exits and the
1454 # ReadFile will hang.
1455 self._close_pipe_fds(p2cread, p2cwrite,
1456 c2pread, c2pwrite,
1457 errread, errwrite)

FileNotFoundError: [WinError 2] The system cannot find the file specified

TypeError: a bytes-like object is required, not 'str'

code_utils error line:

logs = logs.replace(str(abs_path), "").replace(filename, "")


I was trying to see if I could get a small model to run with some (if any) good results on a potato PC. The model I'm trying it with is a 1.1B model called TinyLlama-1.1B, but while testing the year-to-date (Automated Task Solving with Code Generation, Execution & Debugging) task, I ran into the following error. Here is the full output:

python .\autogen_tinyllama.py
user_proxy (to assistant):

What date is today? Compare the year-to-date gain for META and TESLA.


assistant (to user_proxy):

import pandas as pd
from datetime import datetime

today = datetime.now().strftime('%Y-%m-%d')

# Get the closing prices of META and TESLA
meta_df = pd.DataFrame(index=[today])
tesla_df = pd.DataFrame(index=[today])

for stock in ('META', 'TESLA'):
   # Get the closing prices for each stock
   stock_df = pd.read_csv('https://finance.google.com/finance?q=symbol:' + stock + '&numrows=100')

   # Merge the two dataframes
   meta_df = meta_df.append(stock_df, ignore_index=True)
   tesla_df = tesla_df.append(stock_df, ignore_index=True)

# Print the result
print('Metainfo: ' + str(meta_df))
print('Tesla info: ' + str(tesla_df))

EXECUTING CODE BLOCK 0 (inferred language is python)...
WARNING:root:SIGALRM is not supported on Windows. No timeout will be enforced.
Traceback (most recent call last):
File "E:\Software Dev\Python\Large Language Models\TinyLlama 1.1B\autogen_tinyllama.py", line 41, in
user_proxy.initiate_chat(
File "C:\Users\yas19sin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\autogen\agentchat\conversable_agent.py", line 521, in initiate_chat
self.send(self.generate_init_message(**context), recipient, silent=silent)
File "C:\Users\yas19sin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\autogen\agentchat\conversable_agent.py", line 324, in send
recipient.receive(message, self, request_reply, silent)
File "C:\Users\yas19sin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\autogen\agentchat\conversable_agent.py", line 454, in receive
self.send(reply, sender, silent=silent)
File "C:\Users\yas19sin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\autogen\agentchat\conversable_agent.py", line 324, in send
recipient.receive(message, self, request_reply, silent)
File "C:\Users\yas19sin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\autogen\agentchat\conversable_agent.py", line 452, in receive
reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
File "C:\Users\yas19sin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\autogen\agentchat\conversable_agent.py", line 764, in generate_reply
final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
File "C:\Users\yas19sin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\autogen\agentchat\conversable_agent.py", line 628, in generate_code_execution_reply
exitcode, logs = self.execute_code_blocks(code_blocks)
File "C:\Users\yas19sin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\autogen\agentchat\conversable_agent.py", line 887, in execute_code_blocks
exitcode, logs, image = self.run_code(
File "C:\Users\yas19sin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\autogen\agentchat\conversable_agent.py", line 864, in run_code
return execute_code(code, **kwargs)
File "C:\Users\yas19sin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\autogen\code_utils.py", line 272, in execute_code
logs = logs.replace(str(abs_path), "").replace(filename, "")
TypeError: a bytes-like object is required, not 'str'


System: Windows 10 x64
Tried on CMD and PowerShell
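
For reference, a minimal sketch of one way to avoid the bytes/str mismatch (a hypothetical helper; subprocess.run(capture_output=True) returns bytes unless text=True is passed):

def clean_logs(raw, abs_path, filename):
    # decode subprocess output to str before doing string replaces on the logs
    if isinstance(raw, bytes):
        raw = raw.decode("utf-8", errors="replace")
    return raw.replace(str(abs_path), "").replace(filename, "")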


Edit:
I just updated the library ('pip install -U pyautogen') to the latest version, and it seems to work with no problem, so solved, I guess.

Please add a SQL agent for autogen.

Add a SQL agent to autogen, and if this can be done with UserProxyAgent or AssistantAgent, please provide a walkthrough for connecting an agent to a SQL database and performing complex queries such as joins; a sketch of one possible shape is below.
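
A minimal sketch under stated assumptions (hypothetical names throughout; config_list is assumed to be defined as in the README quickstart): expose a SQL tool to an AssistantAgent via function calling, with a UserProxyAgent executing it.

import sqlite3
from autogen import AssistantAgent, UserProxyAgent

def run_sql(query: str) -> str:
    # run a query against a local SQLite database and return the rows
    with sqlite3.connect("example.db") as conn:  # hypothetical database file
        return str(conn.execute(query).fetchall())

assistant = AssistantAgent(
    "sql_assistant",
    llm_config={
        "config_list": config_list,
        "functions": [{
            "name": "run_sql",
            "description": "Run a SQL query and return the result rows.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        }],
    },
)
user_proxy = UserProxyAgent("user_proxy", human_input_mode="NEVER", code_execution_config=False)
user_proxy.register_function(function_map={"run_sql": run_sql})
user_proxy.initiate_chat(assistant, message="List the tables, then join them as needed.")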


Trouble Running Project on Windows 11 with Python 3.11.5

Hi, I'm trying to get this project running on my machine but I'm a little lost.

What I have done:

  • I have cloned this repository locally.

  • I did 'pip install autogen'. (I saw a warning about something being out of the "PATH".)

  • I renamed OAI_CONFIG_LIST_sample to OAI_CONFIG_LIST and modified with a OpenAI API key (as in figure 1).

  • I tried to run 'python test\twoagent.py', but got messages as in figure 2.

  • I did run 'pip install autogen' again to get the warnings again (see figure 3).

  • As I saw that apparently something more has been installed this time I tried to run 'python test\twoagent.py' again, and indeed the message I have got this time is different (see figure 4).

figure_1

figure_2

figure_3

figure_4

If anyone can give me some clue about what's happening, I would be very grateful.

Some info about my environment:

  • Windows 10.0.22621.2283 (Windows 11 22H2)
  • Python 3.11.5

Thank you for this awesome project.

Two errors about token limits and rate-per-minute limits in autogen_agentchat_groupchat_research and agentchat_teaching

I got this error in notebook/autogen_agentchat_groupchat_research.ipynb
error:
Rate limit reached for default-gpt-4 in organization org-dRcLOmvP4zyFH42QuipEteyV on tokens per min. Limit: 40000 / min. Please try again in 1ms. Contact us through our help center at help.openai.com if you continue to have issues.

I got it too, in notebook/agentchat_teaching.ipynb.

Error:
InvalidRequestError: This model's maximum context length is 8192 tokens. However, your messages resulted in 9297 tokens. Please reduce the length of the messages.

Please modify the code so that it works even when only the OpenAI API is used, or provide a guide for applying for an Azure API key.

Read timeout when running Getting Started Example

Hi!

First of all, this looks fantastic and I am excited to play with it! Unfortunately, I ran into a problem early on...

I have been trying to run the code example in the "Getting Started" section from here

Here is the stack trace:


Traceback (most recent call last):
  File "~\Documents\Dev\autogen_playground\main.py", line 9, in <module>
    user_proxy.initiate_chat(assistant, message="Show me the YTD gain of 10 largest technology companies as of today.")
  File "~\Documents\Dev\autogen_playground\venv\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 521, in initiate_chat     
    self.send(self.generate_init_message(**context), recipient, silent=silent)
  File "~\Documents\Dev\autogen_playground\venv\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 324, in send
    recipient.receive(message, self, request_reply, silent)
  File "~\Documents\Dev\autogen_playground\venv\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 452, in receive
    reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~\Documents\Dev\autogen_playground\venv\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 764, in generate_reply    
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~\Documents\Dev\autogen_playground\venv\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 596, in generate_oai_reply    response = oai.ChatCompletion.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~\Documents\Dev\autogen_playground\venv\Lib\site-packages\autogen\oai\completion.py", line 801, in create
    return cls._get_response(params, raise_on_ratelimit_or_timeout=raise_on_ratelimit_or_timeout)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~\Documents\Dev\autogen_playground\venv\Lib\site-packages\autogen\oai\completion.py", line 208, in _get_response
    response = openai_completion.create(request_timeout=request_timeout, **config)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~\Documents\Dev\autogen_playground\venv\Lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~\Documents\Dev\autogen_playground\venv\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 155, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "~\Documents\Dev\autogen_playground\venv\Lib\site-packages\openai\api_requestor.py", line 289, in request
    result = self.request_raw(
             ^^^^^^^^^^^^^^^^^
  File "~\Documents\Dev\autogen_playground\venv\Lib\site-packages\openai\api_requestor.py", line 617, in request_raw
    raise error.Timeout("Request timed out: {}".format(e)) from e
openai.error.Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=60)

I am on Windows 10 and have been running the app from a python venv via Powershell in case that matters. The OPENAI_API_KEY environment variable is set to the correct value. I also have a working Internet connection.

Let me know if there is anything I can provide to help more with this issue.
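
A common mitigation, sketched under the assumption of the pre-0.2 pyautogen API shown in this trace (request_timeout is accepted in llm_config, as in the RetrieveChat example elsewhere on this page): raise the per-request timeout above the 60-second default.

from autogen import AssistantAgent

# config_list is assumed to be defined as in the Getting Started example
assistant = AssistantAgent(
    "assistant",
    llm_config={"config_list": config_list, "request_timeout": 120},
)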

Voyager-Style Skill Library for Agents

I really like the concept of the "skill recipe" in AutoGen, but I think it can be taken much further. One of the key takeaways from Voyager was that it was possible to continually scale up an agent's capabilities by letting it access a permanent library of previously generated capabilities, along with the ability to compose them into new ones; that's particularly relevant in AutoGen, where the individual capabilities of agents in a group can have a multiplicative effect on the intelligence of the agent group as a whole, depending on the role of the agent.

As a first step, simply having a straightforward, programmatic way to automate the creation, saving, and loading of recipes would go a long way towards developing a full "skill library" feature down the line.
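
A minimal sketch of that first step, assuming skills are stored as Python source files in a local directory (the path and helpers are hypothetical):

import pathlib

SKILLS_DIR = pathlib.Path("skills")  # hypothetical on-disk skill library

def save_skill(name: str, source: str) -> None:
    # persist a generated skill (Python source) so later runs can reuse it
    SKILLS_DIR.mkdir(exist_ok=True)
    (SKILLS_DIR / f"{name}.py").write_text(source)

def load_skill(name: str) -> str:
    # load a previously saved skill's source, e.g. to inject into a prompt
    return (SKILLS_DIR / f"{name}.py").read_text()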

RetrieveUserProxyAgent - Collection autogen-docs already exists.

Can someone please explain what this output means? I have a directory called docs in the same folder as the notebook; the doc_ids list of lists is empty, but I have documents in that folder.

In the RetrieveAssistantAgent docstring:
docs_path (Optional, str): the path to the docs directory. It can also be the path to a single file, or the url to a single file. If key not provided, a default path `./docs` will be used.

Looking at the create_vector_db_from_dir() function, which is used by RetrieveAssistantAgent, I can't make out whether a certain file type is expected.

from autogen.agentchat.contrib.retrieve_assistant_agent import RetrieveAssistantAgent
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent
import autogen
import chromadb

# Start logging
autogen.ChatCompletion.start_logging()

# Create an instance of RetrieveAssistantAgent
assistant = RetrieveAssistantAgent(
    name="assistant",
    system_message="You are a helpful assistant.",
    llm_config={
        "request_timeout": 600,
        "seed": 42,
        "config_list": config_list,
    },
)

# Create an instance of RetrieveUserProxyAgent
ragproxyagent = RetrieveUserProxyAgent(
    name="ragproxyagent",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    retrieve_config={
        "task": "code",
        "docs_path": "./docs/",  # Assuming documentation is stored in a 'docs' directory
        "chunk_token_size": 2000,
        "model": config_list[0]["model"],
        "client": chromadb.PersistentClient(path="/tmp/chromadb"),
        "embedding_model": "all-mpnet-base-v2",
    },
)

# Simulate a user asking a question and initiate chat
user_question = "Can you provide a sample Python code for printing 'Hello, World!'?"
ragproxyagent.initiate_chat(assistant, problem=user_question)
Output:

Collection autogen-docs already exists.
doc_ids:  [[]]
No more context, will terminate.
ragproxyagent (to assistant):

TERMINATE

Any help would be appreciated.

Agent Character Specifications

Is there a mechanism in place that allows for the customization of agents to display distinct character attributes? I've noticed that many of these digital agents are primarily designed to assist users, always ensuring they respond with a demeanor of kindness and politeness. However, I see a broader application for them beyond mere assistance. I believe these agents can be pivotal tools in the fields of social training and psychotherapeutic education. Imagine an agent that's been tailored to accurately emulate the behavior and responses of a depressed individual. Such an agent could serve as a valuable resource in a controlled and safe setting, allowing psychology students and budding therapists to engage with it. This interaction could help them hone their communication and empathetic skills, better preparing them for real-life scenarios. Is there a framework or feature within this platform that facilitates such advanced configurations?
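
In AutoGen, agent character is typically shaped through the system message; a minimal sketch (config_list is assumed to be defined as in the quickstart):

from autogen import AssistantAgent

persona = AssistantAgent(
    "training_partner",
    system_message=(
        "You are role-playing a withdrawn, low-energy conversation partner "
        "for supervised counseling practice. Stay in character at all times."
    ),
    llm_config={"config_list": config_list},
)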

handle context size overflow in AssistantAgent

microsoft/FLAML#1098, microsoft/FLAML#1153, microsoft/FLAML#1158 each addresses this in some specialized way. Can we integrate these ideas into a generic solution and make AssistantAgent able to overcome this limitation out of the box?
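Until a generic solution lands, one naive fallback is to truncate the oldest messages by token count before each call. This is only a sketch, under the assumption that message dicts carry their text in a "content" field:

import tiktoken

def truncate_history(messages, model="gpt-3.5-turbo", max_tokens=3000):
    """Keep the most recent messages that fit in the token budget."""
    enc = tiktoken.encoding_for_model(model)
    kept, used = [], 0
    for msg in reversed(messages):
        n = len(enc.encode(msg.get("content") or ""))
        if used + n > max_tokens:
            break
        kept.append(msg)
        used += n
    return list(reversed(kept))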


Agent Mixing

If we are moving towards real-world scenarios where many people hold different opinions, we can take the next step.

We can create new agents by mixing properties from different existing agents (and even generate new properties, which is a job for a model).

This also makes the setup more realistic, as less productive agents will automatically be selected out; in the end, we will be left with the best agents. A naive sketch of such mixing follows.
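As a thought experiment, the crudest possible "crossover" could just blend two agents' system messages; a real version would presumably ask a model to synthesize the new persona. This assumes agents expose system_message and llm_config, as ConversableAgent does; mix_agents is a made-up helper.

from autogen import AssistantAgent

def mix_agents(a, b, name):
    """Naive crossover: blend two agents' system messages into a new agent."""
    mixed = f"{a.system_message}\n\nAdditionally: {b.system_message}"
    return AssistantAgent(name=name, system_message=mixed, llm_config=a.llm_config)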

refactor code-based reply in responsive agent

Utilize the config in register_auto_reply to sanitize the implementation of code-based reply.


Exceptions/Errors not recognised when run in a Jupyter notebook

When running the examples in a Jupyter notebook, it doesn't seem to recognise errors when they come in. I already tried switching to the IPythonUserProxyAgent but that didn't help. Any suggestions?
See an example below.

Love this repo btw, so much potential!

--------------------------------------------------------------------------------

>>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[11], line 1
----> 1 import matplotlib.pyplot as plt
      3 plt.plot([1, 2, 3, 4])
      4 plt.ylabel('some numbers')

ModuleNotFoundError: No module named 'matplotlib'

ipython_user_proxy (to assistant):

exitcode: 0 (execution succeeded)
Code output: 


--------------------------------------------------------------------------------
assistant (to ipython_user_proxy):

Great! It seems like the 'matplotlib' library is now working correctly in your environment. You can now use it to create plots and visualize data in your Jupyter notebook. If you have any other issues or need further assistance, feel free to ask. 

TERMINATE

--------------------------------------------------------------------------------
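One stopgap idea, purely a sketch (infer_exit_code is a made-up helper, not an AutoGen API): since in-process IPython execution doesn't produce a shell exit code, scan the captured output for traceback markers and synthesize one.

def infer_exit_code(output: str) -> int:
    """Heuristic: treat the run as failed if the captured IPython
    output contains a traceback or exception marker."""
    markers = ("Traceback (most recent call last)", "Error", "Exception")
    return 1 if any(m in output for m in markers) else 0

print(infer_exit_code("ModuleNotFoundError: No module named 'matplotlib'"))  # -> 1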

Function calls don't work with Azure OpenAI currently due to differences between the OpenAI API and Azure OpenAI

I am one of the AI Global Black Belts and have been working with AutoGen to build some applications over the last few days, and I noticed a pretty impactful bug. I have only seen it with Azure OpenAI: custom functions are not working. When the API returns the function-call spec ({"function_call"}), it breaks AutoGen's create-completion step, because it cannot find a "content" field for the message. I tested verbatim with your demo as well as my custom code, and the issue persists. Hopefully this is helpful! Thanks!
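A guard along these lines might paper over it until a proper fix lands (a sketch, not the actual fix; Azure function-call replies can set "content" to None rather than a string):

def extract_content(message: dict) -> str:
    """Tolerate function-call replies where "content" is None."""
    content = message.get("content")
    return content if content is not None else ""

# Example shape of a function-call message as returned by the API:
msg = {"role": "assistant", "content": None,
       "function_call": {"name": "get_weather", "arguments": "{}"}}
print(extract_content(msg))  # -> ""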

role misalignment in group chat when using GPT 3.5

When the number of agents increases or the conversation runs for more rounds, some agents may play the role of other agents when responding, or include their own name in their response. Neither is expected.

For example, when running the agent_group_research notebook, in the following conversation the Scientist starts talking in the manner of the Critic, and even goes so far as to prepend the name 'Critic' to its message.

Engineer (to chat_manager):

// python code

--------------------------------------------------------------------------------
Critic (to chat_manager):

# mistake 1: the name of agent shouldn't be included in the response
Critic: The plan and code look good overall. However, there are a few suggestions I would like to make:

// criticism

Scientist (to chat_manager):

# mistake 2: Scientist speak as critic
Critic: Thank you for your suggestions. I agree with all of them. Here is the revised plan and code incorporating your feedback:

Plan:
1. Engineer: Write a script to scrape the arXiv website and retrieve papers on LLM applications from the last week.
2. Engineer: Extract relevant information from the scraped papers, such as the title, authors, and abstract.
...

Why did it happen?

For mistake 1 (including its own name in the response): it might be because the agent's name is not passed to GPT correctly. Right now AutoGen sets the name field on the GPT chat message object, but GPT won't use that information unless the chat message is a function call. We might need to explicitly inject the name into the message content.

For mistake 2 (role misalignment): my guess is that it is triggered by mistake 1, where one response starting with an agent name causes the following responses to start with an agent name as well.

How to fix it

Using GPT-4 is the most straightforward fix: agents are much less likely to include their own name in a response, and select_next_speaker also works much better. The downside is also quite obvious, though.

Another strategy is to use the role-misalignment error as a feature, to double-check the select_next_speaker result. When the selected agent (say A) starts talking using another agent's (say B's) name, it usually indicates that A is not the right next speaker. In that situation, group_chat_manager can use that signal and re-select B as the next speaker. This would also make group chat more robust with GPT 3.5. I verified this method when creating MLNet-Task-101, and the role-misalignment mistake appeared much less often than with the original GroupChat implementation. A sketch of the detection step follows.
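A minimal sketch of that check (detect_misattributed_reply is a made-up helper; it only assumes agents expose a name attribute):

def detect_misattributed_reply(reply, agents, speaker):
    """If the speaker's reply opens with another agent's name
    (e.g. 'Critic: ...'), return that agent so the chat manager
    can re-select it as the next speaker."""
    for agent in agents:
        if agent is not speaker and reply.lstrip().startswith(f"{agent.name}:"):
            return agent
    return None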

Other potential improvements for GroupChat

  • add llm_config to group_chat_manager so it can be pinned to a specific config (e.g., temperature=0, seed=..)

Format issue in README

A recent PR introduced a paragraph with some formatting issues:

Multi-agent conversations: AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM. Customization: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ. Human participation: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.

They are supposed to be sub-bullet points. A fix is appreciated.

Feature Request: Support for Multiple Simultaneous LLM AI API Endpoints for Self-Hosting and Model Selection

Description:
We would like to propose the addition of a new feature to AutoGen that enables users to configure and utilize multiple Language Model (LLM) AI API endpoints for self-hosting and experimentation with different models. This feature would enhance the flexibility and versatility of AutoGen for developers and researchers working with LLMs.

Feature Details:

  1. Endpoint Configuration:

    • Allow users to store API keys and configure multiple endpoints.
    • Implement support for an environment (env) file to securely store sensitive information locally, facilitating scripted pass-through of values. Add it to .gitignore.
  2. Custom Endpoint Names:

    • Provide the ability to assign user-friendly names to each configured endpoint. This helps users easily identify and differentiate between endpoints, and would also allow multiple models to be served from the same endpoint by giving each its own configuration. A check could validate that the endpoint is serving the expected model and, if not, do a quick unload/load of the desired one.
  3. Chat Parameters:

    • Integrate settings for chat parameters, such as temperature and other relevant options, that can be adjusted per endpoint. This allows users to fine-tune model behavior.
  4. Model Selection (if applicable):

    • If applicable to the specific LLM, enable users to preset a model for each endpoint. This feature can be especially useful when working with multiple LLMs simultaneously.
  5. API Key Management (if applicable):

    • For LLM services like OpenAI that require API keys, provide a dedicated parameter in each endpoint for users to input and manage their API keys for each endpoint.
  6. Endpoint Address:

    • Allow users to specify the endpoint address (URL) to which API requests should be sent. This flexibility is crucial for self-hosted instances or when working with different LLM providers.
  7. Optional - Endpoint Tagging:

  • Allowing tags like #code, #logic, or #budget would give key indicators of a model's strengths, letting users select from a pool of models with a particular benefit and enabling more diverse outcomes. It would also allow side-by-side comparisons: with future result tracking/scoring, you could run multiple #code models, test each one's results, and identify, retrain, or replace under-performing models to build an optimal workflow.

Expected Benefits:
This feature will benefit developers, researchers, and users who work with LLMs by offering a centralized and user-friendly interface for managing multiple AI API endpoints. It enhances the ability to experiment with various models, configurations, and providers while maintaining security and simplicity. It could allow different characters to leverage specific fine-tuned models rather than the same model for each, and could let self-hosted users expand the number of repeated, looped calls without drastically increasing the bill.

Additional Notes:
Consider implementing an intuitive user interface for configuring and managing these endpoints within the GitHub platform, making it accessible to both novice and experienced users.

References:
Include any relevant resources or references that support the need for this feature, such as the growing popularity of LLMs in various fields and the demand for flexible API management solutions.

Related Issues/Pull Requests:

Assignees:
If you permit this ticket to remain open, I will assemble some links and resources, and open another ticket for TextGenWebUI with relevant links on implementing it there. I can try implementing this and submitting a PR if someone else doesn't get to it first.

Thank you for considering this feature request. I believe that this enhancement will greatly benefit the AutoGen community and its users working with Language Model AI API endpoints.

edit: 9.28
Looking through the repo, it looks like there's a standardized JSON config; going to look into this next as a method for supporting the features listed above. Page found while reading the documentation (note near the top how it loads the JSON, then references it further down): https://github.com/microsoft/autogen/blob/main/notebook/agentchat_groupchat_research.ipynb

Found it https://github.com/microsoft/autogen/blob/main/OAI_CONFIG_LIST_sample

Going to look into how it gets loaded.
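For reference, much of items 1-6 already maps onto that config list: each JSON entry can carry its own model name, API key, and base URL, and autogen.config_list_from_json can filter entries. A small sketch (the "local-llama" entry and the contents of OAI_CONFIG_LIST are hypothetical):

import autogen

# Each entry in the OAI_CONFIG_LIST file is one endpoint: model name,
# api_key, and (optionally) a base URL for self-hosted servers.
config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={"model": ["gpt-4", "local-llama"]},  # select a subset by model
)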

[Roadmap] Logging in agents

For each agent, maintain a logging dict, and call ChatCompletion.start_logging with that dict to log the next LLM call in generate_oai_reply.
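A minimal sketch of that idea, assuming the era's ChatCompletion.start_logging(history_dict=...) signature; LoggedAgent is a made-up subclass, not part of the roadmap item itself:

import autogen

class LoggedAgent(autogen.AssistantAgent):
    """Give each agent its own logging dict, activated right before the LLM call."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.log_history = {}

    def generate_oai_reply(self, *args, **kwargs):
        # Route the next ChatCompletion call's log into this agent's dict.
        autogen.ChatCompletion.start_logging(history_dict=self.log_history)
        return super().generate_oai_reply(*args, **kwargs)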

