
Comments (10)

AaronWard avatar AaronWard commented on May 22, 2024 19

I was a bit confused about how this works. It would be nice to just have something like load_dotenv() to handle the keys. But anyway, just going off the examples in the /notebooks directory, I made a file called OAI_CONFIG_LIST (with no file extension):

[
    {
        "model": "gpt-4",
        "api_key": "***"
    },
    {
        "model": "gpt-3.5-turbo",
        "api_key": "***"
    }
]

In my notebook:

# Setting configurations for autogen
config_list = autogen.config_list_from_json(
    env_or_file="OAI_CONFIG_LIST",
    file_location=".",
    filter_dict={
        "model": {
            "gpt-4",
            "gpt4",
            "gpt-4-32k",
            "gpt-4-32k-0314",
            "gpt-4-32k-v0314",
            "gpt-3.5-turbo",
            "gpt-3.5-turbo-16k",
            "gpt-3.5-turbo-0301",
            "chatgpt-35-turbo-0301",
            "gpt-35-turbo-v0301",
            "gpt",
        }
    }
)

config_list

When printing config_list

[{'model': 'gpt-4',
  'api_key': '***'},
 {'model': 'gpt-3.5-turbo',
  'api_key': '***'}]

And then I pass the config_list to the initiate_chat function.

assistant = AssistantAgent("assistant")
user_proxy = UserProxyAgent("user_proxy")
user_proxy.initiate_chat(assistant, 
    message="Plot a chart of META and TESLA stock price change YTD.", 
    config_list=config_list
)

This worked for me.


Update

If you'd rather use load_dotenv(), this worked for me:

import os
import json
from pathlib import Path

import autogen
from dotenv import load_dotenv

load_dotenv(Path('../../.env'))

env_var = [
    {
        'model': 'gpt-4',
        'api_key': os.environ['OPENAI_API_KEY']
    },
    {
        'model': 'gpt-3.5-turbo',
        'api_key': os.environ['OPENAI_API_KEY']
    }
]

# needed to convert to str
env_var = json.dumps(env_var)

# Setting configurations for autogen
config_list = autogen.config_list_from_json(
    env_or_file=env_var,
    filter_dict={
        "model": {
            "gpt-4",
            "gpt4",
            "gpt-4-32k",
            "gpt-4-32k-0314",
            "gpt-4-32k-v0314",
            "gpt-3.5-turbo",
            "gpt-3.5-turbo-16k",
            "gpt-3.5-turbo-0301",
            "chatgpt-35-turbo-0301",
            "gpt-35-turbo-v0301",
            "gpt",
        }
    }
)


rustyorb avatar rustyorb commented on May 22, 2024 2

Is there a working example of this file, in JSON format? I can't get this file to work or parse right when I create it.


rustyorb avatar rustyorb commented on May 22, 2024 1

This is tremendously helpful, thank you.


EdFries avatar EdFries commented on May 22, 2024 1

Thanks, that worked!


sonichi avatar sonichi commented on May 22, 2024

Could you read https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference#runtime-error and see if the issue is addressed?


sonichi avatar sonichi commented on May 22, 2024

> [quoting AaronWard's comment above]

Thanks. You can also have a single env var that contains the entire JSON and load it directly:

load_dotenv(Path('../../.env'))
config_list = autogen.config_list_from_json("YOUR_ENV_VAR_NAME_FOR_JSON")
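As a rough stdlib-only sketch of what that tip amounts to (assuming, per the docs, that config_list_from_json parses the named env var's value as JSON when no file is found; the variable name OAI_CONFIG_LIST here is just the conventional default):

```python
import json
import os

# Simulate a .env-provided variable holding the entire config list as JSON.
# (Stand-in for what load_dotenv() would populate into the environment.)
os.environ["OAI_CONFIG_LIST"] = json.dumps([
    {"model": "gpt-4", "api_key": "***"},
    {"model": "gpt-3.5-turbo", "api_key": "***"},
])

# autogen.config_list_from_json("OAI_CONFIG_LIST") does roughly this
# under the hood: read the env var and parse its value as JSON.
config_list = json.loads(os.environ["OAI_CONFIG_LIST"])

print([c["model"] for c in config_list])  # → ['gpt-4', 'gpt-3.5-turbo']
```

The key point is that the env var holds a JSON string, not a Python object, which is why the earlier json.dumps round-trip was needed.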


EdFries avatar EdFries commented on May 22, 2024

I'm trying to solve the same problem using FastChat with a local LLM as in the documentation, but it fails because I don't provide an OpenAI key:

from autogen import AssistantAgent, UserProxyAgent, oai
config_list=[
    {
        "model": "chatglm2-6b",
        "api_base": "http://localhost:8000/v1",
        "api_type": "open_ai",
        "api_key": "NULL", # just a placeholder
    }
]

response = oai.Completion.create(config_list=config_list, prompt="Hi")
print(response) # works fine

assistant = AssistantAgent("assistant")
user_proxy = UserProxyAgent("user_proxy")
user_proxy.initiate_chat(assistant, message="Plot a chart of META and TESLA stock price change YTD.", config_list=config_list)
# fails with the error: openai.error.AuthenticationError: No API key provided.


sonichi avatar sonichi commented on May 22, 2024

> [quoting EdFries's comment above]

Please add llm_config={"config_list": config_list} to the constructor of AssistantAgent.


sonichi avatar sonichi commented on May 22, 2024

Added to FAQ: https://microsoft.github.io/autogen/docs/FAQ/#set-your-api-endpoints


AaronWard avatar AaronWard commented on May 22, 2024

Update: I found that my previous example was throwing an error because the JSON was being parsed as a string. Here is a working example of setting up your config list using dotenv. This will let you dynamically create the JSON file required by autogen.config_list_from_json() when you're using a .env file:

import os
import json
import tempfile

import autogen
from dotenv import find_dotenv, load_dotenv

load_dotenv(find_dotenv())

env_var = [
    {
        'model': 'gpt-4',
        'api_key': os.getenv('OPENAI_API_KEY')
    },
    {
        'model': 'gpt-3.5-turbo',
        'api_key': os.getenv('OPENAI_API_KEY')
    }
]

# Create a temporary file
# Write the JSON structure to a temporary file and pass it to config_list_from_json
with tempfile.NamedTemporaryFile(mode='w+', delete=True) as temp:
    env_var = json.dumps(env_var)
    temp.write(env_var)
    temp.flush()

    # Setting configurations for autogen
    config_list = autogen.config_list_from_json(
        env_or_file=temp.name,
        filter_dict={
            "model": {
                "gpt-4",
                "gpt-3.5-turbo",
            }
        }
    )

assert len(config_list) > 0 
print("models to use: ", [config_list[i]["model"] for i in range(len(config_list))])

models to use: ['gpt-4', 'gpt-3.5-turbo']
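If all the downstream code needs is the filtered list of dicts, the temp-file round-trip can be skipped entirely by building and filtering the list in plain Python (a sketch; this bypasses config_list_from_json, which is an assumption about what your agents actually require):

```python
import os

# Build the config list directly from the environment; no JSON round-trip
# or temporary file is needed just to apply a model filter.
all_configs = [
    {"model": "gpt-4", "api_key": os.getenv("OPENAI_API_KEY", "***")},
    {"model": "gpt-3.5-turbo", "api_key": os.getenv("OPENAI_API_KEY", "***")},
    {"model": "gpt-3.5-turbo-0301", "api_key": os.getenv("OPENAI_API_KEY", "***")},
]

# Equivalent of filter_dict={"model": {...}}: keep only the wanted models.
wanted = {"gpt-4", "gpt-3.5-turbo"}
config_list = [c for c in all_configs if c["model"] in wanted]

print("models to use:", [c["model"] for c in config_list])
```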

