Comments (10)
I was a bit confused about how this works. It would be nice to just have something like load_dotenv()
to handle the keys. But anyway - just going off the examples in the /notebooks
directory, I made a file called OAI_CONFIG_LIST
(with no file extension):
[
    {
        "model": "gpt-4",
        "api_key": "***"
    },
    {
        "model": "gpt-3.5-turbo",
        "api_key": "***"
    }
]
In my notebook:

# Setting configurations for autogen
config_list = autogen.config_list_from_json(
    env_or_file="OAI_CONFIG_LIST",
    file_location=".",
    filter_dict={
        "model": {
            "gpt-4",
            "gpt4",
            "gpt-4-32k",
            "gpt-4-32k-0314",
            "gpt-4-32k-v0314",
            "gpt-3.5-turbo",
            "gpt-3.5-turbo-16k",
            "gpt-3.5-turbo-0301",
            "chatgpt-35-turbo-0301",
            "gpt-35-turbo-v0301",
            "gpt",
        }
    }
)
config_list
When printing config_list:

[{'model': 'gpt-4', 'api_key': '***'},
 {'model': 'gpt-3.5-turbo', 'api_key': '***'}]

And then I pass the config_list to the initiate_chat function.

assistant = AssistantAgent("assistant")
user_proxy = UserProxyAgent("user_proxy")
user_proxy.initiate_chat(
    assistant,
    message="Plot a chart of META and TESLA stock price change YTD.",
    config_list=config_list,
)

This worked for me.
Update

If you'd rather use load_dotenv(), this worked for me.

import json
import os
from pathlib import Path

from dotenv import load_dotenv

load_dotenv(Path('../../.env'))

env_var = [
    {
        'model': 'gpt-4',
        'api_key': os.environ['OPENAI_API_KEY']
    },
    {
        'model': 'gpt-3.5-turbo',
        'api_key': os.environ['OPENAI_API_KEY']
    }
]

# needed to convert to str
env_var = json.dumps(env_var)

# Setting configurations for autogen
config_list = autogen.config_list_from_json(
    env_or_file=env_var,
    filter_dict={
        "model": {
            "gpt-4",
            "gpt4",
            "gpt-4-32k",
            "gpt-4-32k-0314",
            "gpt-4-32k-v0314",
            "gpt-3.5-turbo",
            "gpt-3.5-turbo-16k",
            "gpt-3.5-turbo-0301",
            "chatgpt-35-turbo-0301",
            "gpt-35-turbo-v0301",
            "gpt",
        }
    }
)
from autogen.
Is there a working example of this file, in JSON format? I can't get this file to work or parse correctly when I create it.
This is tremendously helpful, thank you.
Thanks, that worked!
Could you read https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference#runtime-error and see if the issue is addressed?
Thanks. You can also have a single env var which contains the entire json and load it directly:
load_dotenv(Path('../../.env'))
config_list = autogen.config_list_from_json("YOUR_ENV_VAR_NAME_FOR_JSON")
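For illustration, a minimal sketch of that single-env-var approach (the variable name OAI_CONFIG_LIST and the starred keys are placeholders; the env var is set in code here only to stand in for a .env entry, and the final json.loads mimics the parsing that config_list_from_json would do):

```python
import json
import os

# Placeholder: the entire config list stored as JSON in one env var,
# as a .env file would provide after load_dotenv()
os.environ["OAI_CONFIG_LIST"] = json.dumps(
    [
        {"model": "gpt-4", "api_key": "***"},
        {"model": "gpt-3.5-turbo", "api_key": "***"},
    ]
)

# autogen.config_list_from_json("OAI_CONFIG_LIST") would read this env var;
# decoding it directly shows the structure it expects
config_list = json.loads(os.environ["OAI_CONFIG_LIST"])
```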
I'm trying to solve the same problem using FastChat with a local LLM as in the documentation, but it fails because I don't provide an OpenAI key:
from autogen import AssistantAgent, UserProxyAgent, oai

config_list = [
    {
        "model": "chatglm2-6b",
        "api_base": "http://localhost:8000/v1",
        "api_type": "open_ai",
        "api_key": "NULL",  # just a placeholder
    }
]

response = oai.Completion.create(config_list=config_list, prompt="Hi")
print(response)  # works fine

assistant = AssistantAgent("assistant")
user_proxy = UserProxyAgent("user_proxy")
user_proxy.initiate_chat(
    assistant,
    message="Plot a chart of META and TESLA stock price change YTD.",
    config_list=config_list,
)
# fails with the error: openai.error.AuthenticationError: No API key provided.
Please add llm_config={"config_list": config_list} to the constructor of AssistantAgent.
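A minimal sketch of that fix, assuming the same config_list as in the question above (the actual constructor call is shown only in a comment, since running it needs an autogen install and a live endpoint):

```python
# Local-endpoint entry, mirroring the config in the question above
config_list = [
    {
        "model": "chatglm2-6b",
        "api_base": "http://localhost:8000/v1",
        "api_type": "open_ai",
        "api_key": "NULL",  # just a placeholder
    }
]

# The suggested fix: wrap the list in llm_config and pass it to the agent, e.g.
#     assistant = AssistantAgent("assistant", llm_config=llm_config)
llm_config = {"config_list": config_list}
```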
Added to FAQ: https://microsoft.github.io/autogen/docs/FAQ/#set-your-api-endpoints
Update: I found that my previous example was throwing an error because the JSON was being parsed as a string. Here is a working example of setting up your config list using dotenv. This will let you dynamically create the JSON file required by autogen.config_list_from_json() when you're using a .env file.
import json
import os
import tempfile

import autogen
from dotenv import find_dotenv, load_dotenv

load_dotenv(find_dotenv())

env_var = [
    {
        'model': 'gpt-4',
        'api_key': os.getenv('OPENAI_API_KEY')
    },
    {
        'model': 'gpt-3.5-turbo',
        'api_key': os.getenv('OPENAI_API_KEY')
    }
]

# Write the JSON structure to a temporary file and pass it to config_list_from_json
with tempfile.NamedTemporaryFile(mode='w+', delete=True) as temp:
    temp.write(json.dumps(env_var))
    temp.flush()

    # Setting configurations for autogen
    config_list = autogen.config_list_from_json(
        env_or_file=temp.name,
        filter_dict={
            "model": {
                "gpt-4",
                "gpt-3.5-turbo",
            }
        }
    )

assert len(config_list) > 0
print("models to use: ", [config_list[i]["model"] for i in range(len(config_list))])
models to use: ['gpt-4', 'gpt-3.5-turbo']