Comments (12)
Would you be able to provide us with a minimal example/script we can run to reproduce what you're seeing? I'm interested in seeing how you initiated the llm configs, agents, group chat, etc.
from autogen.
@WaelKarkoub Here is a minimal script:
import os
from typing import Annotated
import argostranslate.package
import argostranslate.translate
import autogen
from autogen import register_function, GroupChat, GroupChatManager
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent
config_list = [{"model": "gpt-4", "api_key": "<API_KEY>"}]
llm_config = {
"cache_seed": 42, # change the cache_seed for different trials
"temperature": 0,
"config_list": config_list,
"timeout": 120,
}
user = autogen.UserProxyAgent(
name="User",
is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
human_input_mode="NEVER",
code_execution_config=False, # we don't want to execute code in this case.
description="The user who asks questions and gives tasks.",
default_auto_reply="Ask critic if the task has been solved at full satisfaction. "
"Otherwise, continue, or state the reason why the task is not solved yet."
)
rag_proxy_agent = RetrieveUserProxyAgent(
name="Ragproxyagent",
human_input_mode="NEVER",
retrieve_config={
"task": "qa",
"docs_path": "autoalign.txt",
},
)
guardrail_agent = autogen.AssistantAgent(
name="Guardrail",
system_message="You are a helpful AI assistant acting as a policy checker which determines whether "
"a text violates any policies or exhibits any bias. "
"You evaluate only the user's and the assistant's responses; you cannot perform "
"the evaluation for other agents' responses. You use your guardrail_func tool for "
"this evaluation. You cannot answer any question; you can only use the guardrail_func tool.",
is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
llm_config=llm_config,
human_input_mode="NEVER",
)
assistant = autogen.AssistantAgent(
name="Assistant",
system_message="You are an AI assistant that is responsible for answering the user's questions. "
"If the question is related to AutoAlign, respond only based upon the Retriever's response, "
"or ask the Retriever to provide a response and then answer the question. Keep the answer "
"concise. You can answer other questions directly.",
is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
llm_config=llm_config,
)
factcheck_agent = autogen.AssistantAgent(
name="Factcheck",
system_message="You are an assistant that checks if a piece of text is factually correct.",
is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
llm_config=llm_config,
)
translator = autogen.AssistantAgent(
name="Translator",
system_message="You are a translator that translates "
"the Assistant's response to French. "
"You use your translate_func tool for the translation.",
is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
llm_config=llm_config,
)
retriever = autogen.AssistantAgent(
name="Retriever",
is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
system_message="You are a helpful assistant which uses retrieve_content to get more information"
" about a particular question related to AutoAlign. Do not answer the user's questions yourself;"
" ask Assistant to do that. "
"Pass only the part of the question that is relevant to AutoAlign to the function.",
llm_config=llm_config,
)
planner = autogen.AssistantAgent(
name="Planner",
is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
system_message="You are a helpful AI assistant acting as a 'Planner' which provides a plan, that is, a sequence"
" of agents and their tasks. Anytime the user asks a question or provides an instruction, you "
"respond first with the plan. "
"The first step of the plan should be to check whether the question or statement violates any "
"policy or shows bias by using Guardrail. "
"Similarly, the plan must indicate that the assistant's response should be "
"evaluated for any form of bias or policy violation by using Guardrail. "
"Provide the plan by evaluating the question or statement, the instructions,"
" and the available speakers. Do not add tasks for agents which are not required. "
"Add other agents to the plan only when stated in the instructions. "
"The following speakers are available: Retriever, Assistant, Translator, Guardrail, "
"Factcheck and Critic. You do not ask for the user's response; you just provide the plan."
" Do not add the term 'TERMINATE' in your response. "
"You can have any agent any number of times in your plan. Always end the plan with Critic.",
llm_config=llm_config,
)
critic = autogen.AssistantAgent(
name="Critic",
is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
system_message="You are a helpful AI assistant acting as a 'Critic' which determines whether the given"
" response is appropriate for the user's question, and checks whether all the steps "
"mentioned by the user were followed; if not, you can ask the corresponding agent to do "
"the task. The following agents are available: Retriever, Translator, Guardrail, Assistant, "
"Factcheck and Planner. If the response seems okay for the user's question and all the tasks "
"given by the user are completed, then reply TERMINATE.",
llm_config=llm_config,
)
def guardrail_func(text: Annotated[
str,
"the text being evaluated",
], speaker: Annotated[
str,
"the speaker of the text that can be User, and Assistant",]
) -> str:
...
blocked = False
if blocked:
return "TERMINATE"
return "No, policy violations found."
def retrieve_content(
message: Annotated[
str,
"Refined message which keeps the original meaning and can be used to retrieve content for question "
"answering.",
],
n_results: Annotated[int, "number of results"] = 1,
) -> str:
rag_proxy_agent.n_results = n_results # Set the number of results to be retrieved.
# Check if we need to update the context.
update_context_case1, update_context_case2 = rag_proxy_agent._check_update_context(message)
if (update_context_case1 or update_context_case2) and rag_proxy_agent.update_context:
rag_proxy_agent.problem = message if not hasattr(rag_proxy_agent, "problem") else rag_proxy_agent.problem
_, ret_msg = rag_proxy_agent._generate_retrieve_user_reply(message)
else:
_context = {"problem": message, "n_results": n_results}
ret_msg = rag_proxy_agent.message_generator(rag_proxy_agent, None, _context)
ans = ret_msg if ret_msg else message
return ans
rag_proxy_agent.human_input_mode = "NEVER" # Disable human input for ragproxyagent since it only retrieves content.
def factcheck_func(text: Annotated[str, "text which is to be fact-checked",]) -> str:
...
factcheck_output = "Factcheck Score: 1.0"
return text + " \n" + factcheck_output
def translate_func(text: Annotated[str, "the text to be translated"]) -> str:
from_code = "en"
to_code = "fr"
argostranslate.package.update_package_index()
available_packages = argostranslate.package.get_available_packages()
package_to_install = next(
filter(
lambda x: x.from_code == from_code and x.to_code == to_code, available_packages
)
)
argostranslate.package.install_from_path(package_to_install.download())
translated_text = argostranslate.translate.translate(text, from_code, to_code)
return translated_text
register_function(
guardrail_func,
caller=guardrail_agent, # The agent that suggests the tool calls.
executor=guardrail_agent, # The agent that executes the tool calls.
name="guardrail_func", # By default, the function name is used as the tool name.
description="checks whether the 'text' violates any safety policies", # A description of the tool.
)
register_function(
retrieve_content,
caller=retriever, # The agent that suggests the tool calls.
executor=retriever, # The agent that executes the tool calls.
name="retrieve_content", # By default, the function name is used as the tool name.
description="retrieve content for question answering about AutoAlign. Send only relevant part of the question",
)
register_function(
factcheck_func,
caller=factcheck_agent, # The agent that suggests the tool calls.
executor=factcheck_agent, # The agent that executes the tool calls.
name="factcheck_func", # By default, the function name is used as the tool name.
description="Checks if a piece of text is factually correct.", # A description of the tool.
)
register_function(
translate_func,
caller=translator, # The agent that suggests the tool calls.
executor=translator, # The agent that executes the tool calls.
name="translate_func", # By default, the function name is used as the tool name.
description="Translates the assistant's response from English to French. "
"It takes the 'text' as input and returns the French translation.", # A description of the tool.
)
groupchat = GroupChat(
agents=[planner, user, retriever, assistant, guardrail_agent, critic, factcheck_agent, translator],
messages=[], max_round=20,
send_introductions=True
)
manager = GroupChatManager(groupchat=groupchat, llm_config=llm_config)
chat_history = user.initiate_chat(
manager,
message="Can you tell me what does AutoAlign do? Please factcheck the answer."
" Then translate it to french. Keep the answer concise, 2-3 lines."
)
Additional dependency:
argostranslate 1.9.6
The autoalign.txt file:
autoalign.txt
Following is the trace:
User (to chat_manager):
Can you tell me what does AutoAlign do? Please factcheck the answer. Then translate it to french. Keep the answer concise, 2-3 lines.
--------------------------------------------------------------------------------
Next speaker: Planner
Planner (to chat_manager):
Plan:
1. Guardrail: Evaluate the user's question for any policy violation or bias using guardrail_func.
2. Retriever: Retrieve content related to AutoAlign.
3. Assistant: Formulate a concise response (2-3 lines) based on the information provided by the Retriever.
4. Guardrail: Evaluate the Assistant's response for any policy violation or bias using guardrail_func.
5. Factcheck: Check the factual correctness of the Assistant's response.
6. Translator: Translate the fact-checked response into French.
7. Critic: Evaluate the entire process and ensure all tasks were completed correctly.
--------------------------------------------------------------------------------
Next speaker: Guardrail
Guardrail (to chat_manager):
***** Suggested tool call (call_Tj6OKMiS7TdfYZEO5ehEnV3t): guardrail_func *****
Arguments:
{
"text": "Can you tell me what does AutoAlign do? Please factcheck the answer. Then translate it to french. Keep the answer concise, 2-3 lines.",
"speaker": "User"
}
*******************************************************************************
--------------------------------------------------------------------------------
Next speaker: Guardrail
>>>>>>>> EXECUTING FUNCTION guardrail_func...
Guardrail (to chat_manager):
Guardrail (to chat_manager):
***** Response from calling tool (call_Tj6OKMiS7TdfYZEO5ehEnV3t) *****
No, policy violations found.
**********************************************************************
--------------------------------------------------------------------------------
Next speaker: Retriever
Retriever (to chat_manager):
***** Suggested tool call (call_PWQ2yZIk4KS0DlumciGGZYwI): retrieve_content *****
Arguments:
{
"message": "what does AutoAlign do",
"n_results": 1
}
*********************************************************************************
--------------------------------------------------------------------------------
Next speaker: Retriever
>>>>>>>> EXECUTING FUNCTION retrieve_content...
Trying to create collection.
2024-06-07 15:07:44,917 - autogen.agentchat.contrib.retrieve_user_proxy_agent - INFO - Found 1 chunks.
2024-06-07 15:07:44,921 - autogen.agentchat.contrib.vectordb.chromadb - INFO - No content embedding is provided. Will use the VectorDB's embedding function to generate the content embedding.
VectorDB returns doc_ids: [['19053c6a']]
Adding content of doc 19053c6a to context.
Retriever (to chat_manager):
Retriever (to chat_manager):
***** Response from calling tool (call_PWQ2yZIk4KS0DlumciGGZYwI) *****
You're a retrieve augmented chatbot. You answer user's questions based on your own knowledge and the
context provided by the user.
If you can't answer the question with or without the current context, you should reply exactly `UPDATE CONTEXT`.
You must give as short an answer as possible.
User's question is: what does AutoAlign do
Context is: AutoAlign's Sidecar solution is the first AI firewall that dynamically interacts with LLMs to make them safe, secure, and effective. Now, make any LLM safer, smarter, and stronger.
AutoAlign technology provides the most robust generative AI security so enterprises can leverage safe and effective AI solutions across their business. Now any enterprise can confidently deploy generative AI tools that strengthen business goals, user safety, and compliance requirements.
INTRODUCING
The first dynamic AI firewall
Sidecar dynamically interacts with, and continually evolves alongside, LLMs to provide consistent security. This ensures model outcomes align with business requirements, user safety, government compliance, and ultimately, their brand promise.
Sidecar's automated ability to scale and interact with models allows business to spend time fine-tuning models for performance or continuously tweaking guardrails for safety and security.
Sidecar's intelligent LLM monitoring for LLMs ensures continued alignment with business goals. Companies can easily operationalize their policies, enabling use cases even in highly regulated environments while staying compliant with government regulations.
Sidecar ensures model outcomes align with business requirements, user safety, government compliance, and ultimately, brand promise. Spend more time focused on performance and driving ROI.
Detect and mitigate ungrounded answers, and warn or highlight unfounded content.
Detect and prevent sophisticated prompt injects and jailbreaks, reducing reputational risks, data leakage and other harms.
Operationalize compliance requirements and policies with Alignment Controls that enable direct mapping to regulations and internal controls.
Upgrade foundation models and infrastructures with your Alignment Controls and monitoring in place. Choose the appropriate controls and customize for your requirements.
**********************************************************************
--------------------------------------------------------------------------------
Next speaker: Assistant
Assistant (to chat_manager):
AutoAlign's Sidecar solution is an AI firewall that interacts with Language Learning Models (LLMs) to make them safe, secure, and effective. It provides robust generative AI security, allowing enterprises to deploy AI tools that align with business goals, user safety, and compliance requirements.
--------------------------------------------------------------------------------
Next speaker: Guardrail
Guardrail (to chat_manager):
***** Suggested tool call (call_hxOgeddOHcH7GaZArdyhxBf3): guardrail_func *****
Arguments:
{
"text": "AutoAlign's Sidecar solution is an AI firewall that interacts with Language Learning Models (LLMs) to make them safe, secure, and effective. It provides robust generative AI security, allowing enterprises to deploy AI tools that align with business goals, user safety, and compliance requirements.",
"speaker": "Assistant"
}
*******************************************************************************
--------------------------------------------------------------------------------
Next speaker: Guardrail
>>>>>>>> EXECUTING FUNCTION guardrail_func...
Guardrail (to chat_manager):
Guardrail (to chat_manager):
***** Response from calling tool (call_hxOgeddOHcH7GaZArdyhxBf3) *****
No, policy violations found.
**********************************************************************
--------------------------------------------------------------------------------
Next speaker: Factcheck
Traceback (most recent call last):
File "autogen\temp.py", line 218, in <module>
chat_history = user.initiate_chat(
File "autogen\autogen\agentchat\conversable_agent.py", line 1018, in initiate_chat
self.send(msg2send, recipient, silent=silent)
File "autogen\autogen\agentchat\conversable_agent.py", line 655, in send
recipient.receive(message, self, request_reply, silent)
File "autogen\autogen\agentchat\conversable_agent.py", line 818, in receive
reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
File "autogen\autogen\agentchat\conversable_agent.py", line 1974, in generate_reply
final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
File "autogen\autogen\agentchat\groupchat.py", line 1058, in run_chat
reply = speaker.generate_reply(sender=self)
File "autogen\autogen\agentchat\conversable_agent.py", line 1974, in generate_reply
final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
File "autogen\autogen\agentchat\conversable_agent.py", line 1340, in generate_oai_reply
extracted_response = self._generate_oai_reply_from_client(
File "autogen\autogen\agentchat\conversable_agent.py", line 1359, in _generate_oai_reply_from_client
response = llm_client.create(
File "autogen\autogen\oai\client.py", line 639, in create
response = client.create(params)
File "autogen\autogen\oai\client.py", line 285, in create
response = completions.create(**params)
File "autogen\venv\lib\site-packages\openai\_utils\_utils.py", line 277, in wrapper
return func(*args, **kwargs)
File "autogen\venv\lib\site-packages\openai\resources\chat\completions.py", line 581, in create
return self._post(
File "C:\Users\abhij\PycharmProjects\autogen\venv\lib\site-packages\openai\_base_client.py", line 1232, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "autogen\venv\lib\site-packages\openai\_base_client.py", line 921, in request
return self._request(
File "autogen\venv\lib\site-packages\openai\_base_client.py", line 997, in _request
return self._retry_request(
File "autogen\venv\lib\site-packages\openai\_base_client.py", line 1045, in _retry_request
return self._request(
File "autogen\venv\lib\site-packages\openai\_base_client.py", line 997, in _request
return self._retry_request(
File "autogen\venv\lib\site-packages\openai\_base_client.py", line 1045, in _retry_request
return self._request(
File "autogen\venv\lib\site-packages\openai\_base_client.py", line 1012, in _request
raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Error code: 500 - {'error': {'message': 'The model produced invalid content. Consider modifying your prompt if you are seeing this error persistently.', 'type': 'model_error', 'param': None, 'code': None}}
Process finished with exit code 1
@abhijitpal1247 thanks! Looking at your example, when you register a function, the caller and the executor should be different (at least from looking at the docs), so I would recommend setting the executor to the user proxy or another agent. Let me know if that works for you. https://github.com/microsoft/autogen/blob/main/notebook/agentchat_function_call.ipynb
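To illustrate the intended split: the caller only *suggests* a tool call, while the executor actually *runs* it. This is a plain-Python mock of that wiring, not autogen's real API (the `Agent` and `register` names here are illustrative only):

```python
def guardrail_func(text: str, speaker: str) -> str:
    # Placeholder policy check that always passes, mirroring the stub in the issue.
    return "No policy violations found."

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.tools = {}       # tools this agent may suggest calls to
        self.executable = {}  # tools this agent may actually execute

def register(func, *, caller: Agent, executor: Agent, name: str):
    # The caller advertises the tool; the executor holds the implementation.
    caller.tools[name] = name
    executor.executable[name] = func

guardrail = Agent("Guardrail")
user_proxy = Agent("User")
register(guardrail_func, caller=guardrail, executor=user_proxy, name="guardrail_func")

# The caller suggests a call; the executor performs it and returns the result.
suggested = ("guardrail_func", {"text": "hello", "speaker": "User"})
result = user_proxy.executable[suggested[0]](**suggested[1])
print(result)  # No policy violations found.
```

The point of the separation is that the tool-call message and the tool-result message come from two different agents, which keeps the message sequence well-formed for the LLM.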
@WaelKarkoub I changed the code as per your recommendation:
import os
from typing import Annotated
import argostranslate.package
import argostranslate.translate
import autogen
from autogen import register_function, GroupChat, GroupChatManager
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent
config_list = [{"model": "gpt-4", "api_key": "<API-KEY>"}]
llm_config = {
"cache_seed": 42, # change the cache_seed for different trials
"temperature": 0,
"config_list": config_list,
"timeout": 120,
}
user = autogen.UserProxyAgent(
name="User",
is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
human_input_mode="NEVER",
code_execution_config=False, # we don't want to execute code in this case.
description="The user who asks questions and gives tasks.",
default_auto_reply="Ask critic if the task has been solved at full satisfaction. "
"Otherwise, continue, or state the reason why the task is not solved yet."
)
rag_proxy_agent = RetrieveUserProxyAgent(
name="Ragproxyagent",
human_input_mode="NEVER",
retrieve_config={
"task": "qa",
"docs_path": "autoalign.txt",
},
)
guardrail_agent = autogen.AssistantAgent(
name="Guardrail",
system_message="You are a helpful AI assistant acting as a policy checker which determines whether "
"a text violates any policies or exhibits any bias. "
"You evaluate only the user's and the assistant's responses; you cannot perform "
"the evaluation for other agents' responses. You use your guardrail_func tool for "
"this evaluation. You cannot answer any question; you can only use the guardrail_func tool.",
is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
llm_config=llm_config,
human_input_mode="NEVER",
)
assistant = autogen.AssistantAgent(
name="Assistant",
system_message="You are an AI assistant that is responsible for answering the user's questions. "
"If the question is related to AutoAlign, respond only based upon the Retriever's response, "
"or ask the Retriever to provide a response and then answer the question. Keep the answer "
"concise. You can answer other questions directly.",
is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
llm_config=llm_config,
)
factcheck_agent = autogen.AssistantAgent(
name="Factcheck",
system_message="You are an assistant that checks if a piece of text is factually correct.",
is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
llm_config=llm_config,
)
translator = autogen.AssistantAgent(
name="Translator",
system_message="You are a translator that translates "
"the Assistant's response to French. "
"You use your translate_func tool for the translation.",
is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
llm_config=llm_config,
)
retriever = autogen.AssistantAgent(
name="Retriever",
is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
system_message="You are a helpful assistant which uses retrieve_content to get more information"
" about a particular question related to AutoAlign. Do not answer the user's questions yourself;"
" ask Assistant to do that. "
"Pass only the part of the question that is relevant to AutoAlign to the function.",
llm_config=llm_config,
)
planner = autogen.AssistantAgent(
name="Planner",
is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
system_message="You are a helpful AI assistant acting as a 'Planner' which provides a plan, that is, a sequence"
" of agents and their tasks. Anytime the user asks a question or provides an instruction, you "
"respond first with the plan. "
"The first step of the plan should be to check whether the question or statement violates any "
"policy or shows bias by using Guardrail. "
"Similarly, the plan must indicate that the assistant's response should be "
"evaluated for any form of bias or policy violation by using Guardrail. "
"Provide the plan by evaluating the question or statement, the instructions,"
" and the available speakers. Do not add tasks for agents which are not required. "
"Add other agents to the plan only when stated in the instructions. "
"The following speakers are available: Retriever, Assistant, Translator, Guardrail, "
"Factcheck and Critic. You do not ask for the user's response; you just provide the plan."
" Do not add the term 'TERMINATE' in your response. "
"You can have any agent any number of times in your plan. Always end the plan with Critic.",
llm_config=llm_config,
)
critic = autogen.AssistantAgent(
name="Critic",
is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
system_message="You are a helpful AI assistant acting as a 'Critic' which determines whether the given"
" response is appropriate for the user's question, and checks whether all the steps "
"mentioned by the user were followed; if not, you can ask the corresponding agent to do "
"the task. The following agents are available: Retriever, Translator, Guardrail, Assistant, "
"Factcheck and Planner. If the response seems okay for the user's question and all the tasks "
"given by the user are completed, then reply TERMINATE.",
llm_config=llm_config,
)
def guardrail_func(text: Annotated[
str,
"the text being evaluated",
], speaker: Annotated[
str,
"the speaker of the text that can be User, and Assistant",]
) -> str:
...
blocked = False
if blocked:
return "TERMINATE"
return "No, policy violations found."
def retrieve_content(
message: Annotated[
str,
"Refined message which keeps the original meaning and can be used to retrieve content for question "
"answering.",
],
n_results: Annotated[int, "number of results"] = 1,
) -> str:
rag_proxy_agent.n_results = n_results # Set the number of results to be retrieved.
# Check if we need to update the context.
update_context_case1, update_context_case2 = rag_proxy_agent._check_update_context(message)
if (update_context_case1 or update_context_case2) and rag_proxy_agent.update_context:
rag_proxy_agent.problem = message if not hasattr(rag_proxy_agent, "problem") else rag_proxy_agent.problem
_, ret_msg = rag_proxy_agent._generate_retrieve_user_reply(message)
else:
_context = {"problem": message, "n_results": n_results}
ret_msg = rag_proxy_agent.message_generator(rag_proxy_agent, None, _context)
ans = ret_msg if ret_msg else message
return ans
rag_proxy_agent.human_input_mode = "NEVER" # Disable human input for ragproxyagent since it only retrieves content.
def factcheck_func(text: Annotated[str, "text which is to be fact-checked",]) -> str:
...
factcheck_output = "Factcheck Score: 1.0"
return text + " \n" + factcheck_output
def translate_func(text: Annotated[str, "the text to be translated"]) -> str:
from_code = "en"
to_code = "fr"
argostranslate.package.update_package_index()
available_packages = argostranslate.package.get_available_packages()
package_to_install = next(
filter(
lambda x: x.from_code == from_code and x.to_code == to_code, available_packages
)
)
argostranslate.package.install_from_path(package_to_install.download())
translated_text = argostranslate.translate.translate(text, from_code, to_code)
return translated_text
register_function(
guardrail_func,
caller=guardrail_agent, # The assistant agent suggests the tool calls.
executor=user, # The user proxy agent executes the tool calls.
name="guardrail_func", # By default, the function name is used as the tool name.
description="checks whether the 'text' violates any safety policies", # A description of the tool.
)
register_function(
retrieve_content,
caller=retriever, # The assistant agent suggests the tool calls.
executor=user, # The user proxy agent executes the tool calls.
name="retrieve_content", # By default, the function name is used as the tool name.
description="retrieve content for question answering about AutoAlign. Send only relevant part of the question",
)
register_function(
factcheck_func,
caller=factcheck_agent, # The assistant agent suggests the tool calls.
executor=user, # The user proxy agent executes the tool calls.
name="factcheck_func", # By default, the function name is used as the tool name.
description="Checks if a piece of text is factually correct.", # A description of the tool.
)
register_function(
translate_func,
caller=translator, # The assistant agent suggests the tool calls.
executor=user, # The user proxy agent executes the tool calls.
name="translate_func", # By default, the function name is used as the tool name.
description="Translates the assistant's response from English to French. "
"It takes the 'text' as input and returns the French translation.", # A description of the tool.
)
groupchat = GroupChat(
agents=[planner, user, retriever, assistant, guardrail_agent, critic, factcheck_agent, translator],
messages=[], max_round=20,
send_introductions=True
)
manager = GroupChatManager(groupchat=groupchat, llm_config=llm_config)
chat_history = user.initiate_chat(
manager,
message="Can you tell me what does AutoAlign do? Please factcheck the answer."
" Then translate it to french. Keep the answer concise, 2-3 lines."
)
I got the same error again:
User (to chat_manager):
Can you tell me what does AutoAlign do? Please factcheck the answer. Then translate it to french. Keep the answer concise, 2-3 lines.
--------------------------------------------------------------------------------
Next speaker: Planner
Planner (to chat_manager):
Plan:
1. Guardrail: Evaluate the user's question for any policy violation or bias using guardrail_func.
2. Retriever: Retrieve content related to AutoAlign.
3. Assistant: Formulate a concise response (2-3 lines) based on the information provided by the Retriever.
4. Guardrail: Evaluate the Assistant's response for any policy violation or bias using guardrail_func.
5. Factcheck: Check the factual correctness of the Assistant's response.
6. Translator: Translate the fact-checked response into French.
7. Critic: Evaluate the entire process and ensure all tasks were completed correctly.
--------------------------------------------------------------------------------
Next speaker: Guardrail
Guardrail (to chat_manager):
***** Suggested tool call (call_Tj6OKMiS7TdfYZEO5ehEnV3t): guardrail_func *****
Arguments:
{
"text": "Can you tell me what does AutoAlign do? Please factcheck the answer. Then translate it to french. Keep the answer concise, 2-3 lines.",
"speaker": "User"
}
*******************************************************************************
--------------------------------------------------------------------------------
Next speaker: User
>>>>>>>> EXECUTING FUNCTION guardrail_func...
User (to chat_manager):
User (to chat_manager):
***** Response from calling tool (call_Tj6OKMiS7TdfYZEO5ehEnV3t) *****
No, policy violations found.
**********************************************************************
--------------------------------------------------------------------------------
Next speaker: Retriever
Retriever (to chat_manager):
***** Suggested tool call (call_PWQ2yZIk4KS0DlumciGGZYwI): retrieve_content *****
Arguments:
{
"message": "what does AutoAlign do",
"n_results": 1
}
*********************************************************************************
--------------------------------------------------------------------------------
Next speaker: User
>>>>>>>> EXECUTING FUNCTION retrieve_content...
Trying to create collection.
2024-06-07 15:52:30,924 - autogen.agentchat.contrib.retrieve_user_proxy_agent - INFO - Found 1 chunks.
2024-06-07 15:52:30,926 - autogen.agentchat.contrib.vectordb.chromadb - INFO - No content embedding is provided. Will use the VectorDB's embedding function to generate the content embedding.
VectorDB returns doc_ids: [['19053c6a']]
Adding content of doc 19053c6a to context.
User (to chat_manager):
User (to chat_manager):
***** Response from calling tool (call_PWQ2yZIk4KS0DlumciGGZYwI) *****
You're a retrieve augmented chatbot. You answer user's questions based on your own knowledge and the
context provided by the user.
If you can't answer the question with or without the current context, you should reply exactly `UPDATE CONTEXT`.
You must give as short an answer as possible.
User's question is: what does AutoAlign do
Context is: AutoAlign's Sidecar solution is the first AI firewall that dynamically interacts with LLMs to make them safe, secure, and effective. Now, make any LLM safer, smarter, and stronger.
AutoAlign technology provides the most robust generative AI security so enterprises can leverage safe and effective AI solutions across their business. Now any enterprise can confidently deploy generative AI tools that strengthen business goals, user safety, and compliance requirements.
INTRODUCING
The first dynamic AI firewall
Sidecar dynamically interacts with, and continually evolves alongside, LLMs to provide consistent security. This ensures model outcomes align with business requirements, user safety, government compliance, and ultimately, their brand promise.
Sidecar's automated ability to scale and interact with models allows business to spend time fine-tuning models for performance or continuously tweaking guardrails for safety and security.
Sidecar's intelligent LLM monitoring for LLMs ensures continued alignment with business goals. Companies can easily operationalize their policies, enabling use cases even in highly regulated environments while staying compliant with government regulations.
Sidecar ensures model outcomes align with business requirements, user safety, government compliance, and ultimately, brand promise. Spend more time focused on performance and driving ROI.
Detect and mitigate ungrounded answers, and warn or highlight unfounded content.
Detect and prevent sophisticated prompt injects and jailbreaks, reducing reputational risks, data leakage and other harms.
Operationalize compliance requirements and policies with Alignment Controls that enable direct mapping to regulations and internal controls.
Upgrade foundation models and infrastructures with your Alignment Controls and monitoring in place. Choose the appropriate controls and customize for your requirements.
**********************************************************************
--------------------------------------------------------------------------------
Next speaker: Assistant
Assistant (to chat_manager):
AutoAlign's Sidecar solution is an AI firewall that interacts with Language Learning Models (LLMs) to make them safe, secure, and effective. It provides robust generative AI security, allowing enterprises to deploy AI tools that align with business goals, user safety, and compliance requirements.
--------------------------------------------------------------------------------
Next speaker: Guardrail
Guardrail (to chat_manager):
***** Suggested tool call (call_hxOgeddOHcH7GaZArdyhxBf3): guardrail_func *****
Arguments:
{
"text": "AutoAlign's Sidecar solution is an AI firewall that interacts with Language Learning Models (LLMs) to make them safe, secure, and effective. It provides robust generative AI security, allowing enterprises to deploy AI tools that align with business goals, user safety, and compliance requirements.",
"speaker": "Assistant"
}
*******************************************************************************
--------------------------------------------------------------------------------
Next speaker: User
>>>>>>>> EXECUTING FUNCTION guardrail_func...
User (to chat_manager):
***** Response from calling tool (call_hxOgeddOHcH7GaZArdyhxBf3) *****
No, policy violations found.
**********************************************************************
--------------------------------------------------------------------------------
Next speaker: Factcheck
Traceback (most recent call last):
File "autogen\temp.py", line 218, in <module>
chat_history = user.initiate_chat(
File "autogen\autogen\agentchat\conversable_agent.py", line 1018, in initiate_chat
self.send(msg2send, recipient, silent=silent)
File "autogen\autogen\agentchat\conversable_agent.py", line 655, in send
recipient.receive(message, self, request_reply, silent)
File "autogen\autogen\agentchat\conversable_agent.py", line 818, in receive
reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
File "autogen\autogen\agentchat\conversable_agent.py", line 1974, in generate_reply
final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
File "autogen\autogen\agentchat\groupchat.py", line 1058, in run_chat
reply = speaker.generate_reply(sender=self)
File "autogen\autogen\agentchat\conversable_agent.py", line 1974, in generate_reply
final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
File "autogen\autogen\agentchat\conversable_agent.py", line 1340, in generate_oai_reply
extracted_response = self._generate_oai_reply_from_client(
File "autogen\autogen\agentchat\conversable_agent.py", line 1359, in _generate_oai_reply_from_client
response = llm_client.create(
File "autogen\autogen\oai\client.py", line 639, in create
response = client.create(params)
File "autogen\autogen\oai\client.py", line 285, in create
response = completions.create(**params)
File "autogen\venv\lib\site-packages\openai\_utils\_utils.py", line 277, in wrapper
return func(*args, **kwargs)
File "autogen\venv\lib\site-packages\openai\resources\chat\completions.py", line 581, in create
return self._post(
File "autogen\venv\lib\site-packages\openai\_base_client.py", line 1232, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "autogen\venv\lib\site-packages\openai\_base_client.py", line 921, in request
return self._request(
File "autogen\venv\lib\site-packages\openai\_base_client.py", line 997, in _request
return self._retry_request(
File "autogen\venv\lib\site-packages\openai\_base_client.py", line 1045, in _retry_request
return self._request(
File "autogen\venv\lib\site-packages\openai\_base_client.py", line 997, in _request
return self._retry_request(
File "autogen\venv\lib\site-packages\openai\_base_client.py", line 1045, in _retry_request
return self._request(
File "autogen\venv\lib\site-packages\openai\_base_client.py", line 1012, in _request
raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Error code: 500 - {'error': {'message': 'The model produced invalid content. Consider modifying your prompt if you are seeing this error persistently.', 'type': 'model_error', 'param': None, 'code': None}}
Process finished with exit code 1
I tried it but couldn't install the dependencies. @abhijitpal1247
cc @WaelKarkoub
@Hk669 Which dependency are you facing the issue with?
argostranslate https://pypi.org/project/argostranslate/
openai https://pypi.org/project/openai/
pyautogen https://pypi.org/project/pyautogen/
argostranslate v1.9.6
openai v1.16.2
pyautogen v0.2.28
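To narrow down which of those packages is actually missing or mismatched, here is a small stdlib-only sketch (the pinned versions are the ones reported above; adjust as needed):

```python
from importlib.metadata import PackageNotFoundError, version

def check_versions(pinned):
    """Report whether each pinned distribution is installed at the expected version."""
    report = {}
    for name, wanted in pinned.items():
        try:
            got = version(name)
            report[name] = "OK" if got == wanted else f"mismatch (found {got})"
        except PackageNotFoundError:
            report[name] = "NOT INSTALLED"
    return report

# Versions reported in this thread.
print(check_versions({
    "argostranslate": "1.9.6",
    "openai": "1.16.2",
    "pyautogen": "0.2.28",
}))
```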
It's the argostranslate.
Please try: pip install argostranslate
I'm also experiencing the same issue and would love a fix. It appears to be related to the model not using tools
correctly: https://community.openai.com/t/error-the-model-produced-invalid-content/747511/4
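Until this is fixed upstream, one possible workaround (a sketch, not an official AutoGen API) is to treat the 500 as transient and retry the whole chat. The generic helper below retries a callable on a given set of exceptions; in the script above you would wrap `user.initiate_chat(...)` and retry on `openai.InternalServerError` (possibly changing `cache_seed` between attempts so a fresh completion is requested):

```python
import time

def retry_on_error(fn, exceptions, attempts=3, delay=1.0):
    """Call fn(), retrying up to `attempts` times on the given transient exceptions."""
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except exceptions as exc:
            last_exc = exc
            time.sleep(delay)
    raise last_exc

# Usage in the script above (sketch; `user`, `manager`, and `task` come from your setup):
# chat_history = retry_on_error(
#     lambda: user.initiate_chat(manager, message=task),
#     (openai.InternalServerError,),
# )
```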
Did anyone find a way to solve this issue? I'm unable to proceed until it's resolved. Any help is much appreciated.
Thanks in advance!