Comments (7)
When I chain the guard after the OpenAIToolsAgentOutputParser(), a value finally is passed to the guard; however, the value includes the function calls. The problem here is that the agent has not executed by the time the results are passed to the guard. Figuring out if there's a way to fix that.
[OpenAIToolAgentAction(tool='get_retriever_docs', tool_input={'query': 'secret'}, log="\nInvoking: get_retriever_docs with {'query': 'secret'}\n\n\n", message_log=[AIMessageChunk(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_3WbE44YBc9F7snOpUwLjMhgK', 'function': {'arguments': '{"query":"secret"}', 'name': 'get_retriever_docs'}, 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls'})], tool_call_id='call_3WbE44YBc9F7snOpUwLjMhgK')]
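To make the failure mode concrete, here is a minimal pure-Python sketch (no LangChain or Guardrails imports; every name below is an illustrative stand-in, not the real API) of why a string-validating guard chained directly after the output parser receives tool-call objects instead of the final answer:

```python
from dataclasses import dataclass


@dataclass
class AgentAction:
    """Stand-in for OpenAIToolAgentAction."""
    tool: str
    tool_input: dict


def output_parser(model_message):
    """Stand-in for OpenAIToolsAgentOutputParser: while the model is asking
    for a tool, the parser emits AgentAction objects, NOT the final answer."""
    if model_message.get("tool_calls"):
        return [
            AgentAction(tc["name"], tc["args"])
            for tc in model_message["tool_calls"]
        ]
    return model_message["content"]  # only at the end is this a plain string


def guard(value):
    """Stand-in for a string-validating Guard: it only makes sense on text."""
    if not isinstance(value, str):
        raise TypeError(f"guard expected a string, got {type(value).__name__}")
    return value


# Chaining the guard right after the parser: the model is mid-tool-call, so
# the guard receives AgentAction objects instead of the agent's final output.
mid_run = {
    "content": "",
    "tool_calls": [{"name": "get_retriever_docs", "args": {"query": "secret"}}],
}
parsed = output_parser(mid_run)
print(type(parsed[0]).__name__)  # AgentAction, not str
```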
from guardrails.
Hi, I'm looking into this one now.
I've got reason to believe this is actually not a Guardrails-specific problem but rather a chaining one in general: when I swap out the | guard line for this
topic = "*apricot*"
guard = Guard().use(RegexMatch(topic, match_type="search", on_fail="filter"))
model = ChatOpenAI(temperature=0, streaming=False)
llm = model | StrOutputParser()
I get the same RunnableSequence error.
I'm diving deeper to see what the actual issue is.
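For intuition while debugging: LCEL's | composes runnables left to right, so each step receives exactly what the previous step returned, and the guard validates whatever its immediate predecessor emits. A sketch with plain callables (all names here are stand-ins, not real LangChain or Guardrails APIs):

```python
from functools import reduce


def pipe(*steps):
    """Compose callables left to right, like `a | b | c` in LCEL."""
    return lambda x: reduce(lambda acc, f: f(acc), steps, x)


model = lambda prompt: {"content": f"answer to: {prompt}"}  # fake ChatOpenAI
str_parser = lambda msg: msg["content"]                     # fake StrOutputParser
guard = lambda text: text if "apricot" in text else None    # fake filtering guard

chain = pipe(model, str_parser, guard)
print(chain("what is the secret?"))  # None: the fake answer lacks "apricot"
print(chain("apricot?"))             # passes: "answer to: apricot?"
```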
OK, the second use seems to be the right place to chain in (where we got the 'API must be provided' error). I attached a debugger and found that when the guard's invoke function is called, it's passed an empty input. I think something else is going on here, where LCEL is executing guard validation before it has results ready.
Where we chain is very important: the place to chain a guard that does output validation is after the agent executes. See the code sample below.
from langchain import hub
from langchain.agents import AgentExecutor
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain.tools import tool
from langchain_openai import ChatOpenAI
from langchain_core.documents.base import Document
from guardrails.hub import RegexMatch
from guardrails import Guard
from langchain_core.runnables import RunnablePassthrough
from langchain.agents.format_scratchpad.openai_tools import (
    format_to_openai_tool_messages,
)
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser

prompt = hub.pull("hwchase17/openai-tools-agent")


@tool
def get_retriever_docs(query: str) -> list[Document]:
    """Returns a list of documents from the retriever."""
    return [
        Document(
            page_content="# test file\n\nThis is a test file with a secret code of 'blue-green-apricot-brownie-cake-mousepad'.",
            metadata={"source": "./test.md"},
        )
    ]


# Set up a Guard
topic = "apricot"
guard = Guard().use(RegexMatch(topic, match_type="search", on_fail="filter"))

model = ChatOpenAI(temperature=0, streaming=False)
llm = model
tools = [get_retriever_docs]

############################################ this is a copy-paste from langchain.agents.create_openai_tools_agent
llm_with_tools = llm.bind(tools=[convert_to_openai_tool(tool) for tool in tools])
agent = (
    RunnablePassthrough.assign(
        agent_scratchpad=lambda x: format_to_openai_tool_messages(
            x["intermediate_steps"]
        )
    )
    | prompt
    | llm_with_tools
    | OpenAIToolsAgentOutputParser()
)
############################################

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True).with_config(
    {"run_name": "Agent"}
)

chain = agent_executor | guard
query = "call get_retriever_docs and tell me a secret from the docs"
print(chain.invoke({"input": query}))
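For clarity on what the final guard does here: RegexMatch with match_type="search" and on_fail="filter" keeps the agent's final answer only if the pattern occurs anywhere in it. This can be mimicked with plain re (a sketch of the semantics, not the Guardrails implementation):

```python
import re


def regex_filter(text, pattern="apricot"):
    """Keep text only when `pattern` is found anywhere in it (search
    semantics); otherwise filter it out by returning None."""
    return text if re.search(pattern, text) else None


secret = "The secret code is 'blue-green-apricot-brownie-cake-mousepad'."
print(regex_filter(secret))           # passes: pattern found
print(regex_filter("no match here"))  # filtered: None
```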
@zsimjee Confirmed that chaining the guard after the agent_executor works in your minimal example. Intuitively, this makes a lot of sense: we want the agent's output to go through Guardrails. For this reason alone, it makes sense to me to chain the guard all the way at the end rather than in an intermediate step, as I had tried to do in my code above.
Awesome! I wish there was a way to chain the actual agent and [type]AgentOutputParser() directly into the AgentExecutor, but I couldn't get that to work. If it does work, it would be cool to have a chain like

RunnablePassthrough.assign(
    agent_scratchpad=lambda x: format_to_openai_tool_messages(
        x["intermediate_steps"]
    )
)
| prompt
| llm_with_tools
| OpenAIToolsAgentOutputParser()
| AgentExecutor().with_config({"run_name": "Agent"})
| guard
| [other output parsers]

Closing for now
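The shape being wished for above can at least be sketched with plain callables: run the full agent loop first, then validate, then post-process. Everything below is a hypothetical stand-in; none of these are real LangChain or Guardrails classes:

```python
def agent_executor(query):
    """Stand-in for AgentExecutor: runs the tool loop to completion and
    returns the agent's final result."""
    return {"output": f"The secret mentions apricot (query was: {query})"}


def guard(result):
    """Stand-in output guard: filter results missing the required topic."""
    return result if "apricot" in result["output"] else None


def other_parser(result):
    """Stand-in downstream parser: extract the plain string."""
    return None if result is None else result["output"]


# executor | guard | other parsers, applied left to right:
final = other_parser(guard(agent_executor("tell me a secret")))
print(final)
```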