
conductor's Introduction

Conductor


Conductor is an automated data aggregation, contextualization, and action creation tool that lets users pull in several different kinds of data to build intelligent market research workflows.


Conductor's data pipeline is divided into two sections:

  • Internal Enrichment

    • Take internal knowledge from business sources such as Discord, Slack, and Google Docs and turn it into actionable data
  • External Enrichment

    • Customer Intelligence API
      • Third party customer and lead APIs called upon by Conductor
    • Web API
      • Gather information from websites and use Conductor to structure incoming data, ready for insights.
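
One way to picture the split above is as a dispatch table from data source to enrichment branch. This is purely an illustrative sketch; the source names and the `route` helper are assumptions, not Conductor's actual API:

```python
from enum import Enum

class Enrichment(Enum):
    INTERNAL = "internal"       # business knowledge: Discord, Slack, Google Docs
    CUSTOMER_API = "customer"   # third-party customer/lead APIs
    WEB = "web"                 # structured data scraped from websites

# Hypothetical routing table; the source names are illustrative only.
ROUTES = {
    "slack": Enrichment.INTERNAL,
    "discord": Enrichment.INTERNAL,
    "google_docs": Enrichment.INTERNAL,
    "lead_api": Enrichment.CUSTOMER_API,
    "webpage": Enrichment.WEB,
}

def route(source: str) -> Enrichment:
    """Pick the enrichment branch for a given data source."""
    return ROUTES[source]
```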

conductor's Issues

Report Generation

Start /outcomes submodule with the ability to craft a PDF.
This should be filled with information coming from the marketing team.

Report Parsing

Solve the JSON decoding error when transferring raw text to Report JSON. Ideally, this is something we can do without an LLM integration.
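
A non-LLM approach could be a best-effort extractor that pulls the first balanced `{...}` object out of the raw text before decoding. This is a hedged sketch (the `extract_json` name is ours, and the naive brace counting will miscount braces that appear inside JSON string values):

```python
import json

def extract_json(raw: str):
    """Pull the first balanced {...} object out of raw text and decode it,
    tolerating prose or code fences around the JSON. Naive: does not handle
    braces inside string literals."""
    start = raw.find("{")
    if start == -1:
        raise ValueError("no JSON object found")
    depth = 0
    for i, ch in enumerate(raw[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return json.loads(raw[start : i + 1])
    raise ValueError("unbalanced JSON object")
```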

OpenAI Rate Limit

Sentry Issue: EVRIM-DEV-6

RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-4o in organization org-jkrKCHGCWDy80zVTM4cx0t0J on tokens per min (TPM): Limit 30000, Used 29394, Requested 4731. Please try again in 8.25s. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens', 'param': None, 'code': 'rate_limit_exceeded'}}
(4 additional frame(s) were not displayed)
...
  File "openai/_base_client.py", line 1025, in _request
    return self._retry_request(
  File "openai/_base_client.py", line 1074, in _retry_request
    return self._request(
  File "openai/_base_client.py", line 1025, in _request
    return self._retry_request(
  File "openai/_base_client.py", line 1074, in _retry_request
    return self._request(
  File "openai/_base_client.py", line 1040, in _request
    raise self._make_status_error_from_response(err.response) from None
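
Until the organization's TPM quota goes up, a thin retry wrapper with exponential backoff and jitter is a reasonable stopgap. This is a generic sketch (the `with_backoff` name and parameters are ours, not openai-python's; in Conductor it would wrap the OpenAI call with `retryable=(openai.RateLimitError,)`):

```python
import random
import time

def with_backoff(fn, retryable=(Exception,), max_tries=5, base_delay=1.0, jitter=1.0):
    """Call fn(), retrying on retryable exceptions with exponential backoff."""
    for attempt in range(max_tries):
        try:
            return fn()
        except retryable:
            if attempt == max_tries - 1:
                raise
            # 2**attempt growth plus jitter so workers don't retry in lockstep
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, jitter))
```

The openai-python client also retries some errors on its own via its `max_retries` setting; shrinking request size (the error shows a single 4,731-token request) helps as much as retrying.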

Too many Requests

conductor-server-celery | Traceback (most recent call last):
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/celery/app/trace.py", line 453, in trace_task
conductor-server-celery | R = retval = fun(*args, **kwargs)
conductor-server-celery | ^^^^^^^^^^^^^^^^^^^^
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/celery/app/trace.py", line 736, in __protected_call__
conductor-server-celery | return self.run(*args, **kwargs)
conductor-server-celery | ^^^^^^^^^^^^^^^^^^^^^^^^^
conductor-server-celery | File "/app/agents/tasks.py", line 76, in run_url_marketing_report_thread_task
conductor-server-celery | report = run_url_marketing_report_thread(
conductor-server-celery | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
conductor-server-celery | File "/app/agents/utils.py", line 312, in run_url_marketing_report_thread
conductor-server-celery | raise exception
conductor-server-celery | File "/app/agents/utils.py", line 257, in run_url_marketing_report_thread
conductor-server-celery | crew_run = run_marketing_crew(
conductor-server-celery | ^^^^^^^^^^^^^^^^^^^
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/langsmith/run_helpers.py", line 576, in wrapper
conductor-server-celery | raise e
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/langsmith/run_helpers.py", line 573, in wrapper
conductor-server-celery | function_result = run_container["context"].run(func, *args, **kwargs)
conductor-server-celery | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/conductor/crews/marketing/__init__.py", line 45, in run_marketing_crew
conductor-server-celery | crew_run = crew.run()
conductor-server-celery | ^^^^^^^^^^
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/conductor/crews/marketing/crew.py", line 223, in run
conductor-server-celery | result = crew.kickoff()
conductor-server-celery | ^^^^^^^^^^^^^^
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/crewai/crew.py", line 264, in kickoff
conductor-server-celery | result = self._run_sequential_process()
conductor-server-celery | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/crewai/crew.py", line 305, in _run_sequential_process
conductor-server-celery | output = task.execute(context=task_output)
conductor-server-celery | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/crewai/task.py", line 183, in execute
conductor-server-celery | result = self._execute(
conductor-server-celery | ^^^^^^^^^^^^^^
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/crewai/task.py", line 192, in _execute
conductor-server-celery | result = agent.execute_task(
conductor-server-celery | ^^^^^^^^^^^^^^^^^^^
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/crewai/agent.py", line 236, in execute_task
conductor-server-celery | result = self.agent_executor.invoke(
conductor-server-celery | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/langchain/chains/base.py", line 163, in invoke
conductor-server-celery | raise e
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/langchain/chains/base.py", line 153, in invoke
conductor-server-celery | self._call(inputs, run_manager=run_manager)
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/crewai/agents/executor.py", line 128, in _call
conductor-server-celery | next_step_output = self._take_next_step(
conductor-server-celery | ^^^^^^^^^^^^^^^^^^^^^
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/langchain/agents/agent.py", line 1138, in _take_next_step
conductor-server-celery | [
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/crewai/agents/executor.py", line 192, in _iter_next_step
conductor-server-celery | output = self.agent.plan( # type: ignore # Incompatible types in assignment (expression has type "AgentAction | AgentFinish | list[AgentAction]", variable has type "AgentAction")
conductor-server-celery | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/langchain/agents/agent.py", line 397, in plan
conductor-server-celery | for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2875, in stream
conductor-server-celery | yield from self.transform(iter([input]), config, **kwargs)
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2862, in transform
conductor-server-celery | yield from self._transform_stream_with_config(
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1881, in _transform_stream_with_config
conductor-server-celery | chunk: Output = context.run(next, iterator) # type: ignore
conductor-server-celery | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2826, in _transform
conductor-server-celery | for output in final_pipeline:
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1282, in transform
conductor-server-celery | for ichunk in input:
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 4736, in transform
conductor-server-celery | yield from self.bound.transform(
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1300, in transform
conductor-server-celery | yield from self.stream(final, config, **kwargs)
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 249, in stream
conductor-server-celery | raise e
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 229, in stream
conductor-server-celery | for chunk in self._stream(messages, stop=stop, **kwargs):
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/langchain_community/chat_models/bedrock.py", line 249, in _stream
conductor-server-celery | for chunk in self._prepare_input_and_invoke_stream(
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/langchain_community/llms/bedrock.py", line 656, in _prepare_input_and_invoke_stream
conductor-server-celery | for chunk in LLMInputOutputAdapter.prepare_output_stream(
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/langchain_community/llms/bedrock.py", line 203, in prepare_output_stream
conductor-server-celery | for event in stream:
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/botocore/eventstream.py", line 603, in __iter__
conductor-server-celery | parsed_event = self._parse_event(event)
conductor-server-celery | ^^^^^^^^^^^^^^^^^^^^^^^^
conductor-server-celery | File "/usr/local/lib/python3.12/site-packages/botocore/eventstream.py", line 619, in _parse_event
conductor-server-celery | raise EventStreamError(parsed_response, self._operation_name)
conductor-server-celery | botocore.exceptions.EventStreamError: An error occurred (throttlingException) when calling the InvokeModelWithResponseStream operation: Too many requests, please wait before trying again. You have sent too many requests. Wait before trying again.
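
Retrying alone just hammers Bedrock harder; a client-side throttle that spaces out `InvokeModelWithResponseStream` calls across Celery workers is one option. Below is a minimal token-bucket sketch (the rate and capacity numbers are assumptions to tune; botocore's built-in `Config(retries={"mode": "adaptive"})` is an alternative worth evaluating first):

```python
import threading
import time

class TokenBucket:
    """Client-side throttle: cap outgoing model calls so we stop tripping
    the server's throttlingException. rate is permits per second; capacity
    is the burst allowance."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()
        self.lock = threading.Lock()

    def acquire(self, tokens: float = 1.0) -> None:
        """Block until enough tokens have accumulated, then consume them."""
        while True:
            with self.lock:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.updated) * self.rate)
                self.updated = now
                if self.tokens >= tokens:
                    self.tokens -= tokens
                    return
                wait = (tokens - self.tokens) / self.rate
            time.sleep(wait)
```

Calling `bucket.acquire()` immediately before each Bedrock invocation spreads requests out instead of letting every agent step fire at once.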

TypeError: requests.api.get() got multiple values for keyword argument 'timeout'

Sentry Issue: EVRIM-DEV-14

TypeError: requests.api.get() got multiple values for keyword argument 'timeout'
(5 additional frame(s) were not displayed)
...
  File "conductor/crews/rag_marketing/tools.py", line 136, in _run
    return ingest(
  File "conductor/crews/rag_marketing/tools.py", line 68, in ingest
    webpage = url_to_db(
  File "conductor/rag/ingest.py", line 82, in url_to_db
    webpage = ingest_webpage(url, **kwargs)
  File "conductor/rag/ingest.py", line 74, in ingest_webpage
    raise e
  File "conductor/rag/ingest.py", line 50, in ingest_webpage
    normal_response = requests.get(url, timeout=10, **kwargs)
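
The traceback shows `ingest_webpage` hard-coding `timeout=10` while `**kwargs` can carry its own `timeout`, which produces the duplicate-keyword TypeError. One hedged fix is to merge the default in before the call so a caller-supplied timeout wins (the helper name is ours, not Conductor's):

```python
def with_default_timeout(kwargs: dict, default: float = 10.0) -> dict:
    """Return a copy of kwargs with a timeout only if the caller did not set one."""
    merged = dict(kwargs)
    merged.setdefault("timeout", default)
    return merged

# At the call site in ingest_webpage, instead of
#   requests.get(url, timeout=10, **kwargs)   # TypeError if kwargs has timeout
# do:
#   requests.get(url, **with_default_timeout(kwargs))
```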

Entropy Filter

Add an entropy filter for gibberish prior to sending text to LLMs.
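
One common heuristic is Shannon entropy over the character distribution: random or encoded gibberish (base64 blobs, minified junk) has noticeably higher entropy per character than natural language. A minimal sketch, with the threshold and minimum length as assumptions to calibrate on real scraped text:

```python
import math
from collections import Counter

def char_entropy(text: str) -> float:
    """Shannon entropy of the character distribution, in bits per character."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_gibberish(text: str, threshold: float = 4.8, min_len: int = 20) -> bool:
    """English prose tends to sit near 4 bits/char; random alphanumeric
    strings approach 6. The 4.8 cutoff is an assumption to tune."""
    return len(text) >= min_len and char_entropy(text) > threshold
```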

Editor Agent

Add an Editor Agent at the end of the Marketing Crew pipeline.

Tool Result Cache

Create a Redis cache for tool results so we don't burn credits on repeated tool and LLM calls.
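
A sketch of the idea: key the cache on a hash of the tool name plus its arguments, and write through with a TTL. The function names are ours; `store` is anything with redis-py's `get`/`set(key, value, ex=...)` shape, e.g. `redis.Redis(decode_responses=True)`, shown here with an injectable store so it can run without a Redis server:

```python
import hashlib
import json

def cache_key(tool_name: str, args: dict) -> str:
    """Deterministic key from tool name + JSON-serializable args."""
    payload = json.dumps({"tool": tool_name, "args": args}, sort_keys=True)
    return "toolcache:" + hashlib.sha256(payload.encode()).hexdigest()

def cached_tool_call(store, tool_name: str, args: dict, fn, ttl: int = 3600):
    """Return a cached tool result, or run fn(**args) and cache it for ttl
    seconds. store mirrors redis-py's get/set(..., ex=ttl) interface."""
    key = cache_key(tool_name, args)
    hit = store.get(key)
    if hit is not None:
        return json.loads(hit)
    result = fn(**args)
    store.set(key, json.dumps(result), ex=ttl)
    return result
```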

EventStreamError: An error occurred (throttlingException) when calling the InvokeModelWithResponseStream operation:...

Sentry Issue: EVRIM-DEV-5

EventStreamError: An error occurred (throttlingException) when calling the InvokeModelWithResponseStream operation: Too many requests, please wait before trying again. You have sent too many requests.  Wait before trying again.
(28 additional frame(s) were not displayed)
...
  File "agents/tasks.py", line 124, in run_rag_marketing_crew_task
    run_url_rag_report(
  File "agents/utils.py", line 626, in run_url_rag_report
    raise exception
  File "agents/utils.py", line 573, in run_url_rag_report
    pydantic_crew_run = run_rag_marketing_crew(

Too Long Error Handling

Handle this error:
botocore.exceptions.EventStreamError: An error occurred (validationException) when calling the InvokeModelWithResponseStream operation: Input is too long for requested model.
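
One way to handle this is to clamp the prompt to a budget before invoking the model, keeping the head and tail of the text (which usually carry the instructions and the most recent context). The budget below is a character-count stand-in for a token limit (roughly 4 characters per token is a common rule of thumb), and both the helper name and the default are assumptions:

```python
def clamp_input(text: str, max_chars: int = 60_000) -> str:
    """Rough guard against Bedrock's 'Input is too long' validationException.
    Keeps the head and tail of the text and drops the middle."""
    if len(text) <= max_chars:
        return text
    head = text[: max_chars // 2]
    tail = text[-(max_chars - len(head)):]
    return head + "\n...[truncated]...\n" + tail
```

Pairing this with a retry that catches the `validationException` and re-invokes with a smaller budget would cover inputs whose token count outruns the character estimate.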

Add Report Images

Create images from report text, or from images gathered during the collect step.
SerpAPI may be a good option here.

Discord Issue

discord.errors.ApplicationCommandInvokeError: Application Command raised an exception: NotFound: 404 Not Found (error code: 10062): Unknown interaction
