
mindmate's Introduction

install package

virtual environment

pip install mindmate

Installing in a virtual environment is not recommended (except for testing); use the default pip instead.

operating system level

sudo apt update
sudo apt install -y python3-pip
export PATH="$PATH:/home/$USER/.local/bin"
pip install mindmate

usage

$ mindmate [ARGUMENT] [OPTIONS] --help

examples

$ mindmate configure
$ mindmate directory prompting list

$ mindmate chat --platform openai \
  --model text-davinci-003 \
  --stream true \
  --max-tokens 500 \
  --prompt "Act as a professional developer, provide best file structure for fastAPI framework"

$ mindmate image create -p "mindmate written on the background in a garden and friends playing around"

compatibility

Not tested yet, but should be compatible with any Python >= 3.8

mindmate's People

Contributors

yalattas


mindmate's Issues

[FEATURE] cli configuration to yaml file

Is your feature request related to a problem? Please describe.
The user shall be able to store their credentials via the CLI.

example:

mindmate configure

Describe the solution you'd like
Credentials should be stored in a YAML file.
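
A minimal sketch of how this could look, assuming PyYAML and a ~/.mindmate/config.yaml location (neither is confirmed by the project):

# Hypothetical sketch: persist credentials entered via `mindmate configure`
# to a YAML file. PyYAML and the ~/.mindmate/config.yaml location are
# assumptions, not the project's actual implementation.
from pathlib import Path
from typing import Optional

import yaml

CONFIG_PATH = Path.home() / ".mindmate" / "config.yaml"  # assumed location

def save_credentials(platform: str, api_key: str) -> None:
    """Store the API key for a platform in the YAML config file."""
    CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
    config = yaml.safe_load(CONFIG_PATH.read_text()) if CONFIG_PATH.exists() else {}
    config = config or {}
    config.setdefault("platforms", {})[platform] = {"api_key": api_key}
    CONFIG_PATH.write_text(yaml.safe_dump(config))

def load_credentials(platform: str) -> Optional[str]:
    """Return the stored API key for a platform, or None if not configured."""
    if not CONFIG_PATH.exists():
        return None
    config = yaml.safe_load(CONFIG_PATH.read_text()) or {}
    return config.get("platforms", {}).get(platform, {}).get("api_key")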

[BUG] FileNotFoundError in case user didn't configure any credentials

Describe the bug
If the user attempts to send a prompt to the AI without providing credentials, the system first looks for a specific file to authenticate. Since the user never configured the client, that file was never created, and a FileNotFoundError is raised.

To Reproduce
Steps to reproduce the behavior:

  1. Install cli on new computer
  2. Run a regular command without configuring the client first -> python main.py chat -p PROMPT

Expected behavior
A regular return informing the user that the credentials are wrong or were not provided.
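
A minimal sketch of that expected behavior; the config path and message wording are assumptions, not the project's code:

# Hypothetical sketch of the expected behavior: exit with a clear message
# instead of crashing with an unhandled FileNotFoundError when the user
# has never run `mindmate configure`. The config path is an assumption.
import sys
from pathlib import Path

CONFIG_PATH = Path.home() / ".mindmate" / "config.yaml"  # assumed location

def require_credentials() -> str:
    """Return the raw config text, or exit with a friendly hint if it is missing."""
    try:
        contents = CONFIG_PATH.read_text()
    except FileNotFoundError:
        sys.exit("No credentials found. Run `mindmate configure` first.")
    if not contents.strip():
        sys.exit("Credentials file is empty. Run `mindmate configure` again.")
    return contents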


Desktop (please complete the following information):

  • OS: Ubuntu
  • Version Ubuntu 20.04.5 LTS

Version

  • v0.0.2

[BUG] exceed maximum number of tokens

Describe the bug
OpenAI throws an error when the user exceeds the maximum allowed tokens per response.

To Reproduce
Pass the --max-tokens 5000 flag.

Expected behavior
The request should either work or be partitioned, with the error handled gracefully instead of surfacing a raw traceback.
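
One possible handling, sketched under the assumption of the 4097-token limit reported in the traceback below and the tiktoken tokenizer; neither reflects the project's actual implementation:

# Hypothetical sketch: clamp the requested completion length so that
# prompt tokens + max tokens never exceed the model's context window.
# tiktoken for counting and the hard 4097-token limit (reported below
# for text-davinci-003) are assumptions.
import tiktoken

MODEL_CONTEXT_LIMIT = 4097

def clamp_max_tokens(prompt: str, requested: int, model: str = "text-davinci-003") -> int:
    """Return a max_tokens value that fits within the model's context window."""
    encoding = tiktoken.encoding_for_model(model)
    prompt_tokens = len(encoding.encode(prompt))
    available = MODEL_CONTEXT_LIMIT - prompt_tokens
    if available <= 0:
        raise ValueError("The prompt alone exceeds the model's context window.")
    return min(requested, available)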

Screenshots

Traceback (most recent call last):
  File "/home/yalattas/.local/lib/python3.8/site-packages/mindmate/services/openai.py", line 81, in ask_ai_with_stream
    completion = openai.Completion.create(
  File "/home/yalattas/.local/lib/python3.8/site-packages/openai/api_resources/completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/home/yalattas/.local/lib/python3.8/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/home/yalattas/.local/lib/python3.8/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/home/yalattas/.local/lib/python3.8/site-packages/openai/api_requestor.py", line 620, in _interpret_response
    self._interpret_response_line(
  File "/home/yalattas/.local/lib/python3.8/site-packages/openai/api_requestor.py", line 683, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 5018 tokens (18 in your prompt; 5000 for the completion). Please reduce your prompt; or completion length.

Desktop (please complete the following information):

  • OS: ubuntu 20.04
  • Version v0.1.6

Version

  • v0.1.6

[FEATURE] validate selected model based on selected platform

Is your feature request related to a problem? Please describe.
OpenAI offers a set of pre-defined models, and other platforms offer different ones. Therefore, the selected model needs to be validated against the chosen platform.

Describe the solution you'd like
Pass the --platform option together with --model to select a specific model, while maintaining default values in case the user doesn't define one.
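
A minimal sketch of that validation; the model lists and defaults below are illustrative assumptions, not the project's actual configuration:

# Hypothetical sketch: validate --model against the selected --platform and
# fall back to a per-platform default when --model is omitted.
from typing import Optional

SUPPORTED_MODELS = {
    "openai": {"text-davinci-003", "gpt-3.5-turbo"},
}
DEFAULT_MODELS = {
    "openai": "text-davinci-003",
}

def resolve_model(platform: str, model: Optional[str] = None) -> str:
    """Return a valid model name for the platform, or raise on a mismatch."""
    if platform not in SUPPORTED_MODELS:
        raise ValueError(f"Unknown platform: {platform}")
    if model is None:
        return DEFAULT_MODELS[platform]
    if model not in SUPPORTED_MODELS[platform]:
        raise ValueError(f"Model '{model}' is not available on platform '{platform}'")
    return model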
