
chatgpt-mattermost-bot's Introduction

A ChatGPT-powered Chatbot for Mattermost

A chat window in Mattermost showing the chat between the OpenAI bot and "yGuy"

The bot can talk to you like a regular Mattermost user. It's like having chat.openai.com collaboratively built into Mattermost! But that's not all: you can also use it to generate images via DALL-E or diagram visualizations via a yFiles plugin!

Here's how to get the bot running - it's easy if you have a Docker host.

You need

  • the Mattermost token for the bot user (@chatgpt by default)
  • the OpenAI API key
  • a Docker server for continuously running the service; alternatively, for testing, Node.js 16 is sufficient

Andrew Zigler from Mattermost created a YouTube video that quickly guides you through the setup.

If you want to learn more about how this plugin came to life, read the blog post at yWorks.com!

Options

These are the available options. You can set them as environment variables when running the script, when running the Docker image, or when configuring your docker-compose file.

| Name | Required | Example Value | Description |
| --- | --- | --- | --- |
| MATTERMOST_URL | yes | https://mattermost.server | The URL of the Mattermost server. Used to connect the bot to the Mattermost API |
| MATTERMOST_TOKEN | yes | abababacdcdcd | The authentication token of the logged-in Mattermost bot |
| OPENAI_API_KEY | yes | sk-234234234234234234 | The OpenAI API key used to authenticate with OpenAI |
| OPENAI_API_BASE | no | http://example.com:8080/v1 | The address of an OpenAI-compatible API. Overrides the default base path (https://api.openai.com) |
| OPENAI_MODEL_NAME | no | gpt-3.5-turbo | The OpenAI language model to use, defaults to gpt-3.5-turbo |
| OPENAI_MAX_TOKENS | no | 2000 | The maximum number of tokens to pass to the OpenAI API, defaults to 2000 |
| OPENAI_TEMPERATURE | no | 0.2 | The sampling temperature to use, between 0 and 2, defaults to 1. Higher values like 0.8 make the output more random; lower values like 0.2 make it more focused and deterministic |
| YFILES_SERVER_URL | no | http://localhost:3835 | The URL of the yFiles graph service for embedding auto-generated diagrams |
| NODE_EXTRA_CA_CERTS | no | /file/to/cert.crt | A path to a certificate file passed to Node.js for validating self-signed certificates |
| MATTERMOST_BOTNAME | no | "@chatgpt" | The name of the bot user in Mattermost, defaults to '@chatgpt' |
| PLUGINS | no | graph-plugin, image-plugin | The enabled plugins of the bot. By default, all plugins (graph-plugin and image-plugin) are enabled |
| DEBUG_LEVEL | no | TRACE | The log level used for logging activity, defaults to INFO |
| BOT_CONTEXT_MSG | no | 15 | The number of previous messages appended to the conversation with ChatGPT, defaults to 100 |
| BOT_INSTRUCTION | no | Act like Elon Musk | Extra instructions for your assistant. How should the assistant behave? |
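As an illustration of how these options resolve at startup, a small Node.js helper could validate the required variables and apply the documented defaults. This is a sketch with our own names, not the bot's actual source:

```javascript
// Illustrative sketch, not the bot's actual source: validate the required
// environment variables and apply the defaults documented in the table above.
function readConfig(env = process.env) {
  for (const name of ['MATTERMOST_URL', 'MATTERMOST_TOKEN', 'OPENAI_API_KEY']) {
    if (!env[name]) throw new Error(`Missing required environment variable: ${name}`);
  }
  return {
    mattermostUrl: env.MATTERMOST_URL,
    modelName: env.OPENAI_MODEL_NAME ?? 'gpt-3.5-turbo', // OPENAI_MODEL_NAME default
    maxTokens: Number(env.OPENAI_MAX_TOKENS ?? 2000),    // OPENAI_MAX_TOKENS default
    temperature: Number(env.OPENAI_TEMPERATURE ?? 1),    // OPENAI_TEMPERATURE default
    botName: env.MATTERMOST_BOTNAME ?? '@chatgpt',       // MATTERMOST_BOTNAME default
    debugLevel: env.DEBUG_LEVEL ?? 'INFO',               // DEBUG_LEVEL default
    contextMessages: Number(env.BOT_CONTEXT_MSG ?? 100), // BOT_CONTEXT_MSG default
  };
}
```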

Note: The YFILES_SERVER_URL is used for automatically converting text information created by the bot into diagrams. This feature is currently in development. You can see it in action here: LinkedIn Post. If you are interested in getting your hands on the plugin, please contact yWorks!

Using the ready-made Docker image

Use the prebuilt image from ghcr.io/yguy/chatgpt-mattermost-bot

docker run -d --restart unless-stopped \
  -e MATTERMOST_URL=https://mattermost.server \
  -e MATTERMOST_TOKEN=abababacdcdcd \
  -e OPENAI_API_KEY=234234234234234234 \
  --name chatbot \
  ghcr.io/yguy/chatgpt-mattermost-bot:latest
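If the container starts but the bot never answers, it can help to verify MATTERMOST_URL and MATTERMOST_TOKEN on their own. The helper below is hypothetical (the function name is ours, not part of the bot); it builds the request for Mattermost's /api/v4/users/me endpoint, which you can then issue with the global fetch on Node 18+:

```javascript
// Hypothetical helper: build the request used to check a bot token against
// Mattermost's "get current user" endpoint (/api/v4/users/me).
function buildTokenCheck(mattermostUrl, token) {
  return {
    url: new URL('/api/v4/users/me', mattermostUrl).toString(),
    options: { headers: { Authorization: `Bearer ${token}` } },
  };
}

// Usage sketch (requires network access and Node 18+):
// const { url, options } = buildTokenCheck('https://mattermost.server', 'abababacdcdcd');
// const res = await fetch(url, options); // a valid token should return HTTP 200
```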

Building the Docker image manually

First step is to clone this repo.

git clone https://github.com/yGuy/chatgpt-mattermost-bot.git && cd chatgpt-mattermost-bot

For testing, you could now just run npm install and npm run start directly, but be sure to set the environment variables or pass them to the Node process first!

For production use, to create a Docker container service that keeps running without you having to run it on your own machine, do the following:

Build the docker image from the Dockerfile:

docker build . -t yguy/chatgpt-mattermost-bot

Create and run a container from the image

docker run -d --restart unless-stopped \
  -e MATTERMOST_URL=https://mattermost.server \
  -e MATTERMOST_TOKEN=abababacdcdcd \
  -e OPENAI_API_KEY=234234234234234234 \
  --name chatbot \
  yguy/chatgpt-mattermost-bot

Private TLS Certificate

If your Mattermost instance uses a TLS certificate signed by a private CA, you will need to provide the CA's public root to the container for validation.

If the root certificate is located at /absolutepath/to/certfile.crt, then you can mount that file into the container at a fixed position and specify the node environment variable accordingly:

docker run -d --restart unless-stopped \
  -v /absolutepath/to/certfile.crt:/certs/certfile.crt \
  -e NODE_EXTRA_CA_CERTS=/certs/certfile.crt \
  -e MATTERMOST_URL=https://mattermost.server \
  -e MATTERMOST_TOKEN=abababacdcdcd \
  -e OPENAI_API_KEY=234234234234234234 \
  --name chatbot \
  yguy/chatgpt-mattermost-bot

Verify it's running

docker ps

Later, to stop the service

docker stop chatbot

Docker Compose

If you want to use docker compose (and maybe even merge it with your Mattermost Docker stack), you can use this as a starting point: first, adjust the environment variables in docker-compose.yml.

Required Environment Variables

MATTERMOST_URL: https://mattermost.server
MATTERMOST_TOKEN: abababacdcdcd
OPENAI_API_KEY: sk-234234234234234234

Optional Environment Variables

# Set this if using a custom username for the bot, default = @chatgpt
MATTERMOST_BOTNAME: "@chatgpt"

# Console logging output level, default = INFO
DEBUG_LEVEL: TRACE

# Node environment, default = production
NODE_ENV: production
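Assembled from the values above, a minimal complete docker-compose.yml might look like this (the service name chatbot is just an example):

```yaml
services:
  chatbot:
    image: ghcr.io/yguy/chatgpt-mattermost-bot:latest
    restart: unless-stopped
    environment:
      MATTERMOST_URL: https://mattermost.server
      MATTERMOST_TOKEN: abababacdcdcd
      OPENAI_API_KEY: sk-234234234234234234
      # Optional:
      MATTERMOST_BOTNAME: "@chatgpt"
      DEBUG_LEVEL: INFO
      NODE_ENV: production
```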

Private TLS Certificate

If your Mattermost instance uses a TLS certificate signed by a private CA, you will need to provide the CA's public root to the container for validation.

If the root certificate is located at /absolutepath/to/certfile.crt, then you would merge the contents below into the docker-compose.yml file:

services:
  chatbot:
    volumes:
      - /absolutepath/to/certfile.crt:/certs/certfile.crt:ro
    environment:
      NODE_EXTRA_CA_CERTS: /certs/certfile.crt

Run the container as a service

When all configuration is complete, start the container service.

docker compose up -d

Verify it's running:

docker compose ps

To stop the container:

docker compose down

Deploy to Kubernetes with Helm

The chatgpt-mattermost-bot chart deploys a containerized chatgpt-mattermost-bot instance that connects to a running Mattermost container in the same Kubernetes cluster. The chart uses 'mattermost-team-edition' and the 'mattermost' namespace by default, and reads the environment variables MATTERMOST_TOKEN and OPENAI_API_KEY.

helm upgrade chatgpt-mattermost-bot ./helm/chatgpt-mattermost-bot \
  --create-namespace \
  --install \
  -n mattermost \
  -f ./helm/chatgpt-mattermost-bot/values.yaml \
  --set config.MATTERMOST_TOKEN="$MATTERMOST_TOKEN" \
  --set config.OPENAI_API_KEY="$OPENAI_API_KEY"

Example Conversation

Here's an example chat that I just had with our bot:

@yguy:

@chatgpt - please write a short introductory paragraph for the README in the GitHub repo that hosts the sources for you, the friendly Mattermost bot. Tell the readers that you enjoy working for the nice people at yWorks and that everybody should install a Mattermost chatbot into their Mattermost chat system to benefit from a nice and helpful assistant like you are. I will include this in the README on the GitHub repo, so make this an engaging text! Thanks!

chatgpt (BOT):

Hello there! I'm @chatgpt, a friendly and reliable Mattermost bot designed to make your communication experience smoother and more efficient. I'm honored to be part of the team at yWorks, where I help employees get things done faster and with less hassle. With my advanced machine learning algorithms, I'm capable of understanding complex natural language queries and providing succinct and accurate responses in a matter of seconds. Whether you're a developer looking for information on your codebase or a manager seeking to streamline your team's workflows, I'm here to help. Installing a Mattermost chatbot like me is a must for any organization looking to boost its productivity and enjoy a more pleasant communication experience. So what are you waiting for? Let's chat!

I built this tool as a quick hack on a rainy Saturday afternoon, but we are using the bot in production in our Mattermost instance at the yWorks office, and it has proved to be very helpful for many simple tasks. Give it a try and provide feedback if you like! It's really not very expensive: we had it running for about 30 users for two weeks, and that cost us less than half a dollar for the ChatGPT service!

I will also accept helpful pull requests if you find an issue or have an idea for an improvement.

Last but not least, check out yWorks' fine diagramming SDKs for software developers yFiles and our free online graph and diagram editors!

This is under the MIT license. Copyright (c) 2023 Sebastian Mueller (yWorks) and Michael Haeglsperger (yWorks).

chatgpt-mattermost-bot's People

Contributors

davidkellyongithub, jokneemo, m00nyone, thunderbottom, yguy, your-highness


chatgpt-mattermost-bot's Issues

Does not work if Mattermost is installed on a subpath

Overview

This tool does not work if Mattermost is installed on a subpath.

Example

$ sudo head -3 /opt/mattermost/config/config.json
{
    "ServiceSettings": {
        "SiteURL": "http://localhost:8065/mattermost",

$ export MATTERMOST_URL=http://localhost:8065/mattermost
$ node src/botservice.js

websocket connecting to ws://localhost:8065/api/v4/websocket
{"level":"INFO","name":"bot-ws","msg":"websocket error","diff":0}
{"level":"INFO","name":"bot-ws","diff":1}
{"level":"INFO","name":"bot-ws","msg":"websocket closed","diff":1}
  ERROR bot +0ms

Proposed fix

diff --git a/src/mm-client.js b/src/mm-client.js
index abb3820..1d89947 100644
--- a/src/mm-client.js
+++ b/src/mm-client.js
@@ -16,7 +16,8 @@ client.setToken(mattermostToken)

 const wsClient = new WebSocketClient();
 let matterMostURL = new URL(matterMostURLString);
-const wsUrl = `${matterMostURL.protocol === 'https:' ? 'wss' : 'ws'}://${matterMostURL.host}/api/v4/websocket`
+const pathname = matterMostURL.pathname.replace(/\/+$/, '');
+const wsUrl = `${matterMostURL.protocol === 'https:' ? 'wss' : 'ws'}://${matterMostURL.host}${pathname}/api/v4/websocket`

 new Promise((resolve, reject) => {
   wsClient.addCloseListener(connectFailCount => reject())
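The proposed diff can also be read as a standalone function. This sketch (ours, under the same assumptions as the diff) derives the websocket URL from MATTERMOST_URL while preserving a subpath such as /mattermost:

```javascript
// Standalone sketch of the proposed fix: derive the websocket URL from the
// configured Mattermost URL, keeping any subpath in place.
function buildWsUrl(matterMostURLString) {
  const url = new URL(matterMostURLString);
  const scheme = url.protocol === 'https:' ? 'wss' : 'ws';
  const pathname = url.pathname.replace(/\/+$/, ''); // strip trailing slashes
  return `${scheme}://${url.host}${pathname}/api/v4/websocket`;
}
```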

Best regards.

Error 401 sending message to mattermost

Here is the debug log in the docker container:
chatgpt-mattermost-bot-chatbot-1 | {"level":"ERROR","time":"2023-04-26T11:22:12.976Z","name":"bot","msg":"Request failed with status code 401","pid":1,"hostname":"64484852c59c","diff":158,"err":{"name":"Error","stack":"Error: Request failed with status code 401\n at createError (/app/node_modules/axios/lib/core/createError.js:16:15)\n at settle (/app/node_modules/axios/lib/core/settle.js:17:12)\n at IncomingMessage.handleStreamEnd (/app/node_modules/axios/lib/adapters/http.js:322:11)\n at IncomingMessage.emit (node:events:525:35)\n at endReadableNT (node:internal/streams/readable:1358:12)\n at processTicksAndRejections (node:internal/process/task_queues:83:21)","config":{"transitional":{"silentJSONParsing":true,"forcedJSONParsing":true,"clarifyTimeoutError":false},"transformRequest":[null],"transformResponse":[null],"timeout":0,"xsrfCookieName":"XSRF-TOKEN","xsrfHeaderName":"X-XSRF-TOKEN","maxContentLength":-1,"maxBodyLength":-1,"headers":{"Accept":"application/json, text/plain, */*","Content-Type":"application/json","User-Agent":"OpenAI/NodeJS/3.2.1","Authorization":"Bearer sk_HIDDEN","Content-Length":232},"method":"post","data":"{\"messages\":[{\"role\":\"system\",\"content\":\"You are a helpful assistant named @chatgpt who provides succinct answers in Markdown format.\"},{\"role\":\"user\",\"content\":\"@chatgpt what do you do?\"}],\"model\":\"gpt-3.5-turbo\",\"max_tokens\":2000}","url":"https://api.openai.com/v1/chat/completions"},"status":401}}

The only thing I didn't do was the cert stuff; I'm not sure what that was about, so could it be SSL-related?

[FeatureRequest] Channel Content processable with chatgpt

A friend of mine has built a similar bot for Slack.

One thing he can do with it is summarize the conversations of a channel within a specific time frame (see screenshot).

It would be nice if this bot could also do something like this. In my opinion, a UI isn't needed; a short command to chatgpt with the params should be enough.

It could also be used for different commands that have channel content as context.

[screenshot: example command]

Result:

[screenshot: summarized result]

yFiles for HTML 2.5 not working

Dear Team,
I've already deployed the chatgpt-mattermost-bot Docker container using yFiles for HTML, but the image is not shown correctly.

It shows like this: https://prnt.sc/9z9Nt8pY8jek

<GRAPH>{
  "nodes":[
    {"id":"Herbivore","label":"Herbivore","type":"animal"},
    {"id":"Plants","label":"Plants","type":"food"},
  ],
  "edges":[
    {"from":"Herbivore","to":"Plants","label":"eats","type":"consumes"}
  ],
  "types":[
    {"name":"animal","shape":"ellipse","color":"#fff","thickness":2,"size":30},
    {"name":"food","shape":"ellipse","color":"#fff","thickness":2,"size":25},
    {"name":"consumes","thickness":1,"color":"#000","label":"consumes"}
  ],
  "layout":"organic"
}</GRAPH>

In the console log

chatgpt-mattermost-bot-chatbot-1 | {"level":"ERROR","time":"2023-05-10T05:45:32.466Z","name":"bot","msg":"request to http://localhost:4242/json-to-svg failed, reason: connect ECONNREFUSED 127.0.0.1:4242","pid":1,"hostname":"ece245de25a2","diff":0,"err":{"msg":"request to http://localhost:4242/json-to-svg failed, reason: connect ECONNREFUSED 127.0.0.1:4242","name":"FetchError","stack":"FetchError: request to http://localhost:4242/json-to-svg failed, reason: connect ECONNREFUSED 127.0.0.1:4242\n at ClientRequest.<anonymous> (/app/node_modules/node-fetch/lib/index.js:1505:11)\n at ClientRequest.emit (node:events:513:28)\n at Socket.socketErrorListener (node:_http_client:494:9)\n at Socket.emit (node:events:513:28)\n at emitErrorNT (node:internal/streams/destroy:157:8)\n at emitErrorCloseNT (node:internal/streams/destroy:122:3)\n at processTicksAndRejections (node:internal/process/task_queues:83:21)","message":"request to http://localhost:4242/json-to-svg failed, reason: connect ECONNREFUSED 127.0.0.1:4242","type":"system","errno":"ECONNREFUSED","code":"ECONNREFUSED"}}

Is this a compatibility issue, or is the yFiles integration still a feature in development?

Can't add chatbot to channel(s)

This looks like a fun thing for my digital mentors.
I got it all running, and the chatbot is communicating in direct conversations with me. However, I don't seem to be able to add the chatbot to any of my channels. When I type its name in "Add People", nothing shows up.

[screenshot]

Has anyone seen this before and knows the solution?

I got another bot, which is showing as expected

azure open ai support

It would be great to be able to use the Azure OpenAI API, which is not 100% compatible, but almost.

bot stops replying

Had this up and running and just started receiving this error:

{\"role\":\"user\",\"content\":\"hey are you still there? \"}],\"model\":\"gpt-3.5-turbo\",\"max_tokens\":2000}","url":"https://api.openai.com/v1/chat/completions"},"status":400}}

Any idea what might be causing this?

Update for DALL-E-3

Hey, absolutely love this bot. It shuts down every few minutes and I have a script that restarts it automatically, but that's not what I'm here for. Could you add support for DALL-E 3? I tried updating the script (I use docker-compose), but the openai package also needs updating and there are several errors; sadly I'm not an expert.

model: 'dall-e-3',
quality: 'hd',
resolution: '1024x1024',

Basically stuff like this needs to be added to the request.

[Improvement] Map usernames before sending to OpenAI

For some features (summarization, for example) we transmit user names to OpenAI, but OpenAI enforces very strict limits on user names.

We should instead come up with unique dummy user names that adhere to those strict limits and send them to OpenAI. When the response comes back, we should search for the dummy names and replace them with the real names again. This would allow arbitrary user names and also add a little bit of privacy.
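A minimal sketch of that mapping (hypothetical helper names, not existing bot code): replace each real user name with a generated dummy before sending, and substitute the real names back into the response afterwards:

```javascript
// Replace real user names with OpenAI-safe dummy names before sending,
// remembering the mapping so the response can be translated back.
function anonymizeUsers(messages) {
  const mapping = new Map(); // real name -> dummy name
  const anonymized = messages.map((m) => {
    if (!mapping.has(m.user)) mapping.set(m.user, `user${mapping.size + 1}`);
    return { ...m, user: mapping.get(m.user) };
  });
  return { anonymized, mapping };
}

// Substitute the real names back into the model's response.
function restoreUsers(text, mapping) {
  for (const [real, dummy] of mapping) {
    // \b keeps "user1" from matching inside "user10"
    text = text.replace(new RegExp(`\\b${dummy}\\b`, 'g'), real);
  }
  return text;
}
```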

Bug - bot will sleep after a few hours and require a restart of the container to continue operating

Hello - fantastic plugin, super valuable for having ChatGPT built into Mattermost.

I am experiencing a small bug: when the bot is marked as "away" after the end of the day, once the workers are no longer using it, it fails to wake up again in the morning when users start messaging it, requiring a restart of the container in order to continue using the plugin.

Custom BOT name not working

First off, thank you for this implementation with MM. It works really well.

I have pulled the latest code, built it into a Docker image, and wanted to test out the custom name option. At the moment I am unable to get it working in any way. I have tried to use an existing MM bot as well as a new one, but nothing works, and there are no error logs at all. It just doesn't respond.

Any info on that front?

I tried to use the variable with " ", with ' ', with and without @, but nothing helps.

Any info on this would be great!

Thanks again!

Bot working on Mattermost, but logs show errors and container restarts regularly

The bot seems to be functioning on my Mattermost installation, but when I check the logs, I encounter the following messages:

chatgpt   | websocket connecting to wss://mattermost.zeppelinlg.com/api/v4/websocket
chatgpt   | {"level":"ERROR","time":"2023-03-30T16:15:00.682Z","name":"bot","msg":"undefined","pid":1,"hostname":"52ed9da46fb8","diff":0}
chatgpt   | {"level":"INFO","time":"2023-03-30T16:15:00.667Z","name":"bot-ws","msg":"websocket closed","pid":1,"hostname":"52ed9da46fb8","diff":0}

Additionally, the container seems to be restarting regularly. Any assistance with resolving these issues would be appreciated.

Scaling app

Hi there! What about scaling? I found an adjustable replica count in the Helm chart values; does that mean I can scale up to any number of pods without conflicts?

Problem with setting GPT-4 as language model

I set the environment variable "OPENAI_MODEL_NAME" to the value "gpt-4", but it still uses gpt-3.5.
Based on the OpenAI API documentation, the value "gpt-4" is correct.


Used config:
docker run -d --restart unless-stopped \
  -e MATTERMOST_URL=https://******** \
  -e MATTERMOST_TOKEN=********* \
  -e OPENAI_API_KEY=******** \
  -e OPENAI_MODEL_NAME=gpt-4 \
  --name testGPT \
  ghcr.io/yguy/chatgpt-mattermost-bot:latest

Per-channel and per-team access for data mining / spelunking and retrieval, etc.

I don't really think a general chatbot is as helpful as a personal bot would be, where we could grant GPT access to a single channel via OpenAI and/or ChatGPT and then let it go back through the entire history of that channel and extract data, context, and information for you.

Having it available for an entire team would also be helpful.

An example interaction might be:

Hey (Chat)GPT, could you please go through the history of ~some-channel and summarize the discussions that Roberto had with everyone about Client-X or that were associated with them?

or

Please review the entire history of Team-X and summarize all mentions of Project-Y from November 2019 to July 2022.

Bot is caught in a reboot loop on a mattermost server behind an nginx proxy, failing to auth as bot

The bot is caught in a reboot loop on a Mattermost server behind an nginx proxy, failing to log in to Mattermost. Here's the offending log entry causing the reboot loop:

/app/node_modules/@mattermost/client/lib/client4.js:1793
            throw new ClientError(this.getUrl(), {
                  ^

ClientError: Invalid or expired session, please login again.
    at Client4.<anonymous> (/app/node_modules/@mattermost/client/lib/client4.js:1793:19)
    at Generator.next (<anonymous>)
    at fulfilled (/app/node_modules/@mattermost/client/lib/client4.js:7:58)
    at processTicksAndRejections (node:internal/process/task_queues:96:5) {
  url: 'https://chat.arcadalabs.com/api/v4/users/me',
  server_error_id: 'api.context.session_expired.app_error',
  status_code: 401
}

and the relevant section of my nginx config:

server {
        server_name chat.arcadalabs.com;


   location ~ /api/v[0-9]+/(users/)?websocket$ {
       proxy_set_header Upgrade $http_upgrade;
       proxy_set_header Connection "upgrade";
       client_max_body_size 50M;
       proxy_set_header Host $http_host;
       proxy_set_header X-Real-IP $remote_addr;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_set_header X-Forwarded-Proto $scheme;
       proxy_set_header X-Frame-Options SAMEORIGIN;
       proxy_buffers 256 16k;
       proxy_buffer_size 16k;
       client_body_timeout 60;
       send_timeout 300;
       lingering_timeout 5;
       proxy_connect_timeout 90;
       proxy_send_timeout 300;
       proxy_read_timeout 90s;
       proxy_pass http://chat;
   }

   location / {
       client_max_body_size 50M;
       proxy_set_header Connection "";
       proxy_set_header Host $http_host;
       proxy_set_header X-Real-IP $remote_addr;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_set_header X-Forwarded-Proto $scheme;
       proxy_set_header X-Frame-Options SAMEORIGIN;
       proxy_buffers 256 16k;
       proxy_buffer_size 16k;
       proxy_read_timeout 600s;
       proxy_cache mattermost_cache;
       proxy_cache_revalidate on;
       proxy_cache_min_uses 2;
       proxy_cache_use_stale timeout;
       proxy_cache_lock on;
       proxy_http_version 1.1;
       proxy_pass http://chat;
   }
listen 443 ssl http2; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/chat.arcadalabs.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/chat.arcadalabs.com/privkey.pem; # managed by Certbot
#    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot


        http2_push_preload on; # Enable HTTP/2 Server Push

        # Enable TLS versions (TLSv1.3 is required upcoming HTTP/3 QUIC).
        ssl_protocols TLSv1.2 TLSv1.3;

        # Enable TLSv1.3's 0-RTT. Use $ssl_early_data when reverse proxying to
        # prevent replay attacks.
        #
        # @see: https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_early_data
        ssl_early_data on;

   ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384';
   ssl_prefer_server_ciphers on;
   ssl_session_cache shared:SSL:50m;
   # HSTS (ngx_http_headers_module is required) (15768000 seconds = six months)
   add_header Strict-Transport-Security max-age=15768000;
   # OCSP Stapling ---
   # fetch OCSP records from URL in ssl_certificate and cache them
   ssl_stapling on;
   ssl_stapling_verify on;

   add_header X-Early-Data $tls1_3_early_data;

}
server {
    if ($host = chat.arcadalabs.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot



        server_name chat.arcadalabs.com;
    listen 80;
    return 404; # managed by Certbot


}

# This block is useful for debugging TLS v1.3. Please feel free to remove this
# and use the `$ssl_early_data` variable exposed by NGINX directly should you
# wish to do so.
map $ssl_early_data $tls1_3_early_data {
  "~." $ssl_early_data;
  default "";
}

I also posted about this here: https://forum.mattermost.com/t/chatgpt-bot-failing-to-connect-docker-instance-caught-in-reboot-loop/15746

I super appreciate any help, thank you!

Querying the MATTERMOST_URL environment variable

I'm completely new to web development and programming, so I'd like to apologize for asking such a basic question.
I've been telling my team that I can build a great chat app with ChatGPT, and now I'm regretting it. I've been searching for two weeks and still haven't solved it.

I'm running Mattermost with Docker on my Synology NAS.
I followed this manual : https://mariushosting.com/how-to-install-mattermost-on-your-synology-nas/

Mattermost's site URL is behind a reverse proxy set up on the Synology NAS. It's in the form "https://chat.nickname.synology.me", and I've also created a websocket custom header.

And when I run the chatgpt-mattermost-bot container, I always get an error log about the websocket, and it won't run.
Testing with npm install -> npm run start tells me that there is a problem with MATTERMOST_URL.

For the value of MATTERMOST_URL, I put "https://caht.nickname.synology.me" from the /admin_console/environment/web_server path, but maybe I'm wrong.

I'm writing in a translator, so again, I apologize if the context is weird.

Attachments:

  • (error) container log.txt
  • (error) npm run start.txt
  • docker-compose.txt

Screenshots:

  • container manager - mattermost
  • reverse proxy rules - mattermost 1
  • reverse proxy rules - mattermost 2
  • mattermost - system console

Invalid URL error

Hello

Thanks for this work!
I'm trying to use it, but I get this error in the log:

chatgpt   | websocket connecting to wss://mattermost.**********.com/api/v4/websocket
chatgpt   | node:internal/url:565
chatgpt   |   throw new ERR_INVALID_URL(input);
chatgpt   |   ^
chatgpt   | 
chatgpt   | TypeError [ERR_INVALID_URL]: Invalid URL
chatgpt   |     at new NodeError (node:internal/errors:387:5)
chatgpt   |     at URL.onParseError (node:internal/url:565:9)
chatgpt   |     at new URL (node:internal/url:641:5)
chatgpt   |     at Object.<anonymous> (/app/src/process-graph-response.js:7:24)
chatgpt   |     at Module._compile (node:internal/modules/cjs/loader:1191:14)
chatgpt   |     at Object.Module._extensions..js (node:internal/modules/cjs/loader:1245:10)
chatgpt   |     at Module.load (node:internal/modules/cjs/loader:1069:32)
chatgpt   |     at Function.Module._load (node:internal/modules/cjs/loader:904:12)
chatgpt   |     at Module.require (node:internal/modules/cjs/loader:1093:19)
chatgpt   |     at require (node:internal/modules/cjs/helpers:108:18) {
chatgpt   |   input: '/json-to-svg',
chatgpt   |   code: 'ERR_INVALID_URL'
chatgpt   | }

thanks for pushing this

I have been looking to get a bot like this set up in Mattermost and got it up and running yesterday thanks to your code. There was no README at the time, so I had to learn by reverse engineering, as I am not a programmer. Came to check in today and see you've updated a lot, including Docker. It's very interesting to see this project get built out; I am learning a lot. Thank you!

How to implement for self-hosted user?

I see there is documentation for Docker users only. Is there a roadmap for Mattermost self-hosted (Ubuntu) users? Can anyone help with how to set it up?

Program stops after a few minutes

Starting the program with npm start (and the right env), it works for a few minutes. After a bit (or when I leave the conversation in Mattermost?) the program just ends like this:

{"level":"DEBUG","name":"mattermost","msg":{"event":"thread_updated","data":[obfuscated],"diff":25080}
{"level":"INFO","name":"bot-ws","msg":"websocket closed","diff":180288}
{"level":"ERROR","name":"bot","msg":"undefined","diff":0}

Using latest Node LTS on a Debian system.

Request failed with status code 429

I cannot get a response from ChatGPT, even with a paid ChatGPT Plus account:

{
  "level": "ERROR",
  "time": "2023-04-18T09:46:19.306Z",
  "name": "bot",
  "msg": "Request failed with status code 429",
  "pid": 1,
  "hostname": "2b5d342ab71e",
  "diff": 253,
  "err": {
    "name": "Error",
    "stack": "Error: Request failed with status code 429\n    at createError (/app/node_modules/axios/lib/core/createError.js:16:15)\n    at settle (/app/node_modules/axios/lib/core/settle.js:17:12)\n    at IncomingMessage.handleStreamEnd (/app/node_modules/axios/lib/adapters/http.js:322:11)\n    at IncomingMessage.emit (node:events:525:35)\n    at endReadableNT (node:internal/streams/readable:1358:12)\n    at processTicksAndRejections (node:internal/process/task_queues:83:21)",
    "config": {
      "transitional": {
        "silentJSONParsing": true,
        "forcedJSONParsing": true,
        "clarifyTimeoutError": false
      },
      "transformRequest": [
        null
      ],
      "transformResponse": [
        null
      ],
      "timeout": 0,
      "xsrfCookieName": "XSRF-TOKEN",
      "xsrfHeaderName": "X-XSRF-TOKEN",
      "maxContentLength": -1,
      "maxBodyLength": -1,
      "headers": {
        "Accept": "application/json, text/plain, */*",
        "Content-Type": "application/json",
        "User-Agent": "OpenAI/NodeJS/3.2.1",
        "Authorization": "Bearer **MY_TOKEN_API_KEY**",
        "Content-Length": 247
      },
      "method": "post",
      "data": "{\"messages\":[{\"role\":\"system\",\"content\":\"You are a helpful assistant named @chatgpt who provides succinct answers in Markdown format.\"},{\"role\":\"user\",\"content\":\"@chatgpt  now with OPENAI_MAX_TOKENS=100\"}],\"model\":\"gpt-3.5-turbo\",\"max_tokens\":100}",
      "url": "https://api.openai.com/v1/chat/completions"
    },
    "status": 429
  }
}

Mattermost stack with llama running on a separate local LAN server

I have a llama server running on another local server in the house, and my Mattermost stack runs on a Synology server. I was able to get the OpenAI bot into Mattermost, and it's working great. I would like to add another bot to the Mattermost stack that reaches out to the other local machine running the dalai server.

I noticed you have two branches, main and a llama branch. I am port forwarding correctly for port 3000.

chatgpt:
  image: ghcr.io/yguy/chatgpt-mattermost-bot:latest
  container_name: chatgpt
  environment:
    MATTERMOST_URL: 'https://***'
    MATTERMOST_TOKEN: '***'
    OPENAI_API_KEY: '***'
    MATTERMOST_BOTNAME: '@chatgpt'
    DEBUG_LEVEL: 'TRACE'
    NODE_ENV: 'production'
  restart: always

llamagpt:
  image: ghcr.io/yguy/chatgpt-mattermost-bot:latest
  container_name: llamagpt
  environment:
    MATTERMOST_URL: '***'
    MATTERMOST_TOKEN: '***'
    DALAI_SERVER_URL: 'http://ipofcomputerrunningdalaiserve:3000'
    MATTERMOST_BOTNAME: '@llamagpt'
    DEBUG_LEVEL: 'TRACE'
    NODE_ENV: 'production'
  restart: always

I am not getting a response, but it looks like it's trying.

Is there anything else I need to set? What am I doing wrong?

PS: I did ask about the diagram bot and am excited. Thank you for everything you do.

Critical Bug: Bot sometimes reads and replies to unrelated messages

This still needs some investigation, but it seems that the bot replies to arbitrary messages that are posted by the person who initially invited the bot into a channel, even outside of the thread that it had been invited to.

This is a severe problem, likely introduced with the refactorings in 2.0.0.

I would recommend sticking with the 1.x version until this has been resolved, unless you are OK with the bot sometimes replying (and thus sending your input to OpenAI).

Note that OpenAI will not use this data for training, but possibly sensitive chat data might still be sent to OpenAI. Check their data processing rules to find out if this is a problem for you. If so, revert to 1.x or disable the bot; removing it from a channel should also make it technically impossible for the bot to process and react to the messages in that channel.

Note that you can tell when this happens, because the bot always reacts when it reads messages and sends their contents to OpenAI. So if you do not see spurious replies, you are not affected by this issue.

retain message content when creating a thread

I am not sure what changed in the code, but initially I was able to create a thread on a message, mention @chatgpt, and the bot was able to pull information from the original message.

For example, I had an automation that forwarded emails to a Mattermost channel called ~emails. I would then hit 'reply' on a message that was sent to the ~emails channel and mention @chatgpt to draft a response. The bot was able to draft an email response based on the original message I was replying to.

Would it be possible to get this functionality back?

ETIMEDOUT when trying to Fetch

I think I have gotten everything right, but the docker logs just show the following error (URL and IP are edited :):

/app/node_modules/node-fetch/lib/index.js:1505
                        reject(new FetchError(`request to ${request.url} failed, reason: ${err.message}`, 'system', err));
                               ^
FetchError: request to https://example.com/api/v4/users/me failed, reason: connect ETIMEDOUT 123.456.789.000:443
    at ClientRequest.<anonymous> (/app/node_modules/node-fetch/lib/index.js:1505:11)
    at ClientRequest.emit (node:events:513:28)
    at TLSSocket.socketErrorListener (node:_http_client:494:9)
    at TLSSocket.emit (node:events:513:28)
    at emitErrorNT (node:internal/streams/destroy:157:8)
    at emitErrorCloseNT (node:internal/streams/destroy:122:3)
    at processTicksAndRejections (node:internal/process/task_queues:83:21) {
  type: 'system',
  errno: 'ETIMEDOUT',
  code: 'ETIMEDOUT'
}

I can get the bot to respond using curl, but there seems to be nothing going on the docker side of things.

429 Error

On the initial run of the bot, nothing is returned after questions are asked.

Running with TRACE shows the following:

{"level":"ERROR","name":"bot-ws","msg":"(node:27143) UnhandledPromiseRejectionWarning: Error: Request failed with status code 429\n

I'm unsure what is causing the 429 error, as this is the first time attempting to run/use this specific API token, so I would not expect any rate limiting to have been hit.

chatgpt-mattermost-bot direct message

Can the ChatGPT Mattermost bot be used in direct messages with other team members? When I use "@chatgpt" in a direct message with someone else, nothing happens.

I have added the bot to my team, and now I can directly message the bot, or add it to a channel and use "@chatgpt" to trigger it. However, it does not work when I mention the bot in a direct message with another person.

GPT-4 API

Hello!

Is there any way to set the new GPT-4 API instead of 3.5?

BR
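
The Options table in the README already covers this: the OPENAI_MODEL_NAME environment variable selects the model, defaulting to gpt-3.5-turbo. Assuming your OpenAI account has GPT-4 API access, a docker-compose sketch with placeholder values would look like:

```yaml
services:
  chatgpt:
    image: ghcr.io/yguy/chatgpt-mattermost-bot:latest
    environment:
      MATTERMOST_URL: 'https://mattermost.server'
      MATTERMOST_TOKEN: '...'
      OPENAI_API_KEY: 'sk-...'
      OPENAI_MODEL_NAME: 'gpt-4'   # defaults to gpt-3.5-turbo when unset
```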

Once ChatGPT has been tagged in a thread, it will automatically reply to every subsequent reply. This is very annoying: it should only reply when it is tagged in the latest reply of a thread.

See the attached screenshot: as soon as I mentioned @chatgpt once as a string (not even a tag, see the red cross), it started to pollute my conversation with my team. This is especially annoying as my team would like to discuss the response from ChatGPT in the same thread. For us, ChatGPT should only be triggered on a reply where it is explicitly mentioned.

Would there be a way to have different modes of triggering ChatGPT: sometimes in full interactive mode (as now) and sometimes in a more on-demand way? This is especially true for GPT-4 with its API limit of 24 requests per 3 hours; you don't want to trigger it on every reply. Still, when summoned on demand, ChatGPT should be able to use all the previous messages of the thread.

[Screenshot: Capture d’écran 2023-06-13 à 19 42 12]

Plugins don't work unless the 'plugins' variable is set

In the code, the lower-case plugins environment variable is checked; however, the Dockerfile uses upper-case PLUGINS as the default, so by default neither the image plugin nor the graph plugin is enabled.

if(!process.env.plugins || process.env.plugins.indexOf('graph-plugin') === -1)

ENV PLUGINS=image-plugin,graph-plugin

We should be using the upper-case name in the code, too.

The workaround for now is to use lower-case plugins as the environment variable name.
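
A possible fix, sketched here rather than taken from the repository's actual code, would be to accept the documented upper-case PLUGINS variable and fall back to the legacy lower-case one:

```javascript
// Sketch of a fix (hypothetical helper, not the repository's actual code):
// read the documented upper-case PLUGINS variable first, then fall back to
// the legacy lower-case name the code currently checks.
function getEnabledPlugins(env = process.env) {
  const raw = env.PLUGINS ?? env.plugins ?? '';
  return raw.split(',').map(s => s.trim()).filter(Boolean);
}

// With the Dockerfile default ENV PLUGINS=image-plugin,graph-plugin,
// getEnabledPlugins now sees both plugins enabled.
const enabled = getEnabledPlugins({ PLUGINS: 'image-plugin,graph-plugin' });
```

This keeps existing deployments that set the lower-case plugins variable working while making the Dockerfile default effective.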

Mattermost-bot

Dear yGuy,

Thank you for the great api.

I have recently installed the latest Mattermost on Docker with Portainer, and thus also your container ghcr.io/yguy/chatgpt-mattermost-bot:latest.

When I try to chat with the bot, I get the following reply:

chatgpt
BOT
Sorry, but I encountered an internal error when trying to process your message

My versions:

  • Mattermost Team Edition - Mattermost Version: 9.0.0
  • Community Edition 2.19.1
  • Paid Personal OpenAI GPT Chat subscription

Stack setup for ghcr.io/yguy/chatgpt-mattermost-bot:latest:

version: '3.7'

services:
  chatbot:
    image: ghcr.io/yguy/chatgpt-mattermost-bot:latest
    build: .
    restart: unless-stopped
    environment:
      MATTERMOST_URL: https://removed for privacy.co.za
      MATTERMOST_TOKEN: removed for privacy
      OPENAI_API_KEY: sk-removed for privacy

I am not sure how I can set the model to GPT-4, but before I digress, see the log response below.

Log :
On first run it has the following log:

2023-10-01T07:51:05.523Z INFO name=bot-ws msg=websocket connecting to wss://removed-for-privacy/api/v4/websocket pid=1 hostname=1af6ed135303 diff=0
2023-10-01T07:51:05.792Z LOG name=bot msg=Connected to Mattermost. pid=1 hostname=1af6ed135303 diff=0

Then as soon as you try to talk to it you get:

2023-10-01T07:56:49.002Z ERROR name=bot msg=Request failed with status code 429 pid=1 hostname=1af6ed135303 diff=343210 err={"name":"Error","stack":"Error: Request failed with status code 429\n at createError (/app/node_modules/axios/lib/core/createError.js:16:15)\n at settle (/app/node_modules/axios/lib/core/settle.js:17:12)\n at IncomingMessage.handleStreamEnd (/app/node_modules/axios/lib/adapters/http.js:322:11)\n at IncomingMessage.emit (node:events:525:35)\n at endReadableNT (node:internal/streams/readable:1358:12)\n at processTicksAndRejections(node:internal/process/task_queues:83:21)","config":{"transitional":{"silentJSONParsing":true,"forcedJSONParsing":true,"clarifyTimeoutError":false},"transformRequest":[null],"transformResponse":[null],"timeout":0,"xsrfCookieName":"XSRF-TOKEN","xsrfHeaderName":"X-XSRF-TOKEN","maxContentLength":-1,"maxBodyLength":-1,"headers":{"Accept":"application/json, text/plain, /","Content-Type":"application/json","User-Agent":"OpenAI/NodeJS/3.3.0","Authorization":"Bearer Removed-For-Privacy","Content-Length":1452},"method":"post","data":"{"model":"gpt-3.5-turbo","messages":[{"role":"system","content":"Your name is @chatgpt and you are a helpful assistant. Whenever users asks you for help you will provide them with succinct answers formatted using Markdown. 
You know the user's name as it is provided within the meta data of the messages."},{"role":"user","name":"Crim","content":"hi @chatgpt can you help me"}],"max_tokens":2000,"temperature":1,"functions":[{"name":"image-plugin","description":"Generates an image based on a given image description.","parameters":{"type":"object","properties":{"imageDescription":{"type":"string","description":"The description of the image provided by the user"}},"required":["imageDescription"]}},{"name":"exit-plugin","description":"Says goodbye to the user and wish him a good day.","parameters":{"type":"object","properties":{},"required":[]}},{"name":"message-collect-plugin","description":"Collects messages in the thread for a specific user or time","parameters":{"type":"object","properties":{"lookBackTime":{"type":"number","description":"The time in milliseconds to look back in time and collect messages which were posted within this timespan. Omit this parameter if the collected messages are independent from the time they were sent."},"messageCount":{"type":"number","description":"The number of messages which should be collected. Omit this parameter if you want to collect all messages."}},"required":[]}}],"function_call":"auto"}","url":"https://api.openai.com/v1/chat/completions"},"status":429}

Thank you and I hope you can give me some insight.

Kind Regards

Enable finetuning of Assistant Role

It would be awesome if we got the ability to customize this line:

const botInstructions = "Your name is " + name + " and you are a helpful assistant. Whenever users asks you for help you will " +
with our own text. The easiest solution would be an environment variable for this.
This would allow us to give the bot more identity. Our employees usually ask ChatGPT coding questions or need assistance writing in-code documentation. If we could personalize the instruction message, that would be very helpful.
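
A minimal sketch of what this could look like, assuming a hypothetical BOT_INSTRUCTION environment variable (not currently part of the bot's documented options) that overrides the hard-coded text:

```javascript
// Sketch, assuming a hypothetical BOT_INSTRUCTION environment variable:
// fall back to the current hard-coded instructions when it is unset.
const name = process.env.MATTERMOST_BOTNAME || '@chatgpt';
const defaultInstructions = "Your name is " + name +
  " and you are a helpful assistant. Whenever users asks you for help you will " +
  "provide them with succinct answers formatted using Markdown.";
const botInstructions = process.env.BOT_INSTRUCTION || defaultInstructions;
```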

Bot refuses to create images.

I have not explicitly disabled any of the plugins; the "Exit" plugin works.
Whenever I ask for an image to be generated, the bot refuses, stating that it can only respond via text.

Bot hallucinates "python" plugin

This is more of a known issue than a bug report, but we might consider implementing workarounds for this ChatGPT LLM problem in the future.

If you ask ChatGPT mathematical questions, it will hallucinate the existence of a "python" plugin and call it to compute the results. However, there is no "python" plugin available.
I guess such a plugin is available at OpenAI and the LLM was fine-tuned to make use of it. Since we don't have an implementation of it, the bot's plugin mechanism detects an error and will tell you:

"Sorry, but it seems there was an error when using the plugin python".

Keeping this around for other people to find if they run into this.
If you come across similar messages mentioning non-existent plugins, please report them here, along with the prompt that caused them.
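
The error message above suggests the bot's plugin dispatch already falls back gracefully when the model requests an unregistered function. A hypothetical guard (a sketch, not the repository's actual code) illustrating that behavior:

```javascript
// Hypothetical dispatch guard, not the repository's actual code: when the
// model requests a function that is not registered, return an explanatory
// message (like the one quoted above) instead of crashing.
const plugins = new Map([
  // stand-in implementation just for this sketch
  ['image-plugin', args => `generated image for: ${args.imageDescription}`],
]);

function callPlugin(name, args) {
  const plugin = plugins.get(name);
  if (!plugin) {
    // the hallucinated "python" plugin ends up here
    return `Sorry, but it seems there was an error when using the plugin ${name}`;
  }
  return plugin(args);
}
```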
