
azure-openai-proxy's Introduction

azure-openai-proxy


English|中文

Azure OpenAI Service Proxy: converts official OpenAI API requests into Azure OpenAI API requests. Supports all models, including GPT-4 and Embeddings.

It eliminates the differences between OpenAI and Azure OpenAI, acting as a bridge between them so that the OpenAI ecosystem can access Azure OpenAI at zero cost.

[Architecture diagram: aoai-proxy.jpg]
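
Concretely, the proxy accepts OpenAI-style requests and rewrites them into Azure OpenAI's deployment URL scheme. The sketch below is based on the curl example and the proxy logs shown later on this page; the api-key header detail is an assumption about how the proxy authenticates against Azure:

# What your client sends (OpenAI style):
#   POST http://<proxy-host>:8080/v1/chat/completions
#   Authorization: Bearer <Azure OpenAI Key>
#   {"model": "gpt-3.5-turbo", "messages": [...]}
#
# What the proxy forwards to Azure (per its own logs):
#   POST https://<resource>.openai.azure.com/openai/deployments/<deployment>/chat/completions?api-version=<AZURE_OPENAI_API_VER>
#   api-key: <Azure OpenAI Key>   # assumed: Azure expects the api-key header rather than a Bearer token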

Verified supported projects:

  • chatgpt-web
  • chatbox
  • langchain
  • ChatGPT-Next-Web

Get Started

Retrieve key and endpoint

To successfully make a call against Azure OpenAI, you'll need the following:

  • AZURE_OPENAI_ENDPOINT — This value can be found in the Keys & Endpoint section when examining your resource from the Azure portal. Alternatively, you can find the value in Azure OpenAI Studio > Playground > Code View. An example endpoint is https://docs-test-001.openai.azure.com/. (No default.)
  • AZURE_OPENAI_API_VER — The Azure OpenAI API version; see the Azure OpenAI documentation or Azure OpenAI Studio. (Default: 2024-02-01.)
  • AZURE_OPENAI_MODEL_MAPPER — This value corresponds to the custom name you chose for your deployment when you deployed a model. It can be found under Resource Management > Deployments in the Azure portal, or under Management > Deployments in Azure OpenAI Studio. (No default.)

AZURE_OPENAI_MODEL_MAPPER maps official OpenAI model names to the names of your Azure OpenAI deployments. Use commas to separate multiple mappings.

Format:

AZURE_OPENAI_MODEL_MAPPER: <OpenAI Model Name>=<Azure OpenAI deployment model name>

OpenAI Model Names: https://platform.openai.com/docs/models

Azure Deployment Names: Resource Management > Deployments

Example:

AZURE_OPENAI_MODEL_MAPPER: gpt-3.5-turbo=gpt-35-turbo
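
If you have several deployments (for example a chat deployment alongside an embeddings deployment), chain the mappings with commas. A sketch with hypothetical deployment names — replace them with your own:

AZURE_OPENAI_MODEL_MAPPER: gpt-3.5-turbo=gpt-35-turbo,gpt-4=gpt-4,text-embedding-ada-002=my-embedding-deployment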

[Screenshot: the overview UI for an Azure OpenAI resource in the Azure portal, with the endpoint and access-key locations circled in red.]

API Key: This value can be found in the Keys & Endpoint section when examining your resource from the Azure portal. You can use either KEY1 or KEY2.

Proxy

HTTP Proxy

Env:

AZURE_OPENAI_HTTP_PROXY=http://127.0.0.1:1087

Socks5 Proxy

Env:

AZURE_OPENAI_SOCKS_PROXY=socks5://127.0.0.1:1080
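
A minimal sketch of starting the container behind an HTTP proxy, reusing the docker run command from the next section (the proxy address is a placeholder):

docker run -d -p 8080:8080 --name=azure-openai-proxy \
  --env AZURE_OPENAI_ENDPOINT=your_azure_endpoint \
  --env AZURE_OPENAI_API_VER=your_azure_api_ver \
  --env AZURE_OPENAI_MODEL_MAPPER=your_azure_deploy_mapper \
  --env AZURE_OPENAI_HTTP_PROXY=http://192.168.1.10:1087 \
  stulzq/azure-openai-proxy:latest

Note that inside the container 127.0.0.1 refers to the container itself, so point the proxy variable at an address reachable from within the container.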

Use Docker

# configure via environment variables
docker run -d -p 8080:8080 --name=azure-openai-proxy \
  --env AZURE_OPENAI_ENDPOINT=your_azure_endpoint \
  --env AZURE_OPENAI_API_VER=your_azure_api_ver \
  --env AZURE_OPENAI_MODEL_MAPPER=your_azure_deploy_mapper \
  stulzq/azure-openai-proxy:latest

# configure via a config file
docker run -d -p 8080:8080 --name=azure-openai-proxy \
  -v /path/to/config.yaml:/app/config.yaml \
  stulzq/azure-openai-proxy:latest

Call API:

curl --location --request POST 'localhost:8080/v1/chat/completions' \
-H 'Authorization: Bearer <Azure OpenAI Key>' \
-H 'Content-Type: application/json' \
-d '{
    "max_tokens": 1000,
    "model": "gpt-3.5-turbo",
    "temperature": 0.8,
    "top_p": 1,
    "presence_penalty": 1,
    "messages": [
        {
            "role": "user",
            "content": "Hello"
        }
    ],
    "stream": true
}'
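
The proxy also covers embeddings models (see text-embedding-ada-002 in the config-file example further below). A sketch of the standard OpenAI embeddings call, assuming you have mapped text-embedding-ada-002 to one of your deployments:

curl --location --request POST 'localhost:8080/v1/embeddings' \
-H 'Authorization: Bearer <Azure OpenAI Key>' \
-H 'Content-Type: application/json' \
-d '{
    "model": "text-embedding-ada-002",
    "input": "Hello"
}'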

Use ChatGPT-Next-Web


docker-compose.yml

version: '3'

services:
  chatgpt-web:
    image: yidadaa/chatgpt-next-web
    ports:
      - 3000:3000
    environment:
      OPENAI_API_KEY: <Azure OpenAI API Key>
      BASE_URL: http://azure-openai:8080
      CODE: ""
      HIDE_USER_API_KEY: 1
      HIDE_BALANCE_QUERY: 1
    depends_on:
      - azure-openai
    links:
      - azure-openai
    networks:
      - chatgpt-ns

  azure-openai:
    image: stulzq/azure-openai-proxy
    ports:
      - 8080:8080
    environment:
      AZURE_OPENAI_ENDPOINT: <Azure OpenAI API Endpoint>
      AZURE_OPENAI_MODEL_MAPPER: <Azure OpenAI API Deployment Mapper>
      # AZURE_OPENAI_MODEL_MAPPER: gpt-4=gpt-4,gpt-3.5-turbo=gpt-35-turbo
      AZURE_OPENAI_API_VER: "2024-02-01"
    networks:
      - chatgpt-ns

networks:
  chatgpt-ns:
    driver: bridge

Use ChatGPT-Web

ChatGPT Web: https://github.com/Chanzhaoyu/chatgpt-web


Envs:

  • OPENAI_API_KEY Azure OpenAI API Key
  • AZURE_OPENAI_ENDPOINT Azure OpenAI API Endpoint
  • AZURE_OPENAI_MODEL_MAPPER Azure OpenAI API Deployment Name Mappings

docker-compose.yml:

version: '3'

services:
  chatgpt-web:
    image: chenzhaoyu94/chatgpt-web
    ports:
      - 3002:3002
    environment:
      OPENAI_API_KEY: <Azure OpenAI API Key>
      OPENAI_API_BASE_URL: http://azure-openai:8080
      # OPENAI_API_MODEL: gpt-4
      AUTH_SECRET_KEY: ""
      MAX_REQUEST_PER_HOUR: 1000
      TIMEOUT_MS: 60000
    depends_on:
      - azure-openai
    links:
      - azure-openai
    networks:
      - chatgpt-ns

  azure-openai:
    image: stulzq/azure-openai-proxy
    ports:
      - 8080:8080
    environment:
      AZURE_OPENAI_ENDPOINT: <Azure OpenAI API Endpoint>
      AZURE_OPENAI_MODEL_MAPPER: <Azure OpenAI API Deployment Mapper>
      AZURE_OPENAI_API_VER: "2024-02-01"
    networks:
      - chatgpt-ns

networks:
  chatgpt-ns:
    driver: bridge

Run:

docker compose up -d
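
Once the stack is up, the proxy can be verified with the same kind of request as in the earlier curl example (assuming the default port mapping above):

curl --location --request POST 'localhost:8080/v1/chat/completions' \
-H 'Authorization: Bearer <Azure OpenAI Key>' \
-H 'Content-Type: application/json' \
-d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello"}]}'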

Use Config File

The configuration file supports different endpoints and API keys for each model.

config.yaml

api_base: "/v1"
deployment_config:
  - deployment_name: "xxx"
    model_name: "text-davinci-003"
    endpoint: "https://xxx-east-us.openai.azure.com/"
    api_key: "11111111111"
    api_version: "2024-02-01"
  - deployment_name: "yyy"
    model_name: "gpt-3.5-turbo"
    endpoint: "https://yyy.openai.azure.com/"
    api_key: "11111111111"
    api_version: "2024-02-01"
  - deployment_name: "zzzz"
    model_name: "text-embedding-ada-002"
    endpoint: "https://zzzz.openai.azure.com/"
    api_key: "11111111111"
    api_version: "2024-02-01"

By default, the proxy reads <workdir>/config.yaml; you can point it at a different file with the -c parameter, e.g. -c config.yaml.
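
For example, a run with the config file mounted somewhere other than /app/config.yaml might look like the sketch below. This assumes the image's entrypoint passes extra arguments through to the proxy binary; if it does not, simply mount the file to /app/config.yaml as in the Docker section above and omit the flag:

docker run -d -p 8080:8080 --name=azure-openai-proxy \
  -v /path/to/config.yaml:/etc/aoai/config.yaml \
  stulzq/azure-openai-proxy:latest -c /etc/aoai/config.yaml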

docker-compose:

azure-openai:
    image: stulzq/azure-openai-proxy
    ports:
      - 8080:8080
    volumes:
      - /path/to/config.yaml:/app/config.yaml
    networks:
      - chatgpt-ns

azure-openai-proxy's People

Contributors

cubercsl, fqdeng, huhu415, john-theo, peterdavehello, soulteary, stulzq, warjiang


azure-openai-proxy's Issues

[OpenAI] Incorrect API key provided

Could someone take a look at what is wrong with my configuration? It reports an incorrect API key.
Azure model name: gpt-4-32k, model version: 0314.
The API key and endpoint are definitely correct.

version: '3'

services:
  chatgpt-web:
    image: chenzhaoyu94/chatgpt-web
    ports:
      - 3002:3002
    environment:
      OPENAI_API_KEY: b77000000000000000000003
      #OPENAI_API_BASE_URL: http://azure-openai:8080
      OPENAI_API_MODEL: gpt-4-32k
      AUTH_SECRET_KEY: ""
      MAX_REQUEST_PER_HOUR: 1000
      TIMEOUT_MS: 60000
    depends_on:
      - azure-openai
    links:
      - azure-openai
    networks:
      - chatgpt-ns

  azure-openai:
    image: stulzq/azure-openai-proxy
    ports:
      - 8080:8080
    environment:
      AZURE_OPENAI_ENDPOINT: https://my.openai.azure.com/
      AZURE_OPENAI_MODEL_MAPPER: gpt-4-32k-0314=gpt-4-32k
      AZURE_OPENAI_API_VER: "2023-03-14"
    networks:
      - chatgpt-ns

networks:
  chatgpt-ns:
    driver: bridge

API Call is not working

This may be a very basic question: after running Docker, I called the API directly with curl and got the result below. Did I miss a configuration step? Thanks!

{"error":{"code":"DeploymentNotFound", "message":"The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again."}}

Proxy returns 404 Not Found

The UI returns this:
ChatGPT error 404: {"error":{"code":"DeploymentNotFound", "message":"The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again."}}

Checking the proxy logs, the request to the /v1/chat/completions endpoint gets a 404 Not Found:

"code":"404","message": "Resource not found"

I deployed with Docker as instructed and tested the API with curl. The response was:
{"error":{"code":"404","message": "Resource not found"}}
Could someone take a look at what is wrong?
Docker logs (the API key has been redacted):

2023/06/16 16:35:34 AzureOpenAIAPIVer:  apikey
2023/06/16 16:35:34 AzureOpenAIEndpoint:  https://useastgpt.openai.azure.com/
2023/06/16 16:35:34 AzureOpenAIModelMapper:  map[gpt-3.5-turbo:gpt-35-turbo]
2023/06/16 16:35:34 Server listening at :8080
2023/06/16 16:42:46 proxying request [] /v1/chat/completions -> https://useastgpt.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions?api-version=apikey
[GIN] 2023/06/16 - 16:42:47 | 404 |    826.7726ms |      172.17.0.1 | POST     "/v1/chat/completions"
2023/06/16 16:45:36 Server Shutdown...
2023/06/16 16:45:36 Server exiting
2023/06/16 17:28:13 AzureOpenAIAPIVer:  apikey
2023/06/16 17:28:13 AzureOpenAIEndpoint:  https://useastgpt.openai.azure.com/
2023/06/16 17:28:13 AzureOpenAIModelMapper:  map[gpt-3.5-turbo:gpt-35-turbo]
2023/06/16 17:28:13 Server listening at :8080
2023/06/16 17:28:25 proxying request [] /v1/chat/completions -> https://useastgpt.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions?api-version=apikey
[GIN] 2023/06/16 - 17:28:26 | 404 |  856.023736ms |      172.17.0.1 | POST     "/v1/chat/completions"

English Version by Default?

Hello!

I have just deployed this docker image! Fantastic work!

I was wondering if I could set the English Language as default in the web frontend? Maybe some flag in the docker-compose file?

Thank you for reading,

Add retry logic for errors

Azure's API is quite unstable and often returns errors such as:
1)
{"error":{"code":"DeploymentNotFound", "message":"The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again."}}
or "invalid key",
but it works again after a while.
I suggest adding retries on the reverse-proxy side: if the response status is not 200, retry a few times.

panic

azure-openai-gpt4 | 2023/04/18 04:49:37 [Recovery] 2023/04/18 - 04:49:37 panic recovered:
azure-openai-gpt4 | net/http: abort Handler
azure-openai-gpt4 | net/http/httputil/reverseproxy.go:363 (0x30666f)
azure-openai-gpt4 | github.com/stulzq/azure-openai-proxy/azure/proxy.go:65 (0x31cffb)
azure-openai-gpt4 | github.com/gin-gonic/[email protected]/context.go:174 (0x30dc7b)
azure-openai-gpt4 | github.com/gin-gonic/[email protected]/recovery.go:102 (0x30dc5c)
azure-openai-gpt4 | github.com/gin-gonic/[email protected]/context.go:174 (0x30cf1b)
azure-openai-gpt4 | github.com/gin-gonic/[email protected]/logger.go:240 (0x30cef8)
azure-openai-gpt4 | github.com/gin-gonic/[email protected]/context.go:174 (0x30c01f)
azure-openai-gpt4 | github.com/gin-gonic/[email protected]/gin.go:620 (0x30bd08)
azure-openai-gpt4 | github.com/gin-gonic/[email protected]/gin.go:576 (0x30ba5b)
azure-openai-gpt4 | net/http/server.go:2947 (0x22246b)
azure-openai-gpt4 | net/http/server.go:1991 (0x21e9f3)
azure-openai-gpt4 | runtime/asm_arm64.s:1172 (0x75343)

gpt-35-turbo works, but text-davinci-003 does not. Help wanted.

I deployed two models: a gpt-35-turbo deployment named gpt35 and a text-davinci-003 deployment named davinci3.

In my YAML configuration I first tried:
AZURE_OPENAI_MODEL_MAPPER: gpt-3.5-turbo=gpt35
and it worked fine.

Then I changed that line to
AZURE_OPENAI_MODEL_MAPPER: text-davinci-003=davinci3
and got this error during a conversation:
ChatGPT error 404: {"error":{"code":"DeploymentNotFound", "message":"The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again."}}

How can I fix this? I checked the OpenAI docs and the name should be correct. Testing again a few hours later gave the same result.

Support multiple model mappings

AZURE_OPENAI_MODEL_MAPPER only seems to accept a single mapping, but I have also deployed an embedding model on Azure. Could multiple model mappings be supported?

Support GPT-4

Hi, I was granted trial access to the GPT-4 model on Azure, but after finishing the configuration I get an error. My docker-compose file is as follows:
version: '3'

services:
  chatgpt4-web:
    #image: chenzhaoyu94/chatgpt-web
    image: chenzhaoyu94/chatgpt-web:latest
    ports:
      - 3003:3002
    environment:
      OPENAI_API_KEY: xxxxxxx
      OPENAI_API_BASE_URL: http://10.18.72.88:8086
      AUTH_SECRET_KEY: ""
      MAX_REQUEST_PER_HOUR: 1000
      TIMEOUT_MS: 60000
    depends_on:
      - azure4-openai
    links:
      - azure4-openai
    networks:
      - chatgpt4-ns

  azure4-openai:
    image: ishadows/azure-openai-proxy
    ports:
      - 8086:8080
    environment:
      AZURE_OPENAI_ENDPOINT: https://xxxxxxx.openai.azure.com/
      AZURE_OPENAI_MODEL_MAPPER: gpt-4=zzr_test
      AZURE_OPENAI_API_VER: 2023-03-15-preview
    networks:
      - chatgpt4-ns

networks:
  chatgpt4-ns:
    driver: bridge

Error message: (screenshot omitted)

Proxy error:
[GIN] 2023/04/17 - 08:33:45 | 404 | 1.018µs | 172.27.0.1 | GET "/v1/dashboard/billing/usage?start_date=2023-04-01&end_date=2023-04-30"
2023/04/17 08:34:11 proxying request [gpt-3.5-turbo] /v1/chat/completions -> https://.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions?api-version=2023-03-15-preview
[GIN] 2023/04/17 - 08:34:14 | 404 | 3.201908308s | 172.27.0.1 | POST "/v1/chat/completions"

How can I solve this problem?

What is AZURE_OPENAI_MODEL_MAPPER?

The proxy doesn't work on my Azure server.

If I use an OpenAI key it works fine, but if I use an Azure key through the proxy it doesn't work.

I think the problem is in the YAML file.

What is AZURE_OPENAI_MODEL_MAPPER? How can I get it? Could you give me an example? Thanks a lot!

Connection refused: connect ECONNREFUSED 127.0.0.1:8080

After docker compose up -d, sending a message on the ChatGPT web page gets no reply, and the error is:
request to http://127.0.0.1:8080/v1/chat/completions failed, reason: connect ECONNREFUSED 127.0.0.1:8080
Could someone check whether my settings are wrong? Thanks a lot! The YAML is as follows:

services:
  chatgpt-web:
    image: chenzhaoyu94/chatgpt-web
    ports:
      - 3002:3002
    environment:
      OPENAI_API_KEY: xxxxxxxxxxxx
      OPENAI_API_BASE_URL: http://localhost:8080
      OPENAI_API_MODEL: "gpt-3.5-turbo"
      AUTH_SECRET_KEY: "Bearer"
      MAX_REQUEST_PER_HOUR: 1000
      TIMEOUT_MS: 60000
    depends_on:
      - azure-openai
    links:
      - azure-openai
    networks:
      - chatgpt-ns

  azure-openai:
    image: stulzq/azure-openai-proxy
    ports:
      - 8080:8080
    environment:
      AZURE_OPENAI_ENDPOINT: https://xxx.openai.azure.com
      AZURE_OPENAI_MODEL_MAPPER: gpt-3.5-turbo=gpt-35-turbo
      AZURE_OPENAI_API_VER: 2023-05-15
    networks:
      - chatgpt-ns

networks:
  chatgpt-ns:
    driver: bridge

Supportive, but there is one issue

I am also an Azure user. I previously implemented the Azure interface myself in Python for https://github.com/Chanzhaoyu/chatgpt-web, and I think the approach of this project is very good. But after reading the code, there seems to be one major issue that is not handled.
The biggest difference between the OpenAI API and Azure is that for the prompt, the official API takes a JSON array, while Azure takes a single string.
Official example: message=[{"role": "user", "content": prompt},{"role": "assistant", "content": resp}]
Azure example: prompt=<|im_start|>user\n prompt\n<|im_end|>\n<|im_start|>assistant\n resp\n<|im_end|>\n
In addition, Azure requires setting stop=["<|im_end|>"].
I hope this can be improved; if it works well, I will switch to this project too.

ChatGPT error 404: {"error":{"code":"404","message": "Resource not found"}}

Hello, I would like to ask for some help.

I basically followed your YAML configuration and only changed the following four parameters:

AZURE_OPENAI_ENDPOINT:
AZURE_OPENAI_MODEL_MAPPER:
AZURE_OPENAI_API_VER: 2023-03-15-preview
OPENAI_API_KEY:

Changed to:

environment:
  OPENAI_API_KEY: 500000dedefr98999900
  OPENAI_API_BASE_URL: (unchanged)
  AUTH_SECRET_KEY: ""
  MAX_REQUEST_PER_HOUR: 1000
  TIMEOUT_MS: 60000

environment:
  AZURE_OPENAI_ENDPOINT: (my endpoint URL)
  AZURE_OPENAI_MODEL_MAPPER: gpt-3.5-turbo-16k=gpt-35-turbo
  AZURE_OPENAI_API_VER: 2023-05-15

Result:
The web page runs, but when I type "hello", GPT returns the following error:
ChatGPT error 404: {"error":{"code":"404","message": "Resource not found"}}

Any advice would be appreciated.

How is the azure-openai hostname resolved?

Hi, I noticed that in the README's docker-compose YAML, OPENAI_API_BASE_URL is set to http://azure-openai:8080,
but neither repo seems to handle this host explicitly. How is this forwarding achieved? Is it that Docker Compose binds the service name "azure-openai" as a hostname on the network?

After docker-compose up, azure-openai-proxy works when tested directly, but chatgpt-web conversations return Invalid URL

When azure-openai-proxy is started on its own, it works fine. But when the two containers are started together with docker-compose, every conversation in the chatgpt-web page returns Invalid URL, and the server-side log shows TypeError [ERR_INVALID_URL]: Invalid URL. The URLs in the log look correct, though, and the proxy translation succeeds.
Log:
ubuntu@instance-20210522-1543:~$ sudo docker-compose up
/snap/docker/2746/lib/python3.6/site-packages/paramiko/transport.py:32: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography. The next release of cryptography (40.0) will be the last to support Python 3.6.
from cryptography.hazmat.backends import default_backend
Creating ubuntu_azure-openai_1 ... done
Creating ubuntu_chatgpt-web_1 ... done
Attaching to ubuntu_azure-openai_1, ubuntu_chatgpt-web_1
azure-openai_1 | 2023/04/17 09:36:26 AzureOpenAIAPIVer: 2023-03-15-preview
azure-openai_1 | 2023/04/17 09:36:26 AzureOpenAIEndpoint: https://xxxx.openai.azure.com/
azure-openai_1 | 2023/04/17 09:36:26 AzureOpenAIModelMapper: map[gpt-3.5-turbo:gpt-35-turbo0301-peizh]
azure-openai_1 | 2023/04/17 09:36:26 Server listening at :8080
chatgpt-web_1 |
chatgpt-web_1 | > [email protected] prod /app
chatgpt-web_1 | > node ./build/index.mjs
chatgpt-web_1 |
chatgpt-web_1 | Server is running on port 3002
azure-openai_1 | 2023/04/17 09:36:52 proxying request [] /v1/chat/completions -> https://xxxx.openai.azure.com/openai/deployments/gpt-35-turbo0301-xxx/chat/completions?api-version=2023-03-15-preview
azure-openai_1 | [GIN] 2023/04/17 - 09:36:56 | 200 | 3.782634822s | 58.213.197.42 | POST "/v1/chat/completions"
chatgpt-web_1 | sendMessage (31 tokens) {
chatgpt-web_1 | max_tokens: 1000,
chatgpt-web_1 | model: 'gpt-3.5-turbo',
chatgpt-web_1 | temperature: 0.8,
chatgpt-web_1 | top_p: 1,
chatgpt-web_1 | presence_penalty: 1,
chatgpt-web_1 | messages: [
chatgpt-web_1 | {
chatgpt-web_1 | role: 'system',
chatgpt-web_1 | content: "You are ChatGPT, a large language model trained by OpenAI. Follow the user's instructions carefully. Respond using markdown."
chatgpt-web_1 | },
chatgpt-web_1 | { role: 'user', content: 'Hi', name: undefined }
chatgpt-web_1 | ],
chatgpt-web_1 | stream: true
chatgpt-web_1 | }
chatgpt-web_1 | TypeError [ERR_INVALID_URL]: Invalid URL
chatgpt-web_1 | at new NodeError (node:internal/errors:399:5)
chatgpt-web_1 | at new URL (node:internal/url:560:13)
chatgpt-web_1 | at new Request (file:///app/node_modules/.pnpm/[email protected]/node_modules/node-fetch/src/request.js:55:16)
chatgpt-web_1 | at file:///app/node_modules/.pnpm/[email protected]/node_modules/node-fetch/src/index.js:51:19
chatgpt-web_1 | at new Promise ()
chatgpt-web_1 | at fetch (file:///app/node_modules/.pnpm/[email protected]/node_modules/node-fetch/src/index.js:49:9)
chatgpt-web_1 | at options.fetch (file:///app/build/index.mjs:181:14)
chatgpt-web_1 | at fetchSSE (file:///app/node_modules/.pnpm/[email protected]/node_modules/chatgpt/build/index.js:46:21)
chatgpt-web_1 | at file:///app/node_modules/.pnpm/[email protected]/node_modules/chatgpt/build/index.js:228:11
chatgpt-web_1 | at new Promise () {
chatgpt-web_1 | input: 'url:8080/v1/chat/completions',
chatgpt-web_1 | code: 'ERR_INVALID_URL'
chatgpt-web_1 | }

Can Azure be used for free long-term?

Some Azure services have a monthly free quota. Does the Azure OpenAI service have a monthly free quota, or is it only free for one year?

Cannot use GPT-4

When I try to use an Azure OpenAI GPT-4 deployment, I get an error.

docker-compose.yml is configured with:
AZURE_OPENAI_MODEL_MAPPER: gpt-4=gpt4

After startup the mapper contains two key-value pairs (the first is the default, the second is the one set in the YAML file; screenshot omitted).

Starting a conversation produces this error:
undefined
[ChatGPT error 404: {"error":{"code":"DeploymentNotFound", "message":"The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again."}}]

After several attempts I found that if AZURE_OPENAI_MODEL_MAPPER is set with a key other than "gpt-3.5-turbo", the resulting mapper gains an additional key-value pair instead of replacing the default first pair. The program uses the first pair's value by default, and since that value has no corresponding deployment name in my Azure resource, the "resource does not exist" error is raised.
If the model mapper is set to AZURE_OPENAI_MODEL_MAPPER: gpt-3.5-turbo=gpt4, there is only one key-value pair and conversations work, but the program output shows it is still using the gpt-3.5 model, not gpt-4.
