
cf-openai-azure-proxy's Introduction

cf-openai-azure-proxy

English | 中文

Most OpenAI clients do not support Azure OpenAI Service, yet signing up for Azure OpenAI Service and adding a card is very easy, and it even comes with free credits. This script uses a free Cloudflare Worker as a proxy so that clients built for OpenAI can use Azure OpenAI Service directly.

Supported models:

  • GPT-3
  • GPT-4
  • DALL-E-3

Adding further model variants is very easy; see the usage instructions below.

Project notes:

  • Can I use this without a server?
    • Yes. The script runs on a Cloudflare Worker: no server, no card binding, and 100,000 requests per day for free.
  • Can I use this without my own domain?
    • Yes, see #3.
  • Printer (typewriter) mode:
    • Azure OpenAI Service returns its reply in larger chunks.
    • When relaying to the client, this project splits them into individual messages and sends them one by one, producing a typewriter effect (see the sketch after this list).
  • The project can also be deployed with Docker (based on wrangler).
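
The printer-mode relay described above can be pictured roughly as follows. This is a simplified sketch, not the repository's exact code: it splits the buffered SSE body coming back from Azure on event boundaries and forwards one data: event at a time.

// Simplified sketch of "printer mode": split the upstream SSE body on
// event boundaries and forward the events to the client one by one.
function makePrinterStream() {
  let buffer = '';
  const encoder = new TextEncoder();
  const decoder = new TextDecoder();

  return new TransformStream({
    transform(chunk, controller) {
      buffer += decoder.decode(chunk, { stream: true });
      const events = buffer.split('\n\n');
      buffer = events.pop();              // keep the last, possibly incomplete event
      for (const event of events) {
        controller.enqueue(encoder.encode(event + '\n\n'));
      }
    },
    flush(controller) {
      if (buffer) controller.enqueue(encoder.encode(buffer));
    },
  });
}

// Usage inside a Worker: pipe Azure's streamed response through the transform.
// const upstream = await fetch(azureUrl, init);
// return new Response(upstream.body.pipeThrough(makePrinterStream()), upstream);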

Deployment

To proxy OpenAI requests to Azure OpenAI Service, deploy the code as follows:

  1. Sign up for and log in to a Cloudflare account
  2. Create a new Cloudflare Worker
  3. Copy cf-openai-azure-proxy.js and paste it into the Cloudflare Worker editor
  4. Set resourceName and the deployment mapper, either by editing the code or via environment variables
  5. Save and deploy the Cloudflare Worker
  6. Optionally bind a custom domain (see #3): on the Worker details page, go to Triggers -> Custom Domains and add a custom domain for this Worker

Usage

First obtain the resourceName and the deployment names by logging in to the Azure portal:

(screenshot)

There are two ways to configure them:

  • Edit the values directly in the code, e.g.:
// The name of your Azure OpenAI Resource.
const resourceName="codegpt"

// deployment model mapper
const mapper = {
    'gpt-3.5-turbo': 'gpt3',
    'gpt-4': 'gpt4',
    'dall-e-3': 'dalle3'
};
Additional mapping rules can be appended in the same format.
  • Or set them via the Cloudflare dashboard: go to Workers script > Settings > Add variable under Environment Variables (see the sketch below for one way the script can pick these up).

    (screenshot)
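
The exact way the script reads these variables is not shown here; as a rough illustration only, a Worker using service-worker syntax sees dashboard variables as globals, so a fallback pattern like the following could be used. The names RESOURCE_NAME, DEPLOY_NAME_GPT35 and DEPLOY_NAME_GPT4 are taken from the Docker example further down and may not match the script exactly.

// Illustrative sketch only: prefer dashboard environment variables,
// fall back to the hard-coded defaults when they are not defined.
const resourceName =
  typeof RESOURCE_NAME !== 'undefined' ? RESOURCE_NAME : 'codegpt';

const mapper = {
  'gpt-3.5-turbo': typeof DEPLOY_NAME_GPT35 !== 'undefined' ? DEPLOY_NAME_GPT35 : 'gpt3',
  'gpt-4':         typeof DEPLOY_NAME_GPT4  !== 'undefined' ? DEPLOY_NAME_GPT4  : 'gpt4',
};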

Clients

Take OpenCat as an example: in the custom API domain field, enter the domain bound in step 6:

(screenshot)

I have tested a number of clients; if you run into problems with another client, feel free to open an issue.
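
To check the proxy without any client, you can send an OpenAI-style chat completion request straight to the Worker. The sketch below is only an illustration: the domain is a placeholder, and it assumes the proxy reads your Azure OpenAI key from the standard Authorization header, just as OpenAI clients send it.

// Quick smoke test against the deployed proxy (domain and key are placeholders).
async function testProxy() {
  const resp = await fetch('https://your-proxy.example.workers.dev/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // The Azure OpenAI key goes where an OpenAI key normally would.
      'Authorization': 'Bearer YOUR_AZURE_OPENAI_KEY',
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',   // translated to your Azure deployment via the mapper
      messages: [{ role: 'user', content: 'Hello' }],
      stream: false,
    }),
  });
  console.log(await resp.json());
}

testProxy();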

cf-openai-azure-proxy's People

Contributors

caixiaopig, casyalex, coconut49, cxjava, haibbo, hbsgithub, invisprints, ruochenlyu


cf-openai-azure-proxy's Issues

After configuring OpenCat, it returns 404

I bound a domain, and it validates successfully.

What could be the problem? Also, there doesn't seem to be anywhere to enter the Azure OpenAI key?

Request: add DALL-E-3

Please add DALL-E-3. DALL-E and GPT-3/GPT-4 are all developed by OpenAI, but they are a bit different.

How to disable printer mode

Thanks to the author for providing printer mode.
One small request: how can printer mode be turned off?
If it is turned off, does the reply come out in blocks, like in the Azure playground?
Thanks.

Logs

Is it possible to see logs of API requests and responses on Cloudflare?

docker run command and HTTPS access

docker run -d -p 8787:8787 -t \
--name cf-azure-openai-proxy \
--env RESOURCE_NAME=codegpt \
--env DEPLOY_NAME_GPT35=gpt3 \
--env DEPLOY_NAME_GPT4=gpt4 \
haibbo/cf-openai-azure-proxy:latest
It seems you have to pass --name to set the container name; without it the argument is treated as the image name, and that image doesn't exist.

Running it locally seems to work as well.

How do I enable HTTPS access? I'd like to deploy it on a server (otherwise the server just sits idle).

404 NOT FOUND

If I deploy it following your instructions, all I get is:
(screenshot)

A beginner question

After creating the Cloudflare Worker and setting the variables as required, I got a URL like https://xxxxx.workers.dev/. What do I do next? I was following the issue about using it with the Zotero GPT plugin. After deploying successfully, and never having studied development, I got curious: if I wanted to call this endpoint from code, how should I write it? I tried setting

      openai.api_key = ''  # set to the non-sk key provided by Azure

but I found this doesn't work, especially this part:

engine="gpt-3.5-turbo",
# model='gpt-3.5-turbo',
prompt='hello',
max_tokens=1024,
n=1,
stop=None,
temperature=0.5,
)

How should the engine parameter of this call be chosen? When I used the Azure API directly before, I passed my own deployment name.
I'd like to know how to solve this; hoping for a reply.

Garbled characters in the output

With the latest code deployed, the output occasionally contains garbled characters. I tried several clients and they all show the same behavior; switching to the standard OpenAI API displays correctly.

ChatGPT Next Web client (the upper half uses the Azure endpoint converted by this project, the lower half uses the OpenAI endpoint):
(screenshot)

Another client, AMA (via the converted Azure endpoint):
(screenshot)

404 error

The request returns 404 Not Found, with these response headers:
alt-svc:h3=":443"; ma=86400
cf-ray:817a58c532f34a0c-TPE
connection:keep-alive
content-length:13
content-type:text/plain;charset=UTF-8
date:Tue, 17 Oct 2023 17:43:57 GMT
nel:{"success_fraction":0,"report_to":"cf-nel","max_age":604800}
report-to:{"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v3?s=Z0g9wpUZoc9NhhHzc22LsiK17llTRFdKhJ5AzEMb0W2yNs4EW7PPuUgQ4FooMXcvIRGRDEozWhHyFyGQez5pQ0i8c6GoI81dAz2gn%2FS9lvJGbz7%2F9uOvAcpFAx6jbMEC7cnPFSGEfq0zIhJhU%2BzwgYC2bmE%2BZwAIrvMXzRCzMeY%2FBd5cNbpVNz3ttg%3D%3D"}],"group":"cf-nel","max_age":604800}
server:cloudflare
vary:Accept-Encoding
Response body: 404 Not Found

Cannot be used in `ChatGPT Next Web`

I deployed the worker on Cloudflare and bound a domain. I can use the domain in OpenCat, but when I set the BASE_URL in ChatGPT Next Web I get the {"error":{"code":"404","message": "Resource not found"}} message in the chat.

I am sure I set the correct OPENAI_API_KEY.

Returns 404

Hello, sorry, I don't have a technical background. I spent half a day investigating with ChatGPT, and it seems the problem is the path, which is why everything returns 404.
But I still can't find what I did wrong; please advise.
(screenshot)

in CloudFlare preview shows error 404

As the title says. I'm not a programmer and don't know much about front-end work; please help.

Problem: error 404

(screenshot)

My steps:

  1. In Cloudflare --> Create Worker --> create the "Hello World" script (default sample) --> Deploy
  2. Copied cf-openai-azure-proxy.js into the Cloudflare Worker editor: I pasted the whole file into the worker.js above, overwriting the hello-world sample code.
  3. Modified the variables:
const resourceName="chatgpt" //RESOURCE_NAME

const mapper = {
    'gpt-3.5-turbo': "aBot" // DEPLOY_NAME_GPT35,
    // 'gpt-4': DEPLOY_NAME_GPT4
};

The resourceName seems correct --> the endpoint is https://chatgpt.openai.azure.com/

(screenshot)

I don't know where this went wrong. Please take a look, thanks!

Printer mode stops working with ChatGPT-Next-Web v2.0+

In the latest v2.0+ releases of Yidadaa/ChatGPT-Next-Web, using the API converted by this project, printer mode stops working: output appears block by block, and after the content has finished, "Typing..." keeps showing for a short while before it ends. Switching to the official OpenAI API key and URL for testing, the output is normal.

(screenshot)

Stream still occasionally returns incomplete content

[Base Url] https://worker-azure-openai-proxy.19511344.workers.dev
content data: {"id":"","object":"","created":0,"model":"","prompt_filter_results":[{"prompt_index":0,"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}}}],"choices":[],"usage
eventData {"id":"","object":"","created":0,"model":"","prompt_filter_results":[{"prompt_index":0,"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}}}],"choices":[],"usage [
  'data',
  '{"id":"","object":"","created":0,"model":"","prompt_filter_results":[{"prompt_index":0,"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}}}],"choices":[],"usage'
]
- error app/api/openai/[...path]/route.ts (113:36) @ parse
- error unhandledRejection: Error [SyntaxError]: Unexpected end of JSON input
    at JSON.parse (<anonymous>)
    at eval (webpack-internal:///(sc_server)/./app/api/openai/[...path]/route.ts:100:51)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  digest: undefined

The content after `usage [` was not returned in a single chunk.
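
This is the usual failure mode when an SSE data: line is split across network chunks: the consumer has to buffer until a complete line has arrived before calling JSON.parse. A rough consumer-side sketch of that buffering (illustration only, not code from either project):

// Accumulate streamed text and only JSON.parse complete `data:` lines.
let buffer = '';

function onChunk(text, handleJson) {
  buffer += text;
  const lines = buffer.split('\n');
  buffer = lines.pop();                    // the last piece may be incomplete; keep it
  for (const line of lines) {
    if (!line.startsWith('data:')) continue;
    const payload = line.slice('data:'.length).trim();
    if (payload === '' || payload === '[DONE]') continue;
    handleJson(JSON.parse(payload));       // safe: this line arrived in full
  }
}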

Latency issue

First of all, thanks to the author for open-sourcing this project.
I'm currently using the free Cloudflare Worker tier (100k requests per day) + OpenCat + a proxy. After each question it takes roughly 3-5 seconds before the answer comes back. Would a paid Worker plan reduce this wait time? Thanks.

Cannot use the 0301 version

In Chatbox, switching to the 0301 model fails to get a response; only the default gpt-3.5-turbo works. OpenCat behaves the same way.

Cloudflare challenge when calling this proxy endpoint

When using this proxy endpoint from ChatGPT-Next-Web, the response is a Cloudflare challenge:
(screenshot)

The security log on the Cloudflare side:

{
  "action": "challenge",
  "clientASNDescription": "AMAZON-02",
  "clientAsn": "16509",
  "clientCountryName": "JP",
  "clientIP": "2a06:98c0:3600::103",
  "clientRequestHTTPHost": "此worker地址",
  "clientRequestHTTPMethodName": "POST",
  "clientRequestHTTPProtocol": "HTTP/1.1",
  "clientRequestPath": "/v1/chat/completions",
  "clientRequestQuery": "?path=v1%2Fchat%2Fcompletions",
  "datetime": "2023-12-20T01:16:06Z",
  "rayName": "838409bd9317835a",
  "ruleId": "country",
  "rulesetId": "",
  "source": "country",
  "userAgent": "Vercel Edge Functions",
  "matchIndex": 0,
  "metadata": [],
  "sampleInterval": 1
}

Is there any way to bypass the Cloudflare challenge?

Convert User-Set Variables to Environment Variables for Cloudflare Workers

Currently, there are some variables for user settings in cf-openai-azure-proxy.js. It would be more convenient to change these variables to environment variables so that users can directly fill in the information in the Cloudflare workers settings. This will make it easier to update the code by simply replacing the entire code instead of having to fill out the variables every time.

const resourceName="your-resource-name"
// The deployment name you chose when you deployed the model.
const deployName="deployment-name"
const apiVersion="2023-03-15-preview"

I noticed that there is an example of using cloudflare workers' environment variables in this repo, which can be referred to. Thank you.

Which values can apiVersion take?

Can apiVersion only be "2023-08-01-preview"? The newest GPT-4 version on Azure is 1106-Preview, and when I set it to "2023-11-06-preview" I get an error.

[Feature] Any plan to support access tokens?

There are many excellent proxies in the community, such as pandora (fakeopen), that connect to ChatGPT (the web version, not the API) using an access token. Do you plan to support connecting to ChatGPT via 'Authorization': 'Bearer {your access-token}'?

Error after creating the worker on Cloudflare (Code: 10021)

Both of the following parameters have been replaced:
Your Azure OpenAI Resource Name
Your GPT-3.5 model deployment name

After saving and configuring, it shows:
Uncaught ReferenceError: *** is not defined at worker.js:2:20 (Code: 10021)

I pasted it into GPT-4, which rewrote a new version, but in the end it still doesn't run.

Error for new version

After trying the new mapping version, I got this error:
[OpenAI] 服务器拒绝访问,请稍后再试 | Server refused to access, please try again later
I only configured the GPT-4 model, and it doesn't seem to work as well as the previous version did in my case.

Build error

Sending build context to Docker daemon 10.62MB
Step 1/9 : FROM golang:1.19 AS builder
 ---> aa2a9b347a08
Step 2/9 : COPY . /builder
 ---> Using cache
 ---> ea9658ac6662
Step 3/9 : WORKDIR /builder
 ---> Using cache
 ---> d4089a68a8c2
Step 4/9 : RUN make build
 ---> Running in 04d72eda2db1
go: downloading github.com/pkg/errors v0.9.1
go: downloading github.com/gin-gonic/gin v1.9.0
go: downloading github.com/bytedance/sonic v1.8.6
go: downloading github.com/mattn/go-isatty v0.0.18
go: downloading golang.org/x/net v0.8.0
go: downloading github.com/gin-contrib/sse v0.1.0
go: downloading github.com/go-playground/validator/v10 v10.12.0
go: downloading github.com/pelletier/go-toml/v2 v2.0.7
go: downloading github.com/ugorji/go/codec v1.2.11
go: downloading google.golang.org/protobuf v1.30.0
go: downloading gopkg.in/yaml.v3 v3.0.1
go: downloading golang.org/x/sys v0.6.0
go: downloading github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311
go: downloading github.com/twitchyliquid64/golang-asm v0.15.1
go: downloading golang.org/x/arch v0.3.0
go: downloading github.com/go-playground/universal-translator v0.18.1
go: downloading github.com/leodido/go-urn v1.2.2
go: downloading golang.org/x/crypto v0.7.0
go: downloading golang.org/x/text v0.8.0
go: downloading github.com/klauspost/cpuid/v2 v2.2.4
go: downloading github.com/go-playground/locales v0.14.1
error obtaining VCS status: exit status 128
Use -buildvcs=false to disable VCS stamping.
make: *** [Makefile:5: build] Error 1
The command '/bin/sh -c make build' returned a non-zero code: 2

Please add GPT-4 support

Please add support for GPT-4. Some users have already been approved for GPT-4 access, but the script doesn't seem to support it yet. Hoping the author can update it, thank you.

Help: nothing works. OpenCat shows "Unexpected http code 403"

I've tried all sorts of things and nothing works. I tried both the first method and the second method:
// The name of your Azure OpenAI Resource.
const resourceName='RESOURCE_NAME'

// The deployment name you chose when you deployed the model.
const mapper = {
'gpt-3.5-turbo': 'DEPLOY_NAME_GPT35',
//'gpt-4': DEPLOY_NAME_GPT4
};

const apiVersion="2023-05-15"

Azure model name: gpt-35-turbo
Azure model version: 0613

My Azure settings are as above. Could it be a version problem?

[New feat request] Add PaLM API

I recently obtained access to the PaLM API for testing, but I found that there are hardly any client applications or products available for PaLM in the market. Therefore, I would like to know if it's possible to convert the PaLM API into OpenAI's format to integrate it with various third-party clients. I have already implemented the corresponding functionality for non-printer mode, but I have been struggling with printer mode. Can you help me resolve this issue? If you need, I can provide the API for you to debug.
Here is my code, mostly generated by GPT4 as I'm not familiar with JavaScript. Currently, it works fine in non-printer mode.

// The deployment name you chose when you deployed the model.
const deployName = 'chat-bison-001';

addEventListener("fetch", (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  if (request.method === 'OPTIONS') {
    return handleOPTIONS(request)
  }

  const url = new URL(request.url);
  if (url.pathname === '/v1/chat/completions') {
    var path = "generateMessage"
  } else if (url.pathname === '/v1/completions') {
    var path = "generateText"
  } else {
    return new Response('404 Not Found', { status: 404 })
  }

  let body;
  if (request.method === 'POST') {
    body = await request.json();
  }

  const authKey = request.headers.get('Authorization');
  if (!authKey) {
    return new Response("Not allowed", {
      status: 403
    });
  }

  // Remove 'Bearer ' from the start of authKey
  const apiKey = authKey.replace('Bearer ', '');

  const fetchAPI = `https://generativelanguage.googleapis.com/v1beta2/models/${deployName}:${path}?key=${apiKey}`


  // Transform request body from OpenAI to PaLM format
  const transformedBody = {
    prompt: {
      messages: body?.messages?.map(msg => ({
        author: msg.role === 'user' ? '0' : '1',
        content: msg.content,
      })),
    },
  };

  const payload = {
    method: request.method,
    headers: {
      "Content-Type": "application/json",
    },
    body: JSON.stringify(transformedBody),
  };

  const response = await fetch(fetchAPI, payload);
  const palmData = await response.json();

  // Transform response from PaLM to OpenAI format
  const transformedResponse = transformResponse(palmData);

  if (body?.stream != true){
      return new Response(JSON.stringify(transformedResponse), {
        headers: { 'Content-Type': 'application/json' },
      });
    } else {
      // add stream output
    }
}

// Function to transform the response
function transformResponse(palmData) {
  return {
    id: 'chatcmpl-' + Math.random().toString(36).substring(2), // Generate a random id
    object: 'chat.completion',
    created: Math.floor(Date.now() / 1000), // Current Unix timestamp
    model: 'gpt-3.5-turbo', // Static model name
    usage: {
      prompt_tokens: palmData.messages.length, // This is a placeholder. Replace with actual token count if available
      completion_tokens: palmData.candidates.length, // This is a placeholder. Replace with actual token count if available
      total_tokens: palmData.messages.length + palmData.candidates.length, // This is a placeholder. Replace with actual token count if available
    },
    choices: palmData.candidates.map((candidate, index) => ({
      message: {
        role: 'assistant',
        content: candidate.content,
      },
      finish_reason: 'stop', // Static finish reason
      index: index,
    })),
  };
}

async function handleOPTIONS(request) {
    return new Response(null, {
      headers: {
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Methods': '*',
        'Access-Control-Allow-Headers': '*'
      }
    })
}
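
One way to fill in the missing stream branch above is to fake a stream: since the PaLM response is already complete at that point, it can be re-emitted as OpenAI-style SSE chunks followed by data: [DONE]. A rough sketch of that idea (an assumption, not tested against real clients):

// Rough sketch for the missing stream branch: wrap the already-complete
// answer in OpenAI-style SSE chunks so "printer mode" clients still work.
function toFakeStream(transformedResponse) {
  const encoder = new TextEncoder();
  const content = transformedResponse.choices[0]?.message?.content ?? '';
  const chunk = {
    id: transformedResponse.id,
    object: 'chat.completion.chunk',
    created: transformedResponse.created,
    model: transformedResponse.model,
    choices: [{ delta: { role: 'assistant', content }, index: 0, finish_reason: 'stop' }],
  };
  return new ReadableStream({
    start(controller) {
      controller.enqueue(encoder.encode(`data: ${JSON.stringify(chunk)}\n\n`));
      controller.enqueue(encoder.encode('data: [DONE]\n\n'));
      controller.close();
    },
  });
}

// The `else` branch in handleRequest could then become:
// return new Response(toFakeStream(transformedResponse), {
//   headers: { 'Content-Type': 'text/event-stream' },
// });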
