Ollama as your translator, with a DeepLX-compatible API.
You need to run Ollama and download the model you want to use:

```shell
ollama serve
ollama pull llama3.1
```
You can run ARPK directly without installation:

```shell
ARPK_MODEL="llama3.1" npx arpk
```
TODO
`/translate`, `/api/v1/translate` and `/api/v2/translate` are all routed to the same translate endpoint.
```js
await fetch('http://127.0.0.1:1188/translate', {
  body: JSON.stringify({
    source_lang: 'JA',
    target_lang: 'ZH',
    text: '้จใฎๅญฃ็ฏใ้ใ ๆพใฟๆธกใ็ฉบใ ็บใใฆ็ฌใๆณใ'
  }),
  headers: { 'Content-Type': 'application/json' },
  method: 'POST'
}).then(res => res.json())
// {
//   alternates: [],
//   code: 200,
//   data: '้จๅญฃ่ฟๅ๏ผๆดๆ็ๅคฉ็ฉบไธๆ็ฌ่ช้ๆใ',
//   id: 1519129853500,
//   method: 'ARPK',
//   source_lang: 'JA',
//   target_lang: 'ZH'
// }
```
Currently only Bearer auth is supported when using `ARPK_TOKEN`, not URL params.
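When `ARPK_TOKEN` is set on the server, the client sends the same token in an `Authorization: Bearer` header. A minimal sketch of building such a request, where `buildTranslateRequest` is an illustrative helper (not part of ARPK) and the token value is assumed to match the server's `ARPK_TOKEN`:

```javascript
// Build the fetch options for a DeepLX-style translate request.
// When `token` is given, attach it as a Bearer auth header;
// otherwise send an unauthenticated request.
const buildTranslateRequest = (text, sourceLang, targetLang, token) => ({
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    // Spread in the Authorization header only when a token is provided.
    ...(token ? { Authorization: `Bearer ${token}` } : {})
  },
  body: JSON.stringify({
    source_lang: sourceLang,
    target_lang: targetLang,
    text
  })
})

// Usage:
// await fetch('http://127.0.0.1:1188/translate',
//   buildTranslateRequest('hello', 'EN', 'JA', process.env.ARPK_TOKEN))
```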
| Environment | Default | Description |
| --- | --- | --- |
| `ARPK_PORT` | `1188` | The port the server will listen on |
| `ARPK_MODEL` | `llama3.1` | The Ollama model used by ARPK |
| `ARPK_TOKEN` | `null` | Access token to protect your API |
| `ARPK_OLLAMA_HOST` | `http://127.0.0.1:11434` | The Ollama host address |
| `ARPK_SYSTEM_PROMPT` | https://github.com/moeru-ai/arpk/blob/main/src/lib/prompts.ts | System prompt |
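Each variable falls back to its default when unset. A minimal sketch of how such env-based resolution typically works, using the names and defaults from the table above (`resolveConfig` is illustrative; ARPK's actual config loading may differ):

```javascript
// Resolve configuration from the environment, falling back to the
// documented defaults when a variable is unset.
const resolveConfig = (env = process.env) => ({
  port: Number(env.ARPK_PORT ?? 1188),
  model: env.ARPK_MODEL ?? 'llama3.1',
  token: env.ARPK_TOKEN ?? null,
  ollamaHost: env.ARPK_OLLAMA_HOST ?? 'http://127.0.0.1:11434'
})

// Usage: ARPK_PORT=8080 overrides only the port;
// everything else keeps its default.
```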