Comments (6)
I have added a feature to the program for customizing the OpenAI API, which addresses the issue raised by the user above. I hope it can be merged into the main branch. The link to the Pull Request is #12.
from obsidian-bmo-chatbot.
The `data.json` file only stores the BMO Chatbot settings.

Try changing the `REST API URL` in the BMO Chatbot settings to your third-party API URL endpoint, since it should be consistent with the OpenAI API. When you modify the `REST API URL`, it will be inserted as `[YOUR URL ENDPOINT] + '/v1/chat/completions'`.
Let me know if that works!
Thank you for your response. I'm still not getting the expected result. Here's what I did:

- I modified the `restAPIUrl` in `data.json` to my `MY URL ENDPOINT` (the actual value is `api.openai-new.com`, and I even tried various formats like `http://api.openai-new.com`, `http://api.openai-new.com/v1/chat/completions`, and `api.openai-new.com/v1/chat/completions`). The `data.json` also contains the `apikey` that I set up in the obsidian-bmo-chatbot settings page. However, I'm still getting an error message: `"content": "<p>Incorrect API key provided: new-91859***************************************6d09. You can find your API key at <a href=\"https://platform.openai.com/account/api-keys\">https://platform.openai.com/account/api-keys</a>.</p>\n"`
- Interestingly, I can successfully get a response from OpenAI when I set the `MY URL ENDPOINT` and `apikey` in a Python script. I've attached the relevant code snippet below for reference.

Any guidance would be appreciated. Thank you.
```python
import os
import time

import openai
from transformers import AutoTokenizer

# === L1_1_Configuration ===
openai.api_base = "https://api.openai-new.com/v1"
openai.api_key = "XXXXX"
prompt = "XXXXXXX "
max_token = 3000
# other part

# === L1_3_Main Logic ===
async def chat_gpt(content):
    messages = [{"role": "user", "content": content}]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    return response.choices[0]['message']['content']

async def main(file_paths):
    all_output = ""
    results = []
    current_time = time.strftime("%Y%m%d_%H%M", time.localtime())
    tokenizer = AutoTokenizer.from_pretrained("gpt2")  # Only method for calculating tokens is using GPT-2.
    os.environ["TOKENIZERS_PARALLELISM"] = "false"
    for file_path in file_paths:
        print(f"Debug:{file_path}")
        with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
            text = f.read()
        tokens = tokenizer.tokenize(text)
        if len(tokens) <= max_token:
            selected_tokens = tokens
        else:
            num_chunks = -(-len(tokens) // max_token)
            chunks = [tokens[i:i + max_token] for i in range(0, len(tokens), max_token)]
            selected_tokens = chunks[0]
        content = tokenizer.convert_tokens_to_string(selected_tokens)
        content = prompt + content
        api_response = await chat_gpt(content)
        results.append((file_path, api_response))
    return results
```
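For reference, one way to narrow down whether the problem is in the plugin or in the endpoint is to reconstruct the exact request the plugin would send and replay it outside Obsidian (e.g. with `curl` or `requests`). A minimal sketch, assuming an OpenAI-compatible endpoint; the helper name here is made up:

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str, content: str) -> dict:
    """Assemble the same POST the plugin would issue, so it can be
    replayed manually to see whether the third-party endpoint accepts
    the key outside of Obsidian. (Illustrative sketch, not plugin code.)"""
    return {
        "url": base_url.rstrip("/") + "/v1/chat/completions",
        "headers": {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": content}],
        }),
    }
```

If a replayed request with the same URL, key, and model succeeds, the difference is in how the plugin routes the request rather than in the endpoint itself.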
Is `http://api.openai-new.com` pulling any new models?

If you left the model as `gpt-3.5-turbo`, `gpt-3.5-turbo-16k`, or `gpt-4`, it will still execute the OpenAI API, not `api.openai-new.com`.
If `api.openai-new.com` is fetching new models (e.g. `gpt-3.5-turbo-new`), it should populate the dropdown menu once you refresh the tab like this: (It is fetching `llama2-7b.bin` from my local computer, but it should fetch any models you have from `api.openai-new.com`.)
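The dropdown presumably populates from an OpenAI-style model list (`GET /v1/models`). A small sketch of parsing such a response body; this assumes the third-party endpoint follows the OpenAI response shape:

```python
def extract_model_ids(models_response: dict) -> list:
    # Pull the model names out of an OpenAI-style /v1/models body.
    return [m["id"] for m in models_response.get("data", [])]

# Example body, modeled on what a LocalAI-style server returns:
sample = {"object": "list", "data": [{"id": "llama2-7b.bin", "object": "model"}]}
print(extract_model_ids(sample))  # -> ['llama2-7b.bin']
```

If your endpoint returns a different shape, the dropdown would stay empty even though chat requests might still work.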
Also, when you insert `http://api.openai-new.com` into the `REST API URL` setting like this: it will be executed as `http://api.openai-new.com/v1/chat/completions` to mimic the original OpenAI API endpoint.
Since the `/v1/chat/completions` path is hardcoded, you may need to change it in the source code to pull correctly from your API endpoint.
```typescript
// src/view.ts
// Request response from self-hosted models
async function requestUrlChatCompletion(
    url: string,
    settings: { apiKey: string; model: string; system_role: string; },
    referenceCurrentNote: string,
    messageHistoryContent: { role: string; content: string }[] = [],
    maxTokens: string,
    temperature: number)
{
    const messageHistory = messageHistoryContent.map((item: { role: string; content: string; }) => ({
        role: item.role,
        content: item.content,
    }));

    try {
        const response = await requestUrl({
            // You may need to change this path to match your specified API
            // endpoint (wherever your third-party API models are served).
            url: url + '/v1/chat/completions',
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                'Authorization': `Bearer ${settings.apiKey}`
            },
            body: JSON.stringify({
                model: settings.model,
                messages: [
                    { role: 'system', content: referenceCurrentNote + settings.system_role },
                    ...messageHistory
                ],
                max_tokens: parseInt(maxTokens),
                temperature: temperature,
            }),
        });

        return response;
    } catch (error) {
        console.error('Error making API request:', error);
        throw error;
    }
}
```
Let me know if this helps!

I am curious if you got this working, since I have not played with any third-party API outside of LocalAI :)

(I do see that your Python script is using the same model name as the OpenAI API, `gpt-3.5-turbo`, but with your third-party `api_base` and `api_key`. If you can, rename your third-party model to something other than `gpt-3.5-turbo`, `gpt-3.5-turbo-16k`, or `gpt-4`.)
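The routing described above can be sketched like this (illustrative only, not the plugin's actual code): if the selected model name is one of the built-in OpenAI names, the request goes to the official OpenAI API regardless of the `REST API URL` setting, which is why a third-party key then gets rejected with an "Incorrect API key" error.

```python
# Sketch of the model-name routing described above (illustrative,
# not plugin internals).
OPENAI_MODELS = {"gpt-3.5-turbo", "gpt-3.5-turbo-16k", "gpt-4"}

def resolve_base_url(model: str, rest_api_url: str) -> str:
    if model in OPENAI_MODELS:
        return "https://api.openai.com"  # official OpenAI endpoint
    return rest_api_url  # self-hosted / third-party endpoint

print(resolve_base_url("gpt-3.5-turbo", "http://api.openai-new.com"))
# -> https://api.openai.com
```

Renaming the third-party model avoids the first branch, so the request reaches the configured endpoint instead.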
Awesome! I'm going to make some changes hopefully by the end of this week. I will definitely add this PR :)
Sorry it took me so long to actually get to this. I just merged @levie-vans' PR (#12).
Let me know if v1.5.1 is working for you.
Thanks