
Comments (6)

levie-vans commented on July 18, 2024

I have added a feature to the program for customizing the OpenAI API, which addresses the issue raised in this thread. I hope it can be merged into the main branch. The pull request is #12.


longy2k commented on July 18, 2024

The data.json file only stores the BMO Chatbot settings.

Try changing the REST API URL in the BMO Chatbot settings to your third-party API endpoint, since it should be consistent with the OpenAI API. When you modify the REST API URL, requests are sent to [YOUR URL ENDPOINT] + '/v1/chat/completions'.
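
For example, here is a rough sketch of what ends up being requested (hypothetical variable names, not the plugin's exact code):

// The REST API URL setting is concatenated with the OpenAI-style chat completions path.
const restAPIUrl = 'http://api.openai-new.com'; // value from the BMO Chatbot settings
const requestEndpoint = restAPIUrl + '/v1/chat/completions';
// requestEndpoint is now 'http://api.openai-new.com/v1/chat/completions'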

Let me know if that works!


kl2111 commented on July 18, 2024

@longy2k

Thank you for your response. I'm still not getting the expected result. Here's what I did:

  1. I modified the restAPIUrl in data.json to MY URL ENDPOINT (the actual value is api.openai-new.com, and I even tried various formats like http://api.openai-new.com, http://api.openai-new.com/v1/chat/completions, and api.openai-new.com/v1/chat/completions). The data.json also contains the API key that I set up on the obsidian-bmo-chatbot settings page. However, I'm still getting an error message: "content": "<p>Incorrect API key provided: new-91859***************************************6d09. You can find your API key at <a href=\"https://platform.openai.com/account/api-keys\">https://platform.openai.com/account/api-keys</a>.</p>\n".

  2. Interestingly, I can successfully get a response when I set the same URL endpoint and API key in a Python script. I've attached the relevant code snippet below for reference.

Any guidance would be appreciated. Thank you.

import os
import time

import openai
from transformers import AutoTokenizer

# === L1_1_Configuration ===

openai.api_base = "https://api.openai-new.com/v1"
openai.api_key = "XXXXX"
prompt = "XXXXXXX "
max_token = 3000

# other part

# === L1_3_Main Logic ===

async def chat_gpt(content):
    messages = [{"role": "user", "content": content}]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    return response.choices[0]['message']['content']

async def main(file_paths):
    all_output = ""  
    results = []
    current_time = time.strftime("%Y%m%d_%H%M", time.localtime())


    tokenizer = AutoTokenizer.from_pretrained("gpt2")   # Tokens are counted with the GPT-2 tokenizer.
    os.environ["TOKENIZERS_PARALLELISM"] = "false"
    
    for file_path in file_paths:
        print(f"Debug:{file_path}")
        
        with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
            text = f.read()
            
        tokens = tokenizer.tokenize(text)
        
        if len(tokens) <= max_token:
            selected_tokens = tokens
        else:
            num_chunks = -(-len(tokens) // max_token)
            chunks = [tokens[i:i + max_token] for i in range(0, len(tokens), max_token)]
            
            selected_tokens = chunks[0]
            
        content = tokenizer.convert_tokens_to_string(selected_tokens)
        
        content = prompt + content
                    
        api_response = await chat_gpt(content)
        results.append((file_path, api_response)) 
    return results  


longy2k commented on July 18, 2024

Is http://api.openai-new.com pulling any new models?

If you left the model as gpt-3.5-turbo, gpt-3.5-turbo-16k, or gpt-4, it will still call the official OpenAI API, not api.openai-new.com.
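
For illustration, the routing roughly works like this (a sketch with made-up names, not the plugin's actual source):

// Sketch of the model-based routing described above (hypothetical names, not the actual source).
const OPENAI_MODELS = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-4'];

function resolveBaseUrl(model: string, restAPIUrl: string): string {
    // Official OpenAI model names go to the OpenAI API; anything else uses the REST API URL setting.
    return OPENAI_MODELS.includes(model) ? 'https://api.openai.com' : restAPIUrl;
}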

If api.openai-new.com is serving new models (e.g. gpt-3.5.turbo-new), they should populate the dropdown menu once you refresh the tab, like this:

[Screenshot: 2023-10-28 at 10 24 28 AM]

(It is fetching llama2-7b.bin from my local computer, but it should fetch any models you have from api.openai-new.com.)

Also, when you insert http://api.openai-new.com into the REST API URL settings like this:

[Screenshot: 2023-10-28 at 10 29 42 AM]

The request will be sent to http://api.openai-new.com/v1/chat/completions to mimic the original OpenAI API endpoint.

Since /v1/chat/completions is hardcoded, you may need to change it in the source code so that requests reach your API endpoint correctly.

// src/view.ts
import { requestUrl } from 'obsidian';

// Request response from self-hosted models
async function requestUrlChatCompletion(
    url: string, 
    settings: { apiKey: string; model: string; system_role: string; }, 
    referenceCurrentNote: string,
    messageHistoryContent: { role: string; content: string }[] = [],
    maxTokens: string, 
    temperature: number)
    {
        const messageHistory = messageHistoryContent.map((item: { role: string; content: string; }) => ({
            role: item.role,
            content: item.content,
        }));

        try {
            const response = await requestUrl({
                url: url + '/v1/chat/completions', // <-- You may need to change this to your specific API endpoint (wherever your third-party API models are served).
                method: 'POST',
                headers: {
                    'Content-Type': 'application/json',
                    'Authorization': `Bearer ${settings.apiKey}`
                },
                body: JSON.stringify({
                    model: settings.model,
                    messages: [
                        { role: 'system', content: referenceCurrentNote + settings.system_role },
                        ...messageHistory
                    ],
                    max_tokens: parseInt(maxTokens),
                    temperature: temperature,
                }),
            });

            return response;

        } catch (error) {
            console.error('Error making API request:', error);
            throw error;
        }
}
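
For example, one way to avoid the hardcoded path is to build the URL from a configurable path, as in this rough sketch (completionsPath is a hypothetical parameter, not an existing plugin setting):

// Sketch: build the completion URL from a configurable path instead of hardcoding it.
// 'completionsPath' is a hypothetical parameter, not something the plugin currently exposes.
function buildChatCompletionUrl(baseUrl: string, completionsPath: string = '/v1/chat/completions'): string {
    // Trim any trailing slash so 'http://api.openai-new.com/' and 'http://api.openai-new.com' behave the same.
    return baseUrl.replace(/\/+$/, '') + completionsPath;
}

// Example: buildChatCompletionUrl('http://api.openai-new.com')
// returns 'http://api.openai-new.com/v1/chat/completions'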

Let me know if this helps!

I am curious if you got this working, since I have not played with any third-party APIs outside of LocalAI :)

(I do see that your Python script uses the same model name as the OpenAI API, gpt-3.5-turbo, but with your third-party api_base and api_key. If you can, rename your third-party model to something other than gpt-3.5-turbo, gpt-3.5-turbo-16k, or gpt-4.)


longy2k commented on July 18, 2024

Awesome! I'm going to make some changes hopefully by the end of this week. I will definitely add this PR :)


longy2k commented on July 18, 2024

@kl2111

Sorry it took me so long to actually get to this. I just merged @levie-vans's PR (#12).

Let me know if v1.5.1 is working for you.

Thanks

