
BMO Chatbot for Obsidian

Generate and brainstorm ideas while creating your notes in Obsidian, using Large Language Models (LLMs) from Ollama, LM Studio, Anthropic, OpenAI, Mistral AI, and more.

(Demo GIFs: original example, dataview example, ELI5 example)

Breaking Changes

If you have a version older than v2.0.0, please follow one of the procedures below:

  1. Go to Options > Community plugins > BMO Chatbot and uninstall the plugin.
  2. Re-install "BMO Chatbot".
  3. Restart Obsidian or toggle the plugin on/off to refresh.

Or,

  1. Go to Options > Community plugins and click on the folder icon.
  2. Close Obsidian completely.
  3. Find the bmo-chatbot folder and delete data.json.
  4. Restart Obsidian.

Features

  • Interact with self-hosted Large Language Models (LLMs): Connect to models served by Ollama or LM Studio through their REST API URLs.
  • Profiles: Create chatbots with specific knowledge, personalities, and presets.
  • Chat from anywhere in Obsidian: Chat with your bot from anywhere within Obsidian.
  • Chat with current note: Use your chatbot to reference and engage within your current note.
  • Chatbot renders in Obsidian Markdown: Receive formatted responses in Obsidian Markdown for consistency.
  • Customizable bot name: Personalize the chatbot's name.
  • Prompt Select Generate: Prompt, select, and generate within your editor.
  • Save current chat history as markdown: Use the /save command in chat to save the current conversation.

Requirements

If you want to interact with self-hosted Large Language Models (LLMs) using Ollama or LM Studio, you will need the self-hosted API set up and running; follow the provider's instructions to get it going. Once you have the REST API URL for your self-hosted API, you can use it with this plugin to interact with your models.
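For illustration only, here is a minimal sketch of what a request to a self-hosted Ollama endpoint looks like, assuming Ollama's default address of http://localhost:11434 (this is not code from the plugin):

```typescript
// Minimal sketch: query a self-hosted Ollama server over its REST API.
// Assumes Ollama's default address; substitute your own REST API URL.
async function askOllama(prompt: string): Promise<string> {
  const response = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'mistral', // any model you have pulled locally
      prompt,
      stream: false,    // return the whole completion in one payload
    }),
  });
  const data = await response.json();
  return data.response; // Ollama places the completion text in `response`
}
```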

Access to other models may require an API key.

Please see the instructions for setting up other LLM providers.

Explore some models at GPT4All under the "Model Explorer" section or in Ollama's library.

How to activate the plugin

Three methods:

Obsidian Community plugins (Recommended):

  1. Search for "BMO Chatbot" in the Obsidian Community plugins.
  2. Enable "BMO Chatbot" in the settings.

To activate the plugin from this repo:

  1. Navigate to the plugin's folder in your terminal.
  2. Run npm install to install any necessary dependencies for the plugin.
  3. Once the dependencies have been installed, run npm run build to build the plugin.
  4. Once the plugin has been built, it should be ready to activate.

Install using Beta Reviewers Auto-update Tester (BRAT) - Quick guide for using BRAT

  1. Search for "Obsidian42 - BRAT" in the Obsidian Community plugins.
  2. Open the command palette and run the command BRAT: Add a beta plugin for testing (If you want the plugin version to be frozen, use the command BRAT: Add a beta plugin with frozen version based on a release tag.)
  3. Paste "https://github.com/longy2k/obsidian-bmo-chatbot".
  4. Click on "Add Plugin".
  5. After BRAT confirms the installation, in Settings go to the Community plugins tab.
  6. Refresh the list of plugins.
  7. Find the beta plugin you just installed and enable it.

Getting Started

To start using the plugin, enable it in your settings menu and insert an API key or REST API URL from a provider. After completing these steps, you can access the bot panel by clicking on the bot icon in the left sidebar.

Commands

  • /help - Show help commands.
  • /model - List or change model.
    • /model 1 or /model "llama2"
      • ...
  • /profile - List or change profiles.
    • /profile 1 or /profile [PROFILE-NAME]
  • /prompt - List or change prompts.
    • /prompt 1 or /prompt [PROMPT-NAME]
  • /maxtokens [VALUE] - Set max tokens.
  • /temp [VALUE] - Set the temperature (a value from 0 to 1).
  • /ref on | off - Turn referencing of the current note on or off.
  • /append - Append current chat history to current active note.
  • /save - Save current chat history to a note.
  • /clear or /c - Clear chat history.
  • /stop or /s - Stop fetching response.

Supported Models

  • Any self-hosted models using Ollama.
  • Any self-hosted models served through OpenAI-compatible REST API endpoints.
  • Anthropic
    • claude-instant-1.2
    • claude-2.0
    • claude-2.1
    • claude-3-haiku-20240307
    • claude-3-sonnet-20240229
    • claude-3-opus-20240229
  • Mistral AI's models
  • Google Gemini Pro
  • OpenAI
    • gpt-3.5-turbo
    • gpt-3.5-turbo-1106
    • gpt-4
    • gpt-4-turbo-preview
  • Any models provided by OpenRouter.

Other Notes

"BMO" is a tag name for this project. Inspired by the character "BMO" from Adventure Time.

Be MOre!

Contributing

Any ideas or support are highly appreciated :)

If you run into any bugs, or have improvements or questions, please create an issue or discussion!

Buy Me a Coffee at ko-fi.com

obsidian-bmo-chatbot's People

Contributors: keriati, levie-vans, longy2k

obsidian-bmo-chatbot's Issues

Feature request - Extend a current prompt

Using the / command method, the user could extend a prompt by loading a snippet from a directory listing. This would be similar to the /prompt command, but the snippet would be appended to the current context instead of replacing the prompt.

Mistral.ai API Support

I know this is a niche thing; however, Mistral-medium is VERY impressive. If you are interested in trying this, please email me and I will get you access.

Can't select model in Obsidian plugin dropdown

I am running a Windows instance of Ollama, and it works with other Ollama Obsidian plugins. With BMO, though, it won't show me any models to select in the dropdown list; it's empty. The interface works in Obsidian in the right pane, but it's not responding at all.

I am assuming it's because I can't select a model? What can I do?

Detaching BMO from Obsidian breaks it

If you take BMO off the main Obsidian frame, it will not respond. In my case, I select the BMO tab and drag it off of Obsidian and onto the desktop, so BMO sits in a separate window in a tab. Entering text into the prompt field and hitting enter will not result in a response. Not a huge deal, but it is sometimes convenient to move Obsidian panes to a desktop location other than the main app (especially in multi-monitor setups).

Save chat messages

Nice, the iPhone works well. Can we implement a feature to save chat messages in Markdown format to a note?

Feature request - regenerate / append more easily

Thanks for a nice extension; I'm trying it out with Ollama.

  1. Sometimes Ollama fails (the UI either shows an empty response or throws an error and keeps playing the loading animation).

Would be nice if there was a regenerate/retry button (plus some error handling; I see a red error popup that says JSON.parse of the response failed, but it doesn't look like the BMO UI is aware of the error. Ollama also keeps generating for a long time after the error, so maybe if we handled that JSON.parse error we could "save" the rest of the reply?).

  2. I found /append, and I am a keyboard-oriented person, but it's a lot of typing. Would be cool if there was a button next to a response to append just that response (or insert where the cursor last was?).

  3. Automatic append somehow? I sometimes want to generate-to-insert into the current note, and doing it in multiple steps is a little frustrating (I'm using small models, so I'm not having long chats, just prompt then insert to note).

Thanks!

Feature Request

User Story: As a user, I want to save prompts that I commonly use so that I can easily reuse them later with a slash command.

Acceptance Criteria:

  1. The system should allow me to save prompts with a descriptive name
  2. The system should store my saved prompts for later use
  3. I should be able to see a list of my saved prompts
  4. I should be able to call up a saved prompt by its name or number using a slash command like /prompt1
  5. The chatbot should recognize my saved prompt and respond appropriately when I use the slash command
Additional Details:

  • Support both numbered prompts (/prompt {number}) and named prompts (/prompt {name})
  • Include confirmation when saving a new prompt
  • Allow deleting saved prompts
  • Store prompts locally so they are available across notes
  • Allow organizing prompts into categories

FEATURE request - Ollama style Modelfile syntax for prompts

Allow a prompt to include header parameters that permit the user to set variables such as temperature or min_p, etc. (when applicable). Given that a user has these attributes set in the prompt file, any model or BMO configuration could become prompt-specific when the prompt is loaded.

Claude 2.1

The current API provides access to claude-2.1. I assume the modification needs to be here:

// view.ts

export const ANTHROPIC_MODELS = ["claude-instant-1.2", "claude-2.0"];
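If that is the right place, the change is presumably just extending the array (a sketch based only on the snippet above, not a confirmed fix):

```typescript
// view.ts (sketch): presumed one-line change to expose Claude 2.1
export const ANTHROPIC_MODELS = ["claude-instant-1.2", "claude-2.0", "claude-2.1"];
```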

TIA!

Memory states

I find this feature of this app very useful and interesting, especially in large context windows. Imagine having an ongoing conversation with the assistant and tagging specific memories to remember and those to forget. These conversations could then be picked up as the current note (i.e. /ref on) and the conversation or assistant duties/tasks would be enriched by the memory status stored in the saved conversation and moderated within the active chat.

see:

https://github.com/InterwebAlchemy/obsidian-ai-research-assistant?tab=readme-ov-file#memories

Also, that app's separation of preamble and prompt is useful.

Optimise interface

Thanks for making a tip top app!

I'd love to have the interface updated for better use of the screen space / real-estate.

Specifically, the 'BMO/Chat \n Model: \n Reference' text area takes up a lot of room. Making the text smaller and arranging it horizontally would be a good start, but removing it altogether would be even better, since this information is more of a settings thing? Or it could even be integrated into the scrolling frame?


Thanks again,

Integrating Third-Party OPENAI-Based API in Obsidian Plugin

Hello! Is it possible to integrate a third-party API based on OpenAI? I need to replace api.openai.com with a third-party API endpoint while keeping the rest of the usage consistent with OpenAI. Can this be achieved by modifying a part of the plugin file, such as a section in .obsidian/plugins/bmo-chatbot/data.json?

Feature Request: Obsidian commands.

Description: Give the BMO bot the ability to CRUD entities in Obsidian. Provide settings, restrictions, and bot-initiated reserved commands that a bot can understand and BMO can interpret as function calls within the Obsidian API/URI, etc. TL;DR: Let my bot manage Obsidian.

Feature request - multiple API providers

Allow OpenAI, Anthropic, OpenRouter, and Mistral to coexist in the config, such that models served by each can be selected from a list, given that all APIs and keys are valid.

Openrouter.ai support

Title: Integrate Open Router API Support in BMO Chat Client

User Story: As a user of BMO, the chat client for Obsidian, I want to have the ability to connect to and select from various providers and models supported by openrouter.ai as an API provider, so that I can leverage a diverse range of AI capabilities within my chat experience.

Acceptance Criteria:

  • Given that I am using BMO, when I access the settings, then I should see an option to connect to the Open Router API.
  • Given that I have selected to use the Open Router API, when I am prompted to authenticate, then I should be able to input my credentials and successfully connect to the service.
  • Given that I am connected to the Open Router API, when I look for AI providers and models, then I should see a list of all supported options.
  • Given that I am viewing the list of providers and models, when I select one, then BMO should use that selection for processing my chat messages.
  • Given that I have selected a provider and model, when I send a chat message, then the message should be processed using the chosen AI capabilities.

Technical Notes:

  • Ensure the integration supports all current providers and models available on openrouter.ai.
  • The connection to the Open Router API should be secure and handle authentication tokens appropriately.
  • The user interface for selecting providers and models should be intuitive and user-friendly.
  • The integration should be designed to easily accommodate updates to the list of providers and models from openrouter.ai.

Tasks:

  • Research and understand the Open Router API documentation.
  • Implement the API connection within BMO's settings.
  • Create a user interface for selecting the AI provider and model.
  • Handle the authentication process for Open Router API.
  • Implement the logic to process chat messages using the selected AI provider and model.
  • Test the integration thoroughly to ensure reliability and security.

Definition of Done:

  • All acceptance criteria are met.
  • The feature has been tested across different scenarios and works as expected.
  • The code has been reviewed by peers.
  • The feature is documented, if necessary.
  • The new functionality is merged into the main branch and is deployable.

Feature Request - LM Studio Streaming

Love this plugin so far! Streaming works well with Ollama, but I'd love to be able to use this with LM Studio to enable streaming with local LLMs on Windows too. I have found LM Studio's server setup to be much easier than Ollama's; more choices are provided, and it works on Windows.

Since it has an OpenAI-style API, I have seemingly been able to get it working with both the LocalAI and OpenAI base link settings under the Advanced tab. This is great, but I need to wait for inferencing to finish before I get output from LM Studio.

Streaming does work otherwise; I believe all the API request needs is "stream": true added on the LM Studio side.

Not sure how much work is needed to handle the streaming within the plugin itself. I tried to make this change quickly to the API calls in a fork, but completions and streaming seem to be configured differently, so more work would be required.

Obsidian Copilot has this implemented for reference.
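For what it's worth, here is a rough sketch of requesting a streamed completion from an OpenAI-compatible server; LM Studio's default local address is an assumption, and this is not taken from the plugin's code:

```typescript
// Sketch: request a streamed chat completion from an OpenAI-compatible server
// (LM Studio's local server is assumed to be at http://localhost:1234).
const res = await fetch('http://localhost:1234/v1/chat/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    messages: [{ role: 'user', content: 'Hello' }],
    stream: true, // the flag this issue suggests sending
  }),
});
// Read the body incrementally instead of waiting for the full completion.
const reader = res.body!.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log(decoder.decode(value)); // raw "data: {...}" SSE chunks
}
```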

Thanks for the plugin - great work!

Feature request: Retrieval Augmented Generation (across all notes)

Adding retrieval to this plugin would be super powerful!

https://js.langchain.com/docs/get_started/introduction

Is something like this in the works? Potentially the chat interface could allow for selecting what the LLM has access to, with options such as retrieval on the current note, all of the current note, retrieval across all notes, or no retrieval. This would also optimise token usage for the input on the current note. A naive sketch of the idea follows.
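For illustration, a toy version of note retrieval by keyword overlap; a real implementation would likely use embeddings via LangChain, and all names here are hypothetical:

```typescript
// Sketch: naive retrieval across notes by keyword overlap (a stand-in for
// real embedding-based retrieval; names are hypothetical).
function topNotes(query: string, notes: { path: string; text: string }[], k = 3) {
  const words = new Set(query.toLowerCase().split(/\W+/));
  return notes
    .map(n => ({
      note: n,
      // score = how many of the note's words appear in the query
      score: n.text.toLowerCase().split(/\W+/).filter(w => w && words.has(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k); // only the top-k notes would be sent as context
}
```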

Thanks for the great work

/Save is not working

Since the latest release, /save is no longer saving to a file. /append works as expected, as do the other slash commands.

Missing main.js?

I tried to sideload this plugin, but I'm getting the error "Main.js is missing from the release".

Feature Request, support for the new ollama keep alive parameter

There is a new API parameter that would be awesome to have configurable support for. Here is what is in the Ollama release notes:

keep_alive parameter: control how long models stay loaded
When making API requests, the new keep_alive parameter can be used to control how long a model stays loaded in memory:

curl http://localhost:11434/api/generate -d '{
"model": "mistral",
"prompt": "Why is the sky blue?",
"keep_alive": "30s"
}'

  • If set to a positive duration (e.g. 20m, 1hr or 30), the model will stay loaded for the provided duration.
  • If set to a negative duration (e.g. -1), the model will stay loaded indefinitely.
  • If set to 0, the model will be unloaded immediately once it finishes.
  • If not set, the model will stay loaded for 5 minutes by default.
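On the plugin side, the change would presumably amount to forwarding a configurable keep_alive value in the request body. A sketch, with hypothetical names for the URL and setting value:

```typescript
// Sketch: forward a user-configurable keep_alive value to Ollama.
// `ollamaUrl` and `keepAlive` are hypothetical names, not the plugin's actual fields.
const ollamaUrl = 'http://localhost:11434';
const keepAlive = '30s'; // would come from a new plugin setting: '20m', -1, 0, ...
const response = await fetch(`${ollamaUrl}/api/generate`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'mistral',
    prompt: 'Why is the sky blue?',
    keep_alive: keepAlive,
  }),
});
```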

Input box text is unreadable with a bright accent color

See image :-)

(screenshot attached)

Elsewhere in Obsidian, all the text on the accent color background (like buttons) is black. That would be one possibility, but I also think the accent color is not the right choice for the input box, because it's used for buttons and similar UI elements.

How about making the input box simply the same color as the background of your text? I think this would be the most logical and also nicest-looking option :-) (Maybe you could choose to use the accent color for the text to make it stand out from "regular text".)

Otherwise this looks like a cool plugin, but it's just unusable for me now :-)
(I'm currently using Copilot, but I don't like its default visual settings that can't be changed.)

Add support for non-paragraph line breaks

I see lots of small models try to use single line breaks (like when making a list without using numbers or dashes). These lines appear squished together.

It's actually a really simple fix: pass { breaks: true } in the call to marked().

messageBlock.innerHTML = marked(message, { breaks: true });

Reference doesn't include rendered information

The reference only takes the file contents, regardless of the rendered view. It would be nice to be able to load the rendered view of the file, as we use a lot of plugins like dataviewjs.

Note:

```dataviewjs
const average = array => array.reduce((a, b) => a + b) / array.length;
const data = await dv.query(...)
```


Feature request: Ability to edit the assistant response

As a user, I want the ability to edit the assistant's response so that I can steer the conversation.

  • Provide an icon to edit, similar to the user prompt icon.
  • Provide a method to save the edit.
  • Given that the assistant response has been edited, future responses will use the edits as the context.

Use case: As a writing tool or brainstorming tool, it is essential to steer the chat conversation or edit elements to comply with the User's vision.

enhancement: Permit 2 decimal places on /temp command

As a user, I would like the /temp command to support 2 decimal places so that I can fine-tune the temp.

Given that I enter /temp .72, the temp will be .72.

Currently it seems to round to a single decimal place.
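The fix is presumably in how the value is rounded when parsed. A sketch of the idea (the plugin's actual parsing code may look different):

```typescript
// Sketch: round to two decimal places instead of one.
const input = '.72'; // stand-in for whatever the /temp handler receives
const temp = Math.round(parseFloat(input) * 100) / 100; // 0.72 stays 0.72
```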

Problem with setting the system prompt

(using local models via ollama)

I set the system prompt, and then it acts like it doesn't see user messages and just repeats what's in the system prompt.
I unset the system prompt, and no matter what I write, it tries to explain the concept of "system" to me or offers code assistance.

Software bugs with AI are interesting - it's doing the opposite of what I expect and for some reason it's funny.

This has to do with enabling/disabling the reference note, so I'm guessing there's a logic problem with the note/system/user conditions.

Feature Request - Profiles

As a user, I would like to save all my BMO settings in Profiles, so I can load presets to switch functionality and settings.

  1. Given that I save a Profile, it is added to the list of valid profiles, so I can select it later as a preset.
  2. Given that I make changes to a profile, I can update the saved Profile, so that I can use those settings later.
  3. Given that I have an Ollama model profile, when I switch profiles, the Ollama-specific information is changed.
  4. Given that a profile has API settings, those settings will be saved in a profile.
  5. Given that I change a profile, models will change as well.

REST API URL connection error

llama.cpp is deployed locally; it previously worked but is no longer usable.


One-time context field

Overview: When chatting with a chatbot, you sometimes need to provide additional "context" for the bot to better understand your intent. But constantly supplying this context throughout the conversation is very redundant and wasteful.

The "one-time context field" feature allows you to enter a paragraph of contextual text that gets sent automatically at the start of each new conversation, BEFORE your first message.

After that initial payload, the context field will NOT be resent for the remainder of the conversation. This saves on unnecessary repeated payload.

When starting a new conversation, the context field is cleared so you can update it with new content.


User Story: As a user, I want to configure a context field that gets sent automatically to the AI at the start of every new conversation so that I can establish more in-depth context without repeated payload.

Acceptance Criteria:

  1. The system should allow me to enter customizable context in a configuration field
  2. When starting a brand-new conversation, the context field should automatically be sent as the first request payload before the system prompt
  3. The context field payload should only be sent once per conversation
  4. In subsequent user messages within the same conversation, the context field should no longer be sent
  5. This will reduce unnecessary repeated context in ongoing conversations
  6. The context field should be cleared when starting conversations to allow updating

Additional Details:

  • Allow Markdown formatting in the context field
  • Indicate via a UI notification when the context field payload was sent
  • Cache the context field payload locally to persist between sessions
  • Add a toggle to enable/disable the context field feature
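A sketch of the send-once behavior at the heart of this request (hypothetical names, not the plugin's code):

```typescript
// Sketch: send the one-time context only with the first message of a conversation.
let contextSent = false; // reset to false whenever a new conversation starts

function buildMessages(userInput: string, contextField: string) {
  const messages: { role: string; content: string }[] = [];
  if (!contextSent && contextField) {
    messages.push({ role: 'user', content: contextField }); // initial payload only
    contextSent = true;
  }
  messages.push({ role: 'user', content: userInput });
  return messages;
}
```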

Ability to edit the context

Add the ability to edit an entry or response. This would permit the user to alter the context for steering and correction. I imagine a pencil icon alongside the existing copy and delete icons.

localai.io is not set up correctly - models are not fetched if the remote address is != localhost

In settings.ts there are two things that prevent the model fetching from working correctly:

First: instead of hard-coding the URL, this.plugin.settings.restAPIUrl should be used:

async fetchData() {
		const url = 'http://localhost:8080/v1/models';

Second: when filling the dropdown of models, there should be no default values for localai.io, as how models are set up is completely up to the user. If nothing is returned, nothing should be shown:

			.addDropdown(dropdown => {
				dropdown
					.addOption('gpt-3.5-turbo', 'gpt-3.5-turbo')
					.addOption('gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k')
					.addOption('gpt-4', 'gpt-4');
				if (this.plugin.settings.restAPIUrl) {
					if (models && models.length > 0) {
						models.forEach((model: string) => {
							dropdown.addOption(model, model);
						});
					}
				}

Sorry, I have no dev environment at hand; otherwise I would have created the PRs myself.
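For reference, a sketch of what the two fixes might look like, written from this report alone (not the plugin's actual code):

```typescript
// Sketch of both suggested fixes:
// 1) derive the models endpoint from the configured REST API URL;
// 2) add no hard-coded default models when a custom URL is configured.
async fetchData() {
	const url = `${this.plugin.settings.restAPIUrl}/v1/models`;
	const response = await fetch(url);
	return response.json();
}

// ...later, when building the dropdown, only list what the server returned:
.addDropdown(dropdown => {
	if (this.plugin.settings.restAPIUrl && models && models.length > 0) {
		models.forEach((model: string) => {
			dropdown.addOption(model, model);
		});
	}
})
```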

Feature Request - Azure support

User Story: As a BMO user, I want to configure an Azure-hosted GPT connection within the BMO application settings so that I can leverage Azure as an AI service provider.

Acceptance Criteria:

  1. Given that I am configuring the application settings, when I input the Azure connection details for hosted OpenAI, then the system validates the connection parameters and saves the configuration securely.

Support the following criteria:

  • Endpoint field
  • API key field
  • Validate button

  2. Given that a valid Azure connection exists, provisioned models can be selected in the settings modal and via the /model command.

Ollama support

Ollama serve is working, and I can see the POST, but nothing appears in the BMO response after the prompt. The system prompt doesn't make a difference.

llama_print_timings: load time = 5719.89 ms
llama_print_timings: sample time = 0.00 ms / 1 runs (0.00 ms per token, inf tokens per second)
llama_print_timings: prompt eval time = 576.64 ms / 111 tokens (5.19 ms per token, 192.49 tokens per second)
llama_print_timings: eval time = 0.00 ms / 1 runs (0.00 ms per token, inf tokens per second)
llama_print_timings: total time = 577.65 ms
{"timestamp":1700657519,"level":"INFO","function":"log_server_request","line":1240,"message":"request","remote_addr":"127.0.0.1","remote_port":61340,"status":200,"method":"POST","path":"/tokenize","params":{}}
[GIN] 2023/11/22 - 07:51:59 | 200 | 6.506123584s | 127.0.0.1 | POST "/api/generate"

Possible bug - Prompt generation does not work with OpenRouter

I only get the "generation complete" dialog and no content generated when using this feature. It works with Ollama for MOST of the models I tried; some that are tuned in the Modelfile with a custom SYSTEM don't respond. I recommend a small model like Mistral for a quick response.

Verified that it works with OpenAI GPT-4.
