Comments (2)
The UI/UX doesn't give any indication that the cache is skipped when highlighted content is parsed
I haven't thought about this thoroughly. That said, my instinct is that you could/should cache embeddings depending on the size of the selection (it's still possible that a user "selects all", and there are other large-selection cases).
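One way to act on that instinct is a size threshold on the selection. This is a minimal sketch, not Lumos's actual logic; the function name and the cutoff value are assumptions for illustration:

```typescript
// Assumed cutoff (in characters) above which a highlighted selection is
// considered "large" enough to be worth caching. Hypothetical value.
const CACHE_THRESHOLD_CHARS = 5000;

// Hypothetical helper: decide whether to cache embeddings for a parse.
function shouldCacheEmbeddings(selectedText: string, isHighlighted: boolean): boolean {
  // Full-page parses are always cached, as today.
  if (!isHighlighted) return true;
  // Small ad-hoc highlights are cheap to re-embed, so skip the cache;
  // large selections (e.g. "select all") still benefit from caching.
  return selectedText.length >= CACHE_THRESHOLD_CHARS;
}
```

The threshold could instead be expressed in chunks or tokens; characters are used here only to keep the sketch self-contained.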
The issue was a legit logging of my experiences. I really thought there was a regression, and it wasn't until I rummaged around in the code that I discovered what was happening.
Bypassing the cache is probably the sort of behaviour that many would find reasonable in its simplicity, provided they knew what was happening.
Which is to say, yeah, some kind of indication would help!
from lumos.
A few comments...
- My thinking behind skipping the cache when highlighted content is selected is that a typical use case would be to highlight "smaller" chunks of text on a page that didn't already have a dedicated content parser (e.g. an infrequently visited site). In this case, embedding should be quick and a user would likely move on to highlight a different part of the page, which means the previous chunk doesn't need to be cached.
- It's still possible that a user "selects all" content (ctrl+a). In this case, vector search is still valuable.
- I've updated the search logic to use a combination of cosine similarity and keyword fuzziness search.
- The UI/UX doesn't give any indication that the cache is skipped when highlighted content is parsed. I can make some quick improvements here (e.g. documentation + messaging in app)
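The combined scoring mentioned above could look something like the following. This is a hedged sketch, not the code from the PR: the helper names, the keyword-overlap heuristic, and the blend weight `alpha` are all assumptions.

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Crude keyword score: fraction of query terms that appear in the chunk.
// A real implementation would likely use a fuzzier match (e.g. edit distance).
function keywordScore(query: string, chunk: string): number {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  if (terms.length === 0) return 0;
  const text = chunk.toLowerCase();
  const hits = terms.filter((t) => text.includes(t)).length;
  return hits / terms.length;
}

// Blend the two signals; alpha = 0.7 is an assumed weight favoring
// vector similarity over keyword overlap.
function hybridScore(cos: number, kw: number, alpha = 0.7): number {
  return alpha * cos + (1 - alpha) * kw;
}
```

Chunks would then be ranked by `hybridScore` rather than by cosine similarity alone, which helps when the embedding model misses an exact term the user typed.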