🗣️ Chat with LLMs like Vicuna entirely in your browser with WebGPU: safely, privately, and with no server. Powered by web-llm.
Home Page: https://chat-llm-web.vercel.app
License: MIT License
JavaScript 70.38%
TypeScript 29.55%
CSS 0.07%
chatllm-web's Issues
How do I switch to a custom model, such as ChatGLM-6B?
Could a line break be inserted with Shift + Enter? This would make prompt writing more fluent.
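A minimal sketch of the requested behavior (the function and element names are illustrative, not from the ChatLLM-Web codebase): plain Enter submits the prompt, while Shift + Enter falls through to the textarea's default newline insertion.

```typescript
// Minimal shape of the relevant KeyboardEvent fields, so the decision
// logic can be tested without a DOM.
interface KeyLike {
  key: string;
  shiftKey: boolean;
}

// Returns true when the keystroke should submit the prompt:
// Enter without Shift submits; Shift+Enter (and all other keys) do not.
function shouldSubmit(e: KeyLike): boolean {
  return e.key === "Enter" && !e.shiftKey;
}

// In a component, this would be wired up roughly as (hypothetical names):
//   textarea.addEventListener("keydown", (e) => {
//     if (shouldSubmit(e)) {
//       e.preventDefault(); // suppress the newline
//       sendPrompt();
//     }
//     // otherwise Shift+Enter inserts a line break as usual
//   });
```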
Had to change directories before running `npm i`; please patch the README to include the `cd` step.
Can the model files be downloaded once and then loaded locally? Downloading from huggingface.co is slow, and after the browser clears its cache the model has to be downloaded all over again.
Could a resource bundle be hosted on the development server, with the configuration changed to load the model from there?
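A hypothetical sketch of what such a configuration change could look like (the key names and paths below are assumptions for illustration, not the project's actual config shape): point the model artifact URLs at a copy served by the local dev server instead of huggingface.co, so a cleared browser cache only costs a local fetch.

```typescript
// Hypothetical local-model config: assumes the weights and the compiled
// model library have been copied into public/models/ and are served by
// the dev server. Adjust keys and paths to match the real config.
const localModelConfig = {
  model_list: [
    {
      local_id: "vicuna-v1-7b-q4f32_0",
      model_url: "/models/vicuna-v1-7b-q4f32_0/",
      model_lib_url: "/models/vicuna-v1-7b-q4f32_0/model.wasm",
    },
  ],
};

export default localModelConfig;
```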
Logs:
Initialize GPU device: WebGPU - NVIDIA GeForce RTX 3060
[System Initalize] Loading GPU shader modules[54/54]: 100% completed, 10 secs elapsed.
[System Initalize] All initialization finished.
Generate error, OperationError: The operation failed for an operation-specific reason
Refactor the code to use @mlc-ai/web-llm
Thanks for building this! Desktop PWA support would be useful. Looking forward to using this more.
How can I choose my NVIDIA GPU rather than the integrated Intel GPU? It's really slow on the default GPU. I selected NVIDIA for Google Chrome Canary in the NVIDIA Control Panel, but it still uses the Intel GPU.
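One avenue on the web side (independent of the NVIDIA Control Panel) is the WebGPU adapter request itself: `GPURequestAdapterOptions.powerPreference` lets a page hint that it wants the high-performance (discrete) adapter rather than the low-power integrated one. Whether the hint is honored is up to the browser and OS. A small sketch, with the helper name being my own:

```typescript
// The power preference values defined by the WebGPU specification.
type PowerPreference = "low-power" | "high-performance";

// Builds the options object passed to navigator.gpu.requestAdapter().
// "high-performance" hints the browser to pick the discrete GPU
// (e.g. the NVIDIA card) instead of the integrated Intel one.
function adapterOptions(pref: PowerPreference): { powerPreference: PowerPreference } {
  return { powerPreference: pref };
}

// In the browser (cannot run in Node), this would be used as:
//   const adapter = await navigator.gpu.requestAdapter(
//     adapterOptions("high-performance"),
//   );
//   const device = await adapter?.requestDevice();
```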
Settings page, covering:
- model: model selection; param config (temperature, max-length)
- device
- ui