Comments (42)
@Fermain So if we deploy JARVIS on macOS, we can only use lite.yaml (that is, inference_mode: huggingface), right? Because if we use inference_mode: local (or inference_mode: hybrid), we need an Nvidia graphics card, and Macs have no Nvidia graphics card. Is that right?
Comment out lines 298-300 (approximately, if you haven't reformatted the file) in models_server.py:
"midas-control": {some model here}
Then you can run without an Nvidia device.
from jarvis.
There are various ways to configure this package depending on your resource limitations. I am using it on a Mac right now:
Are the LFS objects absolutely necessary?
Tryna run this on my macbook air lol (16gb ram, 500gb ssd)
I guess you still can, but using hybrid mode only. https://github.com/microsoft/JARVIS#configuration
> I guess you still can, but using hybrid mode only. https://github.com/microsoft/JARVIS#configuration

But the server needs an Nvidia graphics card.
> There are various ways to configure this package depending on your resource limitations. I am using it on a Mac right now:

I am also a Mac user and I encountered this issue while running this line of code. Could you please tell me what I should do, if it is convenient?
The answer to your issue is on line 3 of your screenshot. Install git-lfs and try the model download step again.
> The answer to your issue is on line 3 of your screenshot. Install git-lfs and try the model download step again.

Thank you, your solution is very helpful. But after downloading so many files, the progress is still at 0%. Is this normal?
Yes, the LFS objects are rather large. My models folder is 275 GB personally.
> Are the LFS objects absolutely necessary? Tryna run this on my macbook air lol (16gb ram, 500gb ssd)

No, you can run the lite.yaml configuration to use remote models only, although this is quite limited at the moment. I suggest using an external hard drive or SSD to manage these large models.
> @Fermain So if we deploy JARVIS on macOS, we can only use lite.yaml (that is, inference_mode: huggingface), right? Because if we use inference_mode: local (or inference_mode: hybrid), we need an Nvidia graphics card, and Macs have no Nvidia graphics card. Is that right?
I have just downloaded the models on my Mac; I don't have an Nvidia graphics card.
I started the server with models_server.py --config lite.yaml and got this error:
AssertionError: Torch not compiled with CUDA enabled
After commenting out
"midas-control": {
    "model": MidasDetector(model_path=f"{local_fold}/lllyasviel/ControlNet/annotator/ckpts/dpt_hybrid-midas-501f0c75.pt")
}
the models_server started.
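The manual edit described above could also be expressed as a guard. As a minimal sketch (the build_model_map helper and its entries are hypothetical illustrations, not JARVIS's actual code), GPU-only entries such as midas-control would be registered only when CUDA is available:

```python
def build_model_map(cuda_available: bool) -> dict:
    """Sketch: register models conditionally instead of commenting lines out."""
    models = {
        "cpu-friendly-model": "loads fine without a GPU",  # placeholder entry
    }
    if cuda_available:
        # MidasDetector requires CUDA, so only register it when a GPU is present
        models["midas-control"] = "MidasDetector(model_path=...)"
    return models

# on a Mac without an Nvidia card:
print("midas-control" in build_model_map(False))  # -> False
```

This keeps the file untouched across git pulls, at the cost of diverging slightly from the upstream structure.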
Did you run git lfs install?
Yes, git-lfs is installed.
the version is 3.3.0
I mean: after you installed git-lfs, you need to run git lfs install first. If you did that already, run sh download.sh again.
Thanks, I'll try it
> There are various ways to configure this package depending on your resource limitations. I am using it on a Mac right now:

@Fermain @ethanye77 Did you encounter this error: #67
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
> Thanks, I'll try it

Hello, have you resolved this issue? I got the same error.
I executed the following commands but still got an error:
pip install git-lfs
cd models
sh download.sh
> I executed the following commands but still got an error: pip install git-lfs; cd models; sh download.sh

git-lfs is not a pip package; that is why the error message says it is not installed. You can use Homebrew to install it:
brew install git-lfs
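Putting the pieces from this thread together, the full macOS recovery sequence would presumably look like this (commands as given in the thread; run from the repository root):

```shell
# install the git-lfs binary via Homebrew (it is not a pip package)
brew install git-lfs
# register the LFS filters with your git configuration
git lfs install
# re-run the model download so LFS pointers resolve to the real files
cd models
sh download.sh
```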
> git-lfs is not a pip package. You can use Homebrew to install it: brew install git-lfs

OK, thank you!
Without Nvidia hardware, there is no solution to this particular issue. This system is not designed to run on Apple hardware and can only be used in limited ways on this platform.
How can it be used in those limited ways?
The readme contains instructions for running the model with the lite.yaml config file instead of the full config.yaml file. Add your API keys to the lite file, and run with it instead of config.yaml.
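In concrete terms, and judging from the command quoted earlier in this thread, running against the lite config looks something like this (the exact script path may differ in your checkout):

```shell
# start the model server with the remote-only (huggingface) configuration
python models_server.py --config lite.yaml
```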
> My device is a MacBook M1; how do I solve this problem?

Check out my first post in this issue: you don't need to change config.yaml to lite.yaml.
> My device is a MacBook M1; how do I solve this problem?
> Check out my first post in this issue: you don't need to change config.yaml to lite.yaml.

Did it work successfully?
it did
@sirlaurie I missed that comment, very helpful - thanks
> Without Nvidia hardware, there is no solution to this particular issue. This system is not designed to run on Apple hardware and can only be used in limited ways on this platform.

@sirlaurie @Fermain I notice that we can configure the device to "cuda" or "cpu" here:
device: cuda:0 # cuda:id or cpu
Does this mean that if I set the device to "cpu", I can run the server with inference_mode: local on a Mac, whether it has an M1/M2 chip (new Mac) or an Intel CPU (old Mac)?
> Comment out lines 298-300 ("midas-control": {some model here}) in models_server.py and you can run without an Nvidia device.

Very helpful, thanks! But I encountered another problem: my HuggingGPT does not work.
> I notice that we can configure the device to "cuda" or "cpu" here:
> device: cuda:0 # cuda:id or cpu
> Does this mean that if I set the device to "cpu", I can run the server with inference_mode: local on a Mac, whether M1/M2 or Intel?

It looks like a newly added option, but unfortunately, still no.
> Very helpful, thanks! But I encountered another problem: my HuggingGPT does not work.

Check your network or your API quota.
Thanks!
How can the generated pictures be accessed?
> How can the generated pictures be accessed?

This is a bug: you should create "images" and "audios" folders under /path/to/JARVIS/server/public/. Theoretically the program should create these two folders automatically, but it didn't, so this is a bug!
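As a stopgap until a fix lands, the missing folders can be created from Python (paths as given above, relative to the repository root; a plain mkdir -p would work equally well):

```python
import os

# create the output folders the server expects but does not create itself
public_dir = os.path.join("server", "public")
for sub in ("images", "audios"):
    os.makedirs(os.path.join(public_dir, sub), exist_ok=True)
```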
> This is a bug, you should create "images" and "audios" folders under /path/to/JARVIS/server/public/. Theoretically the program should create these two folders automatically, but it didn't, so this is a bug!

The folders have already been created.
what's wrong?
Weird, it should not be like this. Please back up your lite.yaml, force-update to the latest commit, and try again.
I think the latest commit has fixed this bug. Just pull again.
Run the following command, as recommended, to use MPS (M1, M2, Max):
conda install pytorch torchvision torchaudio -c pytorch-nightly
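After installing the nightly build, you could verify that the MPS backend is visible to PyTorch (this assumes torch installed successfully; is_available() returns False on non-Apple-Silicon machines):

```python
import torch

# True on Apple Silicon with a PyTorch build that includes the MPS backend
print(torch.backends.mps.is_available())
```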