
Comments (42)

sirlaurie commented on May 22, 2024

@Fermain So if we deploy JARVIS on macOS, we can only use lite.yaml (that is, inference_mode: huggingface), right? Because if we use inference_mode: local (or inference_mode: hybrid) we need an Nvidia graphics card, and Macs have no Nvidia graphics card. Is that right?

Comment out lines 298-300 (the numbers may differ if you have reformatted the file) of models_server.py:

"midas-control": {some model here}

Then you can run without an Nvidia device.

from jarvis.

Fermain commented on May 22, 2024

There are various ways to configure this package depending on your resource limitations. I am using it on a Mac right now:

[screenshot]

davidjhwu commented on May 22, 2024

Are the LFS objects absolutely necessary?
Trying to run this on my MacBook Air (16 GB RAM, 500 GB SSD).

ekiwi111 commented on May 22, 2024

I guess you still can, but using hybrid mode only: https://github.com/microsoft/JARVIS#configuration

xiebruce commented on May 22, 2024

I guess you still can, but using hybrid mode only: https://github.com/microsoft/JARVIS#configuration

But the server needs an Nvidia graphics card.

[screenshot]

[screenshot]

ethanye77 commented on May 22, 2024

There are various ways to configure this package depending on your resource limitations. I am using it on a Mac right now:

[screenshot]

I am also a Mac user and I encountered this issue while running this line of code. Could you please tell me what I should do?

ethanye77 commented on May 22, 2024

Here's my issue:

[screenshot]

Fermain commented on May 22, 2024

The answer to your issue is on line 3 of your screenshot. Install git-lfs and try the model download step again.

ethanye77 commented on May 22, 2024

The answer to your issue is on line 3 of your screenshot. Install git-lfs and try the model download step again.

[screenshot]

Thank you, your solution is very helpful, but after downloading so many files the progress is still 0%. Is this normal?

ErikDombi commented on May 22, 2024

Yes, the LFS objects are rather large. My models folder is 275 GB.

Fermain commented on May 22, 2024

Are the LFS objects absolutely necessary? Trying to run this on my MacBook Air (16 GB RAM, 500 GB SSD).

No, you can run the lite.yaml configuration to use remote models only, although this is quite limited at the moment. I suggest using an external hard drive or SSD to manage these large models.
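For reference, the remote-only setup described above comes down to one setting in lite.yaml. A sketch of just that fragment (other fields, such as API keys, are omitted):

```yaml
# lite.yaml (sketch): run all model inference on remote Hugging Face
# endpoints, so no local Nvidia GPU is required.
inference_mode: huggingface
```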

iwoomi commented on May 22, 2024

@Fermain So if we deploy JARVIS on macOS, we can only use lite.yaml (that is, inference_mode: huggingface), right? Because if we use inference_mode: local (or inference_mode: hybrid) we need an Nvidia graphics card, and Macs have no Nvidia graphics card. Is that right?

van0303 commented on May 22, 2024

I have just downloaded the models on my Mac; I don't have an Nvidia graphics card.
I started the server with models_server.py --config lite.yaml and got this error message:
AssertionError: Torch not compiled with CUDA enabled

After commenting out

"midas-control": {
            "model": MidasDetector(model_path=f"{local_fold}/lllyasviel/ControlNet/annotator/ckpts/dpt_hybrid-midas-501f0c75.pt")
            }

the models_server started.

sirlaurie commented on May 22, 2024

Did you run git lfs install?

xmagicwu commented on May 22, 2024

Yes, git-lfs is installed.

xmagicwu commented on May 22, 2024

The version is 3.3.0.

sirlaurie commented on May 22, 2024

I mean that after you install git-lfs, you need to run git lfs install first.

If you have already done that, run sh download.sh again.

xmagicwu commented on May 22, 2024

Thanks, I'll try it.

xiebruce commented on May 22, 2024

There are various ways to configure this package depending on your resource limitations. I am using it on a Mac right now:

[screenshot]

@Fermain @ethanye77 Did you encounter this error: #67

Solving environment: failed with initial frozen solve. Retrying with flexible solve.

liujie316316 commented on May 22, 2024

Thanks, I'll try it.

Hello, have you resolved this issue? I got the same error.
[screenshot]

I executed the following commands but still got an error:
pip install git-lfs
cd models
sh download.sh

Fermain commented on May 22, 2024

I executed the following commands but still got an error: pip install git-lfs; cd models; sh download.sh

git-lfs is not a pip package. You can use Homebrew to install it:

brew install git-lfs

The error message states that it is not installed.

liujie316316 commented on May 22, 2024

git-lfs is not a pip package. You can use Homebrew to install it:

brew install git-lfs

The error message states that it is not installed.

OK, thank you!

xmagicwu commented on May 22, 2024

[screenshot]

My device is a MacBook M1; how can I solve this problem?

Fermain commented on May 22, 2024

Without Nvidia hardware, there is no solution to this particular issue. This system is not designed to run on Apple hardware and can only be used in limited ways on this platform.

xmagicwu commented on May 22, 2024

How can it be used in that limited way?

Fermain commented on May 22, 2024

The readme contains instructions for using the model with the lite.yaml config file instead of the full config.yaml file. Add your API keys to this lite file and run with it instead of config.yaml.

sirlaurie commented on May 22, 2024

[screenshot] My device is a MacBook M1; how can I solve this problem?

Check out my first post in this issue:

#39 (comment)

You don't need to change config.yaml to lite.yaml.

xmagicwu commented on May 22, 2024

[screenshot] My device is a MacBook M1; how can I solve this problem?

Check out my first post in this issue:

#39 (comment)

You don't need to change config.yaml to lite.yaml.

[screenshot]

Did it work successfully?

sirlaurie commented on May 22, 2024

It did.

Fermain commented on May 22, 2024

@sirlaurie I missed that comment, very helpful - thanks.

iwoomi commented on May 22, 2024

Without Nvidia hardware, there is no solution to this particular issue. This system is not designed to run on Apple hardware and can only be used in limited ways on this platform.

@sirlaurie @Fermain I notice that we can configure the device to "cuda" or "cpu" here:

device: cuda:0 # cuda:id or cpu

Does this mean that if I set the device to "cpu", I can run the server with inference_mode: local on a Mac, whether it has an M1/M2 chip (newer Macs) or an Intel CPU (older Macs)?
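For reference, the option being asked about is a single line in the server config. A sketch of just that fragment (as the reply below notes, setting it to cpu still does not make local mode work on a Mac):

```yaml
# Server config fragment (sketch): where model inference runs.
device: cpu   # cuda:id (e.g. cuda:0) for an Nvidia GPU, or cpu
```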

xmagicwu commented on May 22, 2024

@Fermain So if we deploy JARVIS on macOS, we can only use lite.yaml (that is, inference_mode: huggingface), right? Because if we use inference_mode: local (or inference_mode: hybrid) we need an Nvidia graphics card, and Macs have no Nvidia graphics card. Is that right?

Comment out lines 298-300 (the numbers may differ if you have reformatted the file) of models_server.py:

"midas-control": {some model here}

Then you can run without an Nvidia device.

Very helpful - thanks!

But I encountered another problem: my HuggingGPT does not work.
[screenshot]

sirlaurie commented on May 22, 2024

@sirlaurie @Fermain I notice that we can configure the device to "cuda" or "cpu" here:

device: cuda:0 # cuda:id or cpu

Does this mean that if I set the device to "cpu", I can run the server with inference_mode: local on a Mac, whether it has an M1/M2 chip (newer Macs) or an Intel CPU (older Macs)?

It looks like a newly added option, but unfortunately, still no.

sirlaurie commented on May 22, 2024

Very helpful - thanks! But I encountered another problem: my HuggingGPT does not work. [screenshot]

Check your network or your API quota.

xmagicwu commented on May 22, 2024

Thanks.

[screenshot]

How can the generated pictures be accessed?

xiebruce commented on May 22, 2024

Thanks. [screenshot] How can the generated pictures be accessed?

This is a bug: you should create "images" and "audios" folders under /path/to/JARVIS/server/public/. In theory the program should create these two folders automatically, but it doesn't, so this is a bug.
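The workaround above can be scripted. A minimal sketch, where JARVIS_ROOT is a placeholder you must point at your own checkout:

```shell
# Create the output folders the server expects but does not create itself.
JARVIS_ROOT="${JARVIS_ROOT:-.}"   # placeholder: set to your JARVIS checkout path
mkdir -p "$JARVIS_ROOT/server/public/images" "$JARVIS_ROOT/server/public/audios"
```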

xmagicwu commented on May 22, 2024

This is a bug: you should create "images" and "audios" folders under /path/to/JARVIS/server/public/. In theory the program should create these two folders automatically, but it doesn't, so this is a bug.

[screenshot]

The folders have been created.

xmagicwu commented on May 22, 2024

[screenshot]

[screenshot]

Why does the path for generated images keep changing?

xmagicwu commented on May 22, 2024

[screenshot]

What's wrong?

iwoomi commented on May 22, 2024

[screenshot] What's wrong?

Weird, it should not be like this. Please back up your lite.yaml, force-update to the latest commit, and try again.

sirlaurie commented on May 22, 2024

I think the latest commit has fixed this bug. Just pull again.

hx9111 commented on May 22, 2024

The following command is recommended to use MPS (M1, M2, Max):

conda install pytorch torchvision torchaudio -c pytorch-nightly
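To check whether the installed build actually exposes the Apple MPS backend, a quick sanity check (a sketch; it prints a fallback message if torch is not installed):

```python
# Report whether PyTorch can see the Apple MPS (Metal) backend.
import importlib.util

def mps_status() -> str:
    if importlib.util.find_spec("torch") is None:
        return "torch is not installed"
    import torch
    mps = getattr(torch.backends, "mps", None)  # attribute absent on very old torch builds
    return f"MPS available: {bool(mps and mps.is_available())}"

print(mps_status())
```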
