Comments (19)

Gitterman69 commented on August 29, 2024

bump - same

ozgurkara99 commented on August 29, 2024

Thank you for your interest and for opening the issue. We are working on making the demo run on Windows, and the Windows instructions will be available soon. In the meantime, we suggest using a Linux OS.

ozgurkara99 commented on August 29, 2024

@SoftologyPro @Gitterman69

Could you please give more details? At which step did you encounter the error? Was it when you clicked the 'Run' button? In the recent commit I updated the paths so that they should be compatible with Windows. Thanks
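
For reference, a minimal sketch of the kind of path fix involved, assuming the earlier code joined directories with hard-coded separators (illustrative only, not the actual RAVE code):

```python
import os
from pathlib import Path

# Hypothetical helper: build an output directory without hard-coding '/' or '\',
# so the same code works on both Windows and Linux.
def frame_output_dir(base_dir: str, video_name: str) -> str:
    return os.path.join(base_dir, "generated", video_name, "frames")

print(frame_output_dir("data", "wolf"))       # data\generated\wolf\frames on Windows
print(Path("data") / "generated" / "wolf")    # pathlib handles separators the same way
```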

SoftologyPro commented on August 29, 2024

There is a new error now. I am using the settings shown in the screenshot below; when I click Run All I get these errors.
Do I need to manually download some extra models?

C:\Users\Jason\AppData\Local\Temp\gradio\8aad7b5c09060a0858874df8fd3cce2a390d68b2\wolf.mp4
Frame count: 4
vae\diffusion_pytorch_model.safetensors not found
Traceback (most recent call last):
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\gradio\queueing.py", line 456, in call_prediction
    output = await route_utils.call_process_api(
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\gradio\route_utils.py", line 232, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\gradio\blocks.py", line 1522, in process_api
    result = await self.call_function(
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\gradio\blocks.py", line 1144, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\gradio\utils.py", line 674, in wrapper
    response = f(*args, **kwargs)
  File "D:\Tests\RAVE\webui.py", line 132, in run
    CN.init_models(input_ns.hf_cn_path, input_ns.hf_path, input_ns.preprocess_name, input_ns.model_id)
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Tests\RAVE\pipelines\sd_controlnet_rave.py", line 49, in init_models
    pipe = self.__init_pipe(hf_cn_path, hf_path)
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Tests\RAVE\pipelines\sd_controlnet_rave.py", line 43, in __init_pipe
    pipe.enable_xformers_memory_efficient_attention()
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1442, in enable_xformers_memory_efficient_attention
    self.set_use_memory_efficient_attention_xformers(True, attention_op)
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1468, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1458, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\diffusers\models\modeling_utils.py", line 227, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\diffusers\models\modeling_utils.py", line 223, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\diffusers\models\modeling_utils.py", line 223, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\diffusers\models\modeling_utils.py", line 223, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\diffusers\models\modeling_utils.py", line 220, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\diffusers\models\modeling_utils.py", line 227, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\diffusers\models\modeling_utils.py", line 223, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\diffusers\models\modeling_utils.py", line 223, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\diffusers\models\modeling_utils.py", line 220, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\diffusers\models\attention_processor.py", line 192, in set_use_memory_efficient_attention_xformers
    raise ModuleNotFoundError(
ModuleNotFoundError: Refer to https://github.com/facebookresearch/xformers for more information on how to install xformers
(the same traceback is printed a second time)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\gradio\queueing.py", line 501, in process_events
    response = await self.call_prediction(awake_events, batch)
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\gradio\queueing.py", line 465, in call_prediction
    raise Exception(str(error) if show_error else None) from error
Exception: None

[screenshot of the settings]

ozgurkara99 commented on August 29, 2024

It seems that the 'xformers' library is not properly installed in your environment. Please run

pip install xformers==0.0.20

to install it.
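
The traceback above fails at pipe.enable_xformers_memory_efficient_attention() in pipelines/sd_controlnet_rave.py, which requires xformers to be importable. A quick, generic sanity check (not part of RAVE) to run inside the voc_rave environment before retrying:

```python
# Generic check: confirms xformers imports and reports the CUDA status.
import torch

try:
    import xformers
    import xformers.ops  # the memory-efficient attention ops live here
    print("xformers", xformers.__version__, "| CUDA available:", torch.cuda.is_available())
except ImportError as err:
    print("xformers is not installed in this environment:", err)
```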

SoftologyPro commented on August 29, 2024

OK, thanks, that worked and I got a bit further. The model downloaded OK, then this error. A ZoeDepth key error?

Traceback (most recent call last):
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\gradio\queueing.py", line 456, in call_prediction
    output = await route_utils.call_process_api(
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\gradio\route_utils.py", line 232, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\gradio\blocks.py", line 1522, in process_api
    result = await self.call_function(
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\gradio\blocks.py", line 1144, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\gradio\utils.py", line 674, in wrapper
    response = f(*args, **kwargs)
  File "D:\Tests\RAVE\webui.py", line 143, in run
    res_vid, control_vid = CN(input_dict)
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Tests\RAVE\pipelines\sd_controlnet_rave.py", line 399, in __call__
    img_batch, control_batch = self.process_image_batch(input_dict['image_pil_list'])
  File "D:\Tests\RAVE\pipelines\sd_controlnet_rave.py", line 311, in process_image_batch
    control_pil = self.preprocess_control_grid(image_pil)
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Tests\RAVE\pipelines\sd_controlnet_rave.py", line 140, in preprocess_control_grid
    list_of_pils = [pu.pixel_perfect_process(np.array(frame_pil, dtype='uint8'), self.preprocess_name) for frame_pil in list_of_image_pils]
  File "D:\Tests\RAVE\pipelines\sd_controlnet_rave.py", line 140, in <listcomp>
    list_of_pils = [pu.pixel_perfect_process(np.array(frame_pil, dtype='uint8'), self.preprocess_name) for frame_pil in list_of_image_pils]
  File "D:\Tests\RAVE\utils\preprocesser_utils.py", line 211, in pixel_perfect_process
    detected_map, _ = preprocessors_dict[p_name](input_image, res=preprocessor_resolution)
  File "D:\Tests\RAVE\utils\preprocesser_utils.py", line 187, in zoe_depth
    result = model_zoe_depth(img)
  File "D:\Tests\RAVE\annotator\zoe\__init__.py", line 38, in __call__
    self.load_model()
  File "D:\Tests\RAVE\annotator\zoe\__init__.py", line 26, in load_model
    conf = get_config("zoedepth", "infer")
  File "D:\Tests\RAVE\annotator\zoe\zoedepth\utils\config.py", line 384, in get_config
    version_name = overwrite_kwargs.get("version_name", config["version_name"])
KeyError: 'version_name'
(the same traceback is printed a second time)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\gradio\queueing.py", line 501, in process_events
    response = await self.call_prediction(awake_events, batch)
  File "D:\Tests\RAVE\voc_rave\lib\site-packages\gradio\queueing.py", line 465, in call_prediction
    raise Exception(str(error) if show_error else None) from error
Exception: None

ozgurkara99 commented on August 29, 2024

@SoftologyPro Hey, could you please pull and try again? I realized that I had missed the JSON files required for ZoeDepth; it should work now. Thanks
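
As a quick check after pulling, one can verify that the ZoeDepth config JSONs are actually present under the annotator directory (the directory comes from the traceback above; the check itself is a generic sketch, not part of RAVE):

```python
# Hypothetical sanity check: the KeyError('version_name') comes from get_config()
# reading an incomplete config, which points at missing ZoeDepth JSON files.
from pathlib import Path

zoe_root = Path("annotator/zoe/zoedepth")      # path as seen in the traceback
json_files = sorted(zoe_root.rglob("*.json"))  # list whichever config files exist
if json_files:
    for f in json_files:
        print(f)
else:
    print("No ZoeDepth config JSONs found - pull the latest commit again.")
```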

SoftologyPro commented on August 29, 2024

OK, getting further now. It would be good to have an x/y step counter for these stats.

img_size [384, 512]
img_size [384, 512]
img_size [384, 512]
img_size [384, 512]
img_size [384, 512]
img_size [384, 512]
img_size [384, 512]
img_size [384, 512]
img_size [384, 512]
img_size [384, 512]
img_size [384, 512]
img_size [384, 512]

It is now at a stage with an estimated 1 hour to go. Is this normal? I will let it run and see if it completes.
5%|███████████▌ | 1/20 [03:21<1:03:41, 201.13s/it]
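
(At 201 s/it, the remaining 19 of 20 steps work out to roughly 64 minutes, which matches the 1:03:41 estimate shown in the bar.)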

ozgurkara99 commented on August 29, 2024

Hey @SoftologyPro, that is not normal; it takes around 1-2 s/it at most for the wolf video (512x512). I think you might be using the CPU instead of the GPU/CUDA.
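
A quick, generic way to rule out a CPU fallback is to check what PyTorch reports inside the same environment (this is a general check, not RAVE-specific):

```python
# Generic check: confirms the GPU build of torch is installed and which device it sees.
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```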

SoftologyPro commented on August 29, 2024

I thought so too at first, but CPU usage is 2% and the GPU is pegged at 100%.
Still going here (I accidentally killed it; I will let it run again and see if it finishes).
| 12/20 [42:04<28:09, 211.16s/it]
I've got xformers, accelerate and the GPU build of torch.

ozgurkara99 commented on August 29, 2024

What GPU are you using?

SoftologyPro commented on August 29, 2024

4090 24 GB

SoftologyPro commented on August 29, 2024

Wait, I just rebooted and restarted the wolf video. Now only 7 minutes estimated, at 17.22 s/it.

ozgurkara99 commented on August 29, 2024

Hey, I am also using a 4090 with 24 GB here, and it's around 3 s/it. Could you maybe try increasing the batch size?

[screenshot]

SoftologyPro commented on August 29, 2024

> Hey, I am also using a 4090 with 24 GB here, and it's around 3 s/it. Could you maybe try increasing the batch size?
>
> [screenshot]

What batch size are you using?

ozgurkara99 commented on August 29, 2024

It's 4 on my end.

SoftologyPro commented on August 29, 2024

> It's 4 on my end.

4 here too; that is the default. You are running under Linux, though, right? That may make a difference.

SoftologyPro commented on August 29, 2024

Raising the batch size to 10 does not seem to help the speed.
Anyway, it works now. I will do some tests on other PCs.

ozgurkara99 commented on August 29, 2024

Feel free to do more tests; I am closing the issue.
