
magictime's People

Contributors

eltociear, fenglui, infaaa, linb203, shyuanbest, truedat101


magictime's Issues

safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge

I installed everything correctly, but when I try to run an inference I get this error.
People online with the same error said it can be caused by a corrupted file, so I re-downloaded everything (multiple times) and still keep getting it. Here is the full error message:

Traceback (most recent call last):
  File "/home/azureuser/localfiles/MagicTime/inference_magictime.py", line 249, in <module>
    main(args)
  File "/anaconda/envs/magictime/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/azureuser/localfiles/MagicTime/inference_magictime.py", line 61, in main
    text_encoder = CLIPTextModel.from_pretrained(model_config.pretrained_model_path, subfolder="text_encoder").cuda()
  File "/anaconda/envs/magictime/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3284, in from_pretrained
    with safe_open(resolved_archive_file, framework="pt") as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge
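
Editorial note, hedged: a frequent cause of HeaderTooLarge is that the weight file on disk is actually a Git LFS pointer (plain text starting with "version https://git-lfs...") rather than real tensors. A valid safetensors file begins with an 8-byte little-endian header length, and ASCII text decodes to an absurdly large number. A minimal diagnostic sketch; the path is illustrative, following the repo's expected ckpts layout:

# Hedged diagnostic: check whether a "safetensors" file is really a
# Git LFS pointer instead of real weights.
import struct
from pathlib import Path

path = Path("ckpts/Base_Model/stable-diffusion-v1-5/text_encoder/model.safetensors")  # illustrative
head = path.read_bytes()[:64]
if head.startswith(b"version https://git-lfs"):
    print("LFS pointer, not weights -- run `git lfs pull` or re-download")
else:
    (header_len,) = struct.unpack("<Q", head[:8])
    print(f"header length: {header_len}")  # sane values are KBs, not exabytes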

torch.cuda.OutOfMemoryError: CUDA out of memory.

Running inference on Ubuntu 22.04 with an NVIDIA 3080 (12 GB), getting:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.25 GiB. GPU 0 has a total capacity of 11.75 GiB of which 1.08 GiB is free. Including non-PyTorch memory, this process has 9.95 GiB memory in use. Of the allocated memory 9.51 GiB is allocated by PyTorch, and 139.25 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

Looking for ways to tune the configuration so the OOM does not happen; a sketch of commonly suggested knobs follows.
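
A hedged sketch, not confirmed against this repo: whether the MagicTime pipeline exposes the diffusers-style slicing helpers below is an assumption, and the function name is hypothetical.

# Hedged OOM-avoidance sketch. enable_attention_slicing /
# enable_vae_slicing are diffusers-pipeline methods; whether
# MagicTime's pipeline inherits them is an assumption.
import os

# Must be set before the first CUDA allocation, per the error message.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch  # imported after setting the allocator config

def shrink_memory_footprint(pipeline):
    pipeline.enable_attention_slicing()  # chunked attention: slower, lower peak VRAM
    pipeline.enable_vae_slicing()        # decode frames in slices
    return pipeline  # loading weights in torch.float16 also reduces footprint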

License

Hi,
Thanks for releasing this amazing project!
MagicTime is licensed under Apache 2.0, but it says "The service is a research preview intended for non-commercial use only. Please contact us if you find any potential violations."
Apache 2.0 is an open-source license, which inherently allows commercial use, so this statement seems to conflict with the license.
Would you mind clarifying?
Thank you!

shell scripts in prepare_weights don't work without modification

There is probably a difference between the current developers' "internal" build and what got released as OSS. The issues are identified below, each with a fix (a consolidated Python sketch of the corrected download follows the list):

  • git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 ckpts/Base_Model will fail because Base_Model already exists. It should be: git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 ckpts/Base_Model/stable-diffusion-v1-5
  • Without the previous change, the inference scripts fail later because the stable-diffusion-v1-5 path is not found. The fix is in the previous step.
  • Re-running these scripts fails because the directories already exist. TODO: identify a way to "update"; possibly add update scripts in case the models change.
  • In prepare_weights/down_magictime_module.sh, the repository cannot be moved without first deleting the MagicTime/.git subdirectory, because the fresh clone contains write-protected objects. Why not use a git submodule, or simply remove the .git dir in the freshly cloned subdirectory?
  • The mv dir command in prepare_weights/down_magictime_module.sh fails because the target directory exists. Either use cp -r, or simply do mv MagicTime/Magic_Weights/* ckpts/Magic_Weights/
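
A hedged sketch of the corrected base-model download as a Python helper (the function name is hypothetical; paths follow the layout quoted above):

# Hypothetical helper capturing the fixes above: clone into the nested
# stable-diffusion-v1-5 dir, be a no-op on re-runs, and drop the cloned
# .git dir so the tree can be moved freely afterwards.
import shutil
import subprocess
from pathlib import Path

def prepare_base_model(ckpts: Path = Path("ckpts")) -> None:
    target = ckpts / "Base_Model" / "stable-diffusion-v1-5"
    if target.exists():
        return  # re-running should not fail on an existing directory
    subprocess.run(
        ["git", "clone",
         "https://huggingface.co/runwayml/stable-diffusion-v1-5",
         str(target)],
        check=True,
    )
    shutil.rmtree(target / ".git", ignore_errors=True)  # write-protected objects block mv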

Confused about the Cascade Preprocessing

Hi authors, thank you for your nice work!

I am confused about the Cascade Preprocessing presented in the paper. Could you explain more about the motivation, as well as the definition of a transition point? Thanks.

[BUG] Just deployed, but running it hits a bug

(magictime) root@autodl-container-e3fa488242-5bb059c5:~/autodl-tmp/MagicTime# python app.py --port 6006

Cleaning cached examples ...

loaded 3D unet's pretrained weights from ./ckpts/Base_Model/stable-diffusion-v1-5/unet ...

missing keys: 560;

unexpected keys: 0;

Motion Module Parameters: 417.1376 M

loaded 3D unet's pretrained weights from ./ckpts/Base_Model/stable-diffusion-v1-5/unet ...

missing keys: 560;

unexpected keys: 0;

Motion Module Parameters: 417.1376 M

2024-04-15 13:52:45,055 - modelscope - INFO - PyTorch version 2.2.2 Found.
2024-04-15 13:52:45,055 - modelscope - INFO - Loading ast index from /root/.cache/modelscope/ast_indexer
2024-04-15 13:52:45,087 - modelscope - INFO - Loading done! Current index file version is 1.13.3, with md5 1d4b83741562033e1de185bbe18433db and a total number of 972 components indexed
Traceback (most recent call last):
  File "/root/autodl-tmp/MagicTime/app.py", line 235, in <module>
    controller = MagicTimeController()
  File "/root/autodl-tmp/MagicTime/app.py", line 105, in __init__
    self.update_dreambooth(self.dreambooth_list[0])
  File "/root/autodl-tmp/MagicTime/app.py", line 149, in update_dreambooth
    magic_adapter_s_state_dict = torch.load(magic_adapter_s_path, map_location="cpu")
  File "/root/miniconda3/envs/magictime/lib/python3.10/site-packages/torch/serialization.py", line 1040, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/root/miniconda3/envs/magictime/lib/python3.10/site-packages/torch/serialization.py", line 1258, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, 'v'.

app.py was deployed exactly following the official tutorial. All of the missing models have been installed, yet this error still occurs. Please help resolve it.
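
Editorial note, hedged: invalid load key, 'v' from torch.load usually means the .ckpt on disk is a Git LFS pointer file whose text begins "version https://git-lfs..." (the first byte 'v' is exactly the invalid pickle key). A minimal check; the path follows the repo layout quoted in a later issue:

# Hedged diagnostic: a .ckpt that is really a Git LFS pointer starts
# with ASCII "version ...", not pickle data.
from pathlib import Path

ckpt = Path("ckpts/Magic_Weights/magic_adapter_s/magic_adapter_s.ckpt")  # illustrative
if ckpt.read_bytes().startswith(b"version https://git-lfs"):
    print("Pointer file -- run `git lfs pull` in the clone or re-download the weights")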

Windows not supported?

    from multiprocessing.context import ForkProcess
ImportError: cannot import name 'ForkProcess' from 'multiprocessing.context' (D:\Python\lib\multiprocessing\context.py)

ForkProcess exists only on Unix. Any chance you can patch it for Windows compatibility too? A possible shim is sketched below.
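
A hedged sketch of a cross-platform shim (where exactly the repo imports ForkProcess would need the same substitution; untested on this codebase):

# Hypothetical shim: fall back to the spawn context where fork is
# unavailable (Windows). Spawn requires the usual __main__ guard and
# picklable arguments, unlike fork.
import multiprocessing

try:
    from multiprocessing.context import ForkProcess as WorkerProcess
except ImportError:  # Windows has no fork-based context
    WorkerProcess = multiprocessing.get_context("spawn").Process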

How to get videos longer than 2 sec?

Also, what does video_length=16 imply? Is that the number of frames? I tried changing it to 32, but it just seems to generate blank videos. Example:

saample.mp4

Attempt to open cnn_infer failed: handle=0 error: libcudnn_cnn_infer.so.8: cannot open shared object file: No such file or directory

Triggered "warning" during inference run on Ubuntu 22.04. Considering this issue as a warning unless someone says the warning should be treated as an error. Possibly there are some extra dev / optional modules not installed.

miniforge3/envs/magictime/lib/python3.10/site-packages/torch/nn/modules/conv.py:456: UserWarning: Attempt to open cnn_infer failed: handle=0 error: libcudnn_cnn_infer.so.8: cannot open shared object file: No such file or directory (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:78.)
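
A quick hedged check of whether PyTorch can actually see a working cuDNN (if it can, the warning is likely cosmetic):

# If cuDNN is usable, version() returns an int (e.g. 8xxx); if the
# library truly cannot be loaded, is_available() is False.
import torch

print("cudnn available:", torch.backends.cudnn.is_available())
print("cudnn version:", torch.backends.cudnn.version())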

Exception running inference_magictime.py: _pickle.UnpicklingError: invalid load key, 'v'.

Pickle error on Ubuntu 22.04.

Will investigate; this looks like the same Git LFS pointer symptom diagnosed above.

10:42 $ python inference_magictime.py --config sample_configs/RealisticVision.yaml
The results will be save to outputs/RealisticVision-7
Use MagicAdapter-S
Use MagicAdapter-T
Use Magic_Text_Encoder
loaded 3D unet's pretrained weights from ./ckpts/Base_Model/stable-diffusion-v1-5/unet ...
### missing keys: 560; 
### unexpected keys: 0;
### Motion Module Parameters: 417.1376 M
load motion module from ./ckpts/Base_Model/motion_module/motion_module.ckpt
load dreambooth model from ./ckpts/DreamBooth/RealisticVisionV60B1_v51VAE.safetensors
load domain lora from ./ckpts/Magic_Weights/magic_adapter_s/magic_adapter_s.ckpt
Traceback (most recent call last):
  File "MagicTime/inference_magictime.py", line 249, in <module>
    main(args)
  File "miniforge3/envs/magictime/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "MagicTime/inference_magictime.py", line 76, in main
    pipeline = load_weights(
  File "MagicTime/utils/util.py", line 137, in load_weights
    magic_adapter_s_state_dict = torch.load(magic_adapter_s_path, map_location="cpu")
  File "miniforge3/envs/magictime/lib/python3.10/site-packages/torch/serialization.py", line 1040, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "miniforge3/envs/magictime/lib/python3.10/site-packages/torch/serialization.py", line 1258, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, 'v'.

Exception: Error while deserializing header: HeaderTooLarge

HeaderTooLarge error when running the example. I suspect this is a problem with how I downloaded the models when running the prep scripts. Will retry from scratch; a more robust download sketch follows the traceback.

11:32 $ python inference_magictime.py --config sample_configs/RealisticVision.yaml
The results will be save to outputs/RealisticVision-3
Use MagicAdapter-S
Use MagicAdapter-T
Use Magic_Text_Encoder
Traceback (most recent call last):
  File "myhome/dev/repos/MagicTime/inference_magictime.py", line 249, in <module>
    main(args)
  File "myhome/tools/miniforge3/envs/magictime/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/dkords/dev/repos/MagicTime/inference_magictime.py", line 61, in main
    text_encoder = CLIPTextModel.from_pretrained(model_config.pretrained_model_path, subfolder="text_encoder").cuda()
  File "myhome//tools/miniforge3/envs/magictime/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3284, in from_pretrained
    with safe_open(resolved_archive_file, framework="pt") as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge
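
Editorial note, hedged: as an alternative to the git-clone prep scripts, huggingface_hub's snapshot_download fetches resolved weight files directly (no LFS pointer pitfalls). A minimal sketch, targeting the directory the inference config expects:

# Hedged sketch: download the base model with huggingface_hub instead
# of git clone, into the repo's expected ckpts layout.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="runwayml/stable-diffusion-v1-5",
    local_dir="ckpts/Base_Model/stable-diffusion-v1-5",
)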

Solving environment: failed after running conda env create -f environment.yml

Hi,

When I run the command conda env create -f environment.yml, I get the error below. How can I solve it? Thank you. (A possible workaround is sketched after the package list.)

(base) D:\AI\MagicTime>conda env create -f environment.yml
Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:

  • mkl_random==1.2.4=py310hdb19cb5_0
  • brotli-python==1.0.9=py310h6a678d5_7
  • lerc==3.0=h295c915_0
  • ncurses==6.4=h6a678d5_0
  • pip==23.3.1=py310h06a4308_0
  • mkl==2023.1.0=h213fc3f_46344
  • libtiff==4.5.1=h6a678d5_0
  • typing_extensions==4.9.0=py310h06a4308_1
  • freetype==2.12.1=h4a9f257_0
  • bzip2==1.0.8=h5eee18b_5
  • mkl-service==2.4.0=py310h5eee18b_1
  • zstd==1.5.5=hc292b87_0
  • libwebp-base==1.3.2=h5eee18b_0
  • lcms2==2.12=h3be6417_0
  • nettle==3.7.3=hbbd107a_1
  • python==3.10.13=h955ad1f_0
  • libpng==1.6.39=h5eee18b_0
  • lame==3.100=h7b6447c_0
  • setuptools==68.2.2=py310h06a4308_0
  • openjpeg==2.4.0=h3ad879b_0
  • pillow==10.2.0=py310h5eee18b_0
  • zlib==1.2.13=h5eee18b_0
  • gmp==6.2.1=h295c915_3
  • libcufft==10.7.2.124=h4fbf590_0
  • urllib3==2.1.0=py310h06a4308_1
  • idna==3.4=py310h06a4308_0
  • libidn2==2.3.4=h5eee18b_0
  • libunistring==0.9.10=h27cfd23_0
  • requests==2.31.0=py310h06a4308_1
  • libuuid==1.41.5=h5eee18b_0
  • wheel==0.41.2=py310h06a4308_0
  • _openmp_mutex==5.1=1_gnu
  • ca-certificates==2024.3.11=h06a4308_0
  • readline==8.2=h5eee18b_0
  • intel-openmp==2023.1.0=hdb19cb5_46306
  • gnutls==3.6.15=he1e5248_0
  • mkl_fft==1.3.8=py310h5eee18b_0
  • ld_impl_linux-64==2.38=h1181459_1
  • libdeflate==1.17=h5eee18b_1
  • numpy-base==1.26.4=py310hb5e798b_0
  • libffi==3.4.4=h6a678d5_0
  • libgomp==11.2.0=h1234567_1
  • openh264==2.1.1=h4ff587b_0
  • sqlite==3.41.2=h5eee18b_0
  • pytorch-cuda==11.7=h778d358_5
  • pytorch==1.13.1=py3.10_cuda11.7_cudnn8.5.0_0
  • pysocks==1.7.1=py310h06a4308_0
  • libcufile==1.9.0.20=0
  • libtasn1==4.19.0=h5eee18b_0
  • openssl==3.0.13=h7f8727e_0
  • ffmpeg==4.3=hf484d3e_0
  • tk==8.6.12=h1ccaba5_0
  • tbb==2021.8.0=hdb19cb5_0
  • numpy==1.26.4=py310h5f9d8c6_0
  • jpeg==9e=h5eee18b_1
  • libiconv==1.16=h7f8727e_2
  • libgcc-ng==11.2.0=h1234567_1
  • xz==5.4.6=h5eee18b_0
  • lz4-c==1.9.4=h6a678d5_0
  • libstdcxx-ng==11.2.0=h1234567_1
  • certifi==2024.2.2=py310h06a4308_0

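Editorial note, hedged: the pinned builds in environment.yml are Linux builds, so the solve fails on Windows (note the D:\AI\MagicTime prompt). One common workaround is to strip the platform-specific build strings and let conda re-solve with native builds; a sketch, with an illustrative output filename:

# Hypothetical workaround: rewrite "pkg==1.2.3=build" (or
# "pkg=1.2.3=build") entries to name==version only, so conda can pick
# platform-native builds on Windows.
import re

with open("environment.yml") as f:
    lines = f.read().splitlines()

with open("environment_nobuild.yml", "w") as f:  # illustrative name
    for line in lines:
        f.write(re.sub(r"^(\s*-\s*[A-Za-z0-9_.\-]+==?[^=\s]+)=\S+$", r"\1", line) + "\n")

Running conda env create -f environment_nobuild.yml then has a chance to solve, though the resolved versions may differ from the authors' pins.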