
InvokeAI-Google-Colab

Colab notebook for InvokeAI 3.x.x

Colab notebook
Open In Colab

How-to: https://www.pogs.cafe/invokeai-colab

Works with the SDXL base model and refiner on a GPU runtime. Other models can be added, as long as they are in diffusers format. The free tier offers little disk space, so I'm using Google Drive to store the base model. Using a 16-bit model works without a connected Google Drive account.
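Whether Google Drive is needed comes down to free disk space. A minimal sketch (plain Python, no Colab-specific APIs; the 10 GB headroom figure is an assumption, not a measured requirement) that checks space before deciding where to keep the base model:

```python
import shutil

def free_gb(path="/"):
    """Return free disk space at `path` in gigabytes."""
    return shutil.disk_usage(path).free / 1e9

# An SDXL base checkpoint is roughly 7 GB in 16-bit form (estimate).
needed_gb = 10  # assumed headroom for model + intermediates
if free_gb() < needed_gb:
    print("Low disk space - consider storing the model on Google Drive")
```

On a Colab runtime this can run before the download cell; on the free tier it typically reports well under the space needed for a full-precision SDXL checkpoint.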

On the free tier, the notebook runs out of RAM on Colab when converting SDXL .safetensors checkpoints to diffusers format and when using LoRAs.
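Back-of-envelope arithmetic shows why the conversion fails: the free tier provides roughly 12-13 GB of system RAM, and conversion can hold weights in 32-bit precision. The parameter count below is an assumed round figure for the full SDXL pipeline, not an exact value:

```python
# Rough RAM estimate for holding SDXL weights in memory.
params = 3.5e9               # assumed total parameter count (estimate)
fp32_gb = params * 4 / 1e9   # 4 bytes per float32 weight -> ~14 GB
fp16_gb = params * 2 / 1e9   # 2 bytes per float16 weight -> ~7 GB
print(f"fp32: ~{fp32_gb:.0f} GB, fp16: ~{fp16_gb:.0f} GB")
```

Even before any working copies, ~14 GB of fp32 weights alone exceeds the free tier's RAM, which is consistent with the conversion failing there while 16-bit models remain usable.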

A similar notebook is available that downloads the latest development version instead.
Open In Colab

invokeai-google-colab's People

Contributors

wandaweb

invokeai-google-colab's Issues

Invoke isn't starting the app; I'm not getting a URL (localtunnel)

This is what shows up instead:

/content/invokeai
/tools/node/bin/lt -> /tools/node/lib/node_modules/localtunnel/bin/lt.js

35.240.196.156
2023-10-21 04:23:49.929744: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
/usr/local/lib/python3.10/dist-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be removed in 0.17. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
warnings.warn(
[2023-10-21 04:23:55,170]::[InvokeAI]::INFO --> Patchmatch initialized
[2023-10-21 04:23:56,218]::[uvicorn.error]::INFO --> Started server process [16404]
[2023-10-21 04:23:56,219]::[uvicorn.error]::INFO --> Waiting for application startup.
[2023-10-21 04:23:56,219]::[InvokeAI]::INFO --> InvokeAI version 3.3.0post3
[2023-10-21 04:23:56,219]::[InvokeAI]::INFO --> Root directory = /content/invokeai
[2023-10-21 04:23:56,220]::[InvokeAI]::INFO --> Using database at /content/invokeai/databases/invokeai.db
[2023-10-21 04:23:56,226]::[InvokeAI]::INFO --> GPU device = cuda Tesla T4
[2023-10-21 04:23:56,230]::[InvokeAI]::INFO --> Scanning /content/invokeai/models for new models
[2023-10-21 04:23:56,566]::[InvokeAI]::INFO --> Scanned 9 files and directories, imported 0 models
[2023-10-21 04:23:56,569]::[InvokeAI]::INFO --> Model manager service initialized
[2023-10-21 04:23:56,589]::[InvokeAI]::INFO --> Pruned 0 finished queue items
[2023-10-21 04:23:56,601]::[InvokeAI]::INFO --> Cleaned database
[2023-10-21 04:23:56,602]::[uvicorn.error]::INFO --> Application startup complete.
[2023-10-21 04:23:56,602]::[uvicorn.error]::INFO --> Uvicorn running on http://127.0.0.1:9090/ (Press CTRL+C to quit)
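The log shows the app itself came up on http://127.0.0.1:9090, so the missing output is the localtunnel URL. The exact cause isn't confirmed by the log, but one robustness measure is to wait until the port actually accepts connections before launching `lt --port 9090`. A sketch using only the standard library:

```python
import socket
import time

def wait_for_port(host, port, timeout=60.0):
    """Poll until something accepts TCP connections on host:port,
    or return False once `timeout` seconds have elapsed."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(1)
    return False
```

In the notebook this would run between starting InvokeAI and starting the tunnel, e.g. `wait_for_port("127.0.0.1", 9090)` before invoking `lt`.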

Server error: NotImplementedError

[2023-11-18 20:40:01,201]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/invokeai/app/services/invocation_processor/invocation_processor_default.py", line 104, in __process
    outputs = invocation.invoke_internal(
  File "/usr/local/lib/python3.10/dist-packages/invokeai/app/invocations/baseinvocation.py", line 591, in invoke_internal
    output = self.invoke(context)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/invokeai/app/invocations/latent.py", line 761, in invoke
    ) = pipeline.latents_from_embeddings(
  File "/usr/local/lib/python3.10/dist-packages/invokeai/backend/stable_diffusion/diffusers_pipeline.py", line 388, in latents_from_embeddings
    latents, attention_map_saver = self.generate_latents_from_embeddings(
  File "/usr/local/lib/python3.10/dist-packages/invokeai/backend/stable_diffusion/diffusers_pipeline.py", line 419, in generate_latents_from_embeddings
    self._adjust_memory_efficient_attention(latents)
  File "/usr/local/lib/python3.10/dist-packages/invokeai/backend/stable_diffusion/diffusers_pipeline.py", line 295, in _adjust_memory_efficient_attention
    self.enable_xformers_memory_efficient_attention()
  File "/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/pipeline_utils.py", line 1999, in enable_xformers_memory_efficient_attention
    self.set_use_memory_efficient_attention_xformers(True, attention_op)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/pipeline_utils.py", line 2025, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/pipeline_utils.py", line 2015, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 255, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 251, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 251, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 251, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 248, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 255, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 251, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 251, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 248, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py", line 268, in set_use_memory_efficient_attention_xformers
    raise e
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py", line 262, in set_use_memory_efficient_attention_xformers
    _ = xformers.ops.memory_efficient_attention(
  File "/usr/local/lib/python3.10/dist-packages/invokeai/backend/util/hotfixes.py", line 824, in new_memory_efficient_attention
    return _xformers_memory_efficient_attention(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 223, in memory_efficient_attention
    return _memory_efficient_attention(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 321, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 337, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp, False)
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/dispatch.py", line 120, in _dispatch_fw
    return _run_priority_list(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/dispatch.py", line 63, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
    query     : shape=(1, 2, 1, 40) (torch.float32)
    key       : shape=(1, 2, 1, 40) (torch.float32)
    value     : shape=(1, 2, 1, 40) (torch.float32)
    attn_bias : <class 'NoneType'>
    p         : 0.0
decoderF is not supported because:
    xFormers wasn't build with CUDA support
    attn_bias type is <class 'NoneType'>
    operator wasn't built - see python -m xformers.info for more info
[email protected] is not supported because:
    xFormers wasn't build with CUDA support
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
    operator wasn't built - see python -m xformers.info for more info
tritonflashattF is not supported because:
    xFormers wasn't build with CUDA support
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
    operator wasn't built - see python -m xformers.info for more info
    triton is not available
    requires GPU with sm80 minimum compute capacity, e.g., A100/H100/L4
    Only work on pre-MLIR triton for now
cutlassF is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see python -m xformers.info for more info
smallkF is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    xFormers wasn't build with CUDA support
    operator wasn't built - see python -m xformers.info for more info
    unsupported embed per head: 40
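The traceback boils down to xFormers attention being enabled on a Tesla T4 (compute capability 7.5) with float32 inputs, while every available operator either lacks CUDA kernels in the installed wheel or requires capability 8.0 and/or fp16/bf16. Two things worth trying in invokeai.yaml; the field names below are taken from the InvokeAI 3.x sample config, so verify them against your installed version:

```yaml
# invokeai.yaml (InvokeAI 3.x) -- field names assumed from the 3.x
# sample config; check your install before relying on them
InvokeAI:
  Memory/Performance:
    precision: float16       # the operators above reject torch.float32
    xformers_enabled: false  # fall back to standard PyTorch attention
```

Alternatively, uninstalling the xformers package (`pip uninstall xformers`) should make the backend fall back to standard attention, at some cost in VRAM.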
