[2023-11-18 20:40:01,201]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/invokeai/app/services/invocation_processor/invocation_processor_default.py", line 104, in __process
    outputs = invocation.invoke_internal(
  File "/usr/local/lib/python3.10/dist-packages/invokeai/app/invocations/baseinvocation.py", line 591, in invoke_internal
    output = self.invoke(context)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/invokeai/app/invocations/latent.py", line 761, in invoke
    ) = pipeline.latents_from_embeddings(
  File "/usr/local/lib/python3.10/dist-packages/invokeai/backend/stable_diffusion/diffusers_pipeline.py", line 388, in latents_from_embeddings
    latents, attention_map_saver = self.generate_latents_from_embeddings(
  File "/usr/local/lib/python3.10/dist-packages/invokeai/backend/stable_diffusion/diffusers_pipeline.py", line 419, in generate_latents_from_embeddings
    self._adjust_memory_efficient_attention(latents)
  File "/usr/local/lib/python3.10/dist-packages/invokeai/backend/stable_diffusion/diffusers_pipeline.py", line 295, in _adjust_memory_efficient_attention
    self.enable_xformers_memory_efficient_attention()
  File "/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/pipeline_utils.py", line 1999, in enable_xformers_memory_efficient_attention
    self.set_use_memory_efficient_attention_xformers(True, attention_op)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/pipeline_utils.py", line 2025, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/pipeline_utils.py", line 2015, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 255, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 251, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 251, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 251, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 248, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 255, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 251, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 251, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 248, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py", line 268, in set_use_memory_efficient_attention_xformers
    raise e
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py", line 262, in set_use_memory_efficient_attention_xformers
    _ = xformers.ops.memory_efficient_attention(
  File "/usr/local/lib/python3.10/dist-packages/invokeai/backend/util/hotfixes.py", line 824, in new_memory_efficient_attention
    return _xformers_memory_efficient_attention(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 223, in memory_efficient_attention
    return _memory_efficient_attention(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 321, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 337, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp, False)
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/dispatch.py", line 120, in _dispatch_fw
    return _run_priority_list(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/dispatch.py", line 63, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
    query     : shape=(1, 2, 1, 40) (torch.float32)
    key       : shape=(1, 2, 1, 40) (torch.float32)
    value     : shape=(1, 2, 1, 40) (torch.float32)
    attn_bias : <class 'NoneType'>
    p         : 0.0
decoderF is not supported because:
    xFormers wasn't build with CUDA support
    attn_bias type is <class 'NoneType'>
    operator wasn't built - see python -m xformers.info for more info
flshattF is not supported because:
    xFormers wasn't build with CUDA support
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
    operator wasn't built - see python -m xformers.info for more info
tritonflashattF is not supported because:
    xFormers wasn't build with CUDA support
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
    operator wasn't built - see python -m xformers.info for more info
    triton is not available
    requires GPU with sm80 minimum compute capacity, e.g., A100/H100/L4
    Only work on pre-MLIR triton for now
cutlassF is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see python -m xformers.info for more info
smallkF is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    xFormers wasn't build with CUDA support
    operator wasn't built - see python -m xformers.info for more info
    unsupported embed per head: 40
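
The dispatch failure above comes down to three conditions reported by xFormers itself: the installed wheel was built without CUDA kernels ("operator wasn't built"), the GPU reports compute capability (7, 5) while the flash-attention backends require (8, 0) or newer, and the tensors are torch.float32 while those backends only accept fp16/bf16. A minimal diagnostic sketch along those lines, assuming torch and xformers are importable in the environment that produced this log; the q/k/v shape is taken from the error message and everything else (variable names, messages) is illustrative:

import torch

try:
    import xformers.ops as xops
except ImportError as exc:
    raise SystemExit(f"xformers not importable: {exc}")

if not torch.cuda.is_available():
    raise SystemExit("CUDA is not visible to PyTorch, so no xformers CUDA operator can run")

# The log reports capability (7, 5); the flash-attention backends need (8, 0)+.
print("compute capability:", torch.cuda.get_device_capability())

# Same q/k/v shape as in the error, but cast to fp16. This removes the dtype
# objection raised by the flash backends; per the log, cutlassF only failed
# because the kernels were not compiled into this wheel, not because of the
# GPU's capability or the dtype.
q = torch.randn(1, 2, 1, 40, device="cuda", dtype=torch.float16)
try:
    xops.memory_efficient_attention(q, q, q)
    print("memory_efficient_attention works with fp16 inputs")
except NotImplementedError as exc:
    print("still unsupported - the wheel was likely built without CUDA kernels:")
    print(exc)

If the fp16 probe still fails, python -m xformers.info (as the log suggests) will show whether the CUDA operators were compiled in. The usual remedies are reinstalling an xformers wheel that matches the installed torch/CUDA build, or disabling xformers attention in InvokeAI so it falls back to PyTorch's built-in attention; in InvokeAI 3.x that is typically the xformers_enabled setting in invokeai.yaml, but verify the exact key against your install.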