
ComfyScript


A Python front end and library for ComfyUI.

It has the following use cases:

  • Serving as a human-readable format for ComfyUI's workflows.

    This makes it easy to compare and reuse different parts of one's workflows.

    It is also possible to train LLMs to generate workflows, since many LLMs can handle Python code relatively well. This approach can be more powerful than just asking LLMs for some hardcoded parameters.

    Scripts can be automatically translated from ComfyUI's workflows. See transpiler for details.

  • Directly running the script to generate images.

    The main advantage of doing this over using the web UI is being able to mix Python code with ComfyUI's nodes, such as doing loops, calling library functions, and easily encapsulating custom nodes. This also makes adding interaction easier, since the UI and the logic can both be written in Python. And some people may simply feel more comfortable with plain Python code than with a graph-based GUI.1

    See runtime for details. Scripts can be executed locally or remotely with a ComfyUI server.

  • Using ComfyUI as a function library.

    With ComfyScript, ComfyUI's nodes can be used as functions to do ML research, reuse nodes in other projects, debug custom nodes, and optimize caching to run workflows faster.

    See runtime's real mode for details; a minimal real-mode sketch follows this list.

  • Generating ComfyUI's workflows with scripts.

    Scripts can also be used to generate ComfyUI's workflows and then used in the web UI or elsewhere. This way, one can use loops and generate huge workflows where it would be time-consuming or impractical to create them manually. See workflow generation for details. It is also possible to load workflows from images generated by ComfyScript.

  • Retrieving any wanted information by running the script with some stubs.

    See workflow information retrieval for details.

  • Converting workflows from ComfyUI's web UI format to API format without the web UI.
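
As a quick illustration of the function-library use case above, here is a minimal real-mode sketch. It is based on the real-mode imports that appear in the issues further down this page; the checkpoint name is an assumption, and the Workflow block simply mirrors how real mode is used in those issues:

from comfy_script.runtime.real import *
load()
from comfy_script.runtime.real.nodes import *

with Workflow():
    # In real mode, nodes execute immediately and return real objects
    # (e.g. tensors), so they can be mixed freely with ordinary Python code.
    model, clip, vae = CheckpointLoaderSimple('v1-5-pruned-emaonly.ckpt')
    conditioning = CLIPTextEncode('beautiful scenery nature glass bottle landscape', clip)
    conditioning2 = CLIPTextEncode('text, watermark', clip)
    latent = EmptyLatentImage(512, 512, 1)
    latent = KSampler(model, 0, 20, 8, 'euler', 'normal', conditioning, conditioning2, latent, 1)
    SaveImage(VAEDecode(latent, vae), 'ComfyUI')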

Installation

With ComfyUI

Install ComfyUI first, then run the following commands:

cd ComfyUI/custom_nodes
git clone https://github.com/Chaoses-Ib/ComfyScript.git
cd ComfyScript
python -m pip install -e ".[default]"

(If you see ERROR: File "setup.py" or "setup.cfg" not found, run python -m pip install -U pip first.)

Update:

cd ComfyUI/custom_nodes/ComfyScript
git pull
python -m pip install -e ".[default]"

With ComfyUI package

Install ComfyUI package first:

  • If PyTorch is not installed:

    python -m pip install git+https://github.com/hiddenswitch/ComfyUI.git
  • If PyTorch is already installed (e.g. Google Colab):

    python -m pip install wheel
    python -m pip install --no-build-isolation git+https://github.com/hiddenswitch/ComfyUI.git

Install/update ComfyScript:

python -m pip install -U "comfy-script[default]"

[default] is necessary to install common dependencies. See pyproject.toml for other options. If no option is specified, comfy-script will be installed without any dependencies.

If there are problems with the latest ComfyUI package, one can use the last tested version:

python -m pip install --no-build-isolation git+https://github.com/hiddenswitch/ComfyUI.git@e49c662c7f026f05a5e082d48b629e2b977c0441

Other ways

ComfyScript can also be used without installing ComfyUI. See only ComfyScript package for details. And see uninstallation for how to uninstall.

Transpiler

The transpiler can translate ComfyUI's workflows to ComfyScript.

When ComfyScript is installed as custom nodes, SaveImage and similar nodes will be hooked to automatically save the script as the image's metadata. The script will also be printed to the terminal.

For example, here is a workflow in ComfyUI:

ComfyScript translated from it:

model, clip, vae = CheckpointLoaderSimple('v1-5-pruned-emaonly.ckpt')
conditioning = CLIPTextEncode('beautiful scenery nature glass bottle landscape, , purple galaxy bottle,', clip)
conditioning2 = CLIPTextEncode('text, watermark', clip)
latent = EmptyLatentImage(512, 512, 1)
latent = KSampler(model, 156680208700286, 20, 8, 'euler', 'normal', conditioning, conditioning2, latent, 1)
image = VAEDecode(latent, vae)
SaveImage(image, 'ComfyUI')

If there are two or more SaveImage nodes in one workflow, only the inputs needed by each node will be translated to its script. For example, here is a 2-pass txt2img (hires fix) workflow:

The ComfyScript saved for each of the two images is, respectively:

  1. model, clip, vae = CheckpointLoaderSimple('v2-1_768-ema-pruned.ckpt')
    conditioning = CLIPTextEncode('masterpiece HDR victorian portrait painting of woman, blonde hair, mountain nature, blue sky', clip)
    conditioning2 = CLIPTextEncode('bad hands, text, watermark', clip)
    latent = EmptyLatentImage(768, 768, 1)
    latent = KSampler(model, 89848141647836, 12, 8, 'dpmpp_sde', 'normal', conditioning, conditioning2, latent, 1)
    image = VAEDecode(latent, vae)
    SaveImage(image, 'ComfyUI')
  2. model, clip, vae = CheckpointLoaderSimple('v2-1_768-ema-pruned.ckpt')
    conditioning = CLIPTextEncode('masterpiece HDR victorian portrait painting of woman, blonde hair, mountain nature, blue sky', clip)
    conditioning2 = CLIPTextEncode('bad hands, text, watermark', clip)
    latent = EmptyLatentImage(768, 768, 1)
    latent = KSampler(model, 89848141647836, 12, 8, 'dpmpp_sde', 'normal', conditioning, conditioning2, latent, 1)
    latent2 = LatentUpscale(latent, 'nearest-exact', 1152, 1152, 'disabled')
    latent2 = KSampler(model, 469771404043268, 14, 8, 'dpmpp_2m', 'simple', conditioning, conditioning2, latent2, 0.5)
    image = VAEDecode(latent2, vae)
    SaveImage(image, 'ComfyUI')

Comparing scripts:

To control these features, see settings.example.toml.

You can also use the transpiler via the CLI.
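
For programmatic use, the transpiler class that the node hook calls (it shows up in the tracebacks quoted later on this page) can also be invoked directly. A minimal sketch, assuming the constructor accepts the workflow JSON text and that to_script() can be called without explicitly listing end nodes:

from comfy_script.transpile import WorkflowToScriptTranspiler

# Read a workflow saved from ComfyUI's web UI (the path is just an example)
with open('workflow.json') as f:
    workflow_json = f.read()

# Translate the workflow into ComfyScript source code
script = WorkflowToScriptTranspiler(workflow_json).to_script()
print(script)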

Runtime

With the runtime, one can run ComfyScript like this:

from comfy_script.runtime import *
load()
from comfy_script.runtime.nodes import *

with Workflow():
    model, clip, vae = CheckpointLoaderSimple('v1-5-pruned-emaonly.ckpt')
    conditioning = CLIPTextEncode('beautiful scenery nature glass bottle landscape, , purple galaxy bottle,', clip)
    conditioning2 = CLIPTextEncode('text, watermark', clip)
    latent = EmptyLatentImage(512, 512, 1)
    latent = KSampler(model, 156680208700286, 20, 8, 'euler', 'normal', conditioning, conditioning2, latent, 1)
    image = VAEDecode(latent, vae)
    SaveImage(image, 'ComfyUI')

A Jupyter Notebook example is available at examples/runtime.ipynb. (Files under examples directory will be ignored by Git and you can put your personal notebooks there.)

  • Type stubs will be generated at comfy_script/runtime/nodes.pyi after loading. Mainstream code editors (e.g. VS Code) can use them to help with coding:

    Python enumerations are generated for all arguments that provide a value list. So instead of copying and pasting strings like 'v1-5-pruned-emaonly.ckpt', you can use:

    Checkpoints.v1_5_pruned_emaonly
    # or
    CheckpointLoaderSimple.ckpt_name.v1_5_pruned_emaonly

    Embeddings can also be referenced as Embeddings.my_embedding, which is equivalent to 'embedding:my-embedding'.

    See enumerations for details.

  • The runtime is asynchronous by default. You can queue multiple tasks without waiting for the first one to finish. A daemon thread will watch and report the remaining tasks in the queue and the current progress, for example:

    Queue remaining: 1
    Queue remaining: 2
    100%|██████████████████████████████████████████████████| 20/20
    Queue remaining: 1
    100%|██████████████████████████████████████████████████| 20/20
    Queue remaining: 0
    

    Some control functions are also available:

    # Interrupt the current task
    queue.cancel_current()
    # Clear the queue
    queue.cancel_remaining()
    # Interrupt the current task and clear the queue
    queue.cancel_all()
    # Call the callback when the queue is empty
    queue.when_empty(callback)
    
    # With Workflow:
    Workflow(cancel_remaining=True)
    Workflow(cancel_all=True)
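
    For example, the following sketch queues two workflows back-to-back; the second is queued without waiting for the first to finish. It reuses the checkpoint and arguments from the example above; the seed values are arbitrary:

    for seed in (1, 2):
        with Workflow():
            model, clip, vae = CheckpointLoaderSimple('v1-5-pruned-emaonly.ckpt')
            conditioning = CLIPTextEncode('beautiful scenery nature glass bottle landscape', clip)
            conditioning2 = CLIPTextEncode('text, watermark', clip)
            latent = EmptyLatentImage(512, 512, 1)
            latent = KSampler(model, seed, 20, 8, 'euler', 'normal', conditioning, conditioning2, latent, 1)
            SaveImage(VAEDecode(latent, vae), 'ComfyUI')
    # Both tasks are now in the queue; the daemon thread reports their progress.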

If you previously used ComfyUI's web UI, see differences from ComfyUI's web UI; see runtime for the details of the runtime.

Examples

Plotting

with Workflow():
    seed = 0
    pos = 'sky, 1girl, smile'
    neg = 'embedding:easynegative'
    model, clip, vae = CheckpointLoaderSimple(Checkpoints.AOM3A1B_orangemixs)
    model2, clip2, vae2 = CheckpointLoaderSimple(Checkpoints.CounterfeitV25_25)
    model2 = TomePatchModel(model2, 0.5)
    for color in 'red', 'green', 'blue':
        latent = EmptyLatentImage(440, 640)
        latent = KSampler(model, seed, steps=15, cfg=6, sampler_name='uni_pc',
                          positive=CLIPTextEncode(f'{color}, {pos}', clip), negative=CLIPTextEncode(neg, clip),
                          latent_image=latent)
        SaveImage(VAEDecode(latent, vae2), f'{seed} {color}')
        latent = LatentUpscaleBy(latent, scale_by=2)
        latent = KSampler(model2, seed, steps=15, cfg=6, sampler_name='uni_pc',
                          positive=CLIPTextEncode(f'{color}, {pos}', clip2), negative=CLIPTextEncode(neg, clip2),
                          latent_image=latent, denoise=0.6)
        SaveImage(VAEDecode(latent, vae2), f'{seed} {color} hires')

Auto queue

Automatically queue new workflows when the queue becomes empty.

For example, one can use comfyui-photoshop (currently a bit buggy) to automatically do img2img with the image in Photoshop when it changes:

def f(wf):
    seed = 0
    pos = '1girl, angry, middle finger'
    neg = 'embedding:easynegative'
    model, clip, vae = CheckpointLoaderSimple(Checkpoints.CounterfeitV25_25)
    image, width, height = PhotoshopToComfyUI(wait_for_photoshop_changes=True)
    latent = VAEEncode(image, vae)
    latent = LatentUpscaleBy(latent, scale_by=1.5)
    latent = KSampler(model, seed, steps=15, cfg=6, sampler_name='uni_pc',
                        positive=CLIPTextEncode(pos, clip), negative=CLIPTextEncode(neg, clip),
                        latent_image=latent, denoise=0.8)
    PreviewImage(VAEDecode(latent, vae))
queue.when_empty(f)

Screenshot:

Select and process

For example, to generate 3 images at once, and then let the user decide which ones they want to hires fix:

import ipywidgets as widgets

queue.watch_display(False, False)

latents = []
image_batches = []
with Workflow():
    seed = 0
    pos = 'sky, 1girl, smile'
    neg = 'embedding:easynegative'
    model, clip, vae = CheckpointLoaderSimple(Checkpoints.AOM3A1B_orangemixs)
    model2, clip2, vae2 = CheckpointLoaderSimple(Checkpoints.CounterfeitV25_25)
    for color in 'red', 'green', 'blue':
        latent = EmptyLatentImage(440, 640)
        latent = KSampler(model, seed, steps=15, cfg=6, sampler_name='uni_pc',
                          positive=CLIPTextEncode(f'{color}, {pos}', clip), negative=CLIPTextEncode(neg, clip),
                          latent_image=latent)
        latents.append(latent)
        image_batches.append(SaveImage(VAEDecode(latent, vae), f'{seed} {color}'))

grid = widgets.GridspecLayout(1, len(image_batches))
for i, image_batch in enumerate(image_batches):
    image_batch = image_batch.wait()
    image = widgets.Image(value=image_batch[0]._repr_png_())

    button = widgets.Button(description=f'Hires fix {i}')
    def hiresfix(button, i=i):
        print(f'Image {i} is chosen')
        with Workflow():
            latent = LatentUpscaleBy(latents[i], scale_by=2)
            latent = KSampler(model2, seed, steps=15, cfg=6, sampler_name='uni_pc',
                            positive=CLIPTextEncode(pos, clip2), negative=CLIPTextEncode(neg, clip2),
                            latent_image=latent, denoise=0.6)
            image_batch = SaveImage(VAEDecode(latent, vae2), f'{seed} hires')
        display(image_batch.wait())
    button.on_click(hiresfix)

    grid[0, i] = widgets.VBox(children=(image, button))
display(grid)

This example uses ipywidgets for the GUI, but other GUI frameworks can be used as well.

Screenshot:

Footnotes

  1. I hate nodes. (No offense comfyui) : StableDiffusion


comfyscript's Issues

Naming of nodes

Is it possible to name the nodes in the script so that the names show when you load it in the web UI?


Real mode and ImageBatch node

Hi,
I'm getting an error running ImageBatch in real mode. It's fine in virtual mode.

from comfy_script.runtime.real import *
load()
from comfy_script.runtime.real.nodes import *


im, mask = LoadImage('test.png')

with Workflow():
    ImageBatch(im, im)
Traceback (most recent call last):
  File "/home/ro/test.py", line 9, in <module>
    ImageBatch(im, im)
  File "/home/ro/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/real/nodes.py", line 132, in new
    outputs = getattr(obj, obj.FUNCTION)(*args, **kwds)
  File "/home/ro/ComfyUI/nodes.py", line 1658, in batch
    s = torch.cat((image1, image2), dim=0)
TypeError: Multiple dispatch failed for 'torch.cat'; all __torch_function__ handlers returned NotImplemented:

  - tensor subclass <class 'comfy_script.runtime.real.nodes.RealNodeOutputWrapper'>

For more information, try re-running with TORCH_LOGS=not_implemented

Transpiler and nodes with optional inputs

The empty optional inputs of a node are not set to None when transpiling a workflow, which causes an error. I can set them to None manually, of course, but I thought I'd mention it.

Transpiled DetailerForEachPipe from Impact pack:
image5, _, _, _ = DetailerForEachPipe(image3, segs, 1024, True, 1024, 395935899176991, 10, 3, 'lcm', 'ddim_uniform', 0.1, 50, True, True, basic_pipe, '', 0.0, 1, True, 50)
Should be:
image5, _, _, _ = DetailerForEachPipe(image3, segs, 1024, True, 1024, 395935899176991, 10, 3, 'lcm', 'ddim_uniform', 0.1, 50, True, True, basic_pipe, '', 0.0, 1, None, None, True, 50)

Am I doing this right?

I might be missing some fundamental knowledge here; I'm new to Python. I copied a script called test.py:

from script.runtime import *
load()
from script.runtime.nodes import *

with Workflow():
    model, clip, vae = CheckpointLoaderSimple('v1-5-pruned-emaonly.ckpt')
    conditioning = CLIPTextEncode('beautiful scenery nature glass bottle landscape, , purple galaxy bottle,', clip)
    conditioning2 = CLIPTextEncode('text, watermark', clip)
    latent = EmptyLatentImage(512, 512, 1)
    latent = KSampler(model, 156680208700286, 20, 8, 'euler', 'normal', conditioning, conditioning2, latent, 1)
    image = VAEDecode(latent, vae)
    SaveImage(image, 'ComfyUI')

I put it in the ComfyScript folder and ran it with "python test.py"; the result:

Nodes: 1018
Traceback (most recent call last):
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\test.py", line 2, in <module>
    load()
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\script\runtime\__init__.py", line 18, in load
    asyncio.run(_load(api_endpoint, vars, watch, save_script_source))
  File "C:\Users\lingo\AppData\Local\Programs\Python\Python311\Lib\site-packages\nest_asyncio.py", line 31, in run
    return loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\lingo\AppData\Local\Programs\Python\Python311\Lib\site-packages\nest_asyncio.py", line 99, in run_until_complete
    return f.result()
           ^^^^^^^^^^
  File "C:\Users\lingo\AppData\Local\Programs\Python\Python311\Lib\asyncio\futures.py", line 203, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "C:\Users\lingo\AppData\Local\Programs\Python\Python311\Lib\asyncio\tasks.py", line 267, in __step
    result = coro.send(None)
             ^^^^^^^^^^^^^^^
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\script\runtime\__init__.py", line 29, in _load
    nodes.load(nodes_info, vars)
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\script\runtime\nodes.py", line 10, in load
    fact.add_node(node_info)
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\script\runtime\factory.py", line 124, in add_node
    inputs.append(f'{name}: {type_and_hint(type_info, name, optional, config.get("default"))[1]}')
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\script\runtime\factory.py", line 69, in type_and_hint
    enum_c, t = astutil.to_str_enum(name, { _remove_extension(s): s for s in type_info }, '    ')
                                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\script\runtime\factory.py", line 69, in <dictcomp>
    enum_c, t = astutil.to_str_enum(name, { _remove_extension(s): s for s in type_info }, '    ')
                                            ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\script\runtime\factory.py", line 14, in _remove_extension
    path = path.removesuffix(ext)
           ^^^^^^^^^^^^^^^^^
AttributeError: 'dict' object has no attribute 'removesuffix'

Using Python 3.11.

Also, am I supposed to run some kind of IDE and feed it lines at runtime, like the queue commands?

Real mode and UltimateSDUpscale node

Hi,
I'm getting an error running Ultimate SD Upscale in real mode. Virtual mode works fine.

from comfy_script.runtime.real import *
load()
from comfy_script.runtime.real.nodes import *


im, mask = LoadImage('test.png')

with Workflow():

    base_model, base_clip, base_vae = CheckpointLoaderSimple('deliberate_v3.safetensors') 
    upscale_model = UpscaleModelLoader('4x-UltraSharp.pth')

    conditioning = CLIPTextEncode('', base_clip)
    
    upscaled_image = UltimateSDUpscale(
        image=im,
        model=base_model,
        positive=conditioning,
        negative=conditioning, 
        vae=base_vae,
        upscale_by=2,
        seed=0,
        steps=18,
        cfg=1,
        sampler_name='ddpm',
        scheduler='karras',
        denoise=0.2,
        upscale_model=upscale_model, 
        mode_type='Linear',
        tile_width=1024,
        tile_height=1024,
        mask_blur=16,
        tile_padding=32,
        seam_fix_mode='Half Tile',
        seam_fix_denoise=0.5,
        seam_fix_width=64,
        seam_fix_mask_blur=16,
        seam_fix_padding=32, 
        force_uniform_tiles=False, 
        tiled_decode=False) 
Traceback (most recent call last):
  File "/home/ro/test.py", line 17, in <module>
    upscaled_image = UltimateSDUpscale(
  File "/home/ro/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/real/nodes.py", line 91, in new
    outputs = getattr(obj, obj.FUNCTION)(*args, **kwds)
  File "/home/ro/ComfyUI/custom_nodes/ComfyUI_UltimateSDUpscale/nodes.py", line 116, in upscale
    sdprocessing = StableDiffusionProcessing(
  File "/home/ro/ComfyUI/custom_nodes/ComfyUI_UltimateSDUpscale/modules/processing.py", line 40, in __init__
    self.vae_decoder = VAEDecode()
  File "/home/ro/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/real/nodes.py", line 91, in new
    outputs = getattr(obj, obj.FUNCTION)(*args, **kwds)
TypeError: VAEDecode.decode() missing 2 required positional arguments: 'vae' and 'samples'

Enumeration enhancements

Great work with this project so far. I just got started and it's working well; I can't wait to create some really great stuff with this. I had a quick suggestion. Currently, if a checkpoint, VAE, LoRA, etc. is located in a subfolder, it appears as folder_path_{checkpoint/vae/lora/etc}_name. Since these enumerations are created dynamically, would it be possible to represent the folder structure as nested enumerations?

Why is this convenient? For example, I, and I'm sure many others, organize checkpoints into folders. Namely, I have folders for inpainting models and for SDXL and SD1.5 models. I would love to be able to access models with Checkpoints.sd15.dreamshaper8 rather than Checkpoints.sd15_dreamshaper8. The former looks much nicer and, even more importantly, aids in looping, say if I want to loop over all my SD1.5 checkpoints. What do you think?

LoadImage to take as input a Path

Hey, a very quick issue that you can fix whenever you have time: when using a Path in a LoadImage node, we hit a JSON encode error saying the path is not serializable. Right now I fix it by calling str(filepath) before giving it as input to the node, but maybe it would be worth implementing a JSONEncoder class that simply does this in the background? I think this happens when using a workflow that tries to serialize things at some point:

Exception traceback:
File ~/miniconda3/envs/comfyui/lib/python3.10/asyncio/tasks.py:232, in Task.__step(***failed resolving arguments***)
    228 try:
    229     if exc is None:
    230         # We use the `send` method directly, because coroutines
    231         # don't have `__iter__` and `__next__` methods.
--> 232         result = coro.send(None)
    233     else:
    234         result = coro.throw(exc)

File /mnt/c/Users/clementr/Projects/stablediffusion/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/__init__.py:501, in Workflow._exit(self, source)
    499 nodes.Node.clear_output_hook()
    500 if self._queue_when_exit:
--> 501     if await self._queue(source):
    502         # TODO: Fix multi-thread print
    503         # print(task)
    504         if self._wait_when_exit:
    505             await self.task

File /mnt/c/Users/clementr/Projects/stablediffusion/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/__init__.py:480, in Workflow._queue(self, source)
    477 elif self._cancel_remaining_when_queue:
    478     await queue._cancel_remaining()
--> 480 self.task = await queue._put(self, source)
    481 for output in self._outputs:
    482     output.task = self.task

File /mnt/c/Users/clementr/Projects/stablediffusion/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/__init__.py:216, in TaskQueue._put(self, workflow, source)
    210 if _save_script_source:
    211     extra_data = {
    212         'extra_pnginfo': {
    213             'ComfyScriptSource': source
    214         }
    215     }
--> 216 async with session.post(f'{client.endpoint}prompt', json={
    217     'prompt': prompt,
    218     'extra_data': extra_data,
    219     'client_id': _client_id,
    220 }) as response:
    221     if response.status == 200:
    222         response = await response.json()

File ~/miniconda3/envs/comfyui/lib/python3.10/site-packages/aiohttp/client.py:1187, in _BaseRequestContextManager.__aenter__(self)
   1186 async def __aenter__(self) -> _RetType:
-> 1187     self._resp = await self._coro
   1188     return self._resp

File ~/miniconda3/envs/comfyui/lib/python3.10/site-packages/aiohttp/client.py:430, in ClientSession._request(self, method, str_or_url, params, data, json, cookies, headers, skip_auto_headers, auth, allow_redirects, max_redirects, compress, chunked, expect100, raise_for_status, read_until_eof, proxy, proxy_auth, timeout, verify_ssl, fingerprint, ssl_context, ssl, server_hostname, proxy_headers, trace_request_ctx, read_bufsize, auto_decompress, max_line_size, max_field_size)
    426     raise ValueError(
    427         \"data and json parameters can not be used at the same time\"
    428     )
    429 elif json is not None:
--> 430     data = payload.JsonPayload(json, dumps=self._json_serialize)
    432 if not isinstance(chunked, bool) and chunked is not None:
    433     warnings.warn(\"Chunk size is deprecated #1615\", DeprecationWarning)

File ~/miniconda3/envs/comfyui/lib/python3.10/site-packages/aiohttp/payload.py:396, in JsonPayload.__init__(self, value, encoding, content_type, dumps, *args, **kwargs)
    385 def __init__(
    386     self,
    387     value: Any,
   (...)
    392     **kwargs: Any,
    393 ) -> None:
    395     super().__init__(
--> 396         dumps(value).encode(encoding),
    397         content_type=content_type,
    398         encoding=encoding,
    399         *args,
    400         **kwargs,
    401     )

[...]

--> 179     raise TypeError(f'Object of type {o.__class__.__name__} '
    180                     f'is not JSON serializable')

TypeError: Object of type PosixPath is not JSON serializable
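
A minimal sketch of the kind of encoder suggested above (the class name is hypothetical, and wiring it into the aiohttp session's json_serialize argument is not shown):

import json
from pathlib import Path

class PathAwareJSONEncoder(json.JSONEncoder):
    # Fall back to str() for Path objects so they serialize like plain file names
    def default(self, o):
        if isinstance(o, Path):
            return str(o)
        return super().default(o)

# json.dumps then accepts Path values directly:
json.dumps({'image': Path('test.png')}, cls=PathAwareJSONEncoder)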

thanks!

WAS Node Suite Image Save is not transpiled

workflow (79).json

It is not transpiled. Also, when I added the WAS Node Suite Image Save node to my script manually and ran the script, it embedded the whole script into the image rather than the workflow.

ImageSave(
        image,             # images: Image,
        path,                # output_path: str = '[time(%Y-%m-%d)]',
        'ComfyUI',       #filename_prefix: str = 'ComfyUI',
        '_',                    #filename_delimiter: str = '_',
        4,                      #filename_number_padding: int = 4,
        'false',               #filename_number_start: ImageSave.filename_number_start = 'false',
        'webp',             #extension: ImageSave.extension = 'png',
        80,                    #quality: int = 100,
        'true',                 #lossless_webp: ImageSave.lossless_webp = 'false',
        'false',               #overwrite_mode: ImageSave.overwrite_mode = 'false',
        'false',               #show_history: ImageSave.show_history = 'false',
        'true',               #show_history_by_prefix: ImageSave.show_history_by_prefix = 'true',
        'true',               #embed_workflow: ImageSave.embed_workflow = 'true',
        'true',               #show_previews: ImageSave.show_previews = 'true'
        )

`AttributeError: module 'comfy.cmd.main' has no attribute 'server'`

I tried running it with the ComfyUI Python package; however, I get this error each time I try to run it.

Log:

ComfyScript: Importing ComfyUI from comfyui package
Total VRAM 12288 MB, total RAM 16297 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
VAE dtype: torch.bfloat16
Using pytorch cross attention
D:\comfyui\venv\lib\site-packages\comfy_script\runtime\__init__.py:334: RuntimeWarning: coroutine 'main' was never awaited
  main.main()
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Traceback (most recent call last):
  File "D:\comfyui\test.py", line 2, in <module>
    load("comfyui")
  File "D:\comfyui\venv\lib\site-packages\comfy_script\runtime\__init__.py", line 31, in load
    asyncio.run(_load(comfyui, args, vars, watch, save_script_source))
  File "D:\comfyui\venv\lib\site-packages\nest_asyncio.py", line 30, in run
    return loop.run_until_complete(task)
  File "D:\comfyui\venv\lib\site-packages\nest_asyncio.py", line 98, in run_until_complete
    return f.result()
  File "C:\Python\lib\asyncio\futures.py", line 201, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "C:\Python\lib\asyncio\tasks.py", line 232, in __step
    result = coro.send(None)
  File "D:\comfyui\venv\lib\site-packages\comfy_script\runtime\__init__.py", line 52, in _load
    start_comfyui(comfyui, args)
  File "D:\comfyui\venv\lib\site-packages\comfy_script\runtime\__init__.py", line 340, in start_comfyui
    threading.Thread(target=main.server.loop.run_until_complete, args=(main.server.publish_loop(),), daemon=True).start()
AttributeError: module 'comfy.cmd.main' has no attribute 'server'

Code:

from comfy_script.runtime import *
load("comfyui")
from comfy_script.runtime.nodes import *

with Workflow():
    model, clip, vae = CheckpointLoaderSimple('v1-5-pruned-emaonly.ckpt')
    conditioning = CLIPTextEncode('beautiful scenery nature glass bottle landscape, , purple galaxy bottle,', clip)
    conditioning2 = CLIPTextEncode('text, watermark', clip)
    latent = EmptyLatentImage(512, 512, 1)
    latent = KSampler(model, 156680208700286, 20, 8, 'euler', 'normal', conditioning, conditioning2, latent, 1)
    image = VAEDecode(latent, vae)
    SaveImage(image, 'ComfyUI')

key error

  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\__init__.py", line 118, in chunks
    comfy_script = transpile.WorkflowToScriptTranspiler(workflow).to_script(end_nodes)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\script\transpile\__init__.py", line 296, in to_script
    for node in self._topological_generations_ordered_dfs(end_nodes):
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\script\transpile\__init__.py", line 287, in _topological_generations_ordered_dfs
    yield from visit(v)
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\script\transpile\__init__.py", line 273, in visit
    v = G.nodes[node]['v']
        ~~~~~~~^^^^^^
  File "C:\Users\lingo\AppData\Local\Programs\Python\Python311\Lib\site-packages\networkx\classes\reportviews.py", line 194, in __getitem__
    return self._nodes[n]
           ~~~~~~~~~~~^^^
KeyError: '9'

Using default "glass bottle landscape" workflow, tried with global 3.11 python and embedded 3.12 python versions of ComfyUI.

Edit: the transpiler works when using the CLI.

Pass configuration options when loading ComfyUI package?

I'm importing, loading, and using the 'real' runtime, using the hiddenswitch package as my ComfyUI. Inside a Docker image, if that matters.

Relevant parts of the Dockerfile:

RUN git clone https://github.com/hiddenswitch/ComfyUI.git && cd ComfyUI && git checkout e49c662c7f026f05a5e082d48b629e2b977c0441 && pip install --no-build-isolation -e .
RUN pip install -U "comfy-script[default]"

And the importing:


from comfy_script.runtime.real import *
load('comfyui')
from comfy_script.runtime.real.nodes import *

So, now suppose I want to pass some additional configuration options to the ComfyUI package when it's initialized, for instance an extra model paths YAML file. Is there a way to do this? I looked into the source a little and found that the load function takes a RealModeConfig object, but it's a very abstract object and I'd just be blindly stumbling in the dark trying to figure out how to use it, if it even is the right solution.

Thanks!

Real mode and IPAdapterApplyFaceID node

Hi,

I'm trying to run the IPAdapterApplyFaceID node from the IPAdapter_plus package in real mode and get the error below. The IPAdapterApply node from the same package works fine. Both nodes also worked in virtual mode.
Overall, this is a very nice tool; any help or hint would be appreciated.

from comfy_script.runtime.real import *
load()
from comfy_script.runtime.real.nodes import *


im, mask = LoadImage('test.png')

with Workflow():
    gen_model, gen_clip, gen_vae = CheckpointLoaderSimple('deliberate_v3.safetensors') 
    gen_ipadapter = IPAdapterModelLoader('ip-adapter-faceid-portrait_sd15.bin') 
    clip_vision = CLIPVisionLoader('model.safetensors') 
    insightface = InsightFaceLoader(provider='CUDA')

    adapter_applied_model = IPAdapterApplyFaceID(
        ipadapter=gen_ipadapter, 
        clip_vision=clip_vision, 
        insightface=insightface, 
        image=im, 
        model=gen_model, 
        weight=1, 
        noise=0, 
        weight_type='original', 
        start_at=0, 
        end_at=1, 
        faceid_v2=False, 
        weight_v2=1, 
        unfold_batch=False,
        ) 
Traceback (most recent call last):
  File "/home/ro/test.py", line 16, in <module>
    adapter_applied_model = IPAdapterApplyFaceID(
  File "/home/ro/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/real/nodes.py", line 65, in new
    obj = orginal_new(cls)
  File "/home/ro/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/real/nodes.py", line 97, in new
    outputs = getattr(obj, obj.FUNCTION)(*args, **kwds)
TypeError: IPAdapterApply.apply_ipadapter() missing 2 required positional arguments: 'ipadapter' and 'model'

Transpile error with Latent Blend?

Traceback (most recent call last):
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\src\comfy_script\nodes\__init__.py", line 94, in chunks
    comfy_script = transpile.WorkflowToScriptTranspiler(workflow).to_script(end_nodes)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\src\comfy_script\transpile\__init__.py", line 338, in to_script
    for node in self._topological_generations_ordered_dfs(end_nodes):
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\src\comfy_script\transpile\__init__.py", line 313, in _topological_generations_ordered_dfs
    yield from visit(v)
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\src\comfy_script\transpile\__init__.py", line 308, in visit
    yield from visit(node_u)
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\src\comfy_script\transpile\__init__.py", line 308, in visit
    yield from visit(node_u)
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\src\comfy_script\transpile\__init__.py", line 308, in visit
    yield from visit(node_u)
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\src\comfy_script\transpile\__init__.py", line 303, in visit
    inputs = passes.multiplexer_node_input_filter(G.nodes[node], self._widget_values_to_dict(v.type, v.widgets_values))
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\src\comfy_script\transpile\passes\__init__.py", line 127, in multiplexer_node_input_filter
    if widget_values[k] != value:
       ~~~~~~~~~~~~~^^^
KeyError: 'blend_mode'
Prompt executed in 7.81 seconds

workflow (83).json

Import issues and type stubs

This might be an issue with me lacking python skills.

import random
import sys
import os
#sys.path.insert(0, 'script/runtime')
#from nodes import *
#from script.runtime.nodes import *

sys.path.insert(0, '../../')
import folder_paths
sys.path.insert(0, 'src')
from comfy_script.runtime import *
load()
from comfy_script.runtime.nodes import *

####################
# Randomize script #
####################

#
# Config
#

# set to true if sdxl, false if sd 1.5
xl = True 

# needs to have checkpoints and loras divided in SDXL and SD 1.5 folders, embeddings needs to be divided into positive and negative folders
xl_folder_name = "xl"
sd1_5_folder_name = "sd1.5"

pos_folder_name = "pos"
neg_folder_name = "neg"

# number of images to create
images = 10

# checkpoint
randomize_checkpoint = True
# used if randomize_checkpoint is false
default_checkpoint = "xl\turbovisionxlSuperFastXLBasedOnNew_alphaV0101Bakedvae.safetensors"

# lora
randomize_lora = True
number_of_loras = 3
lora_min_value = 0.1
lora_max_value = 2
# default lora setup using CRLoRAStack, used if randomize_lora is false, if you have more then 3 loras you need to modify the code
default_lora_stack = ('On', r'xl\LCMTurboMix_LCM_Sampler.safetensors', 1, 1, 'On', r'xl\xl_more_art-full_v1.safetensors', 1, 1, 'On', r'xl\add-detail-xl.safetensors', 1, 1)

# prompt
fully_randomized_prompt = False # TODO use https://github.com/adieyal/comfyui-dynamicprompts to generate random prompt
positive_prompt = "Shot Size - extreme wide shot,( Marrakech market at night time:1.5), Moroccan young beautiful woman, smiling, exotic, (loose hijab:0.1)"
negative_prompt = "(worst quality, low quality, normal quality:2), blurry, depth of field, nsfw"
randomize_positive_embeddings = True
randomize_negative_embeddings = True
embeddings_positive_min_value = 0.1
embeddings_positive_max_value = 2
embeddings_negative_min_value = 0.1
embeddings_negative_max_value = 2

# freeu
randomize_freeu = True
min_freeu_values = [0.5, 0.5, 0.2, 0.1]
max_freeu_values = [3, 3, 2, 1]
# used if randomize_freeu is set to false
default_freeu_values = (1.3, 1.4, 0.9, 0.2)


# code taken from impact.utils
def add_folder_path_and_extensions(folder_name, full_folder_paths, extensions):
    # Iterate over the list of full folder paths
    for full_folder_path in full_folder_paths:
        # Use the provided function to add each model folder path
        folder_paths.add_model_folder_path(folder_name, full_folder_path)

    # Now handle the extensions. If the folder name already exists, update the extensions
    if folder_name in folder_paths.folder_names_and_paths:
        # Unpack the current paths and extensions
        current_paths, current_extensions = folder_paths.folder_names_and_paths[folder_name]
        # Update the extensions set with the new extensions
        updated_extensions = current_extensions | extensions
        # Reassign the updated tuple back to the dictionary
        folder_paths.folder_names_and_paths[folder_name] = (current_paths, updated_extensions)
    else:
        # If the folder name was not present, add_model_folder_path would have added it with the last path
        # Now we just need to update the set of extensions as it would be an empty set
        # Also ensure that all paths are included (since add_model_folder_path adds only one path at a time)
        folder_paths.folder_names_and_paths[folder_name] = (full_folder_paths, extensions)
        
model_path = folder_paths.models_dir
add_folder_path_and_extensions("loras", [os.path.join(model_path, "loras")], folder_paths.supported_pt_extensions)
add_folder_path_and_extensions("loras_xl", [os.path.join(model_path, "loras", xl_folder_name)], folder_paths.supported_pt_extensions)
add_folder_path_and_extensions("loras_sd1.5", [os.path.join(model_path, "loras", sd1_5_folder_name)], folder_paths.supported_pt_extensions)

add_folder_path_and_extensions("checkpoints", [os.path.join(model_path, "checkpoints")], folder_paths.supported_pt_extensions)
add_folder_path_and_extensions("checkpoints_xl", [os.path.join(model_path, "checkpoints", xl_folder_name)], folder_paths.supported_pt_extensions)
add_folder_path_and_extensions("checkpoints_sd1.5", [os.path.join(model_path, "checkpoints", sd1_5_folder_name)], folder_paths.supported_pt_extensions)

add_folder_path_and_extensions("embeddings", [os.path.join(model_path, "embeddings")], folder_paths.supported_pt_extensions)
add_folder_path_and_extensions("embeddings_pos", [os.path.join(model_path, "embeddings", pos_folder_name)], folder_paths.supported_pt_extensions)
add_folder_path_and_extensions("embeddings_neg", [os.path.join(model_path, "embeddings", neg_folder_name)], folder_paths.supported_pt_extensions)

def get_random_loras():
    if xl == True:
        loras = [xl_folder_name + "/" + x for x in folder_paths.get_filename_list("loras_xl")]
    else:
        loras = [sd1_5_folder_name + "/" + x for x in folder_paths.get_filename_list("loras_sd1.5")]
    return random.sample(loras, number_of_loras)
    
def get_lora_strength():
    return random.uniform(lora_min_value, lora_max_value)
    
def get_random_checkpoint():
    if xl == True:
        checkpoints = [xl_folder_name + "/" + x for x in folder_paths.get_filename_list("checkpoints_xl")]
    else:
        checkpoints = [sd1_5_folder_name + "/" + x for x in folder_paths.get_filename_list("checkpoints_sd1.5")]
    return random.choice(checkpoints)
    
def get_positive_embedding_strength():
    return random.uniform(embeddings_positive_min_value, embeddings_positive_max_value)
    
def get_random_pos_embedding():
    pos_embeddings = [pos_folder_name + "/" + x for x in folder_paths.get_filename_list("embeddings_pos")]
    num_of_embeddings = random.randint(0, len(pos_embeddings))
    if num_of_embeddings == 0:
        return ""
    samples = random.sample(pos_embeddings, num_of_embeddings)
    string = ""
    for sample in samples:
        string += ", (embedding:" + sample + ":" + str(get_positive_embedding_strength()) + ")"
    return string
    
def get_negative_embedding_strength():
    return random.uniform(embeddings_negative_min_value, embeddings_negative_max_value)
    
def get_random_neg_embedding():
    neg_embeddings = [neg_folder_name + "/" + x for x in folder_paths.get_filename_list("embeddings_neg")]
    num_of_embeddings = random.randint(0, len(neg_embeddings))
    if num_of_embeddings == 0:
        return ""
    samples = random.sample(neg_embeddings, num_of_embeddings)
    string = ""
    for sample in samples:
        string += ", (embedding:" + sample + ":" + str(get_negative_embedding_strength()) + ")"
    return string

def get_random_freeu_values():
    return (random.uniform(min_freeu_values[0],max_freeu_values[0]),random.uniform(min_freeu_values[1],max_freeu_values[1]),
            random.uniform(min_freeu_values[2],max_freeu_values[2]),random.uniform(min_freeu_values[3],max_freeu_values[3]))
        
with Workflow():
    # checkpoint
    if randomize_checkpoint == True:
        model, clip, vae = CheckpointLoaderSimple(get_random_checkpoint())
    else:
        model, clip, vae = CheckpointLoaderSimple(default_checkpoint)
    
    # loras
    if randomize_lora == True:
        loras = get_random_loras()
        for lora in loras:
            model, clip = LoraLoader(model, clip, lora, get_lora_strength(), get_lora_strength())
    else:
        lora_stack, _ = CRLoRAStack(default_lora_stack)
        model, clip, _ = CRApplyLoRAStack(model, clip, lora_stack)
    
    
    # freeu
    if randomize_freeu == True:
        model = FreeUV2(model, get_random_freeu_values())
    else:
        model = FreeUV2(model, default_freeu_values)
    
    # positive prompt
    if randomize_positive_embeddings == True:
        pos_string = positive_prompt + get_random_pos_embedding()
    else:
        pos_string = positive_prompt

    pos_cond = CLIPTextEncode(pos_string, clip)
    
    # negative prompt
    if randomize_negative_embeddings == True:
        neg_string = negative_prompt + get_random_neg_embedding()
    else:
        neg_string = negative_prompt

    neg_cond = CLIPTextEncode(neg_string, clip)

Gives error:

Traceback (most recent call last):
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\test.py", line 170, in <module>
    model = FreeUV2(model, get_random_freeu_values())
            ^^^^^^^
NameError: name 'FreeUV2' is not defined

Uncommenting lines 4 and 5 makes it work.

FreeUV2 is not recognized in VS Code; uncommenting line 6 makes it recognized, but then the script gives this error:

Traceback (most recent call last):
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\test.py", line 6, in <module>
    from script.runtime.nodes import *
ModuleNotFoundError: No module named 'script.runtime.nodes'

I'm not sure if this is how type stubs should be imported; I'm new to Python. The script is a work in progress, by the way.

Doc Request - how to run script.

Hi. Python noob here.

I understand how to use comfyUI, and regular custom nodes. I have installed comfyscript in custom_nodes. I can code (a bit).

I don't use python, but I can follow what the scripts are doing and I'm pretty confident I could write some, so this looks really useful.

I may have missed something, but the docs say "With the runtime, you can run ComfyScript like this:"... and then just show another script.

Could I request a line or two in the docs explaining how to actually run one of the examples already given? It's probably obvious when you know how; I'm just not seeing it.
Does it matter where the script files are saved?
Do I need to have comfyui running?
Do I need a workflow open?
How do I run the script?

Thanks

`ComfyScript: Failed to load node VHS_VideoCombine` because of `AttributeError: 'list' object has no attribute 'removesuffix'`

Hi there, thanks for this super cool repo, this is exactly what I've been waiting for to really dive into comfyUI!

I'm trying to get an animateDiff workflow working, but I'm running into an issue with the node which outputs the video/gif.

It's from https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite

The workflow is here:
whileaf-lora-workflow.json

It gets transpiled to:

from comfy_script.runtime import *
load()
from comfy_script.runtime.nodes import *

with Workflow():
    model, clip, vae = CheckpointLoaderSimple('v1-5-pruned-emaonly.safetensors')
    model, clip = LoraLoader(model, clip, 'whileaf.safetensors', 1, 1)
    motion_model = ADELoadAnimateDiffModel('v3_sd15_mm.ckpt', None)
    m_models = ADEApplyAnimateDiffModel(motion_model, 0, 1, None, None, None, None, None)
    context_opts = ADELoopedUniformContextOptions(16, 1, 4, True, 'pyramid', False, 0, 1, None, None)
    settings = ADEAnimateDiffSamplingSettings(0, 'FreeNoise', 'comfy', 0, None, None, 0, False, None, None)
    model = ADEUseEvolvedSampling(model, 'autoselect', m_models, context_opts, settings)
    conditioning = CLIPTextEncode('whileaf whileaf creepy slime calligraphy graffiti runes', clip)
    conditioning2 = CLIPTextEncode('ugly', clip)
    latent = EmptyLatentImage(512, 512, 48)
    latent = KSampler(model, 820058635513319, 20, 8, 'dpmpp_2m_sde_gpu', 'karras', conditioning, conditioning2, latent, 1)
    image = VAEDecode(latent, vae)
    _ = VHSVideoCombine(image, 8, 0, 'AnimateDiff', 'image/gif', False, True, None, None)

but running it gives the following errors:

ComfyScript: Using ComfyUI from http://127.0.0.1:8188/
Nodes: 357
ComfyScript: Failed to load node VHS_VideoCombine
Traceback (most recent call last):
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/nodes.py", line 19, in load
    fact.add_node(node_info)
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/factory.py", line 392, in add_node
    inputs.append(f'{input_id}: {type_and_hint(type_info, name, optional, config.get("default"))[1]}')
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/factory.py", line 264, in type_and_hint
    enum_c, t = astutil.to_str_enum(id, { _remove_extension(s): s for s in type_info }, '    ')
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/factory.py", line 264, in <dictcomp>
    enum_c, t = astutil.to_str_enum(id, { _remove_extension(s): s for s in type_info }, '    ')
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/factory.py", line 19, in _remove_extension
    path = path.removesuffix(ext)
AttributeError: 'list' object has no attribute 'removesuffix'
ComfyScript: Node already exists: {'input': {'required': {'frames_per_batch': ['INT', {'default': 16, 'min': 1, 'max': 128, 'step': 1}]}, 'hidden': {'prompt': 'PROMPT', 'unique_id': 'UNIQUE_ID'}}, 'output': ['VHS_BatchManager'], 'output_is_list': [False], 'output_name': ['VHS_BatchManager'], 'name': 'VHS_BatchManager', 'display_name': 'Batch Manager 🎥🅥🅗🅢', 'description': '', 'category': 'Video Helper Suite 🎥🅥🅗🅢', 'output_node': False}
ComfyScript: Failed to load node List of any [Crystools]
Traceback (most recent call last):
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/nodes.py", line 19, in load
    fact.add_node(node_info)
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/factory.py", line 420, in add_node
    output_types = [type_and_hint(type, name, output=True)[0] for type, name in output_with_name]
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/factory.py", line 420, in <listcomp>
    output_types = [type_and_hint(type, name, output=True)[0] for type, name in output_with_name]
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/factory.py", line 264, in type_and_hint
    enum_c, t = astutil.to_str_enum(id, { _remove_extension(s): s for s in type_info }, '    ')
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/astutil.py", line 149, in to_str_enum
    return to_enum(id, dic, indent, StrEnum)
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/astutil.py", line 142, in to_enum
    return c, enum_class(id, members)
  File "/home/hans/.conda/envs/hans/lib/python3.10/enum.py", line 387, in __call__
    return cls._create_(
  File "/home/hans/.conda/envs/hans/lib/python3.10/enum.py", line 518, in _create_
    enum_class = metacls.__new__(metacls, class_name, bases, classdict)
  File "/home/hans/.conda/envs/hans/lib/python3.10/enum.py", line 208, in __new__
    raise ValueError('Invalid enum member name: {0}'.format(
ValueError: Invalid enum member name: 
ComfyScript: Failed to queue prompt: <ClientResponse(http://127.0.0.1:8188/prompt) [400 Bad Request]>
<CIMultiDictProxy('Content-Type': 'application/json; charset=utf-8', 'Content-Length': '128', 'Date': 'Sat, 10 Feb 2024 13:57:54 GMT', 'Server': 'Python/3.10 aiohttp/3.9.3')>
<ClientResponse(http://127.0.0.1:8188/prompt) [400 Bad Request]>
<CIMultiDictProxy('Content-Type': 'application/json; charset=utf-8', 'Content-Length': '128', 'Date': 'Sat, 10 Feb 2024 13:57:54 GMT', 'Server': 'Python/3.10 aiohttp/3.9.3')>
{
  "error": {
    "type": "prompt_no_outputs",
    "message": "Prompt has no outputs",
    "details": "",
    "extra_info": {}
  },
  "node_errors": []
}
Traceback (most recent call last):
  File "/home/hans/.conda/envs/hans/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/hans/.conda/envs/hans/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/whileaf-lora-workflow.py", line 18, in <module>
    VHSVideoCombine(image, 8, 0, 'AnimateDiff', 'image/gif', False, True, None, None)
TypeError: VHSVideoCombine() takes no arguments

Any idea what might be going on?

How to get value from the output of a node and perform math or other operations

Hello! I have this code here

async def extend_audio(params: AudioWorkflow):
    async with Workflow(wait=True) as wf:
        model, model_sr = MusicgenLoader()
        audio, sr, duration = LoadAudio(params.snd_filename)
        audio = ConvertAudio(audio, sr, model_sr, 1)
        audio = ClipAudio(audio, duration - 10.0, duration, model_sr) # I would like to perform this math
        raw_audio = MusicgenGenerate(model, audio, 4, duration + params.duration, params.cfg, params.top_k, params.top_p, params.temperature, params.seed or random.randint(0, 2**32 - 1))
        audio = ClipAudio(audio, 0.0, duration - 10.0, model_sr)
        audio = ConcatAudio(audio, raw_audio)
        spectrogram_image = SpectrogramImage(audio, 1024, 256, 1024, 0.4)
        spectrogram_image = ImageResize(spectrogram_image, ImageResize.mode.resize, True, ImageResize.resampling.lanczos, 2, 512, 128)
        video = CombineImageWithAudio(spectrogram_image, audio, model_sr, CombineImageWithAudio.file_format.webm, "final_output")
        await wf.queue()._wait()
    results = await video._wait()
    return await get_data(results)

And I would like to perform math on the resulting duration from LoadAudio. If I try to use it raw, it just throws the error "TypeError: unsupported operand type(s) for -: 'Float' and 'float'". Is it possible to get the actual value here?

Custom nodes that fail to load can make the whole loading fail

I found that this node makes the whole load fail:
IFRNet VFI
It's from ComfyUI-Frame-Interpolation.

I added an exception catch to ignore it in nodes.py, and the rest of the nodes loaded properly.
image
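
Roughly, that workaround looks like the following sketch (load_nodes and its parameters are illustrative stand-ins, not the actual code in nodes.py):

def load_nodes(nodes_info: dict, add_node):
    """Register nodes one by one, skipping any whose definition fails to load."""
    for name, node_info in nodes_info.items():
        try:
            add_node(node_info)
        except Exception as e:
            # One broken node definition no longer aborts the whole load
            print(f'ComfyScript: Failed to load node {name}: {e}')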

Edit:
The same thing happens in the nodes.py of the real mode.

Generation preview

Is it possible to get previews of an in-progress generation, similar to what is shown in the web UI, using ComfyScript? Ideally I'd like some form of stream I can process per step.

Can't import plugin, error: module 'ComfyScript.nodes.ComfyUI_Ib_CustomNodes' has no attribute 'NODE_CLASS_MAPPINGS'

Installed the latest version and had installed requirements.txt.
Error Info:

ComfyScript: Loading nodes...
Traceback (most recent call last):
  File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1810, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyScript\__init__.py", line 20, in <module>
    NODE_CLASS_MAPPINGS.update(ComfyUI_Ib_CustomNodes.NODE_CLASS_MAPPINGS)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'ComfyScript.nodes.ComfyUI_Ib_CustomNodes' has no attribute 'NODE_CLASS_MAPPINGS'

Cannot import E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyScript module for custom nodes: module 'ComfyScript.nodes.ComfyUI_Ib_CustomNodes' has no attribute 'NODE_CLASS_MAPPINGS'
Adding E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\custom_nodes to sys.path


Trouble Running Example Script with Custom Model in Conda Environment on Windows 10

Hello,

Firstly, I want to express my gratitude for the effort and dedication you've put into developing this project. It's evident that it holds substantial promise. However, I've encountered some challenges while attempting to execute scripts.

I followed the installation process outlined for ComfyScript using the "Package and nodes with ComfyUI" method, opting for a standalone (non-portable) version of comfyui. For your information, my setup involves running comfy within a conda environment on a Windows 10 system.

I attempted to run the example script provided in the README.md file of the repository, making a minor modification to use a different model that I possess. You can find the script I used here: test.py.txt.

Encountering errors, I've captured and shared the log for your review: ComfyScript.log.

Could you offer any guidance or suggestions on how to resolve these issues and successfully run the script? Any advice would be greatly appreciated.

Thank you for your support.

[Bug Report] Misrecognition of nodes with same name

Thanks for your great work, but I found a bug that really confuses me.

It seems that ComfyScript can't recognize some nodes correctly. In my workflow, I have a node (Image Resize from Image Resize for ComfyUI).
image
In the output script, it gave me "image, _ = ImageResize(image, 'crop to ratio', 0, 1024, 0, 'any', '9:16', 0.5, 0, None)".
Running it, an error occurred:

Traceback (most recent call last):
  File "/home/ruanxy/work/gen_tools/main.py", line 10, in <module>
    image, _ = ImageResize(image, 'crop to ratio', 0, 1024, 0, 'any', '9:16', 0.5, 0, None)
ValueError: too many values to unpack (expected 2)
I tried to fix it by adding a return value like this: "image, _ = ImageResize(image, 'crop to ratio', 0, 1024, 0, 'any', '9:16', 0.5, 0, None)"
A new error occurred:

ERROR:root:Failed to validate prompt for output SaveImage.0:
ERROR:root:* ImageResize+ ImageResize+.0:
ERROR:root: - Failed to convert an input value to a INT value: width, crop to ratio, invalid literal for int() with base 10: 'crop to ratio'
ERROR:root: - Value not in list: interpolation: '1024' not in ['nearest', 'bilinear', 'bicubic', 'area', 'nearest-exact', 'lanczos']
ERROR:root: - Value not in list: condition: 'any' not in ['always', 'only if bigger', 'only if smaller']
From the error information I realized that ComfyScript recognized the node as Image Resize from ComfyUI Essentials.
image

I want to know how to deal with this kind of situation.
Thank you.

[Question] Could ComfyScript be used to automate activation (unbypass) of a group of nodes?

Thanks for releasing, I'm looking forward to trying this out.

I've made a request on the main ComfyUI repo comfyanonymous/ComfyUI#2357 hoping for an Unbypass Group Nodes function for the right-click menu. I've been planning out a workflow that would require being able to automate bypass/unbypass of group nodes. Currently they can be bypassed as a group selection from the right-click menu, but unbypass doesn't exist as an option presently.

If this functionality were implemented within ComfyUI, would ComfyScript in its current form allow me to set conditions for when a grouped set of nodes is active?

Importing main fails in real mode

[screenshot of the error]

In my case the value of `__file__` is:
'C:\work\misc_rep\python\ComfyUI\custom_nodes\ComfyScript\src\comfy_script\runtime\real\__init__.py'

And the value of comfy_ui is WindowsPath('C:/work/misc_rep/python/ComfyUI/custom_nodes'), but it should be C:/work/misc_rep/python/ComfyUI/.

I replaced it with `comfy_ui = Path(__file__).resolve().parents[6]` and it worked.
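
For reference, this is how pathlib's parents index walks up that path (a standalone illustration using PureWindowsPath so it runs on any OS; the path is just my example):

from pathlib import PureWindowsPath

# Each parents[i] strips one more trailing component from the path.
p = PureWindowsPath(r'C:\work\misc_rep\python\ComfyUI\custom_nodes\ComfyScript\src\comfy_script\runtime\real\__init__.py')
print(p.parents[5])  # C:\work\misc_rep\python\ComfyUI\custom_nodes   (the value I was getting)
print(p.parents[6])  # C:\work\misc_rep\python\ComfyUI                (the root it should be)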

Option to turn off the transpiler / transpile on command

I have a habit of printing out seed numbers so I can go back to specific seeds that I liked. The transpiler output really crowds my command window, so I have to scroll a lot. Is there an easy way to disable automatic transpiling, maybe via an option in the settings? We can always use the command-line version of the transpiler when we need it.

How can memory be managed in real mode, so that nodes whose input parameters haven't changed aren't re-run through inference?

I lack basic knowledge of Python and programming in general. I searched around and saw people using decorators like @cached and @lru_cache for caching; is this the correct way to manage caching myself, as mentioned in the docs? If not, could you give me some keywords for the relevant approach so I can look it up? Thanks.
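
For example, is something like this minimal functools.lru_cache sketch what the docs mean? (encode_prompt here is just a made-up stand-in for an expensive node call, not ComfyScript's API.)

import functools

@functools.lru_cache(maxsize=8)
def encode_prompt(prompt: str):
    # Hypothetical expensive step; a real node call would go here.
    print(f'encoding: {prompt}')
    return prompt.lower()

encode_prompt('a cat')  # computed
encode_prompt('a cat')  # returned from the cache, not recomputed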

Also, thank you for making this repo. I previously used Hugging Face's diffusers, and for a beginner like me the abstraction level of its classes and functions felt either a bit too coarse or a bit too fine-grained; ComfyUI's granularity suits me much better. I had been thinking it would be great if ComfyUI were available as a library, but I had no idea where to start myself, and didn't expect someone would actually build it.

AssertionError when using `load()` in example notebook

Hey, thanks very much for your work, it looks awesome.

I hit an error when trying to make the example notebook runtime.ipynb work:

from script.runtime import *

load()
Error stack trace:
Nodes: 529
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
Cell In[1], line 4
      1 from script.runtime import *
      3 # load('http://127.0.0.1:8188/')
----> 4 load()
      6 # Nodes can only be imported after load()
      7 from script.runtime.nodes import *

File /mnt/c/Users/clementr/Projects/stablediffusion/ComfyUI/custom_nodes/ComfyScript/script/runtime/__init__.py:18, in load(api_endpoint, vars, watch, save_script_source)
     17 def load(api_endpoint: str = 'http://127.0.0.1:8188/', vars: dict | None = None, watch: bool = True, save_script_source: bool = True):
---> 18     asyncio.run(_load(api_endpoint, vars, watch, save_script_source))

File ~/miniconda3/envs/comfyui/lib/python3.10/site-packages/nest_asyncio.py:30, in _patch_asyncio.<locals>.run(main, debug)
     28 task = asyncio.ensure_future(main)
     29 try:
---> 30     return loop.run_until_complete(task)
     31 finally:
     32     if not task.done():

File ~/miniconda3/envs/comfyui/lib/python3.10/site-packages/nest_asyncio.py:98, in _patch_loop.<locals>.run_until_complete(self, future)
     95 if not f.done():
     96     raise RuntimeError(
     97         'Event loop stopped before Future completed.')
---> 98 return f.result()

File ~/miniconda3/envs/comfyui/lib/python3.10/asyncio/futures.py:201, in Future.result(self)
    199 self.__log_traceback = False
    200 if self._exception is not None:
--> 201     raise self._exception.with_traceback(self._exception_tb)
    202 return self._result

File ~/miniconda3/envs/comfyui/lib/python3.10/asyncio/tasks.py:232, in Task.__step(***failed resolving arguments***)
    228 try:
    229     if exc is None:
    230         # We use the `send` method directly, because coroutines
    231         # don't have `__iter__` and `__next__` methods.
--> 232         result = coro.send(None)
    233     else:
    234         result = coro.throw(exc)

File /mnt/c/Users/clementr/Projects/stablediffusion/ComfyUI/custom_nodes/ComfyScript/script/runtime/__init__.py:29, in _load(api_endpoint, vars, watch, save_script_source)
     26 nodes_info = await api._get_nodes_info()
     27 print(f'Nodes: {len(nodes_info)}')
---> 29 nodes.load(nodes_info, vars)
     31 # TODO: Stop watch if watch turns to False
     32 if watch:

File /mnt/c/Users/clementr/Projects/stablediffusion/ComfyUI/custom_nodes/ComfyScript/script/runtime/nodes.py:14, in load(nodes_info, vars)
     12 fact = VirtualRuntimeFactory()
     13 for node_info in nodes_info.values():
---> 14     fact.add_node(node_info)
     16 globals().update(fact.vars())
     17 __all__.extend(fact.vars().keys())

File /mnt/c/Users/clementr/Projects/stablediffusion/ComfyUI/custom_nodes/ComfyScript/script/runtime/factory.py:218, in RuntimeFactory.add_node(self, info)
    215                 config = {}
    216         inputs.append(f'{name}: {type_and_hint(type_info, name, optional, config.get("default"))[1]}')
--> 218 output_types = [type_and_hint(type, output=True)[0] for type in info['output']]
    220 outputs = len(info['output'])
    221 if outputs >= 2:

File /mnt/c/Users/clementr/Projects/stablediffusion/ComfyUI/custom_nodes/ComfyScript/script/runtime/factory.py:218, in <listcomp>(.0)
    215                 config = {}
    216         inputs.append(f'{name}: {type_and_hint(type_info, name, optional, config.get("default"))[1]}')
--> 218 output_types = [type_and_hint(type, output=True)[0] for type in info['output']]
    220 outputs = len(info['output'])
    221 if outputs >= 2:

File /mnt/c/Users/clementr/Projects/stablediffusion/ComfyUI/custom_nodes/ComfyScript/script/runtime/factory.py:117, in RuntimeFactory.add_node.<locals>.type_and_hint(type_info, name, optional, default, output)
    115 if isinstance(type_info, list):
    116     if output: print(type_info)
--> 117     assert not output
    118     if is_bool_enum(type_info):
    119         t = bool

AssertionError: 

From what I can tell, everything looks OK; my ComfyUI is running and works correctly.

Investigating a bit, I added a line to script/runtime/factory.py > RuntimeFactory > add_node (at line L116):

[...]
L115            if isinstance(type_info, list):
L116                if output: print(type_info)
L117                assert not output
[...]

and I get the following list printed, which appears to be the models I have downloaded from Civitai:

['cyberrealistic_v41BackToBasics.safetensors', 'dreamshaper_8.safetensors', 'realisticVisionV60B1_v20Novae.safetensors']
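
If I'm reading it right, the node info that trips the assertion must look roughly like this (my guess at the shape, not the actual server response):

node_info = {
    # 'output' normally holds type names like 'IMAGE', but here one entry
    # is itself a list of checkpoint filenames.
    'output': [['cyberrealistic_v41BackToBasics.safetensors',
                'dreamshaper_8.safetensors',
                'realisticVisionV60B1_v20Novae.safetensors']],
}

for type_info in node_info['output']:
    if isinstance(type_info, list):
        # This is the case that reaches `assert not output` in factory.py.
        print('list output type:', type_info)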

I don't understand the code well enough to debug further; could you help me, please?

Thanks very much,
Clement

[Question] About lifecycle of models

Hi, I haven't tried it yet but I'm genuinely interested. My question is about the lifecycle of loaded models: in ComfyUI, whenever I switch workflows it has to reload the models because they have different node IDs. I'm wondering whether using this library would fix that if I were to create a repository of workflows.

AttributeError: module 'comfy.cmd.server' has no attribute 'nodes'

Hi there -- I'm attempting to use ComfyScript with the ComfyUI package in a Docker image. I've followed the Readme as best I could, paired with a little reading of the source code, but I'm now stuck. Take a look at my imports and setup:

from comfy_script.runtime import *
import random
import boto3
import os
import tarfile

load('comfyui')
from comfy_script.runtime.nodes import *

Pretty standard stuff, I think. However, when the script hits the load line, it dies and spits out the following error:

Traceback (most recent call last):
  File "/app/main.py", line 7, in <module>
    load('comfyui')
  File "/home/user/micromamba/lib/python3.10/site-packages/comfy_script/runtime/__init__.py", line 31, in load
    asyncio.run(_load(comfyui, args, vars, watch, save_script_source))
  File "/home/user/micromamba/lib/python3.10/site-packages/nest_asyncio.py", line 30, in run
    return loop.run_until_complete(task)
  File "/home/user/micromamba/lib/python3.10/site-packages/nest_asyncio.py", line 98, in run_until_complete
    return f.result()
  File "/home/user/micromamba/lib/python3.10/asyncio/futures.py", line 201, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "/home/user/micromamba/lib/python3.10/asyncio/tasks.py", line 232, in __step
    result = coro.send(None)
  File "/home/user/micromamba/lib/python3.10/site-packages/comfy_script/runtime/__init__.py", line 54, in _load
    start_comfyui(comfyui, args)
  File "/home/user/micromamba/lib/python3.10/site-packages/comfy_script/runtime/__init__.py", line 340, in start_comfyui
    loop.run_until_complete(main.main())
  File "/home/user/micromamba/lib/python3.10/site-packages/nest_asyncio.py", line 98, in run_until_complete
    return f.result()
  File "/home/user/micromamba/lib/python3.10/asyncio/futures.py", line 201, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "/home/user/micromamba/lib/python3.10/asyncio/tasks.py", line 232, in __step
    result = coro.send(None)
  File "/home/user/micromamba/lib/python3.10/site-packages/comfy/cmd/main.py", line 201, in main
    exit(0)
  File "/home/user/micromamba/lib/python3.10/site-packages/comfy_script/runtime/__init__.py", line 271, in exit_hook
    setup_comfyui_polyfills(outer.f_locals)
  File "/home/user/micromamba/lib/python3.10/site-packages/comfy_script/runtime/__init__.py", line 235, in setup_comfyui_polyfills
    exported_nodes = comfy.cmd.server.nodes
AttributeError: module 'comfy.cmd.server' has no attribute 'nodes'

I pasted the whole stack trace for context, but the error is evident at the bottom.

The relevant parts of my Dockerfile -- just installing the required packages, nothing special:

RUN pip install wheel
RUN pip install --no-build-isolation git+https://github.com/hiddenswitch/ComfyUI.git
RUN pip install -U "comfy-script[default]"

Any ideas on where this is going wrong? Thanks!
