
NodeGPT's Introduction

NodeGPT

Implementation of AutoGen inside ComfyUI.

This repository is under development, and not everything is functioning correctly yet.

Don't share your workflow with any API keys inside!

Example of Agents

[Screenshot: agent example workflow]

[Screenshot: Task_Solving_with_Code_Generation workflow]

The example above should mostly work after installation. The output is displayed in the terminal, and the workflow can be found in the repository (Task_Solving_with_Code_Generation.json).

[Screenshot]

Features

  • AutoGen: automated multi-agent chat

  • Automated Task Solving with Code Generation, Execution & Debugging

  • Automated Complex Task Solving by Group Chat

  • New: llama-cpp (slow), llava

To Do

  • Functions

  • MemGPT (NotWorkingFolder)

Installing

Please let me know if you have any issues installing.

Install: https://github.com/comfyanonymous/ComfyUI

I would recommend using LM Studio: https://lmstudio.ai/

Clone the repository into the "custom_nodes" folder inside ComfyUI: git clone https://github.com/xXAdonesXx/NodeGPT
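For reference, the clone step as commands (assuming a default ComfyUI layout):

cd ComfyUI/custom_nodes
git clone https://github.com/xXAdonesXx/NodeGPT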

Upon starting, ComfyUI should install the requirements.

Troubleshooting

Try running the Update.bat inside ComfyUI\custom_nodes\NodeGPT

Usage

Start the LM Studio server.

Start ComfyUI and place the nodes.

For llava:

https://huggingface.co/mys/ggml_llava-v1.5-7b

Contributing

Pull requests, suggestions, and issue reports are welcome.

Credits

LM Studio

ComfyUI: https://github.com/comfyanonymous/ComfyUI

oobabooga: https://github.com/oobabooga/text-generation-webui

chatGPT: https://chat.openai.com/chat

autogen: https://github.com/microsoft/autogen

NodeGPT's People

Contributors

guizmus, jjohare, ltdrdata, xXAdonesXx


NodeGPT's Issues

[Feature request] - Expose request_timeout parameter in the LM_Studio node

I added NodeGPT suite to the AP Workflow 6.0. It's working very well, but some users are reporting a timeout error generated by ComfyUI if LM Studio doesn't respond quickly enough.

They solve the problem by increasing the timeout parameter in /ComfyUI/venv/lib/python3.11/site-packages/autogen/oai/completion.py, which works but is not exactly convenient.

I wonder if it's possible to expose that parameter in the LM_Studio node itself or, at least, increase its default for new installations.

Thank you.
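One possible shape for that, sketched against ComfyUI's node API; the widget name and the dict passed downstream are assumptions for illustration, not NodeGPT's actual internals:

class LM_Studio:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "address": ("STRING", {"default": "127.0.0.1"}),
            "port": ("INT", {"default": 1234}),
            # hypothetical widget surfacing autogen's request_timeout
            "request_timeout": ("INT", {"default": 600, "min": 1, "max": 3600}),
        }}

    RETURN_TYPES = ("LLM",)
    FUNCTION = "configure"

    def configure(self, address, port, request_timeout):
        # The timeout would be forwarded to autogen's completion call, so users
        # never need to patch site-packages by hand.
        config = [{"api_base": f"http://{address}:{port}/v1", "api_key": "NULL"}]
        return ({"LLM": config, "request_timeout": request_timeout},)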

ComfyUI 'widgetInputs.js' errors when loading Task_Solving_with_Code_Generation.json

Really cool project idea, NodeGPT!

I followed the installation instructions with a fresh, up-to-date install on macOS of everything listed. However, when trying to load Task_Solving_with_Code_Generation.json within ComfyUI there are errors such as

TypeError: undefined is not an object (evaluating 'this.widgets_values.length')
onAfterGraphConfigured@http://127.0.0.1:8188/extensions/core/widgetInputs.js:322:46
@http://127.0.0.1:8188/scripts/app.js:1221:34
@http://127.0.0.1:8188/lib/litegraph.core.js:2260:20
@http://127.0.0.1:8188/scripts/app.js:1201:27
loadGraphData@http://127.0.0.1:8188/scripts/app.js:1451:24
@http://127.0.0.1:8188/scripts/app.js:1298:23
This may be due to the following script:
/extensions/core/widgetInputs.js

Any thoughts what is needed to solve it?

Thanks!

ModuleNotFoundError: No module named 'NodeGPT.Output2String'

Hi, I get this error.

I've followed the installation steps, but I'm getting red nodes all the time.

ComfyUI Revision: 1595 [f1062be6] | Released on '2023-10-20'

Traceback (most recent call last):
File "V:\comfyui portable\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1735, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "V:\comfyui portable\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\__init__.py", line 32, in <module>
imported_module = importlib.import_module(".{}".format(module_name), name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "importlib\__init__.py", line 126, in import_module
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1140, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'NodeGPT.Output2String'

Cannot import V:\comfyui portable\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT module for custom nodes: No module named 'NodeGPT.Output2String'

Import times for custom nodes:
0.0 seconds: V:\comfyui portable\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved
0.4 seconds: V:\comfyui portable\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite
0.4 seconds (IMPORT FAILED): V:\comfyui portable\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT
0.5 seconds: V:\comfyui portable\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager

Starting server

Questions on NodeGPT

  1. How flexible is it with model file format compared to Oobabooga regarding GPTQ vs GGUF vs regular model file?
  2. Can this read documents like GPT4All or Khoj or PrivateGPT (and maybe others)?
  3. Are there nodes for downloading models from HF, caching them, and loading them if they are locally stored?
  4. Can this support Langchain and LlamaIndex? What about the online functions of BabyAGI and AgentGPT?
  5. What about Flowise and other "drag and drop" services?

DisplayText node seems to output to stdout via print() rather than displaying text within ComfyUI

Issue:

When configuring nodes as such:

[Screenshot: node configuration]

The DisplayText node uses the print() function, which is printing to stdout on my machine and probably most other people's.

Desired Behavior:

For it to display the text within the node contents, preferably in a multiline text widget.

Solution:

Not yet 100% sure, but I can tell the problem likely lies in this usage in DisplayText.py

[Screenshot: print() usage in DisplayText.py]

I'll update in the below comments if I come up with the necessary code.
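For reference, ComfyUI output nodes can send text to the browser by returning it under a "ui" key instead of printing; a minimal sketch of that pattern (illustrative, not NodeGPT's actual code, and a small JS extension is still needed to render a multiline widget):

class DisplayText:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"text": ("STRING", {"forceInput": True})}}

    RETURN_TYPES = ()
    FUNCTION = "display"
    OUTPUT_NODE = True

    def display(self, text):
        # The "ui" dict is delivered to the front end with the execution result,
        # so the text can be shown in the node instead of going to stdout.
        return {"ui": {"text": [text]}}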

[FEATURE REQUEST] - A node to interact with GPT-4V

Programmatic access to GPT-4V would allow captioning an uploaded image better than the BLIP model (the current approach, AFAIK), which would be useful for inpainting and upscaling tasks in ComfyUI.

Thanks for considering it.

Seems not to play nice with Linux?

I tried a git clone with a manual requirements install, and a Manager install. Same error with the repo's workflow JSON in both cases.

Loading aborted due to error reloading workflow data

TypeError: widget[GET_CONFIG] is not a function
TypeError: widget[GET_CONFIG] is not a function
at #onFirstConnection (http://serverip/extensions/core/widgetInputs.js:389:54)
at PrimitiveNode.onAfterGraphConfigured (http://serverip/extensions/core/widgetInputs.js:318:29)
at app.graph.onConfigure (http://serverip/scripts/app.js:1221:34)
at LGraph.configure (http://serverip/lib/litegraph.core.js:2260:9)
at LGraph.configure (http://serverip/scripts/app.js:1201:22)
at ComfyApp.loadGraphData (http://serverip/scripts/app.js:1451:15)
at ComfyApp.setup (http://serverip/scripts/app.js:1298:10)
at async http://serverip/:14:4
This may be due to the following script:
/extensions/core/widgetInputs.js

Startup Error

Traceback (most recent call last):
File "D:\dev\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1734, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\dev\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\__init__.py", line 32, in <module>
imported_module = importlib.import_module(".{}".format(module_name), name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "importlib\__init__.py", line 126, in import_module
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\dev\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\Agents\Assistant.py", line 7, in <module>
import autogen
ModuleNotFoundError: No module named 'autogen'

Cannot import D:\dev\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT module for custom nodes: No module named 'autogen'

Import times for custom nodes:
0.0 seconds: D:\dev\ComfyUI_windows_portable\ComfyUI\custom_nodes\Zho_Main_Nodes_Chinese.py
0.0 seconds (IMPORT FAILED): D:\dev\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT
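Given the "autogen is called pyautogen" report further down, a likely fix is installing pyautogen into the embedded Python used by the portable build (path taken from the log above):

D:\dev\ComfyUI_windows_portable\python_embeded\python.exe -m pip install pyautogen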

[Feature Request] Increase TextGeneration node interoperability with other nodes

Right now, the TextGeneration node outputs text and, despite that, I couldn't get a single third-party node that accepts text as input to connect with it. For example, as you can see in the screenshot below, it won't connect to a ttn textDebug node, even though that one accepts text as input.

It seems like the only way to export the text is by using the Output2String node, which I'm using:

[Screenshot: Output2String used as a workaround]

It would be very helpful if the TextGeneration node were compatible with things like the ttn textDebug and ImpactSwitch nodes out of the box, without needing extra nodes to do conversions.

Thanks!
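This behavior is consistent with TextGeneration declaring a custom output type rather than ComfyUI's built-in STRING type (an assumption about NodeGPT's internals, not confirmed from its source). A sketch of the convention that third-party text nodes connect to:

# Hypothetical, simplified node: declaring RETURN_TYPES = ("STRING",) makes the
# output compatible with any input declared as STRING, such as ttn textDebug.
class TextGenerationCompatible:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"prompt": ("STRING", {"multiline": True})}}

    RETURN_TYPES = ("STRING",)
    FUNCTION = "execute"

    def execute(self, prompt):
        text = prompt  # placeholder; the real node would call the LLM here
        return (text,)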

Import failed. I want to run NodeGPT on a local LM or ooba but am stuck.

I tried the update.bat

F:\comfyNodeGPT>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
** ComfyUI startup time: 2023-12-12 13:10:18.471906
** Platform: Windows
** Python version: 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)]
** Python executable: F:\comfyNodeGPT\python_embeded\python.exe
** Log path: F:\comfyNodeGPT\comfyui.log

Prestartup times for custom nodes:
0.0 seconds: F:\comfyNodeGPT\ComfyUI\custom_nodes\ComfyUI-Manager

Total VRAM 12288 MB, total RAM 64821 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
VAE dtype: torch.bfloat16
Using pytorch cross attention

Loading: ComfyUI-Manager (V1.11.5)

ComfyUI Revision: 1810 [b0aab1e4] | Released on '2023-12-11'

FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
F:\comfyNodeGPT\ComfyUI\custom_nodes\NodeGPT\AutoUpdate.json

FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
Traceback (most recent call last):
File "F:\comfyNodeGPT\ComfyUI\custom_nodes\NodeGPT\API_Nodes\llama-cpp.py", line 9, in <module>
from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "F:\comfyNodeGPT\ComfyUI\nodes.py", line 1800, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "F:\comfyNodeGPT\ComfyUI\custom_nodes\NodeGPT\__init__.py", line 127, in <module>
imported_module = importlib.import_module(".{}".format(module_name), name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "importlib\__init__.py", line 126, in import_module
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "F:\comfyNodeGPT\ComfyUI\custom_nodes\NodeGPT\API_Nodes\llama-cpp.py", line 21, in <module>
from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'

Cannot import F:\comfyNodeGPT\ComfyUI\custom_nodes\NodeGPT module for custom nodes: No module named 'llama_cpp'

Import times for custom nodes:
0.0 seconds (IMPORT FAILED): F:\comfyNodeGPT\ComfyUI\custom_nodes\NodeGPT
0.3 seconds: F:\comfyNodeGPT\ComfyUI\custom_nodes\ComfyUI-Manager

Starting server

[Screenshot]
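The missing llama_cpp module is provided by the llama-cpp-python package on PyPI, so a plausible fix is installing it into the portable build's embedded Python (path taken from the log above):

F:\comfyNodeGPT\python_embeded\python.exe -m pip install llama-cpp-python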

[Feature Request] ChatGPT node pointing to a file with the API Key + Seed

I already implemented the NodeGPT suite in the AP Workflow 6.0, but I still cannot completely rely on it because the ChatGPT node exposes the API key directly:

[Screenshot: ChatGPT node with visible API key]

I am not sure it's possible in ComfyUI, but the best implementation (even better than the Quality of Life Suite one) would let me choose a file from a dropdown menu inside the node, with that file containing my API key, just like the stock LoadImage node in ComfyUI.

That way, not only would I avoid showing my API key when I take screenshots of the workflow or export it, but I could also quickly switch between different API keys when needed.

Another thing that helps reproducibility is the ability to control the seed. It's less important than the previous suggestion, but it would be a very nice-to-have.

Thank you!

P.s.: we don't have this problem yet with the LM_Studio node, but I'd expect that in the future we will (for example, if the author creates a hosted version), so this same feature request would apply to that node too.
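A minimal sketch of the suggested behavior; the "keys" folder name and plain-text key files are assumptions for illustration:

import os

# hypothetical folder of *.txt key files next to the node's source
KEYS_DIR = os.path.join(os.path.dirname(__file__), "keys")

class ChatGPT:
    @classmethod
    def INPUT_TYPES(cls):
        # Populate a dropdown from files on disk, like the stock LoadImage node
        files = sorted(os.listdir(KEYS_DIR)) if os.path.isdir(KEYS_DIR) else []
        return {"required": {
            "api_key_file": (files,),
            "seed": ("INT", {"default": 0, "min": 0, "max": 2**32 - 1}),
        }}

    RETURN_TYPES = ("LLM",)
    FUNCTION = "configure"

    def configure(self, api_key_file, seed):
        with open(os.path.join(KEYS_DIR, api_key_file)) as f:
            api_key = f.read().strip()  # the key never appears in the workflow JSON
        return ({"api_key": api_key, "seed": seed},)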

Fail to install, pyautogen conflicts with pymemgpt

The conflict is caused by:
pyautogen 0.2.7 depends on openai>=1.3
pymemgpt 0.1.0 through 0.1.6 all depend on openai>=0.28.1,<0.29.0

Since MemGPT is still on the to-do list, I removed it and the installation finished successfully.
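In practice the workaround is just editing NodeGPT's requirements before installing (assuming pymemgpt is listed in requirements.txt):

# in ComfyUI/custom_nodes/NodeGPT/requirements.txt, comment out the pymemgpt line, then:
pip install -r requirements.txt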

The autogen.Completion class requires openai<1 and diskcache.

Good day. I keep getting this error message every time. Please help me; I can't find anything about it.

Error occurred when executing TextGeneration:

(Deprecated) The autogen.Completion class requires openai<1 and diskcache.

File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\TextGeneration.py", line 32, in execute
response = oai.ChatCompletion.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\autogen\oai\completion.py", line 792, in create
raise ERROR
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\TextGeneration.py", line 32, in execute
response = oai.ChatCompletion.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\autogen\oai\completion.py", line 792, in create
raise ERROR
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\TextGeneration.py", line 32, in execute
response = oai.ChatCompletion.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\autogen\oai\completion.py", line 792, in create
raise ERROR
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\TextGeneration.py", line 32, in execute
response = oai.ChatCompletion.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\autogen\oai\completion.py", line 792, in create
raise ERROR
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\TextGeneration.py", line 32, in execute
response = oai.ChatCompletion.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\autogen\oai\completion.py", line 792, in create
raise ERROR
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\Chat.py", line 53, in execute
autogen.ChatCompletion.start_logging(conversations)
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\autogen\oai\completion.py", line 1180, in start_logging
raise ERROR
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\Chat.py", line 53, in execute
autogen.ChatCompletion.start_logging(conversations)
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\autogen\oai\completion.py", line 1180, in start_logging
raise ERROR
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\Chat.py", line 53, in execute
autogen.ChatCompletion.start_logging(conversations)
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\autogen\oai\completion.py", line 1180, in start_logging
raise ERROR
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\Chat.py", line 53, in execute
autogen.ChatCompletion.start_logging(conversations)
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\autogen\oai\completion.py", line 1180, in start_logging
raise ERROR
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\Chat.py", line 53, in execute
autogen.ChatCompletion.start_logging(conversations)
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\autogen\oai\completion.py", line 1180, in start_logging
raise ERROR
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\Chat.py", line 53, in execute
autogen.ChatCompletion.start_logging(conversations)
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\autogen\oai\completion.py", line 1180, in start_logging
raise ERROR
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\Chat.py", line 53, in execute
autogen.ChatCompletion.start_logging(conversations)
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\autogen\oai\completion.py", line 1180, in start_logging
raise ERROR
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\TextGeneration.py", line 32, in execute
response = oai.ChatCompletion.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\autogen\oai\completion.py", line 792, in create
raise ERROR
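The deprecation message itself names the requirement: the legacy autogen.Completion/ChatCompletion API only works with an openai 0.x release plus diskcache. One way to satisfy that in a portable install (the pin is a suggestion, not tested against this setup):

D:\AI\ComfyUI_windows_portable\python_embeded\python.exe -m pip install "openai<1" diskcache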

Model name/path doesn't conform to ComfyUI conventional behavior for local models

Issue:
Local models, such as those loaded via the llama-cpp node, behave unconventionally and lack the dropdown that all the other Comfy nodes have. This will result in low adoption and more false bug reports as people struggle with NodeGPT's eccentricities here.

Desired Behavior:
[Screenshot: conventional model dropdown]

Solution:
I have asked comfyanonymous to add an 'llm' folder to 'models', and two lines to the folder_paths.py file, so that work with LLMs can follow the ComfyUI conventions used for other kinds of models.

Once that has been adopted (if it is) I would recommend the following changes to llama-cpp.py to use the normal model dropdown style that everyone's familiar with:

[Screenshot: proposed llama-cpp.py changes]
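A sketch of what that change could look like in llama-cpp.py, assuming ComfyUI gains an "llm" entry in folder_paths as requested (the node and widget names here are illustrative):

import folder_paths  # ComfyUI's model-directory registry

class LlamaCpp:
    @classmethod
    def INPUT_TYPES(cls):
        # Standard dropdown populated from models/llm, like checkpoints or LoRAs
        return {"required": {"model_name": (folder_paths.get_filename_list("llm"),)}}

    RETURN_TYPES = ("LLM",)
    FUNCTION = "load"

    def load(self, model_name):
        from llama_cpp import Llama
        model_path = folder_paths.get_full_path("llm", model_name)
        return (Llama(model_path=model_path),)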

ComfyUI nodes in red

Hi there,

I am trying to install NodeGPT inside ComfyUI,

I have ComfyUI already installed and working (I updated it today with the Python scripts).
I git-cloned this repository, then:
cd NodeGPT
install.bat
pip install -r requirements.txt
(I have not installed LM Studio yet, but I don't think that is a problem at this step.)

Here is the folder structure : ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT

I have the venv folder inside, so it looks like everything was installed.

If I load one of your workflows, the nodes from NodeGPT appear in red; the other nodes are fine though.

Here is the error message at startup
When loading the graph, the following node types were not found:
UserProxy
DisplayString
Chat
Assistant
PrimitiveNode
LM_Studio

Any idea what could be wrong? I am not too technical, so I am lost.

Thank you!

TextGeneration node seems to be ignoring the cache setting

I'm using the TextGeneration node of your suite. It works great, but I'm not certain it follows the cache = false setting. Or, perhaps, I don't understand the intended behavior.

[Screenshot: TextGeneration node with cache = false]

The behavior I would want, or expect from cache = false is that, even if the submitted input remains identical, the node will still invoke the LLM.
Instead, if nothing changes in the system (including the input to the node), the node doesn't do anything, regardless of the setting for the cache value.

The only way for me to trigger a new generation is either by killing ComfyUI, or changing the cache value from false to true or from true to false. Irrespective of the actual setting, the change of state forces a new LLM generation.
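A plausible explanation: ComfyUI itself only re-executes a node when one of its inputs changes, independently of autogen's cache flag, which matches the behavior described. Nodes can opt out of that via an IS_CHANGED classmethod; a sketch of how TextGeneration could honor cache = false (an illustration, not NodeGPT's actual code):

import random

class TextGeneration:
    # ...existing INPUT_TYPES / FUNCTION / execute unchanged...

    @classmethod
    def IS_CHANGED(cls, cache=True, **kwargs):
        if not cache:
            # A fresh value every call marks the node dirty, so ComfyUI
            # re-executes it even when all inputs are identical.
            return random.random()
        return ""  # stable value: let ComfyUI reuse the cached result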

Error generating text: {"error":"This app has no endpoint /api/textgen/."}

Hi there,

Could you please share a workflow.json from Comfy, as I can't make this work.
I get this error:
got prompt
Error generating text: {"error":"This app has no endpoint /api/textgen/."}
Traceback (most recent call last):
File "G:\ComfyUI\ComfyUI\execution.py", line 184, in execute
executed += recursive_execute(self.server, prompt, self.outputs, x, extra_data)
File "G:\ComfyUI\ComfyUI\execution.py", line 62, in recursive_execute
input_data_all = get_input_data(inputs, class_def, unique_id, outputs, prompt, extra_data)
File "G:\ComfyUI\ComfyUI\execution.py", line 25, in get_input_data
obj = outputs[input_unique_id][output_index]
KeyError: 0

MyComfy.bat is:

.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --listen
pause

My ooba bat is:

@echo off

@echo Starting the web UI...

cd /D "%~dp0"

set MAMBA_ROOT_PREFIX=%cd%\installer_files\mamba
set INSTALL_ENV_DIR=%cd%\installer_files\env

if not exist "%MAMBA_ROOT_PREFIX%\condabin\micromamba.bat" (
  call "%MAMBA_ROOT_PREFIX%\micromamba.exe" shell hook >nul 2>&1
)
call "%MAMBA_ROOT_PREFIX%\condabin\micromamba.bat" activate "%INSTALL_ENV_DIR%" || ( echo MicroMamba hook not found. && goto end )
cd text-generation-webui

call python server.py --auto-devices --chat --listen --no-stream --model vicuna-13b-GPTQ-4bit-128g --wbits 4 --groupsize 128 --model_type llama

:end
pause

Thank you in advance

Customizable Agent roles

I would love to have a node that is a definable agent. The system message should be a text box so the agent can be customized beyond the defaults.
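A sketch of what such a node could look like; the AssistantAgent call follows autogen's documented API, while the node itself and its widget names are hypothetical:

import autogen

class DefinableAgent:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "name": ("STRING", {"default": "analyst"}),
            "system_message": ("STRING", {"multiline": True,
                                          "default": "You are a helpful assistant."}),
            "llm_config": ("LLM",),
        }}

    RETURN_TYPES = ("AGENT",)
    FUNCTION = "create"

    def create(self, name, system_message, llm_config):
        # system_message comes straight from the node's text box, so the
        # agent's role can be customized beyond the built-in defaults.
        agent = autogen.AssistantAgent(name=name,
                                       system_message=system_message,
                                       llm_config=llm_config)
        return (agent,)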

llama-cpp model path fails to handle absolute

Background:
Because the llama-cpp node's model-path is different from all the other model loaders and I couldn't get it to load the .gguf I put in the ComfyUI/custom_nodes/NodeGPT/models folder, I tried using an absolute path.

Bug:
Using an absolute path trips the assert on line 46: the model comes back as None and doesn't load, as shown in the screenshots.
[Screenshots: assert failure with absolute path]

Desired Behavior:
For absolute paths to successfully load the model.

Transcending the problem though:
Make it function like other model loaders, for example the one for AnimateDiff:

[Screenshot: AnimateDiff model loader dropdown]

It doesn't allow a model_path as a string; instead it provides an easy-to-use dropdown if your model is in the correct folder.

Perhaps a setting somewhere to define the path on a more permanent basis?
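A sketch of path handling that would cover both cases; the models-folder default is an assumption based on the folder mentioned above:

import os

MODELS_DIR = os.path.join(os.path.dirname(__file__), "models")

def resolve_model_path(model_path: str) -> str:
    # Absolute paths pass through untouched; bare filenames are resolved
    # against NodeGPT's models folder.
    if not os.path.isabs(model_path):
        model_path = os.path.join(MODELS_DIR, model_path)
    if not os.path.isfile(model_path):
        # Fail with a clear message instead of tripping an assert on a None model.
        raise FileNotFoundError(f"No model file at {model_path}")
    return model_path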

Error occurred when executing TextGeneration: 'model'

I'm getting this error when trying to use the TextGeneration node. I think it has to do with this line in TextGeneration.py
config_list = LLM['LLM'],
not adhering to the correct format required by https://microsoft.github.io/autogen/docs/reference/oai/completion/

I'm a little confused because it was working a week or so ago then it just stopped working with this error.

Error occurred when executing TextGeneration:

'model'

File "C:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\TextGeneration.py", line 45, in execute
response = oai.ChatCompletion.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^

If it helps, last update broke it

I update daily at 5 pm UTC on the dot, and the version released yesterday broke it. It goes on a rampage trying to download everything under the sun (fig. 1) and eventually runs into problems (fig. 2).

[Fig. 1: screenshot]

[Fig. 2: screenshot]

Windows 11, portable install updated 3/28.
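Back to the 'model' error above: autogen's documented config_list format is a list of dicts that each carry a "model" key, and a missing key would surface as exactly this bare 'model' error. A well-formed entry, with illustrative endpoint values for a local server:

config_list = [{
    "model": "local-model",                   # required key; its absence raises 'model'
    "api_base": "http://127.0.0.1:1234/v1",   # e.g. an LM Studio server
    "api_key": "NULL",                        # placeholder accepted by local servers
}]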

Small bug in Output2String.py

It only returns the first letter of the generated prompt. I'm no coder, but changing the return to

return(text,);

seems to work for me :)
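That fix is consistent with how ComfyUI consumes node outputs: it expects each node to return a tuple of outputs, so returning a bare string makes the framework index into the string and pick up a single character. A minimal illustration:

def execute_broken(text):
    return text      # downstream sees text[0], a single character

def execute_fixed(text):
    return (text,)   # downstream sees the whole string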

autogen is called pyautogen

Installing "autogen" via pip did not work for me, the correct package is called "pyautogen" in the pypi repository.

Cannot connect Local LLM to Ollama Node

Hello,

Firstly, thank you for this repo. When I try to connect the Ollama node to the Mistral 7B model served locally by ollama serve, I get this error again and again. Is LM Studio the only option for using NodeGPT in ComfyUI? If so, how can I serve the models on an Ubuntu Server that has no GUI?

[Screenshot of the error]
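The screenshot with the exact error is missing, but a useful first step is confirming that the Ollama server is reachable from the machine running ComfyUI. A small check against Ollama's documented /api/generate endpoint (host and model name are illustrative):

import json
import urllib.request

payload = {"model": "mistral", "prompt": "Say hi", "stream": False}
req = urllib.request.Request(
    "http://127.0.0.1:11434/api/generate",  # default address of `ollama serve`
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # If this fails, the node cannot reach Ollama either; check host, port,
    # and that the model has been pulled (ollama pull mistral).
    print(json.load(resp)["response"])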
