
Comments (6)

bssrdf commented on September 13, 2024

Actually, the render part of the code enables GPUs regardless of whether the enable_gpu flag is passed:

# render/render.py
# Excerpt; `logger` and `Timer` are utilities defined elsewhere in this module.
import bpy

def enable_gpu(engine_name='CYCLES'):
    # from: https://github.com/DLR-RM/BlenderProc/blob/main/blenderproc/python/utility/Initializer.py
    compute_device_type = None
    prefs = bpy.context.preferences.addons['cycles'].preferences
    # Use Cycles with GPU compute
    bpy.context.scene.render.engine = engine_name
    bpy.context.scene.cycles.device = 'GPU'

    # Refresh Blender's device list for every available backend
    for device_type in prefs.get_device_types(bpy.context):
        prefs.get_devices_for_type(device_type[0])

    # Pick the first matching backend, preferring OPTIX over CUDA
    for gpu_type in ['OPTIX', 'CUDA']:  # , 'METAL']:
        found = False
        for device in prefs.devices:
            if device.type == gpu_type and (compute_device_type is None or compute_device_type == gpu_type):
                prefs.compute_device_type = gpu_type
                logger.info('Device {} of type {} found and used.'.format(device.name, device.type))
                found = True
                break
        if found:
            break

    # make sure that all visible GPUs are used
    for device in prefs.devices:
        device.use = True

    return prefs.devices


def render_image(
    camera_id,
    min_samples,
    num_samples,
    time_limit,
    frames_folder,
    adaptive_threshold,
    exposure,
    passes_to_save,
    flat_shading,
    use_dof=False,
    dof_aperture_fstop=2.8,
    motion_blur=False,
    motion_blur_shutter=0.5,
    render_resolution_override=None,
    excludes=[],
):
    tic = time.time()

    camera_rig_id, subcam_id = camera_id

    for exclude in excludes:
        bpy.data.objects[exclude].hide_render = True

    with Timer(f"Enable GPU"):
        devices = enable_gpu()
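
To double-check what enable_gpu() actually configures, something like the following can be run in Blender's Python console (a quick sketch using standard bpy introspection; not part of infinigen, and the file name is just illustrative):

# check_cycles_gpu.py -- inspect what Cycles will render with
import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences
print('render engine:  ', bpy.context.scene.render.engine)   # expect 'CYCLES'
print('cycles device:  ', bpy.context.scene.cycles.device)   # expect 'GPU'
print('compute backend:', prefs.compute_device_type)         # e.g. 'CUDA' or 'OPTIX'
for device in prefs.devices:
    print(f'  {device.name} ({device.type}) use={device.use}')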

If I use tools/manage_datagen_jobs.py to generate images, the GPU/CUDA is not used in the rendering step, even though the log file indicates the GPU was found and enabled:

[00:08:01.074] [times] [INFO] | [Enable GPU]
[00:08:01.314] [rendering.render] [INFO] | Device NVIDIA GeForce GTX 1070 of type CUDA found and used.
[00:08:01.314] [rendering.render] [INFO] | Device NVIDIA GeForce GTX 1070 of type CUDA found and used = True.
[00:08:01.314] [rendering.render] [INFO] | Device Intel Core i7-6700 CPU @ 3.40GHz of type CPU found and used = False.
[00:08:01.314] [times] [INFO] | [Enable GPU] finished in 0:00:00.240083

However, if I manually execute the render step as listed in run_pipeline.sh, e.g.

nice -n 20 $BLENDER --background -y -noaudio --python generate.py -- --input_folder outputs/seaice6/0/fine_0_0_0048_0 --output_folder outputs/seaice6/0/frames_0_0_0048_0 --seed 0 --task render --task_uniqname short_0_0_0048_0 -g arctic intermediate -p render.render_image_func=@full/render_image LOG_DIR='outputs/seaice6/0/logs' execute_tasks.frame_range=[48,48] execute_tasks.camera_id=[0,0] execute_tasks.resample_idx=0

Blender uses the GPU and rendering is significantly faster (25 minutes on a GTX 1070 vs. 4 hours on an Intel i7-6700 @ 3.4 GHz).
I don't know why tools/manage_datagen_jobs.py is not using the GPU. What is the difference between running tools/manage_datagen_jobs.py and directly executing the commands in run_pipeline.sh?

Update

I finally figured out why tools/manage_datagen_jobs.py is not using GPUs for rendering.

The culprit is that local_16GB.gin contains the line LocalScheduleHandler.use_gpu=False, which turns GPUs off.
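
For reference, the offending binding as it appears in the config (excerpt; surrounding lines omitted):

# tools/pipeline_configs/local_16GB.gin
LocalScheduleHandler.use_gpu=False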

So if you, like me, want to do all other steps in CPU but only rendering on GPU, do this:

  • Create an enable_gpu_rendering.gin file in tools/pipeline_configs/ and put the following in it:
LocalScheduleHandler.use_gpu=True
  • At the command line, call something like this:
python -m tools.manage_datagen_jobs --output_folder outputs/hello_world --num_scenes 1 --pipeline_configs local_16GB enable_gpu_rendering monocular blender_gt --specific_seed 0 --configs desert simple

This runs coarse, populate, and fine_terrain on the CPU, but the short stage (rendering) on the GPU.
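
Depending on your checkout, you may be able to pass the binding directly instead of creating a new gin file; the --pipeline_overrides flag here is an assumption about the CLI, so verify it with python -m tools.manage_datagen_jobs --help first:

python -m tools.manage_datagen_jobs --output_folder outputs/hello_world --num_scenes 1 --pipeline_configs local_16GB monocular blender_gt --pipeline_overrides LocalScheduleHandler.use_gpu=True --specific_seed 0 --configs desert simple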

For my machine, which is very old, this is the only way to get decent performance out of it. In particular, Cycles rendering is very slow on CPU alone, but switching to CUDA made a real difference, even on a generations-old GTX 1070.

If you have beefy GPUs (3090/4090), turn on enable_gpu as well to accelerate terrain generation.

Thanks to @badgids for the LocalScheduleHandler.use_gpu=True tip.


David-Yan1 commented on September 13, 2024

enable_gpu really means "enable CUDA-accelerated terrain meshing" and will be renamed. GPU-accelerated rendering is supposed to happen in the short step regardless of whether this flag is passed.


luoluoluooo commented on September 13, 2024

OK, I got it: not all stages have GPU acceleration.


WellTung666 commented on September 13, 2024

My GPU is an NVIDIA RTX 3090. I have already installed CUDA, and install.sh also built the CUDA package. I also enabled enable_gpu on the command line, but rendering is still very slow, and in nvidia-smi the process does not occupy any graphics memory.

I also have the same problem; how can I solve it?


luoluoluooo commented on September 13, 2024

> My GPU is an NVIDIA RTX 3090. [...] I also have the same problem; how can I solve it?

Perhaps this is not a problem. In some stages the program uses GPU acceleration; in other stages it does not.


araistrick commented on September 13, 2024

You should expect to see GPU usage briefly during the fine_terrain stage, and for a decent duration during any rendering stage. Zero GPU usage during the coarse/populate stages is expected and typical. Confusion re: LocalScheduleHandler.use_gpu will be cleared up via a PR.
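
If you want to verify this per stage, plain nvidia-smi polling is enough, e.g.:

nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 5

This prints GPU utilization and memory use every 5 seconds; expect a brief spike during fine_terrain and sustained use during any rendering stage.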

