
Comments (4)

szalpal commented on May 31, 2024

Hello @austingg!
I'm not sure whether you are referring to dynamic batching or to model concurrency. Anyhow, dali_backend supports both.

In the dynamic batching case, all you need to do is specify a sufficiently large max_batch_size in the model configuration and pass the same value to the batch_size argument of the DALI pipeline, e.g.

config.pbtxt
------------
backend: "dali"
max_batch_size: 256


dali_pipeline.py
----------------
import nvidia.dali as dali
import nvidia.dali.types as types

@dali.pipeline_def(batch_size=256, num_threads=1, device_id=0)
def pipe():
    # Raw, encoded images are fed in by Triton through the external source.
    images = dali.fn.external_source(device="cpu", name="DALI_INPUT_0")
    # "mixed" decodes partly on CPU, partly on GPU, and produces GPU buffers.
    images = dali.fn.decoders.image(images, device="mixed", output_type=types.RGB)
    images = dali.fn.resize(images, resize_x=299, resize_y=299)
    # Crop, cast to float and normalize with ImageNet mean/std (0-255 range).
    images = dali.fn.crop_mirror_normalize(images,
                                           dtype=types.FLOAT,
                                           output_layout="HWC",
                                           crop=(299, 299),
                                           mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
                                           std=[0.229 * 255, 0.224 * 255, 0.225 * 255])
    return images
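
As a side note, tritonserver does not read the Python file directly; the DALI backend loads a serialized pipeline (model.dali by default) from the model's version directory. A minimal sketch of that step, assuming the pipeline_def above is saved as dali_pipeline.py:

serialize.py
------------
# Sketch: build and serialize the pipeline so that tritonserver can load it
# from <model_repository>/<model_name>/1/model.dali.
from dali_pipeline import pipe

pipe().serialize(filename="model.dali")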

In the case of model concurrency, no special action is necessary; it should just work. If it doesn't, please let us know.


austingg commented on May 31, 2024

Thanks for your help, @szalpal.
I have already set max_batch_size, but other inference backends also specify dynamic_batching {}. Do you mean dali_backend doesn't need it? I used max_batch_size: 256 and an instance_group count of 10, and I checked get_inference_statistics(): every DALI backend request ran with batch size 1.
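
For reference, the dynamic_batching stanza that other backends use looks roughly like the sketch below; the preferred batch sizes, queue delay and GPU instance count are illustrative assumptions, not a verified dali_backend configuration:

config.pbtxt
------------
# Illustrative sketch only -- the values below are assumptions.
backend: "dali"
max_batch_size: 256
dynamic_batching {
  preferred_batch_size: [ 64, 256 ]
  max_queue_delay_microseconds: 100
}
instance_group [
  {
    count: 10
    kind: KIND_GPU
  }
]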


szalpal commented on May 31, 2024

@austingg,

My apologies. In fact, dynamic batching in dali_backend is not fully supported yet. We expect to handle this in the near future; it will most probably be included in the tritonserver:21.06 release. The feature might be available earlier, but only in a main-branch build.

I'll keep this issue open until we ship dynamic batching support.


austingg commented on May 31, 2024

Looking forward to it. Since I use dali_backend as the preprocessing part of an ensemble model, the inference backend can use dynamic batching, but preprocessing is the bottleneck.
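
A typical ensemble wiring for such a setup is sketched below with hypothetical model and tensor names (dali_preprocess, classifier, RAW_IMAGE, DALI_INPUT_0/DALI_OUTPUT_0 are all assumptions for illustration):

config.pbtxt (ensemble)
-----------------------
# Sketch of an ensemble that feeds raw encoded images through a DALI
# preprocessing model and then into an inference model; every name below
# is hypothetical.
name: "preprocess_and_infer"
platform: "ensemble"
max_batch_size: 256
input [
  {
    name: "RAW_IMAGE"
    data_type: TYPE_UINT8
    dims: [ -1 ]
  }
]
output [
  {
    name: "CLASSIFICATION"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
ensemble_scheduling {
  step [
    {
      model_name: "dali_preprocess"
      model_version: -1
      input_map { key: "DALI_INPUT_0" value: "RAW_IMAGE" }
      output_map { key: "DALI_OUTPUT_0" value: "preprocessed_image" }
    },
    {
      model_name: "classifier"
      model_version: -1
      input_map { key: "INPUT" value: "preprocessed_image" }
      output_map { key: "OUTPUT" value: "CLASSIFICATION" }
    }
  ]
}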
