
Comments (4)

pyquest1 commented on September 27, 2024

bump


NiklasKroeger-AlliedVision commented on September 27, 2024

Hi and sorry for the wait. I was on vacation.

The example you copied from section 5.4 is the very minimal overview of how asynchronous image acquisition works with our cameras. A good overview of the difference between synchronous and asynchronous acquisition can be found in section 3.6 of the Vimba Manual. This can be found in your Vimba installation directory under Documentation/Vimba Manual.pdf.

What is cam.queue_frame(frame) ? Where is it defined? I've looked everywhere in the \Source\vimba folder and in the documentation.

An explanation of what queue_frame does can also be found in the Vimba Manual I mentioned above. It is essentially a way for you to tell vimba that you are done with the image data contained in the frame and that vimba may use that memory to store the next image. You put the frame back into a queue, from which vimba takes buffers that are filled with incoming images.

Why do I need this function callback? How do I access the frames after the camera has stopped streaming? Are they still in the buffer? Cam does not seem to be the same as the camera.py module...

The frame callback is how you are handed back control of the frame in asynchronous image acquisition. Again, please see the Vimba Manual section I mentioned above. The idea is that you prepare a number of frame buffers that vimba can fill with new images. You then put all these frame buffers into a queue from which vimba can take them and fill them with incoming images. This happens in a separate thread, which is why the transfer method is called asynchronous: from the user-code perspective there is no blocking function call waiting for a frame to arrive. Once the image transfer into a buffer is completed, the registered callback function is called with the filled frame as its argument, so you can perform your image processing in that callback function. This again happens in a thread that is not the main thread. Once you are done with your processing, you put the frame buffer back into the queue so vimba can reuse the memory.

If you take a look at our asynchronous acquisition example provided with VimbaPython, you can see that data from the frame can only be used in the frame handler function. In that example the frame is simply printed to give the user some feedback that a frame arrived and in what state, but the same would be true if you want to process the actual image data. You either have to do that in the callback function, or copy the image data over to some other memory so you can use it in a different thread (like the main thread).
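
The buffer lifecycle described above (queue buffers, vimba fills one, the callback runs, you requeue) can be sketched without a camera. The FakeCamera class below is a made-up stand-in that simulates vimba's streaming thread; only the handler signature and the queue_frame call mirror the real VimbaPython API:

```python
import queue
import threading

class FakeCamera:
    """Hypothetical stand-in for a VimbaPython Camera: fills queued buffers in a background thread."""
    def __init__(self, n_frames=5):
        self._pool = queue.Queue()           # the "Input Buffer Pool"
        self._n_frames = n_frames

    def queue_frame(self, frame):
        # Hand the buffer back so it can be reused for the next image.
        self._pool.put(frame)

    def start_streaming(self, handler, buffer_count=3):
        for i in range(buffer_count):        # steps 1-3: allocate, announce, queue
            self._pool.put({'id': i, 'data': None})
        def _stream():
            for n in range(self._n_frames):
                frame = self._pool.get()     # step 4: take a buffer from the pool
                frame['data'] = n            # ...and "fill" it with an image
                handler(self, frame)         # step 5: hand the filled frame to the user
        self._thread = threading.Thread(target=_stream)
        self._thread.start()

    def stop_streaming(self):
        self._thread.join()

received = []

def frame_handler(cam, frame):
    received.append(frame['data'])           # step 6: work with the image
    cam.queue_frame(frame)                   # step 7: requeue so the buffer is reused

cam = FakeCamera(n_frames=5)
cam.start_streaming(frame_handler, buffer_count=3)
cam.stop_streaming()
print(received)                              # → [0, 1, 2, 3, 4]
```

Note that only three buffers serve five frames: because the handler requeues each buffer, the same memory is reused instead of being reallocated per image.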

I have looked through the more complicated examples and am just completely lost as to how to stream a camera. (And yes, I got the examples to work. They stream wonderfully.) But I don't want to use OpenCV. Let's say I want to use matplotlib. How do I do that? Ultimately, I want to stream two or more cameras to a streamlit/flask-type environment. The examples have no documentation of what is going on.

Again, this becomes easier once the concept of asynchronous image acquisition is fully understood. The filling of the image buffer and the running of the callback function happen in separate threads, not the main thread. This means that you need to perform some kind of synchronization and thread-safe copying of the image data if you want to use it in the main thread. If you only want to perform some light image processing, this can be done directly in the callback function.
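
The usual pattern for that hand-off is a thread-safe queue.Queue. The sketch below simulates the streaming thread with a plain thread and a dummy camera object (both made up for illustration); with VimbaPython the handler would instead copy the real pixel data, e.g. frame.as_numpy_ndarray().copy(), before requeuing:

```python
import queue
import threading

frame_queue = queue.Queue(maxsize=10)        # thread-safe hand-off to the main thread

class DummyCam:                              # stand-in so the handler's requeue call works
    def queue_frame(self, frame):
        pass

def frame_handler(cam, frame):
    # Runs in the streaming thread. Copy the data out of the buffer,
    # then hand the buffer back so it can be reused.
    try:
        frame_queue.put_nowait(frame['data'])
    except queue.Full:
        pass                                 # drop frames rather than stall the stream
    cam.queue_frame(frame)

# Simulate five callback invocations from a separate thread.
def fake_stream():
    cam = DummyCam()
    for n in range(5):
        frame_handler(cam, {'data': n})

t = threading.Thread(target=fake_stream)
t.start()
t.join()

# Main thread: drain whatever arrived.
images = []
while not frame_queue.empty():
    images.append(frame_queue.get())
print(images)                                # → [0, 1, 2, 3, 4]
```

Using put_nowait with a bounded queue is a deliberate choice here: if the consumer falls behind, frames are dropped instead of blocking the streaming thread.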

As a first try: The program I would like to write would look something like this: [...]

If you only require a single image from the camera it might be easier for you to start out with synchronous image acquisition. The asynchronous image acquisition is more performant, but this only really comes into play if you want to continuously record images. For single images synchronous acquisition would be fine. This also comes closer to the code you are aiming for. For single images there is cam.get_frame(). Here is a minimum working example that you can use to get a frame from your camera into the variable named image.

import sys
from typing import Optional
import vimba


def abort(reason: str, return_code: int = 1):
    print(reason + '\n')

    sys.exit(return_code)


def get_camera(camera_id: Optional[str]) -> vimba.Camera:
    with vimba.Vimba.get_instance() as vmb:
        if camera_id:
            try:
                return vmb.get_camera_by_id(camera_id)

            except vimba.VimbaCameraError:
                abort('Failed to access Camera \'{}\'. Abort.'.format(camera_id))

        else:
            cams = vmb.get_all_cameras()
            if not cams:
                abort('No Cameras accessible. Abort.')

            return cams[0]


def setup_camera(cam: vimba.Camera):
    with cam:
        # Try to adjust GeV packet size. This feature is only available for GigE cameras.
        try:
            cam.GVSPAdjustPacketSize.run()

            while not cam.GVSPAdjustPacketSize.is_done():
                pass

        except (AttributeError, vimba.VimbaFeatureError):
            pass


def main():
    with vimba.Vimba.get_instance():
        with get_camera(None) as cam:
            setup_camera(cam)

            image = cam.get_frame()

            print(image)


if __name__ == "__main__":
    main()

I hope this helps.


pyquest1 commented on September 27, 2024

Hi Niklas,

Thanks so much for the detailed response. I have got a very nice script working that uses synchronous communication. However, the update rate of the frames is too slow. Therefore, I am diving into your asynchronous version.

So section 3.6 of your manual is informative but not detailed enough to be useful? Note: if I search (Ctrl-F) for queue_frame, there are no results in the manual discussing what this object is.

So let's take the asynchronous example that you provided and look at this code snippet. If I wanted to modify frame_handler so that it updates an array in a Python workspace/environment, how would I do this? Please see the following comments in the code for what I understand about the commands.

global array

def frame_handler(cam: Camera, frame: Frame):
    print('{} acquired {}'.format(cam, frame), flush=True)  # Print the camera and frame number?
    global array
    array = frame.as_numpy_ndarray()  # Convert frame to an array
    cam.queue_frame(frame)  # Put the frame back into the queue? Why put it back?

if __name__ == '__main__':
    main()  # Run main program

I don't see how I ever would, since the call that registers frame_handler is found here:
cam.start_streaming(handler=frame_handler, buffer_count=10)  # camera should start streaming with a buffer of 10 frames. Again, where/how are these frames accessible external to some function call?

What I want to do is the following. In some other program I would do

import vimba_asynchronous as va
import matplotlib.pyplot as plt

if (arg) == True:
    va.main()
    plt.plot(array)
else:
    ...

Thanks


NiklasKroeger-AlliedVision commented on September 27, 2024

So section 3.6 of your manual is informative but not detailed enough to be useful? Note: if I search (Ctrl-F) for queue_frame, there are no results in the manual discussing what this object is.

Could you clarify what you feel is missing in section 3.6 of the Vimba Manual? From my point of view there is enough information to understand how the buffer management in Vimba works, but I am of course possibly biased, as I have been working with this system for some time. Let me try to expand on the steps a bit more here.

Quotes taken from the Vimba Manual section 3.6:

User:

  1. Allocate memory for the frame buffers on the host PC.
  2. Announce the buffer (this hands the frame buffer over to the API).
  3. Queue a frame (prepare buffer to be filled).

In VimbaPython, steps 1 to 3 are performed for you automatically when you call cam.start_streaming(handler, buffer_count). The passed number of buffers is allocated so that enough memory is available to store the recorded images; the allocated buffers are announced (made available) to vimba and then queued. The queuing puts the respective buffer into an "Input Buffer Pool" from which buffers are taken when a new image becomes available. To be able to do this, vimba obviously needs access to the respective buffer, which is why they have to be announced first. This also means vimba is only able to give you back image data if there is a free buffer in that "Input Buffer Pool" where the image data can be put.

Vimba:
  4. Vimba fills the buffer with an image from the camera.

Vimba takes one of the buffers currently in the "Input Buffer Pool" and stores the image data from the camera inside it. This removes the buffer from the "Input Buffer Pool". If the "Input Buffer Pool" is empty, the image can not be stored and is "lost".

  5. Vimba returns the filled buffer (and hands it over to the user).

This is where the frame callback (sometimes named handler) comes into play. Once the entire image data has been transferred, vimba calls the callback function you registered as handler in the call to cam.start_streaming. The arguments with which this callback is called are the camera from which the frame was recorded, and the frame itself. This gives you access to the image data and allows you to do your image processing/handling.

User:
  6. Work with the image.

This would be what you are doing in your callback function. You can use the image data to perform your analysis, or copy the data over to some other place if you just want to record it for now. But remember that vimba took the buffer you are currently working with from the "Input Buffer Pool". That means that while you are still using it, vimba cannot add new image data to it. So once you are done with your image processing, you need to explicitly allow vimba to reuse the buffer. That is why you again need to queue the frame to add it back into the "Input Buffer Pool" for vimba to use.

  7. Requeue the frame to hand it over to the API.

This essentially signals to vimba that you no longer need the data inside that buffer and that it can be overwritten with new image data from the camera. You put the buffer back into the "Input Buffer Pool" by passing it as a parameter when calling cam.queue_frame(frame).
If you want further information on this, you might be interested in reading about the GenTL standard. Especially section 5.2 defines the steps performed in the so called "Acquisition Chain" and might give you some more insight.

This has the advantage that you can control how many buffers you want to use, and they do not have to be allocated for every frame you want to record. Instead you handle your image data and tell vimba to simply reuse the memory it already knows about. This is what makes asynchronous acquisition so much faster than synchronous acquisition. However, this does come at the cost of a more complex program structure, as the frame handler function you registered as callback will be called by vimba in a separate thread. If I interpret your question correctly, this is what you are asking further down in your comment.

Again, where/how are these frames accessible external to some function call.

As vimba (and all GenTL frameworks) returns your image data in the frame handler callback, you only have access to your image data in that function. The idea is that you perform your analysis of the image data there and only pass on the results of that analysis. In our asynchronous_grab_opencv.py example, the frame handler takes the image data and displays it in an OpenCV window. The image data is only used inside that function and not passed on.

A more involved example, probably closer to what you want to do, can be found in multithreading_opencv.py. Here we have a main thread that is responsible for spawning several producer threads (one for each detected camera) and a consumer thread. All of these threads share a frame queue object, which is thread-safe and can be used to store objects.

The Producers register themselves as frame handler callbacks to the cameras and tell them to start streaming. Their frame handler callback does nothing but take the image it receives from the camera and copy it over to the frame queue (if the queue is not already full). After that it considers the frame to be done and it is put back into the "Input Buffer Pool" of vimba.

The Consumer is just an infinite loop, that takes images from that frame queue and displays them again. Only if a certain key is pressed, the infinite loop ends and the main thread continues.

Perhaps this example will help you figure out a working program structure.
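
The producer/consumer structure of that example can be sketched with only standard-library threads and queues. The camera side is replaced here by plain loops, and the names cam0/cam1 and frame contents are made up for illustration; they are not VimbaPython API:

```python
import queue
import threading

frame_queue = queue.Queue(maxsize=10)
SENTINEL = None

def producer(cam_id, n_frames):
    # Mirrors the example's Producer: push (camera, image) pairs, drop on full.
    for n in range(n_frames):
        try:
            frame_queue.put_nowait((cam_id, n))
        except queue.Full:
            pass

def consumer(results):
    # Mirrors the Consumer: loop until told to stop
    # (the real example waits for a key press instead of a sentinel).
    while True:
        item = frame_queue.get()
        if item is SENTINEL:
            break
        results.append(item)

results = []
consumer_t = threading.Thread(target=consumer, args=(results,))
consumer_t.start()

producers = [threading.Thread(target=producer, args=(cam_id, 3))
             for cam_id in ('cam0', 'cam1')]
for t in producers:
    t.start()
for t in producers:
    t.join()

frame_queue.put(SENTINEL)   # signal the consumer to stop
consumer_t.join()
print(sorted(results))      # 3 frames from each of the two "cameras"
```

The queue is the only object shared between threads, which is what makes the design safe: producers never touch the consumer's state directly.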

What I want to do is the following. In some other program I would do
[...]

To me it is not quite clear what exactly you expect from va.main(). Should it just record a single image and make it available as array for you to plot, or is there some kind of processing you want to do? If you want to process multiple images, I would suggest you write a va.main that follows a structure similar to the multi-threaded example I linked above:

  • Define a recorder that takes your images and either
    • processes them immediately and stores the result of the processing in some thread-safe object (like a queue),
    • or stores the image in a thread-safe object and leaves the processing for a separate thread. This separate processor thread would then have to store the result of the processing in some thread-safe object.
  • Have your main function run in an infinite loop until some kind of condition is met that signals that you are done. This can be something like a number of processed images, the length of your output array or an elapsed time. Whatever is most appropriate for your use case.
  • Take the result of your processing from the thread-safe object and transform it into the form you want to use for your plotting (I assume a numpy array or something similar).
  • End your main function so you can plot your result as you planned.
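
A va.main following those steps could look roughly like the sketch below. The start/stop callables are hypothetical stand-ins for cam.start_streaming / cam.stop_streaming so that the structure can run without a camera; with VimbaPython the handler would additionally copy frame.as_numpy_ndarray() and call cam.queue_frame(frame) before returning:

```python
import queue
import threading

def acquire(n_frames, start_streaming, stop_streaming):
    """Collect frames into a thread-safe queue until the stop condition
    (here: a fixed frame count) is met, then return them for plotting."""
    frame_queue = queue.Queue()

    def handler(frame):
        # In the streaming thread: store a copy of the image data.
        frame_queue.put(frame)

    start_streaming(handler)
    # Main thread: block until the stop condition is reached.
    images = [frame_queue.get(timeout=2.0) for _ in range(n_frames)]
    stop_streaming()
    return images

# Hypothetical stand-in for a streaming camera, purely for illustration.
_thread = None

def fake_start(handler):
    global _thread
    _thread = threading.Thread(target=lambda: [handler(n) for n in range(10)])
    _thread.start()

def fake_stop():
    _thread.join()

result = acquire(5, fake_start, fake_stop)
print(result)   # → [0, 1, 2, 3, 4]
```

The caller then plots `result` (e.g. with matplotlib) after acquire returns, so all plotting stays in the main thread.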

I hope this (quite long by now) comment clears up your questions. As you can see, the steps that you feel are missing from the documentation become clear once you take a deeper dive into the GenTL standard, which our cameras follow. This also means that it will later be easy for you to move your program over to a more performant programming language like C++, as the general structure would be the same and you could simply switch to using our VimbaC++ API.

If you do have further questions regarding a proposed structure for your program feel free to contact our support team. They have a lot of experience with customer use cases and are very well equipped to help you in general programming questions. This would also allow you to share more details of your use case in a less public place.

I hope this clears up the problem of "missing documentation". The Vimba Python Manual should only be considered a small part of our documentation. It only aims to get you up to speed on some intricacies of VimbaPython (like the use of context managers that are specific to this API) and not as a full manual to all functionalities provided by all of Vimba. For this the general Vimba Manual is better suited. Further details can, as mentioned above, be found in the GenTL and GenICam standards we follow with our APIs. I would therefore consider this issue closed but please feel free to open a new one if you encounter problems with VimbaPython.

