
VimbaPython Introduction

Caution

VmbPy is the successor of VimbaPython and is actively maintained. VimbaPython is no longer actively developed and not recommended for new projects. It is therefore archived.

Vimba Python API

The Vimba Python API is part of Allied Vision's Vimba Suite as of Vimba 4.0. In addition, we provide VimbaPython on GitHub.
Vimba runs on Windows, Linux, and ARM and also contains C, C++, and .NET APIs. Vimba contains extensive documentation and examples for each API.

Prerequisites and installation

To use this software, you need:

  1. Python version 3.7 or higher
  2. An Allied Vision camera
  3. The Vimba SDK for Windows, Linux, or ARM. Please download the latest version. To install and use VimbaPython, follow the instructions in the Vimba_x.x_VimbaPython folder installed on your system. A quick check of the installation is sketched below.
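A quick way to verify the installation (a minimal sketch, assuming VimbaPython and the Vimba SDK are installed and at least one camera is connected) is to print the API version and the IDs of all detected cameras:

import vimba

print(vimba.__version__)

# List all cameras that Vimba can currently access.
with vimba.Vimba.get_instance() as vmb:
    for cam in vmb.get_all_cameras():
        print(cam.get_id())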

Contributors

carolaschoenrock, fklostermann, niklaskroeger-alliedvision


VimbaPython Issues

Unable to Run Vimba Viewer Jetson Nano

/home/sei/Downloads/Vimba_4_1/Tools/Viewer/Bin/arm_64bit/VimbaViewer: error while loading shared libraries: libpng12.so.0: cannot open shared object file: No such file or directory
I get the error above when I try to run Vimba Viewer on a Jetson Nano.

Support for python 3.6

I want to install this package on JetPack 4.2.2, which comes with Python 3.6.9, and found that installing Python 3.7 clashes with something in JetPack.

Would it be possible to have support for 3.6.9?

VimbaPython wrongly registered as -0.3.0- makes poetry crash

Here is the output of the installation on x64 Ubuntu 18.04.4 LTS inside a Python 3.7.6 conda env:

$ python -m pip install .
Processing /home/user/code/alvium-camera/VimbaPython
Building wheels for collected packages: VimbaPython
  Building wheel for VimbaPython (setup.py) ... done
  Created wheel for VimbaPython: filename=VimbaPython-_0.3.0_-py3-none-any.whl size=76321 sha256=eb348698775d61a3fc9822912f7bc0756e4d0731873f3c74294262a1e71faff1
  Stored in directory: /tmp/pip-ephem-wheel-cache-fh8pg4wy/wheels/64/08/f9/5776929ea5dfcd62957a1b643f071612dbd8e863b4aa9338ca
Successfully built VimbaPython
Installing collected packages: VimbaPython
Successfully installed VimbaPython--0.3.0-

and the package listing

$ pip list
Package     Version            
----------- -------------------
certifi     2019.11.28         
pip         20.0.2             
setuptools  45.2.0.post20200210
VimbaPython -0.3.0-            
wheel       0.34.2 

while accessing the version gives the right answer:

$ python -c "import vimba; print(vimba.__version__)"
0.3.0

The version number creates problems when using poetry instead of conda on a Jetson TX2 with Ubuntu 18.04.4 LTS inside a Python 3.7.6 env, no matter whether VimbaPython is installed with or without poetry:

$ poetry add git+https://github.com/alliedvision/VimbaPython.git

[ParseVersionError]
Unable to parse "-0.3.0-".

If installed independently inside the poetry project, $ poetry show crashes with the same output.

vimba.error.VimbaCameraError: Accessed Camera 'DEV_1AB22C003FC4' with invalid Mode 'AccessMode.Full'. Valid modes are: (<AccessMode.Full: 1>, <AccessMode.Read: 2>)

When I attempt to save camera settings using the following 2 lines:

            with tempfile.NamedTemporaryFile(suffix='.xml') as fp:
                c.save_settings(fp.name, PersistType.All)

I get an exception with the following stack trace:

Traceback (most recent call last):
File "./camera_allied_vision.py", line 72, in getRescannedConnectedCameras
c.enter()
File "/opt/conda/lib/python3.8/site-packages/vimba/util/tracer.py", line 134, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/vimba/camera.py", line 359, in enter
self._open()
File "/opt/conda/lib/python3.8/site-packages/vimba/util/tracer.py", line 134, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/vimba/util/context_decorator.py", line 44, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/vimba/camera.py", line 907, in _open
raise exc from e
vimba.error.VimbaCameraError: Accessed Camera 'DEV_1AB22C000D01' with invalid Mode 'AccessMode.Full'. Valid modes are: (<AccessMode.Full: 1>, <AccessMode.Read: 2>)
Traceback (most recent call last):
File "./camera_allied_vision.py", line 72, in getRescannedConnectedCameras
c.enter()
File "/opt/conda/lib/python3.8/site-packages/vimba/util/tracer.py", line 134, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/vimba/camera.py", line 359, in enter
self._open()
File "/opt/conda/lib/python3.8/site-packages/vimba/util/tracer.py", line 134, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/vimba/util/context_decorator.py", line 44, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/vimba/camera.py", line 909, in _open
self.__feats = discover_features(self.__handle)
File "/opt/conda/lib/python3.8/site-packages/vimba/util/tracer.py", line 134, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/vimba/feature.py", line 1242, in discover_features
call_vimba_c('VmbFeaturesList', handle, None, 0, byref(feats_count), sizeof(VmbFeatureInfo))

Any idea what's going on here? Every once in a while the error manifests and won't go away until I unplug the camera's USB cable and reconnect. Then the camera will be fine for a while, but the error eventually comes back.
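For reference, a minimal sketch of how the settings export is typically wrapped in the Vimba and camera contexts (assuming the camera can actually be opened, and reusing the camera ID from the error message above):

import tempfile

from vimba import Vimba, PersistType

with Vimba.get_instance() as vmb:
    with vmb.get_camera_by_id('DEV_1AB22C003FC4') as cam:
        # Settings can only be exported while the camera is open.
        with tempfile.NamedTemporaryFile(suffix='.xml') as fp:
            cam.save_settings(fp.name, PersistType.All)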

python version 3.7 or higher

I understand the first requirement is Python 3.7 or higher, but is there any way to run it with an earlier Python? I am trying to run neural network code, but it only supports Python up to 3.6. Basically, I am having to do something like
python3.8 video.py | python3.6 ml.py, where I am piping the images through a byte at a time. This works for one image, but a stream of images causes problems.

any ideas?
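One generic pattern for streaming images between two Python processes over a pipe (a hedged sketch, not VimbaPython-specific; send_frame and recv_frame are hypothetical helpers for the video.py and ml.py scripts mentioned above) is to length-prefix every frame, so the reader knows where one image ends and the next begins:

import struct
import sys

def send_frame(frame_bytes: bytes) -> None:
    # Producer side (e.g. the python3.8 script): 4-byte length header, then the payload.
    sys.stdout.buffer.write(struct.pack('<I', len(frame_bytes)))
    sys.stdout.buffer.write(frame_bytes)
    sys.stdout.buffer.flush()

def recv_frame() -> bytes:
    # Consumer side (e.g. the python3.6 script): read the header, then exactly that many bytes.
    header = sys.stdin.buffer.read(4)
    if len(header) < 4:
        raise EOFError('pipe closed')
    (length,) = struct.unpack('<I', header)
    return sys.stdin.buffer.read(length)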

VimbaCError(<VmbError.BadParameter: -7>)

The issue is with the Vimba 4.2 Python API. When I try to run list_features.py, it throws the VimbaCError(<VmbError.BadParameter: -7>) exception in the vimba_c.py file at line 671. I'm running it in an Anaconda environment. The error occurs at the 'with' initialization at line 111 in the list_features.py example code: with get_camera(cam_id) as cam: The list_camera.py example seems to work fine, as it can recognize the camera.

Nothing was changed in the code.

Trigger Line1 - Using an external sensor

I'm using a sensor to trigger the camera and acquire a new frame, but when I activate the sensor for the first time, no image appears in the OpenCV window. The second time, the image that appears in the window is the one corresponding to the first trigger, so I'm always displaying an image that is one trigger behind.

This is the code that I'm using:

import cv2
from vimba import *

def frame_handler(cam: Camera, frame: Frame):

	if frame.get_status() == FrameStatus.Complete:
		imgTrigger.append(frame.as_opencv_image()) #as_opencv_image
	cam.queue_frame(frame) 
			
def setupTrigger(cam: Camera):

	#To use with frame.as_opencv_image()
	cam.set_pixel_format(PixelFormat.Mono8)	

	cam.TriggerMode.set("On")

	# Freerun | Line1 | FixedRate | Software
	cam.TriggerSource.set('Line1')

	# FrameStart | AcquisitionStart | AcquisitionEnd | AcquisitionRecord
	cam.TriggerSelector.set('FrameStart')

	# RisingEdge | FallingEdge | AnyEdge | LevelHigh | LevelLow
	cam.TriggerActivation.set("RisingEdge")

	cam.start_streaming(frame_handler, buffer_count=1)

if __name__ == '__main__':

	cam_id = 'DEV_000F315C0DF0'

	imgTrigger = []

	with Vimba.get_instance() as vimba:

		with vimba.get_camera_by_id(cam_id) as cam:
			setupTrigger(cam)

			while True:
				if len(imgTrigger) >= 1:
					cv2.namedWindow('Captured image', cv2.WINDOW_NORMAL)
					cv2.resizeWindow('Captured image', 500, 500)
					cv2.imshow('Captured image', imgTrigger.pop())
					cv2.waitKey(1)

Another question is about clearing: how can I clear the frame buffer?

Hardware trigger

Under Ubuntu 20.04 with the Alvium 1800 U-500m camera, how do I go about setting up a hardware trigger on Line0 with the Python API?
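For orientation, a minimal sketch of configuring a Line0 hardware trigger and streaming (assuming the camera exposes the TriggerSelector/TriggerSource/TriggerActivation/TriggerMode features used in other reports here, and that 'DEV_XXXX' is a placeholder for a real camera ID):

from vimba import Vimba, Camera, Frame, FrameStatus

def frame_handler(cam: Camera, frame: Frame):
    if frame.get_status() == FrameStatus.Complete:
        print('Frame {} received'.format(frame.get_id()))
    cam.queue_frame(frame)

with Vimba.get_instance() as vmb:
    with vmb.get_camera_by_id('DEV_XXXX') as cam:  # hypothetical camera ID
        cam.TriggerSelector.set('FrameStart')
        cam.TriggerSource.set('Line0')           # external trigger input
        cam.TriggerActivation.set('RisingEdge')
        cam.TriggerMode.set('On')
        cam.start_streaming(frame_handler, buffer_count=10)
        input('Trigger the camera via Line0, press <enter> to stop.')
        cam.stop_streaming()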

Error invoking "startstreaming/1AB2%3A0001/DEV_1AB22C003FC4/3": Invalid access while calling 'set()' of Feature 'TriggerMode'. Read access: allowed. Write access: allowed.

I randomly get the following error when I attempt to change the trigger mode property on a camera:

Error invoking "startstreaming/1AB2%3A0001/DEV_1AB22C003FC4/3": Invalid access while calling 'set()' of Feature 'TriggerMode'. Read access: allowed. Write access: allowed.

In fact, that error occurs when attempting to change any camera property. This even happened after a fresh computer restart. The problem can be temporarily fixed by issuing the reset command to the camera, but after a while the error starts to manifest again.

Using separate functions to start and stop streaming, also fixing invalid access mode due to session not properly closed

Hi,
I'm using this library to display a video feed inside a Qt application. I have buttons to start and stop the video.

If I press the start button only one frame is acquired.

@Slot()
def start_preview(self):
    with self.vimba:
        with self.cam:
            self.cam.start_streaming(handler=self.frame_handler, buffer_count=10)
    self.is_running = True

@Slot()
def stop_preview(self):
    with self.vimba:
        with self.cam:
            self.cam.stop_streaming()
    self.is_running = False

I guess the streaming stops when I'm leaving the context manager?
Can I make the cam object persistent somehow? Or just keep the stream going?
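One common pattern (a sketch only, assuming self.cam was obtained beforehand as in the code above and that the Qt plumbing already exists) is to keep the Vimba and camera contexts open for the whole preview in a worker thread and to signal that thread when the preview should stop:

import threading

from vimba import Vimba

class PreviewWorker(threading.Thread):
    def __init__(self, cam, frame_handler):
        super().__init__()
        self.cam = cam
        self.frame_handler = frame_handler
        self._stop_event = threading.Event()

    def run(self):
        # The contexts stay open until stop() is called, so streaming keeps running.
        with Vimba.get_instance():
            with self.cam:
                self.cam.start_streaming(handler=self.frame_handler, buffer_count=10)
                self._stop_event.wait()
                self.cam.stop_streaming()

    def stop(self):
        self._stop_event.set()

With this, start_preview would create and start a PreviewWorker, and stop_preview would call its stop() method.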

Async grab for 2 cameras results in incomplete frames being grabbed

Hello, my problem is the following:
First I connected an ALVIUM 1800 U-501M NIR to a Jetson Xavier NX without any problems. I can reach 67 fps without any incomplete frames (I had to change the MaxTransferSize to the Windows value). The problem appears when I connect 2 cameras: when I do that, I only receive incomplete frames.

What I tried so far:
I modified the height and width of the cameras to capture 1920x1080 frames; this way I can actually receive frames, but some incomplete frames are still mixed in (approx. 30% of the total). To work with the full image (2592 x 1944), I changed the DeviceLinkThroughputLimit to the default value, which of course reduced the fps; now I can receive complete frames, but some incomplete frames are still being delivered.

How to get frame out

I see that Handler() has frame in asynchronous_grab_opencv. But I am not sure how to get it out and pass it on to a different part of the code. Do I need to add another method to Handler() so I can do something like frame.Handler.returnFrame()? Or is there a smarter way to get the image frame out?
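One way to hand frames from the streaming callback to other parts of the code (a sketch, not part of the shipped example; it assumes a pixel format supported by as_opencv_image(), e.g. Mono8) is to push copies into a queue.Queue inside the handler and consume them wherever they are needed:

import queue

from vimba import Vimba, Camera, Frame, FrameStatus

frame_queue = queue.Queue(maxsize=10)

def frame_handler(cam: Camera, frame: Frame):
    if frame.get_status() == FrameStatus.Complete:
        try:
            # Copy the pixel data out before the buffer is re-queued.
            frame_queue.put_nowait(frame.as_opencv_image().copy())
        except queue.Full:
            pass  # drop frames if the consumer falls behind
    cam.queue_frame(frame)

with Vimba.get_instance() as vmb:
    with vmb.get_all_cameras()[0] as cam:
        cam.start_streaming(frame_handler, buffer_count=10)
        img = frame_queue.get(timeout=5)  # consume frames anywhere in the program
        cam.stop_streaming()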

Missing Documentation?

Hi,

Thanks for a working API with helpful examples and some description of the function calls. However, I cannot find helpful descriptions of what is going on in the API. It seems I am supposed to reference the C documentation but can't find it there either.

Let's take, for instance, the following code from the documentation.

from vimba import *
import time

def frame_handler(cam, frame):
    cam.queue_frame(frame)

with Vimba.get_instance() as vimba:
    cams = vimba.get_all_cameras()
    with cams[0] as cam:
        cam.start_streaming(frame_handler)
        time.sleep(5)
        cam.stop_streaming()


What is cam.queue_frame(frame)? Where is it defined? I've looked everywhere in the \Source\vimba folder and in the documentation.

Why do I need this function callback? How do I access the frames after the camera has stopped streaming? Are they still in the buffer? Cam does not seem to be the same as the camera.py module...

I have looked through the more complicated examples and am just completely lost as to how to stream a camera. (And yes, I got the examples to work. They stream wonderfully.) But I don't want to use OpenCV. Let's say I want to use matplotlib. How do I do that? Ultimately, I want to stream two or more cameras to a streamlit/flask type environment. The examples have no documentation of what is going on.

As a first try: The program I would like to write would look something like this:

fig, ax = plt.subplots(1, 1)

def frame_handler(cam, frame, ax):
    cam.queue_frame(frame)  # Some code doing something
    ax.imshow(frame)

with Vimba.get_instance() as vimba:
    cams = vimba.get_all_cameras()
    with cams[0] as cam:
        image = cam.start_streaming(frame_handler)
        time.sleep(5)
        cam.stop_streaming()
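
As a hedged sketch of the matplotlib route (assuming a Mono8 pixel format and the as_numpy_ndarray() accessor that appears in other reports here), the handler can copy frames into plain NumPy arrays and the plotting can happen after streaming stops:

import time

import matplotlib.pyplot as plt
from vimba import Vimba, FrameStatus

images = []

def frame_handler(cam, frame):
    if frame.get_status() == FrameStatus.Complete:
        images.append(frame.as_numpy_ndarray().copy())  # detach from the frame buffer
    cam.queue_frame(frame)

with Vimba.get_instance() as vmb:
    with vmb.get_all_cameras()[0] as cam:
        cam.start_streaming(frame_handler)
        time.sleep(5)
        cam.stop_streaming()

# Display the last grabbed image outside the streaming callback.
if images:
    plt.imshow(images[-1].squeeze(), cmap='gray')
    plt.show()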


python2 Support

I understand this library requires Python 3.7, but a library for Python 2.7 would be greatly appreciated, as I need to use Python 2.7 to interact with other computers, sensors, ...

Recurrent IndexError: pop from empty list

This issue is reproducible running the synchronous_grab.py or asynchronous_grab.py example with an Alvium USB camera.

Traceback (most recent call last):
  File "_ctypes/callbacks.c", line 232, in 'calling callback function'
  File "/home/user/VimbaPython/vimba/feature.py", line 315, in __feature_cb_wrapper
    raise e
  File "/home/user/VimbaPython/vimba/feature.py", line 307, in __feature_cb_wrapper
    handler(self)
  File "/home/user/VimbaPython/vimba/vimba.py", line 537, in __cam_cb_wrapper
    cam = [c for c in self.__cams if cam_id == c.get_id()].pop()

Jetson Nano Slow Image Acquisition

Hello,
I am trying to use the VimbaPython API with an Allied Vision ALVIUM 1800 U-1236c color camera alongside a Jetson Nano.
I've written an asynchronous acquisition script and it runs just fine on a desktop, at a solid 7 fps. Running the same code on the Nano makes my code freeze after a few frames.
I've edited '/boot/extlinux/extlinux.conf' by adding usbcore.usbfs_memory_mb=1024, but it did not solve the problem.

Changing pixel formats.

Hello.

There is no available pixel format that is both color and compatible with OpenCV.

Can you explain how to convert the BayerRG pixel format to an OpenCV UMat or to an image file so I can save the images in color?
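A minimal sketch (assuming a synchronous grab and that Vimba's image transform can convert the camera's Bayer format, as other reports in this list do with frame.convert_pixel_format):

import cv2
from vimba import Vimba, PixelFormat

with Vimba.get_instance() as vmb:
    with vmb.get_all_cameras()[0] as cam:
        frame = cam.get_frame()
        # Let Vimba debayer to BGR8 so the result is directly usable with OpenCV.
        frame.convert_pixel_format(PixelFormat.Bgr8)
        cv2.imwrite('color_frame.png', frame.as_opencv_image())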

image display slow

I am trying to use a 1800u500c camera with a Xavier AGX using Python. When I launch asynchronous_grab_opencv.py, it gives me a giant image which runs slowly (5 fps?). I thought it was running slowly because the Xavier was trying to display an image with such a high resolution, so I resized the image to 640x480. Unfortunately, the fps is still relatively slow. This makes me think that either the camera is capturing frames slowly, or the information is taking a long time to get back through the USB. Any ideas on how to speed this up?

I also tried the settings:
cam.BinningHorizontal = 4
cam.BinningVertical = 4
cam.Height = 480
cam.Width = 640
which it seemed to accept, but did not change anything.

Weird color and low fps.

I am trying to use the VimbaPython API with an Allied Vision ALVIUM 1800 U-1236c color camera. I have 2 issues.
The main problem is that my image is greenish when I try to use an XML file (created with Vimba Viewer) with load_settings(), because I don't want things like exposure and white balance to change.
The second problem is that I want to capture frames rapidly, because this camera is going to be used in a moving vehicle. I am using synchronous grab with the lowest timeout I could set (200 ms), but it is not enough. The frames are blurry and the fps is low.

It would be helpful if someone helped me with these two problems.

Python requires different C-Version of Vimba?

Today we installed the most recent Vimba SDK (including all necessary libraries).
After installing VimbaPython, the following error occurs:

>>> import vimba
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\UC2\Anaconda3\lib\site-packages\vimba\__init__.py", line 103, in <module>
    from .vimba import Vimba
  File "C:\Users\UC2\Anaconda3\lib\site-packages\vimba\vimba.py", line 30, in <module>
    from .c_binding import call_vimba_c, VIMBA_C_VERSION, VIMBA_IMAGE_TRANSFORM_VERSION, \
  File "C:\Users\UC2\Anaconda3\lib\site-packages\vimba\c_binding\__init__.py", line 107, in <module>
    from .vimba_c import VmbInterface, VmbAccessMode, VmbFeatureData, \
  File "C:\Users\UC2\Anaconda3\lib\site-packages\vimba\c_binding\vimba_c.py", line 674, in <module>
    _lib_instance = _check_version(_attach_signatures(load_vimba_lib('VimbaC')))
  File "C:\Users\UC2\Anaconda3\lib\site-packages\vimba\c_binding\vimba_c.py", line 664, in _check_version
    raise VimbaSystemError(msg.format(EXPECTED_VIMBA_C_VERSION, VIMBA_C_VERSION))
vimba.error.VimbaSystemError: Invalid VimbaC Version: Expected: 1.8.3, Found:1.8.2

Is there a way to match the installed VimbaC version with the expected version from Python?

Thanks for your help!

How to obtain frame timestamp?

We have tried using frame.get_timestamp() and dividing it by the timestamp tick frequency. However, there is drift relative to real-world time. What is the most accurate way to get the actual time at which each frame was obtained?

How to get color images through OpenCV?

Hello, I am using an AVT GT2000c camera, but I found that the color pixel formats of Vimba do not match the color pixel formats of OpenCV. How can I get a color image? Thank you very much. (A sketch follows after the format listing below.)

  1. cam.get_pixel_formats():(PixelFormat.Mono8, PixelFormat.BayerGB8, PixelFormat.BayerGB12, PixelFormat.BayerGB12Packed)
  2. OPENCV_PIXEL_FORMATS:(PixelFormat.Mono8, PixelFormat.Bgr8, PixelFormat.Bgra8, PixelFormat.Mono16, PixelFormat.Bgr16, PixelFormat.Bgra16)
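A possible sketch (an assumption, not a confirmed recipe): switch the camera to BayerGB8 and let OpenCV do the demosaicing; the exact COLOR_Bayer* code may need adjusting to the sensor's pattern:

import cv2
from vimba import Vimba, PixelFormat

with Vimba.get_instance() as vmb:
    with vmb.get_all_cameras()[0] as cam:
        cam.set_pixel_format(PixelFormat.BayerGB8)
        frame = cam.get_frame()
        raw = frame.as_numpy_ndarray()                 # raw Bayer data, single channel
        bgr = cv2.cvtColor(raw, cv2.COLOR_BayerGB2BGR)
        cv2.imwrite('color_frame.png', bgr)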

Exposure time&Gain

Hello everyone,
I cannot find, among the methods of pymba, how to control the exposure time and the gain of the camera.
In addition, how can I acquire single frames, for example 10 photographs, so that the method returns an array with ten entries, each entry being one image?
Thanks to all the helpers :)
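A hedged sketch, assuming the camera exposes the ExposureAuto/ExposureTime/GainAuto/Gain features used in other reports in this list (feature names can differ between camera families):

import numpy as np
from vimba import Vimba

with Vimba.get_instance() as vmb:
    with vmb.get_all_cameras()[0] as cam:
        cam.ExposureAuto.set('Off')
        cam.ExposureTime.set(5000.0)  # exposure time in microseconds
        cam.GainAuto.set('Off')
        cam.Gain.set(0.0)

        # Grab 10 single frames and stack them into one array of shape (10, H, W, C).
        frames = [cam.get_frame().as_numpy_ndarray() for _ in range(10)]
        stack = np.stack(frames)
        print(stack.shape)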

How to get BayerRG12 into an opencv mat?

Not sure if this is an issue or just a gap in the documentation.
I'm trying to read BayerRG12 frames and convert them into an OpenCV mat.

# Synchronous grab
from vimba import *
import cv2

with Vimba.get_instance () as vimba:
    cams = vimba.get_all_cameras ()
    with cams [0] as cam:

        #Camera is set to BayerRG12
        frame = cam.get_frame ()
        frame = frame.as_opencv_image() #Here it will fail 
        frame = cv2.cvtColor(frame, cv2.COLOR_BAYER_RG2BGR)
        cv2.imshow("test", frame.as_opencv_image ())
        cv2.waitKey(0)

Error Message: ValueError: Current Format 'BayerRG8' is not in OPENCV_PIXEL_FORMATS

Now, OpenCV could, as far as I can see, convert images from BayerRG12 down to e.g. an RGB format. However, the Vimba function as_opencv_image() won't accept it.

If I look into the error, it seems it will only support specific formats:

OPENCV_PIXEL_FORMATS = (
    PixelFormat.Mono8,
    PixelFormat.Bgr8,
    PixelFormat.Bgra8,
    PixelFormat.Mono16,
    PixelFormat.Bgr16,
    PixelFormat.Bgra16
)

I also tried to convert it first to a supported format: frame.convert_pixel_format(PixelFormat.Bgr16) and then use the convert to mat function:

# Camera is set to BayerRG12
frame = cam.get_frame()
frame.convert_pixel_format(PixelFormat.Bgr16)
frame = frame.as_opencv_image()  # Here it will fail
frame = cv2.cvtColor(frame, cv2.COLOR_BAYER_RG2BGR)

This fails with:

  File "C:\Users\justRandom\anaconda3\envs\tf-gpu-cuda8\lib\site-packages\vimba\util\runtime_type_check.py", line 60, in wrapper
    return func(*args, **kwargs)

  File "C:\Users\justRandom\anaconda3\envs\tf-gpu-cuda8\lib\site-packages\vimba\frame.py", line 762, in convert_pixel_format
    raise ValueError('Current PixelFormat can\'t be converted into given format.')

ValueError: Current PixelFormat can't be converted into given format.

Is there another way to get the Bayer frame directly into an OpenCV mat and do the conversions there?
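One possible sketch (an assumption, not a confirmed workaround): skip as_opencv_image() entirely, take the raw data via as_numpy_ndarray(), and let OpenCV demosaic the 16-bit container; whether BayerRG12 really unpacks into a uint16 ndarray this way should be verified on the actual camera.

import cv2
from vimba import Vimba

with Vimba.get_instance() as vmb:
    with vmb.get_all_cameras()[0] as cam:
        # Camera is assumed to be set to BayerRG12 (unpacked).
        frame = cam.get_frame()
        raw = frame.as_numpy_ndarray()                    # expected dtype: uint16
        bgr16 = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)  # demosaic on the 16-bit data
        cv2.imwrite('frame16.png', bgr16)                 # PNG supports 16-bit images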

Wrapping of logger calls

While I see the convenience in wrapping Python's logging facilities, I would recommend not wrapping everything. Wrapping the actual logging calls masks the origin of the messages.
We use verbose logging messages for development with file, function, lineno information (see: https://docs.python.org/3/library/logging.html#logrecord-attributes), but messages from the Vimba module are displayed as originating from the wrapping functions.
Debugging a 12:31:30,575 | log.py:234 | Error | <VmbError.BadParameter: -7> message would be easier if I could see directly where in the code it comes from.
My recommendation: Keep the whole logging configuration mechanism as it is, but use direct calls to the Python logger throughout the code.

Also: you redefined the logger's "debug" level as "trace". If you want something more verbose than "debug", you could define a new level and keep the default ones untouched: https://stackoverflow.com/questions/47888855/python3-add-logging-level

Having said that ... I really like the new VimbaPython bindings. Makes prototyping and working with the Vimba drivers a lot easier. :-)

Error import vimba

Hello everyone. I installed the Vimba SDK app and I have Python 3.8 on my computer, but I still can't install the VimbaPython library through PyCharm, so I cannot import vimba. I would appreciate some help; I do not understand why I cannot install the library through pip.

DEPRECATION: The -b/--build/--build-dir/--build-directory option is deprecated and has no effect anymore. pip 21.1 will remove support for this functionality. A possible replacement is use the TMPDIR/TEMP/TMP environment variable, possibly combined with --no-clean. You can find discussion regarding this at pypa/pip#8333.
ERROR: Could not find a version that satisfies the requirement VimbaPython
ERROR: No matching distribution found for VimbaPython

no TL detected

installed vimba successfully:
Installed /usr/local/lib/python3.6/dist-packages/VimbaPython-1.0.1-py3.6.egg
Processing dependencies for VimbaPython==1.0.1
Finished processing dependencies for VimbaPython==1.0.1

but still got:
raise VimbaSystemError('No TL detected. Please verify Vimba installation.')
vimba.error.VimbaSystemError: No TL detected. Please verify Vimba installation.

Invalid VimbaC Version error

Hello,
I downloaded the latest VimbaPython and tried to run the following code through PyCharm:

import cv2
from vimba import *

def print_hi(name):
    # Use a breakpoint in the code line below to debug your script.
    print(f'Hi, {name}')  # Press Ctrl+F8 to toggle the breakpoint.

if __name__ == '__main__':
    print_hi('PyCharm')

with Vimba.get_instance () as vimba:
    cams = vimba.get_all_cameras ()
    with cams [0] as cam:
        frame = cam.get_frame ()
        frame.convert_pixel_format(PixelFormat.Mono8)
        cv2.imwrite('frame.jpg', frame.as_opencv_image ())

I got the following error:

Traceback (most recent call last):
  File "C:\Users\dlab\PycharmProjects\pythonProject\main.py", line 6, in <module>
    from vimba import *
  File "C:\Users\dlab\AppData\Local\Programs\Python\Python39\Scripts\VimbaPython-master\vimba\__init__.py", line 103, in <module>
    from .vimba import Vimba
  File "C:\Users\dlab\AppData\Local\Programs\Python\Python39\Scripts\VimbaPython-master\vimba\vimba.py", line 30, in <module>
    from .c_binding import call_vimba_c, VIMBA_C_VERSION, VIMBA_IMAGE_TRANSFORM_VERSION, \
  File "C:\Users\dlab\AppData\Local\Programs\Python\Python39\Scripts\VimbaPython-master\vimba\c_binding\__init__.py", line 107, in <module>
    from .vimba_c import VmbInterface, VmbAccessMode, VmbFeatureData, \
  File "C:\Users\dlab\AppData\Local\Programs\Python\Python39\Scripts\VimbaPython-master\vimba\c_binding\vimba_c.py", line 674, in <module>
    _lib_instance = _check_version(_attach_signatures(load_vimba_lib('VimbaC')))
  File "C:\Users\dlab\AppData\Local\Programs\Python\Python39\Scripts\VimbaPython-master\vimba\c_binding\vimba_c.py", line 664, in _check_version
    raise VimbaSystemError(msg.format(EXPECTED_VIMBA_C_VERSION, VIMBA_C_VERSION))
vimba.error.VimbaSystemError: Invalid VimbaC Version: Expected: 1.8.3, Found:1.8.2

How to detect incomplete frames properly?

When running the below code, sometimes the produced frames look like this:

[attached screenshot of such a frame]

I'm not quite sure if this is due to the TIF frame writer or due to the camera interface. I tried catching bad frames by counting zero pixels before and after writing to disk. Do you have any idea what could cause this issue? The frame status reads Complete even when the frames are obviously not correctly written or produced.

import numpy as np
import matplotlib.pyplot as plt
import cv2
import time
import matplotlib
import sys
from typing import Optional
from vimba import *
from vimba.frame import FrameStatus
import tifffile as tif
import os

# Setup Camera
timestr = time.strftime("%Y%m%d-%H%M%S")

# Prepare Camera for ActionCommand - Trigger
myexposure = 3000/1000 # in ms 
BLACKLEVEL = 100
mygain = 0
mybasepath = "./"
myfolder = timestr + "_Texp-" + str(myexposure) + "_gain-" + str(mygain)
iiter = 0

# helper functions
def abort(reason: str, return_code: int = 1, usage: bool = False):
    print(reason + '\n')

    if usage:
        print_usage()

    sys.exit(return_code)

def get_camera(camera_id: Optional[str]) -> Camera:
    with Vimba.get_instance() as vimba:
        if camera_id:
            try:
                return vimba.get_camera_by_id(camera_id)

            except VimbaCameraError:
                abort('Failed to access Camera \'{}\'. Abort.'.format(camera_id))

        else:
            cams = vimba.get_all_cameras()
            if not cams:
                abort('No Cameras accessible. Abort.')

            return cams[0]

def setup_camera(cam: Camera):
    with cam:
        # Try to adjust GeV packet size. This Feature is only available for GigE - Cameras.
        try:
            cam.GVSPAdjustPacketSize.run()

            while not cam.GVSPAdjustPacketSize.is_done():
                pass

        except (AttributeError, VimbaFeatureError):
            pass
                
        #cam.TriggerSelector.set('FrameStart')
        #cam.TriggerActivation.set('RisingEdge')
        #cam.TriggerSource.set('Line0')
        #cam.TriggerMode.set('On')
        cam.BlackLevel.set(BLACKLEVEL)
        cam.ExposureAuto.set("Off")
        cam.ContrastEnable.set("Off")

        cam.ExposureTime.set(myexposure*1e3)
        #cam.PixelFormat.set('Mono12')
        cam.GainAuto.set("Off")
        cam.Gain.set(mygain)
        cam.AcquisitionFrameRateEnable.set(False)
        cam.get_feature_by_name("PixelFormat").set("Mono12")


try:
    os.mkdir(mybasepath+myfolder)
except:
    print("Already created the folder?")

cam_id = 0
frameiter = 0

# Acquire Ptychograms
iiter = 0
with Vimba.get_instance():
    with get_camera(cam_id) as cam:

        setup_camera(cam)
        print('Press <enter> to stop Frame acquisition.')

        input("Plug off the laser")
        myframe = cam.get_frame().as_numpy_ndarray()
        myfilename = mybasepath+myfolder+"/Background.tif"
        tif.imsave(myfilename, myframe) #, imagej=True)
        input("Ready?")
               
        while(True):
            try:
                # take a snapshot of the secondary camera for tracking the position
                while(True):
                    myframe = cam.get_frame().as_numpy_ndarray()
                    myfilename = mybasepath+myfolder+"/"+str(iiter)+".tif" #/"+str(iiter)+"_ix_"+str(ix)+"iy_"+str(iy)+".tif"
                    frame_written = True
                    tif.imwrite(myfilename, myframe)
                    print("Frame Status:" + str(cam.get_frame().get_status()))
                    print("Frame written:" + str(frame_written))
                    print("Frame mean:" + str(np.mean(myframe)))
                    
                    # check if data has been written to the disk correctly
                    testframe=tif.imread(myfilename)
                    N_pix_dead = np.mean(testframe<=(BLACKLEVEL-20)) # account for noise +/-
                    if  cam.get_frame().get_status() == FrameStatus.Complete and frame_written and N_pix_dead < 1000 :
                        break
                    
                    print("Detected a corrupted frame")

                iiter += 1
            except Exception as e:
                print(e)
                cam.stop_streaming()
                break

Is there a good practice to catch these frames? Or is it a hardware problem?
We use a Jetson Nano with a proper USB3 cable (from AlliedVision) and the latest driver + Python version.

Thank you!
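As a side note on the status check in the loop above: each cam.get_frame() call grabs a new frame, so the status that is printed and tested belongs to a different frame than the one written to disk. A minimal sketch of checking the status of the very frame that gets saved (same assumptions as the script above):

import tifffile as tif
from vimba import Vimba, FrameStatus

with Vimba.get_instance() as vmb:
    with vmb.get_all_cameras()[0] as cam:
        frame = cam.get_frame()
        # Check the status of the exact frame that will be written to disk.
        if frame.get_status() == FrameStatus.Complete:
            tif.imwrite('frame.tif', frame.as_numpy_ndarray())
        else:
            print('Detected an incomplete frame')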

Unable to recover from crash

Here's my problem: I have 2 cams connected to my Jetson Xavier NX, and they were running just fine until an unexpected crash in one of my Python apps. Now the 2 cams won't grab any frames in the callback.

If I try to run the cams with modified parameters, i.e. I modify the ROI, I get this message:

counter_arm64_1     | vimba.error.VimbaFeatureError: Invalid access while calling 'set()' of Feature 'Width'. Read access: allowed. Write access: allowed.
counter_arm64_1     | terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::lock_error> >'
counter_arm64_1     |   what():  boost: mutex lock failed in pthread_mutex_lock: Invalid argument

I already tried to reboot, but that didn't work. I also tried to delete the shared memory file in /dev/shm, but I see that no longer applies. By the way, I'm using Vimba 4.2 arm64.

Action command stops working after 5 frames

I have a setup with 8 cameras, connected in two groups of 4, each group connected to a different switch and to a separate network interface.

The system has MTU 9000 on each of the two network interfaces.

Running the following code will take the first 5 frames and then ignore the action command. Am I doing something wrong?

Using the adjust packet size code will not work, so I set the BytesPerSecond manually.

from vimba import *
import threading
import time

device_key = 1
group_key = 1
group_mask = 1

def get_command_sender(interface_id):
    # If given interface_id is ALL, ActionCommand shall be sent from all Ethernet Interfaces.
    # This is achieved by run ActionCommand on the Vimba instance.
    if interface_id == 'ALL':
        return Vimba.get_instance()

    with Vimba.get_instance() as vimba:
        # A specific Interface was given. Lookup via given Interface id and verify that
        # it is an Ethernet Interface. Running ActionCommand will be only send from this Interface.
        try:
            inter = vimba.get_interface_by_id(interface_id)

        except VimbaInterfaceError:
            abort('Failed to access Interface {}. Abort.'.format(interface_id))

        if inter.get_type() != InterfaceType.Ethernet:
            abort('Given Interface {} is no Ethernet Interface. Abort.'.format(interface_id))

    return inter

class ImageReader(threading.Thread):
    def __init__ (self, cam: Camera):
        threading.Thread.__init__(self)
        self.cam = cam

    def frame_handler(self, cam: Camera, frame: Frame):
        print (frame)
        if frame.get_status() == FrameStatus.Complete:
            print('Frame(ID: {}) has been received.'.format(frame.get_id()), flush=True)

    # cam.queue_frame(frame)

    def run(self):
        with self.cam as cam:
            print (cam)

            cam.StreamBytesPerSecond.set (20_000_000)

            # for frame in cam.get_frame_generator(limit=2, timeout_ms=3000):
            #     print (f"{cam} frame {frame}")

            cam.TriggerSelector.set('FrameStart')
            cam.TriggerSource.set('Action0')
            cam.TriggerMode.set('On')
            cam.ActionDeviceKey.set(device_key)
            cam.ActionGroupKey.set(group_key)
            cam.ActionGroupMask.set(group_mask)

            cam.start_streaming(self.frame_handler)

            while True:
                time.sleep (1)

            # sender.ActionDeviceKey.set(device_key)
            # sender.ActionGroupKey.set(group_key)
            # sender.ActionGroupMask.set(group_mask)
            # sender.ActionCommand.run()

            # cam.stop_streaming()

if __name__ == '__main__':
    readers = []
    vimba = Vimba.get_instance()
    with vimba:
        for cam in vimba.get_all_cameras():
            reader = ImageReader(cam)
            readers.append (reader)

        for reader in readers:
            reader.start ()

        time.sleep (2)
        interface_id = "ALL"
        sender = get_command_sender(interface_id)

        while True:
            sender.ActionDeviceKey.set(device_key)
            sender.ActionGroupKey.set(group_key)
            sender.ActionGroupMask.set(group_mask)
            sender.ActionCommand.run()
            print ("Command sent")
            time.sleep (2)
        
        for reader in readers:
            reader.join ()


What format is the frame timestamp in?

Hey there!
I am trying to build an FPS display.
I am planning to use the timestamp of each received frame to calculate the actual FPS after processing.
What time unit does the Frame.get_timestamp() return value use?
Or how do I get the frame time from two timestamps?

Thanks in advance!
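A sketch of estimating FPS from consecutive timestamps; the division by a ticks-per-second constant is an assumption here, since the unit of get_timestamp() depends on the camera (another report in this list divides by the camera's timestamp tick frequency):

from vimba import Vimba, Camera, Frame, FrameStatus

TICKS_PER_SECOND = 1_000_000_000  # assumption: adjust to your camera's timestamp tick frequency

last_ts = None

def frame_handler(cam: Camera, frame: Frame):
    global last_ts
    if frame.get_status() == FrameStatus.Complete:
        ts = frame.get_timestamp()
        if last_ts is not None:
            dt = (ts - last_ts) / TICKS_PER_SECOND  # seconds between frames
            if dt > 0:
                print('approx. {:.1f} fps'.format(1.0 / dt))
        last_ts = ts
    cam.queue_frame(frame)

with Vimba.get_instance() as vmb:
    with vmb.get_all_cameras()[0] as cam:
        cam.start_streaming(frame_handler, buffer_count=10)
        input('Press <enter> to stop.')
        cam.stop_streaming()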

Vimba Hangs on call_vimba_c('VmbShutdown')

Occasionally, after a crash, Vimba hangs on exit until the computer has been power cycled.

In this minimal example, it never reaches the print("Ended") statement:

with Vimba.get_instance() as vimba:
    print(vimba.get_all_cameras())
print("Ended")

On debugging, it hangs forever on call_vimba_c('VmbShutdown') in the Vimba._shutdown method. It does this for all the examples; the only way to restore functionality is to restart the system.
Logging out does not fix the error.
dmesg shows that the cameras are being redetected on disconnection and reconnection.

Running Vimba 4.0.0, Ubuntu 20.04.

Thanks


Problems running VimbaPython in Jupyter Notebook

As the title says, I have problems connecting to a camera from a Jupyter Notebook. Exporting the Notebook to a Python file and running it from the same environment works without problems.
Can you reproduce it? It is running in Python 3.7 on a Jetson Nano.

Can't find TriggerSource?

See Vimba Python Manual 1.0.0, page 17.

I'm trying to switch to cam.TriggerSource.set('Software') without success, because I can't find cam.TriggerSoftware.run().

I just grepped for 'trigger' in the source code:

/src/vimbapython$ ag -i trigger
Tests/real_cam_tests/feature_test.py
443:            # Trigger change handler and wait for callback execution.
574:            # Trigger change handler and wait for callback execution.
697:            # Trigger change handler and wait for callback execution.

Tests/basic_tests/interface_test.py
163:        # are triggered then called Outside of the with block.

Tests/basic_tests/vimba_test.py
111:        # are triggered then called Outside of the with block.

Examples/event_handling.py
147:            # Acquire a single Frame to trigger events.

Examples/action_commands.py
131:            # Prepare Camera for ActionCommand - Trigger
140:            cam.TriggerSelector.set('FrameStart')
141:            cam.TriggerSource.set('Action0')
142:            cam.TriggerMode.set('On')

Best regards
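For context, features like TriggerSource and TriggerSoftware are camera features discovered at runtime from the connected device, not methods defined in the VimbaPython sources, which is why grep does not find them. A sketch of software triggering (assuming the camera exposes these features, as the example code in other reports here does):

import time

from vimba import Vimba, Camera, Frame, FrameStatus

def frame_handler(cam: Camera, frame: Frame):
    if frame.get_status() == FrameStatus.Complete:
        print('Frame {} received'.format(frame.get_id()))
    cam.queue_frame(frame)

with Vimba.get_instance() as vmb:
    with vmb.get_all_cameras()[0] as cam:
        cam.TriggerSelector.set('FrameStart')
        cam.TriggerMode.set('On')
        cam.TriggerSource.set('Software')
        cam.start_streaming(frame_handler, buffer_count=10)
        for _ in range(5):
            cam.TriggerSoftware.run()  # fire one software trigger per frame
            time.sleep(1)
        cam.stop_streaming()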

Discover cameras that have a wrong IP address

I was wondering if there is any way to discover GigE cameras that have a wrong IP? By wrong IP I mean:

  1. the camera was connected to another system and has an IP/mask combination for the new system
  2. the camera lost its IP settings and has reset to the defaults

I would like to be able to detect the camera and set a correct IP.

Memory increasing

Hello, for each frame captured inside the scope the memory increases, but I didn't find any documentation showing how to release this memory, for example:

with cams[0] as cam:
    while True:
        plc.write_by_name(PLC_CAMERA_STATUS, StatusCode.READY, pyads.PLCTYPE_INT)  # no info
        current_trigger_count = plc.read_by_name(PLC_IMAGE_TRIGGER, pyads.PLCTYPE_UINT)
        if current_trigger_count != plc_trigger_counter:
            frame = cam.get_frame(6000)
            frame.convert_pixel_format(PixelFormat.Bgr8)
            frame = frame.as_opencv_image()
            fname = locate_classify(frame, fname)
            cv2.imwrite(fname, cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
            logger.info("Image {} wrote.".format(fname))
            ...

But if I put the while True outside of the scope, forcing the code to enter the scope on every loop, the memory doesn't increase; the problem is that accessing the camera this way is slower and consumes a lot more CPU.

while True:
    with cams[0] as cam:
        plc.write_by_name(PLC_CAMERA_STATUS, StatusCode.READY, pyads.PLCTYPE_INT)  # no info
        current_trigger_count = plc.read_by_name(PLC_IMAGE_TRIGGER, pyads.PLCTYPE_UINT)
        if current_trigger_count != plc_trigger_counter:
            frame = cam.get_frame(6000)
            frame.convert_pixel_format(PixelFormat.Bgr8)
            frame = frame.as_opencv_image()
            fname = locate_classify(frame, fname)
            cv2.imwrite(fname, cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
            logger.info("Image {} wrote.".format(fname))
            ...

Could you guys help me?

Examples synchronous_grab.py

Hello friends,
How can I stop the streaming in the example synchronous_grab.py?
I try to click 13 but it doesn't work.

I can't find a way to connect (physically) and detect Cameras after vimba instance was created

Hello,

If I start my program with one GigE camera alive and connected, everything goes OK: I can detect disconnections and reconnections.

In the case where the camera is not connected when the vimba object is created, subsequent calls to vimba.get_camera_by_id(cameraid) or vimba.get_all_cameras() do not return any Camera.

Is this expected? Is there a way around this?

Best regards

Trigger problem with Alvium 1800 U-500m camera

I am developing a Python app to trigger an Alvium 1800 U-500m camera to grab frames, but am experiencing unusual behaviour. When using software triggers, it always takes 2 triggers to get the camera to grab a frame. On the first trigger, the camera's status LED transitions from being steady to fast blinking. On the second software trigger, the LED goes back to being steady and a frame is grabbed. The frame ID numbers also come in as even numbers: for instance, the first frame is frame 0, the second frame is frame 2, and so on. With hardware triggering, the first trigger causes the LED to transition from steady to blinking, but no frame is ever grabbed, nor does the LED go back to steady, no matter how many subsequent hardware triggers are made. My operating system is Ubuntu 20.04 and the camera is connected to a USB 3.0 port with a 1.0 m cable. Here is example code I use for software triggering:

with Vimba.get_instance() as vimba:
    with vimba.get_camera_by_id(self.id) as c:
        c.AcquisitionMode.set('Continuous')
        c.TriggerSelector.set('FrameStart')
        c.TriggerMode.set('On')
        c.TriggerSource.set('Software')
        c.start_streaming(handler=frame_handler)
        for k in range(1,10):
            c.TriggerSoftware.run()
            print("Software trigger performed")
            time.sleep(1)

slow image acquisition

Hello

I'm trying to use an ALVIUM 1800 U-501M NIR camera to capture some frames on a Jetson Nano. The problem is that the image acquisition is way too slow, barely 5 FPS when tested with synchronous_grab.py, while the camera's details specify a max. frame rate at full resolution of 67 fps at ≥350 MByte/s, Mono8, so I'm a little puzzled about what the problem is.
I also tried switching the Jetson into MAXN mode, but it didn't help.

could you help me with this?

FrameStatus.Incomplete when acquiring frames from Raspberry Pi 4

Hi

I have a setup with a Raspberry Pi 4 connected to an Allied Vision Alvium 1800 U-507m via one of the USB 3.0 ports. I am writing a Python script for acquiring images from the camera and storing them on an external drive, also connected to the Raspberry Pi via a USB 3.0 port. The higher the acquisition rate I can get, the better.

The problem is that I often get "FrameStatus.Incomplete" errors, even when running the Python script with nice -20 and ionice 3. For my application it is critical that frames are not lost.

So far I've tried grabbing frames asynchronously, synchronously, and with software trigger, where the code is mostly copy-pasted from this repository (I implemented a queue in order to take the load off the callback function). I've tried different acquisition rates, and even not storing to the external drive at all, only to the memory card at low acquisition rates.

The method that has been the most promising so far is synchronous acquisition at 0.5 FPS and storing to SD-card, but even then I get "FrameStatus.Incomplete" errors at a rate of 1-2% of the received frames.

Also, when I asynchronously acquire frames at 10 FPS and store to the external SSD, I usually do not see any "FrameStatus.Incomplete" errors before the 1018th frame; after this, the camera stream halts. This does not happen at e.g. 5 and 20 FPS, but I do get many more errors there.

I am using the latest version of Vimba (4.0.0) and VimbaPython (1.0.0)

Example for setting camera gain in python SDK

Hello,
I am using the gold eye G008 camera and want to programmatically change the gain setting of the camera in the Python SDK. Looking over the documentation, it's not obvious to me how to do this. Can an example be added which shows how to change the camera gain?
Best,
-Alex
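Until such an example lands in the repository, a minimal sketch (assuming the camera exposes GainAuto/Gain features, as the Alvium-based code elsewhere in this list does; Goldeye models may use different feature names, which can be checked with the list_features.py example):

from vimba import Vimba

with Vimba.get_instance() as vmb:
    with vmb.get_all_cameras()[0] as cam:
        cam.GainAuto.set('Off')  # disable automatic gain first
        cam.Gain.set(10.0)       # gain value in the unit reported by the camera
        print('Gain is now:', cam.Gain.get())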

'Camera' object has no attribute 'PixelFormat'

If I try to read out the pixel format (Mono8) via:

cam.PixelFormat.set("Mono12")

I get a no-attribute error.
The attribute should be implemented in the camera, an AV 1800 U-158C (?)
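Two alternatives worth trying (a sketch, assuming the camera is opened in the usual Vimba and camera contexts; whether attribute-style access works depends on the features the camera actually reports):

from vimba import Vimba, PixelFormat

with Vimba.get_instance() as vmb:
    with vmb.get_all_cameras()[0] as cam:
        # Option 1: the dedicated pixel format helpers on the Camera object.
        print(cam.get_pixel_formats())           # formats the camera offers
        cam.set_pixel_format(PixelFormat.Mono12)

        # Option 2: look the feature up by name instead of attribute access.
        cam.get_feature_by_name('PixelFormat').set('Mono12')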
