
deeplabcut-live-gui's Introduction

Welcome! 👋

DeepLabCut™️ is a toolbox for state-of-the-art markerless pose estimation of animals performing various behaviors. As long as you can see (label) what you want to track, you can use this toolbox, as it is animal and object agnostic. Read a short development and application summary below.

Very quick start: pip install "deeplabcut[gui,tf]", which includes all functions plus GUIs, or pip install "deeplabcut[tf]" (headless version with TensorFlow).

  • We recommend using our conda file, see here or the new deeplabcut-docker package. Please note that currently we support Python 3.9 (see conda files for guidance).

Our docs walk you through using DeepLabCut and key API points. For an overview of the toolbox and the project-management workflow, see our step-by-step guide in the Nature Protocols paper.

For a deeper understanding and more resources for you to get started with Python and DeepLabCut, please check out our free online course! http://DLCcourse.deeplabcut.org

🐭 pose tracking of single animals demo Open in Colab

🐭🐭🐭 pose tracking of multiple animals demo Open in Colab

  • See more demos here. We provide data and several Jupyter Notebooks: one that walks you through a demo dataset to test your installation, and another Notebook to run DeepLabCut from the beginning on your own data. We also show you how to use the code in Docker, and on Google Colab.

Why use DeepLabCut?

In 2018, we demonstrated the capabilities for trail tracking, reaching in mice and various Drosophila behaviors during egg-laying (see Mathis et al. for details). There is, however, nothing specific that makes the toolbox only applicable to these tasks and/or species. The toolbox has already been successfully applied (by us and others) to rats, humans, various fish species, bacteria, leeches, various robots, cheetahs, mouse whiskers and race horses. DeepLabCut utilized the feature detectors (ResNets + readout layers) of one of the state-of-the-art algorithms for human pose estimation by Insafutdinov et al., called DeeperCut, which inspired the name for our toolbox (see references below). Since this time, the package has changed substantially. The code has been re-tooled and re-factored since 2.1+: We have added faster and higher performance variants with MobileNetV2s, EfficientNets, and our own DLCRNet backbones (see Pretraining boosts out-of-domain robustness for pose estimation and Lauer et al 2022). Additionally, we have improved the inference speed and provided both additional and novel augmentation methods, added real-time, and multi-animal support. In v3.0+ we have changed the backend to support PyTorch. This brings not only an easier installation process for users, but performance gains, developer flexibility, and a lot of new tools! Importantly, the high-level API stays the same, so it will be a seamless transition for users 💜! We currently provide state-of-the-art performance for animal pose estimation and the labs (M. Mathis Lab and A. Mathis Group) have both top journal and computer vision conference papers.

Left: Due to transfer learning it requires little training data for multiple, challenging behaviors (see Mathis et al. 2018 for details). Mid Left: The feature detectors are robust to video compression (see Mathis/Warren for details). Mid Right: It allows 3D pose estimation with a single network and camera (see Mathis/Warren). Right: It allows 3D pose estimation with a single network trained on data from multiple cameras together with standard triangulation methods (see Nath* and Mathis* et al. 2019).

DeepLabCut is embedded in a larger open-source ecosystem, providing behavioral tracking for neuroscience, ecology, medical, and technical applications. Moreover, many new tools are being actively developed. See DLC-Utils for some helper code.

Code contributors:

DLC code was originally developed by Alexander Mathis & Mackenzie Mathis, and was extended in 2.0 with the core dev team consisting of Tanmay Nath (2.0-2.1), and currently (2.1+) with Jessy Lauer and (2.3+) Niels Poulsen. DeepLabCut is an open-source tool and has benefited from suggestions and edits by many individuals including Mert Yuksekgonul, Tom Biasi, Richard Warren, Ronny Eichler, Hao Wu, Federico Claudi, Gary Kane and Jonny Saunders as well as the 100+ contributors. Please see AUTHORS for more details!

This is an actively developed package and we welcome community development and involvement.

Get Assistance & be part of the DLC Community✨:

🚉 Platform | 🎯 Goal | ⏱️ Estimated Response Time | 📢 Support Squad
Image.sc forum (🐭 Tag: DeepLabCut) | To ask for help and support questions 👋 | Promptly 🔥 | DLC Team and the DLC Community
GitHub DeepLabCut/Issues | To report bugs and code issues 🐛 (we encourage you to search issues first) | 2-3 days | DLC Team
Gitter | To discuss with other users, share ideas, and collaborate 💡 | 2 days | The DLC Community
GitHub DeepLabCut/Contributing | To contribute your expertise and experience 🙏💯 | Promptly 🔥 | DLC Team
🚧 GitHub DeepLabCut/Roadmap | To learn more about our journey ✈️ | N/A | N/A
Twitter Follow | To keep up with our latest news and updates 📢 | Daily | DLC Team
The DeepLabCut AI Residency Program | To come and work with us next summer 👏 | Annually | DLC Team

References:

If you use this code or data, we kindly ask that you cite Mathis et al., 2018. If you use the Python package (DeepLabCut 2.x), please also cite Nath, Mathis et al., 2019. If you utilize the MobileNetV2s or EfficientNets, please cite Mathis, Biasi et al., 2021. If you use versions 2.2beta+ or 2.2rc1+, please cite Lauer et al., 2022.

DOIs (#ProTip, for helping you find citations for software, check out CiteAs.org!):

Please check out the following references for more details:

@article{Mathisetal2018,
    title = {DeepLabCut: markerless pose estimation of user-defined body parts with deep learning},
    author = {Alexander Mathis and Pranav Mamidanna and Kevin M. Cury and Taiga Abe  and Venkatesh N. Murthy and Mackenzie W. Mathis and Matthias Bethge},
    journal = {Nature Neuroscience},
    year = {2018},
    url = {https://www.nature.com/articles/s41593-018-0209-y}}

 @article{NathMathisetal2019,
    title = {Using DeepLabCut for 3D markerless pose estimation across species and behaviors},
    author = {Nath*, Tanmay and Mathis*, Alexander and Chen, An Chi and Patel, Amir and Bethge, Matthias and Mathis, Mackenzie W},
    journal = {Nature Protocols},
    year = {2019},
    url = {https://doi.org/10.1038/s41596-019-0176-0}}
    
@InProceedings{Mathis_2021_WACV,
    author    = {Mathis, Alexander and Biasi, Thomas and Schneider, Steffen and Yuksekgonul, Mert and Rogers, Byron and Bethge, Matthias and Mathis, Mackenzie W.},
    title     = {Pretraining Boosts Out-of-Domain Robustness for Pose Estimation},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2021},
    pages     = {1859-1868}}
    
@article{Lauer2022MultianimalPE,
    title={Multi-animal pose estimation, identification and tracking with DeepLabCut},
    author={Jessy Lauer and Mu Zhou and Shaokai Ye and William Menegas and Steffen Schneider and Tanmay Nath and Mohammed Mostafizur Rahman and     Valentina Di Santo and Daniel Soberanes and Guoping Feng and Venkatesh N. Murthy and George Lauder and Catherine Dulac and M. Mathis and Alexander Mathis},
    journal={Nature Methods},
    year={2022},
    volume={19},
    pages={496 - 504}}

@article{insafutdinov2016eccv,
    title = {DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model},
    author = {Eldar Insafutdinov and Leonid Pishchulin and Bjoern Andres and Mykhaylo Andriluka and Bernt Schiele},
    booktitle = {ECCV'16},
    url = {http://arxiv.org/abs/1605.03170}}

Review & Educational articles:

@article{Mathis2020DeepLT,
    title={Deep learning tools for the measurement of animal behavior in neuroscience},
    author={Mackenzie W. Mathis and Alexander Mathis},
    journal={Current Opinion in Neurobiology},
    year={2020},
    volume={60},
    pages={1-11}}

@article{Mathis2020Primer,
    title={A Primer on Motion Capture with Deep Learning: Principles, Pitfalls, and Perspectives},
    author={Alexander Mathis and Steffen Schneider and Jessy Lauer and Mackenzie W. Mathis},
    journal={Neuron},
    year={2020},
    volume={108},
    pages={44-65}}

Other open-access pre-prints related to our work on DeepLabCut:

@article{MathisWarren2018speed,
    author = {Mathis, Alexander and Warren, Richard A.},
    title = {On the inference speed and video-compression robustness of DeepLabCut},
    year = {2018},
    doi = {10.1101/457242},
    publisher = {Cold Spring Harbor Laboratory},
    URL = {https://www.biorxiv.org/content/early/2018/10/30/457242},
    eprint = {https://www.biorxiv.org/content/early/2018/10/30/457242.full.pdf},
    journal = {bioRxiv}}

License:

This project is primarily licensed under the GNU Lesser General Public License v3.0. Note that the software is provided "as is", without warranty of any kind, express or implied. If you use the code or data, please cite us! Note, artwork (DeepLabCut logo) and images are copyrighted; please do not take or use these images without written permission.

SuperAnimal models are provided for research use only (non-commercial use).

Major Versions:

  • For all versions, please see here.

VERSION 2.3: Model Zoo SuperAnimals, and a whole new GUI experience.

VERSION 2.2: Multi-animal pose estimation, identification, and tracking with DeepLabCut is supported (as well as single-animal projects).

VERSION 2.0-2.1: This is the Python package of DeepLabCut that was originally released in Oct 2018 with our Nature Protocols paper (preprint here). This package includes graphical user interfaces to label your data, and take you from data set creation to automatic behavioral analysis. It also introduces an active learning framework to efficiently use DeepLabCut on large experimental projects, and data augmentation tools that improve network performance, especially in challenging cases (see panel b).

VERSION 1.0: The initial, Nature Neuroscience version of DeepLabCut can be found in the history of git, or here: https://github.com/DeepLabCut/DeepLabCut/releases/tag/1.11

News (and in the news):

💜 The DeepLabCut Model Zoo launches SuperAnimals, see more here.

💜 DeepLabCut supports multi-animal pose estimation! maDLC is out of beta/rc mode and beta is deprecated; thanks to all the testers out there for feedback! Your labeled data will be backwards compatible, but not all other steps. Please see the new 2.2+ releases for what's new and how to install it, our new paper, Lauer et al 2022, and the new docs on how to use it!

💜 We support multi-animal re-identification, see Lauer et al 2022.

💜 We have a real-time package available! http://DLClive.deeplabcut.org

  • January 2024: Our original paper 'DeepLabCut: markerless pose estimation of user-defined body parts with deep learning' in Nature Neuroscience has surpassed 3,000 Google Scholar citations!

  • December 2023: DeepLabCut hit 600,000 downloads!

  • October 2023: DeepLabCut celebrates a milestone with 4,000 🌟 in Github!

  • July 2023: The user forum is very active with more than 1k questions and answers: Image.sc forum

  • May 2023: The Model Zoo is now fully integrated into the DeepLabCut GUI, making it easier than ever to access a variety of pre-trained models. Check out the accompanying paper: SuperAnimal pretrained pose estimation models for behavioral analysis by Ye et al.

  • December 2022: DeepLabCut hits 450,000 downloads and 2.3 is the new stable release

  • August 2022: DeepLabCut hit 400,000 downloads

  • August 2021: 2.2 becomes the new stable release for DeepLabCut.

  • July 2021: Docs are now at https://deeplabcut.github.io/DeepLabCut and we now include TensorFlow 2 support!

  • May 2021: DeepLabCut hit 200,000 downloads! Also, our preprint on 2.2, multi-animal DeepLabCut, is released!

  • Jan 2021: Pretraining boosts out-of-domain robustness for pose estimation published in the IEEE Winter Conference on Applications of Computer Vision. We also added EfficientNet backbones to DeepLabCut, those are best trained with cosine decay (see paper). To use them, just pass "efficientnet-b0" to "efficientnet-b6" when creating the trainingset!

  • Dec 2020: We released a real-time package that allows for online pose estimation and real-time feedback. See DLClive.deeplabcut.org.

  • May 2020: We released 2.2beta5. This beta release has some of the features of DeepLabCut 2.2, whose major goal is to integrate multi-animal pose estimation into DeepLabCut.

  • Mar 2020: Inspired by suggestions we heard at this week's CZI Essential Open Source Software meeting in Berkeley, CA, we updated our docs. Let us know what you think!

  • Feb 2020: Our review on animal pose estimation is published!

  • Nov 2019: DeepLabCut was recognized by the Chan Zuckerberg Initiative (CZI) with funding to support this project. Read more in the Harvard Gazette, on CZI's Essential Open Source Software for Science site and in their Medium post

  • Oct 2019: DLC 2.1 released with lots of updates. In particular, a Project Manager GUI, MobileNetsV2, and augmentation packages (Imgaug and Tensorpack). For detailed updates see releases

  • Sept 2019: We published two preprints. One showing that ImageNet pretraining contributes to robustness and a review on animal pose estimation. Check them out!

  • Jun 2019: DLC 2.0.7 released with lots of updates. For updates see releases

  • Feb 2019: DeepLabCut joined Twitter

  • Jan 2019: We hosted workshops for DLC in Warsaw, Munich and Cambridge. The materials are available here

  • Jan 2019: We joined the Image.sc forum for user help

  • Nov 2018: We posted a detailed guide for DeepLabCut 2.0 on BioRxiv. It also contains a case study for 3D pose estimation in cheetahs.

  • Nov 2018: Various (post-hoc) analysis scripts contributed by users (and us) will be gathered at DLCutils. Feel free to contribute! In particular, there is a script guiding you through importing a project into the new data format for DLC 2.0

  • Oct 2018: New pre-print on the inference speed and video-compression robustness of DeepLabCut on bioRxiv

  • Sept 2018: Nature Lab Animal covers DeepLabCut: Behavior tracking cuts deep

  • Kunlin Wei & Konrad Kording wrote a very nice News & Views on our paper: Behavioral Tracking Gets Real

  • August 2018: Our preprint appeared in Nature Neuroscience

  • August 2018: NVIDIA AI Developer News: AI Enables Markerless Animal Tracking

  • July 2018: Ed Yong covered DeepLabCut and interviewed several users for the Atlantic.

  • April 2018: first DeepLabCut preprint on arXiv.org

deeplabcut-live-gui's People

Contributors

alexemg, antortjim, gkane26, hausmanns, jeylau, mmathislab


deeplabcut-live-gui's Issues

Pose point trajectory

Hello,

I want to draw the trajectory of a pose point using img_draw(), but I can't get that point's coordinates from the previous frame.

I created a queue and tried to put the point's coordinates into it for later use, but it didn't seem to work.

Thank you in advance for your answers!
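For what it's worth, one way to keep a point's recent coordinates across frames is a fixed-length collections.deque updated once per frame. This is a minimal sketch with hypothetical helper names (not part of dlclivegui); the returned segments could then be drawn with, e.g., cv2.line:

```python
from collections import deque

def make_trajectory(maxlen=30):
    # Keep only the last `maxlen` positions of one keypoint.
    return deque(maxlen=maxlen)

def update_trajectory(traj, x, y):
    """Append the newest (x, y) and return the line segments to draw.

    Each segment is ((x0, y0), (x1, y1)) between consecutive frames.
    """
    traj.append((int(x), int(y)))
    return [(traj[i], traj[i + 1]) for i in range(len(traj) - 1)]

traj = make_trajectory(maxlen=3)
update_trajectory(traj, 10, 10)
update_trajectory(traj, 20, 15)
segments = update_trajectory(traj, 30, 25)
# segments -> [((10, 10), (20, 15)), ((20, 15), (30, 25))]
```

Because the deque has a fixed maxlen, old positions fall off automatically and the trajectory never grows without bound.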


Problem loading model

I decided to start slow...
I installed dlc-live on Windows -- no problem. The TensorFlow versions match (1.15.4).
I initialized the camera... it comes up OK. I have no Processor for now.
I selected the model and clicked "Init DLC", and I get the out-of-range error in the image below.
Did I do something wrong?


When using the GUI with RTSP Stream Freezing after 30s

I am using the DLC Live GUI and connecting to an RTSP stream from a camera. It seems that after a certain period of time (30 s to 2 minutes), the initialized camera feed freezes on a frame within the DLC window.

If it matters at all, I am using a GPU (NVIDIA RTX 3080) and also have 32GB of RAM on this machine.

Would there be any reason why this is happening?

Deeplabcut-live-gui install

I'm having difficulties installing deeplabcut-live-gui on my Windows laptop. Is it compatible with Windows, and what steps are necessary?
I've already tried installing through conda (conda activate dlc-live followed by pip install deeplabcut-live-gui), with both a normal conda install and a Miniconda install. However, I'm met with the errors: "ERROR: Could not find a version that satisfies the requirement deeplabcut-live-gui (from versions: none)" and "ERROR: No matching distribution found for deeplabcut-live-gui".
Any idea how to resolve this issue?

White Matter e3Vision Camera not available on GUI

Hi,
I am trying to integrate an e3Vision Camera into the deeplabcut-live GUI. I even found a file on the company's website on how to run the camera using OpenCV, and it runs fine: https://docs.white-matter.com/docs/e3vision/sample-scripts/opencv/.
However, when I try to load the camera from the GUI, I don't find any option to feed in that file or to enter the camera's IP address. Can you recommend the changes that I need to make in any of the scripts? I have already looked at camera_support.md and tried to make similar changes, but it didn't seem to work. Thank you!

Installation problem

All seemed to have installed fine, but when I run dlclivegui in the environment I get:


Is this just a missing imaging source driver?

Basler Camera feed into GUI

Hello,

I have a Basler USB camera I am wanting to add to the DLC live GUI.

In order to get the DLC live GUI to open, I made the changes specified in issue #4. The Basler camera is compatible with OpenCV, but for an unknown reason I am not able to open the Basler camera in the GUI. I was able to open the Basler camera through python using the following script on the Basler Github: https://github.com/basler/pypylon/blob/master/samples/opencv.py

However, I am not sure how to integrate this into the DLC Live GUI Camera opencv.py script. Do you have an idea on how to go about integrating the Basler opencv.py script into your opencv.py script of the DLC live GUI work flow?

Thank you!

How to set up RTSP camera?

Is there any documentation on setting up an RTSP camera using the dlc-live-gui? I have looked for examples of doing this through using OpenCV with RTSP but have not had any luck.

Problems setting resolution and cropping for camera

Hello, I’m new to DeepLabCut-Live as well as to GitHub. Please kindly forgive me if I miss any guidelines. Thank you in advance for your patience and help.

Recently, while trying to set up a camera in the GUI, I found that if the resolution is set to anything other than 640, 480, the pop-up window is entirely black when I click the "Init Cam" button. However, the default settings (with a resolution of 640, 480) work well. Similar problems also arise when values are entered into "crop" (with the resolution left at the default 640, 480).

I posted this issue on image.sc (https://forum.image.sc/t/deeplabcut-live-gui-camera-cannot-change-resolution-or-crop/61488) about two weeks ago, and only just found that GitHub also provides this Issues section; sorry for not having noticed it. During these two weeks I have managed to solve some of the problems but encountered more. I'll describe them in detail below (those described in the image.sc post are also included).

(For debugging purposes, I have created another environment and made some modifications to the code. Please accept my apology if this is inappropriate.)

System information
Operating system: Windows 10 64-bit
Python version: 3.7.11
OpenCV version: originally 4.5.4.60 but changed to 4.5.5.62 (Please see below for details)
Camera: USB cameras purchased from local manufacturer (it seems that they do not have a brand name in English), the cameras can be controlled by OpenCV

(1) Problem setting resolution
I first tested the possible resolutions for my camera as in https://www.learnpythonwithrune.org/find-all-possible-webcam-resolutions-with-opencv-in-python/. 640 * 480, 800 * 600, 1024 * 768, 1280 * 720, 1280 * 960, and 1920 * 1080 are all possible. (This test was run in the dlc-live environment.)
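For reference, a hedged sketch of such a resolution probe. The function name supported_resolutions is my own, not part of dlclivegui or OpenCV; it accepts any object exposing OpenCV's set()/get() interface (with a real camera you would pass cv2.VideoCapture(0)), which also makes it checkable without hardware:

```python
# These integer ids equal cv2.CAP_PROP_FRAME_WIDTH and cv2.CAP_PROP_FRAME_HEIGHT.
CAP_PROP_FRAME_WIDTH, CAP_PROP_FRAME_HEIGHT = 3, 4

def supported_resolutions(cap, candidates):
    """Request each candidate (width, height) and keep the ones the device
    actually reports back via get()."""
    ok = []
    for w, h in candidates:
        cap.set(CAP_PROP_FRAME_WIDTH, w)
        cap.set(CAP_PROP_FRAME_HEIGHT, h)
        reported = (cap.get(CAP_PROP_FRAME_WIDTH), cap.get(CAP_PROP_FRAME_HEIGHT))
        if reported == (w, h):
            ok.append((w, h))
    return ok
```

The key detail (and the one at issue here) is that set() is only a request: the driver may silently keep the old resolution, so get() must be used to confirm.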

However, if the resolution is set as 800, 600 in the GUI, there would be error messages:

Traceback (most recent call last):
  File "C:\Users\zoosh\anaconda3\envs\dlc-live\lib\site-packages\multiprocess\process.py", line 297, in _bootstrap
    self.run()
  File "C:\Users\zoosh\anaconda3\envs\dlc-live\lib\site-packages\multiprocess\process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\zoosh\anaconda3\envs\dlc-live\lib\site-packages\dlclivegui\camera_process.py", line 99, in _run_capture
    self._capture_loop()
  File "C:\Users\zoosh\anaconda3\envs\dlc-live\lib\site-packages\dlclivegui\camera_process.py", line 120, in _capture_loop
    np.copyto(self.frame, frame)
  File "<__array_function__ internals>", line 6, in copyto
ValueError: could not broadcast input array from shape (480,640,3) into shape (600,800,3)

If the resolution is set as other values, similar errors would also arise like:
ValueError: could not broadcast input array from shape (480,640,3) into shape (height,width,3)

I created another environment for debugging and added some code to check whether self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, self.im_size[0]) and self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, self.im_size[1]) actually work. It turns out that although self.im_size is successfully set to 800, 600, self.cap.get(cv2.CAP_PROP_FRAME_WIDTH) and self.cap.get(cv2.CAP_PROP_FRAME_HEIGHT) still return 640, 480. However, when I run the same self.cap.set() commands in python -i they work, so I'm not sure what's going wrong.

After that I upgraded opencv-python from version 4.5.4.60 to version 4.5.5.62, and CAP_PROP_FRAME_WIDTH and CAP_PROP_FRAME_HEIGHT can now successfully be set to values other than 640, 480. However, another problem showed up:

Traceback (most recent call last):
  File "C:\Users\zoosh\anaconda3\envs\dlc-live-debug\lib\site-packages\multiprocess\process.py", line 297, in _bootstrap
    self.run()
  File "C:\Users\zoosh\anaconda3\envs\dlc-live-debug\lib\site-packages\multiprocess\process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\zoosh\anaconda3\envs\dlc-live-debug\lib\site-packages\dlclivegui\camera_process.py", line 99, in _run_capture
    self._capture_loop()
  File "C:\Users\zoosh\anaconda3\envs\dlc-live-debug\lib\site-packages\dlclivegui\camera_process.py", line 116, in _capture_loop
    frame, frame_time = self.device.get_image_on_time()
  File "C:\Users\zoosh\anaconda3\envs\dlc-live-debug\lib\site-packages\dlclivegui\camera\opencv.py", line 154, in get_image_on_time
    ret, frame = self.cap.read()
cv2.error: OpenCV(4.5.5) D:\a\opencv-python\opencv-python\opencv\modules\core\src\matrix.cpp:438: error: (-215:Assertion failed) _step >= minstep in function 'cv::Mat::Mat'

This error showed up for every acceptable resolution value except for 640, 480. However, when I switched to another USB camera that allowed a resolution of 320, 240, this error would not show up, but what the pop-up window showed was the upper-left part of the 640*480 frame.

It seemed to me that there might be something wrong related to OpenCV. However, I have no idea how to fix it. It would really be of great help if there's a solution to it. Many thanks in advance.

(2) Problem cropping
Under the condition that the resolution is set to 640, 480, when crop is set to 0, 320, 0, 240, this error message showed:
ValueError: could not broadcast input array from shape (240,320,3) into shape (320,240,3)
After inspecting the code, I found that in the set_im_size function of the Camera object, self.im_size is set to [240, 320] when 0, 320, 0, 240 is passed to "crop" in the GUI. This eventually causes self.frame to be allocated with shape (320, 240, 3), which is incompatible with the incoming frame's shape, (240, 320, 3).
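A quick NumPy check (with made-up sizes, independent of dlclivegui) confirms the ordering: slicing an OpenCV-style frame with crop values x0, x1, y0, y1 yields an array of shape (height, width, channels), so a destination buffer allocated as (width, height, channels) cannot receive it:

```python
import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # OpenCV frames are (h, w, 3)

x0, x1, y0, y1 = 0, 320, 0, 240  # crop as entered in the GUI: x0, x1, y0, y1
cropped = frame[y0:y1, x0:x1]    # rows are y, columns are x

print(cropped.shape)  # (240, 320, 3): (height, width, channels)

# A destination buffer sized (width, height, 3) cannot receive this:
# np.copyto(np.zeros((320, 240, 3)), cropped) raises ValueError (shape mismatch)
```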

In the debugging environment, I tried modifying the code below:

self.im_size = (
    (int(res[0]), int(res[1]))
    if self.crop is None
    else (self.crop[3] - self.crop[2], self.crop[1] - self.crop[0])
)

into:

self.im_size = (
    (int(res[0]), int(res[1]))
    if self.crop is None
    else (self.crop[1] - self.crop[0], self.crop[3] - self.crop[2])
)

and the cropping finally works.

However, when opencv-python was upgraded to 4.5.5.62, another issue showed up: self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, self.im_size[0]) and self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, self.im_size[1]) would try to set the frame size to the size after cropping, which actually leads to the nearest possible resolution being set.

(This would not happen in opencv-python 4.5.4.60 since the camera resolution cannot be set other than 640*480.)

For example, if I use the camera that also allows 320*240, when setting resolution as 640, 480 and crop as 210,530,90,410, the error message would be:
ValueError: could not broadcast input array from shape (150,110,3) into shape (320,320,3)
And self.cap.get(cv2.CAP_PROP_FRAME_WIDTH) and self.cap.get(cv2.CAP_PROP_FRAME_HEIGHT) gave a resolution of 320, 240.

I tried to resolve this by modifying the values passed to self.im_size again:

self.im_size = (
    (int(res[0]), int(res[1]), int(res[0]), int(res[1]))
    if self.crop is None
    else (self.crop[1] - self.crop[0], self.crop[3] - self.crop[2], int(res[0]), int(res[1]))
)

and went to set_capture_device under opencv.py to set:

if self.im_size:
    self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, self.im_size[2])
    self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, self.im_size[3])

This solved the problem. self.cap.get(cv2.CAP_PROP_FRAME_WIDTH) and self.cap.get(cv2.CAP_PROP_FRAME_HEIGHT) now give a resolution of 640, 480 and the frame can be cropped and displayed adequately.

Summary
The resolution problem still cannot be solved, probably due to something in OpenCV; as of now I have no idea what I can do about it. The cropping problem seems to be solvable via the above-mentioned modifications to the code, though I'm not sure whether my changes are compatible with the other functions in the package. If you would consider fixing these problems in the official code, it would really be of great help. Many thanks and best regards.

How to generate a csv file

I have used DLC before, and DLC can generate CSV files of the animals' coordinates and positions. But it seems there is no such function in dlclivegui; can you help me with this?
Thank you so much!
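In the meantime, per-frame poses can be written out with the stdlib csv module. This is a minimal sketch, assuming each frame's pose is a sequence of (x, y, likelihood) rows, one per bodypart; BODYPARTS and write_pose_csv are made-up names, not dlclivegui API:

```python
import csv

BODYPARTS = ["nose", "tail"]  # hypothetical bodypart names for illustration

def write_pose_csv(path, poses):
    """poses: list of per-frame sequences, each holding one (x, y, likelihood)
    row per bodypart (the shape DLCLive pose arrays have)."""
    header = ["frame"] + [
        f"{bp}_{c}" for bp in BODYPARTS for c in ("x", "y", "likelihood")
    ]
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        for i, pose in enumerate(poses):
            row = [i]
            for bp_row in pose:  # flatten (x, y, likelihood) per bodypart
                row.extend(bp_row)
            writer.writerow(row)

# One frame, two bodyparts:
write_pose_csv("poses.csv", [[(1.0, 2.0, 0.9), (3.0, 4.0, 0.8)]])
```

Appending one row per frame inside the pose loop would give a running log rather than a post-hoc export.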

dlclivegui.create_labeled_video

File "envs\dlclg\lib\site-packages\dlclivegui\video.py", line 194, in create_labeled_video
    this_bp = this_pose[this_pose["bodyparts"] == bodyparts[j]][
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
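This IndexError is what NumPy raises when a string-keyed lookup like this_pose["bodyparts"] is applied to a plain ndarray instead of a pandas DataFrame. A minimal illustration with made-up data (not the dlclivegui code itself):

```python
import numpy as np
import pandas as pd

pose = pd.DataFrame(
    {"bodyparts": ["nose", "tail"], "x": [10.0, 50.0], "y": [20.0, 60.0]}
)

# On a DataFrame, string-keyed boolean masking works:
nose = pose[pose["bodyparts"] == "nose"][["x", "y"]]
print(nose.values)  # [[10. 20.]]

# The same expression on a bare ndarray raises the IndexError from the
# traceback, because ndarray indexing does not accept a string column key:
arr = pose.values
err = None
try:
    arr[arr["bodyparts"] == "nose"]
except IndexError as e:
    err = e
print(type(err).__name__)  # IndexError
```

So a likely cause is that the pose object reaching create_labeled_video is a raw array rather than the DataFrame the function expects.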

Pass parameter

Hello,
I saw in a previous question that DLC-Live does not currently support multi-object detection.

But I really need the coordinates of another object, so I added OpenCV-based object recognition to display_frame(self).

It succeeded in identifying the desired object's coordinates, but I was unable to access this value in the Process,

and you can see that at the end it always returns the initial value.

I'm not sure what caused this; I'd appreciate any suggestions.

The identification code, the calling function, and the output are in the following TXT file

Identify and call functions.txt
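One likely explanation is that the GUI callback and the dlclivegui worker run in separate processes, so an attribute set in one is never seen by the other. A common way to share such a value is multiprocessing's shared memory; a minimal sketch with hypothetical helper names (the shared array would need to be created before the Process starts and passed to it as an argument):

```python
import multiprocessing as mp

# Shared (x, y) coordinates of the extra object. 'd' = C double;
# lock=True wraps the array with a lock for safe concurrent access.
obj_xy = mp.Array("d", [0.0, 0.0], lock=True)

def on_object_detected(x, y):
    """Call from the GUI/display side when OpenCV finds the object."""
    with obj_xy.get_lock():
        obj_xy[0], obj_xy[1] = x, y

def read_object_xy():
    """Call from the worker Process; returns the latest coordinates."""
    with obj_xy.get_lock():
        return obj_xy[0], obj_xy[1]

on_object_detected(120.5, 88.0)
print(read_object_xy())  # (120.5, 88.0)
```

A plain instance attribute, by contrast, is copied when the Process starts, which is why it keeps returning the initial value.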

Using DLC-Live with Virtual Camera

Hi there,

I'm hoping to use DLC-Live with a virtual camera. I'm currently using pyvirtualcam (https://pypi.org/project/pyvirtualcam/) to create a virtual camera. By combining this with OBS, I'm able to have my virtual camera detected by programs like skype, zoom, etc.

When I try and find my virtual camera in the DLC-live dropdown, it doesn't show up.

Any help would be greatly appreciated!

Best,
Joe

using Basler camera with deeplabcut-live-GUI

Hi,

I cannot make our Basler camera work with deeplabcut-live-GUI. I installed dlclivegui from the Basler branch, along with the Pylon Viewer, pypylon, and SWIG. The GUI opens, but I cannot see my camera.

Windows 11
python 3.9

Exception in Tkinter callback
Traceback (most recent call last):
  File "C:\Users\TDT\anaconda3\envs\dlc_live\lib\tkinter\__init__.py", line 1885, in __call__
    return self.func(*args)
  File "C:\Users\TDT\anaconda3\envs\dlc_live\lib\site-packages\dlclivegui\dlclivegui.py", line 317, in edit_cam_settings
    arg_names, arg_vals, arg_dtypes, arg_restrict = self.get_cam_args()
  File "C:\Users\TDT\anaconda3\envs\dlc_live\lib\site-packages\dlclivegui\dlclivegui.py", line 380, in get_cam_args
    cam_obj = getattr(camera, this_cam["type"])
AttributeError: module 'dlclivegui.camera' has no attribute 'Add Camera'

What am I missing?

Any advice would be very helpful. Thank you
Nadine

OSError: [WinError 126] The specified module could not be found

Describe the bug:
Running deeplabcut-live-gui on Windows 10 in a conda environment fails with "OSError: [WinError 126] The specified module could not be found" (the same message as the localized "找不到指定模块" in my screenshot). This happens when I simply run "dlclivegui" after installing.


I would really appreciate a reply.

Problems during installation

Hi,

I had difficulties with the installation but the solution to one of the previous issues helped. Thank you for that!

However, I cannot install tensorflow 2.0-2.10:

(dlc_live) C:\Users\TDT>pip install "tensorflow>=2.0,<=2.10"
ERROR: Could not find a version that satisfies the requirement tensorflow<=2.10,>=2.0 (from versions: 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.12.1, 2.13.0rc0, 2.13.0rc1, 2.13.0rc2, 2.13.0, 2.13.1, 2.14.0rc0, 2.14.0rc1, 2.14.0, 2.14.1, 2.15.0rc0, 2.15.0rc1, 2.15.0)
ERROR: No matching distribution found for tensorflow<=2.10,>=2.0

I am using python 2.9.0

I then installed "tensorflow" without a version pin, and it used version 2.15.0; of course this caused compatibility issues when installing dlclivegui:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow-intel 2.15.0 requires keras<2.16,>=2.15.0, but you have keras 2.10.0 which is incompatible.
tensorflow-intel 2.15.0 requires protobuf!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<5.0.0dev,>=3.20.3, but you have protobuf 3.19.6 which is incompatible.
tensorflow-intel 2.15.0 requires tensorboard<2.16,>=2.15, but you have tensorboard 2.10.1 which is incompatible.
tensorflow-intel 2.15.0 requires tensorflow-estimator<2.16,>=2.15.0, but you have tensorflow-estimator 2.10.0 which is incompatible.

How can I fix this?

Thanks
Nadine

Sudden slow down in performance

I am experiencing a sudden slow-down in performance in DLC Live, and I was wondering if anyone has seen this before and knows how to fix it.

I am running a simple Python script that receives image data over memory-mapped files:


The pose estimation normally takes about 12-14 ms... but then, all of a sudden, it jumps to ~140 ms:

[screenshot]

Any ideas what this could be? It happens after the processing loop has been running for 2-3 minutes. It always starts OK and then slows down. This is running on a Quadro M6000 24GB.

Thank you!

Dario
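A first diagnostic step is to time each stage of the loop separately, so the jump from ~13 ms to ~140 ms can be pinned to inference rather than frame acquisition or the memory-mapped read. A hedged sketch (`process_frame` here is a stand-in for the real `dlc.get_pose(frame)` call, not the actual DLC-Live API):

```python
# Hedged sketch: keep a rolling window of per-iteration latencies so a sudden
# slowdown shows up as a jump in the recent mean.
import time
from collections import deque

def process_frame(frame):              # stand-in for the inference call
    return sum(frame)

timings = deque(maxlen=100)            # rolling window of recent latencies (ms)
for i in range(200):
    frame = [i] * 10                   # stand-in for a frame from the mmap
    t0 = time.perf_counter()
    process_frame(frame)
    timings.append((time.perf_counter() - t0) * 1000.0)

mean_ms = sum(timings) / len(timings)
print(f"mean latency over last {len(timings)} frames: {mean_ms:.3f} ms")
```

Timing the acquisition and inference stages with separate windows like this makes it clear whether the GPU call itself degrades (e.g. thermal throttling or another process claiming the GPU) or whether the slowdown is upstream of it.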

Installing on Jetson Xavier AGX

All good until last step... and then:

(dlc-live) dario@dario-jetson:~$ pip install deeplabcut-live
Collecting deeplabcut-live
Downloading deeplabcut_live-1.0-py3-none-any.whl (28 kB)
Collecting numpy<1.19.0
Downloading numpy-1.18.5.zip (5.4 MB)
|████████████████████████████████| 5.4 MB 13.5 MB/s
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Collecting ruamel.yaml
Downloading ruamel.yaml-0.16.13-py2.py3-none-any.whl (111 kB)
|████████████████████████████████| 111 kB 9.9 MB/s
Collecting pillow
Downloading Pillow-8.1.2-cp36-cp36m-manylinux2014_aarch64.whl (2.1 MB)
|████████████████████████████████| 2.1 MB 14.9 MB/s
Collecting colorcet
Downloading colorcet-2.0.6-py2.py3-none-any.whl (1.6 MB)
|████████████████████████████████| 1.6 MB 15.5 MB/s
Collecting opencv-python
Downloading opencv_python-4.5.1.48-cp36-cp36m-manylinux2014_aarch64.whl (34.5 MB)
|████████████████████████████████| 34.5 MB 22 kB/s
Collecting tqdm
Downloading tqdm-4.59.0-py2.py3-none-any.whl (74 kB)
|████████████████████████████████| 74 kB 2.0 MB/s
Collecting tables
Downloading tables-3.6.1.tar.gz (4.6 MB)
|████████████████████████████████| 4.6 MB 12.9 MB/s
Collecting deeplabcut-live
Downloading deeplabcut_live-0.0.3-py3-none-any.whl (39 kB)
Collecting numpy
Downloading numpy-1.19.5-cp36-cp36m-manylinux2014_aarch64.whl (12.4 MB)
|████████████████████████████████| 12.4 MB 10.8 MB/s
Collecting deeplabcut-live
Downloading deeplabcut_live-0.0.2-py3-none-any.whl (42 kB)
|████████████████████████████████| 42 kB 404 kB/s
Downloading deeplabcut_live-0.0.1-py3-none-any.whl (29 kB)
Downloading deeplabcut_live-0.0-py3-none-any.whl (39 kB)
Collecting py-cpuinfo==5.0.0
Downloading py-cpuinfo-5.0.0.tar.gz (82 kB)
|████████████████████████████████| 82 kB 221 kB/s
Collecting pandas
Downloading pandas-1.1.5-cp36-cp36m-manylinux2014_aarch64.whl (9.5 MB)
|████████████████████████████████| 9.5 MB 9.4 MB/s
Collecting pyct>=0.4.4
Downloading pyct-0.4.8-py2.py3-none-any.whl (15 kB)
Collecting param>=1.7.0
Downloading param-1.10.1-py2.py3-none-any.whl (76 kB)
|████████████████████████████████| 76 kB 3.8 MB/s
Collecting pytz>=2017.2
Downloading pytz-2021.1-py2.py3-none-any.whl (510 kB)
|████████████████████████████████| 510 kB 14.2 MB/s
Collecting python-dateutil>=2.7.3
Downloading python_dateutil-2.8.1-py2.py3-none-any.whl (227 kB)
|████████████████████████████████| 227 kB 14.5 MB/s
Collecting six>=1.5
Downloading six-1.15.0-py2.py3-none-any.whl (10 kB)
Collecting ruamel.yaml.clib>=0.1.2
Downloading ruamel.yaml.clib-0.2.2-cp36-cp36m-manylinux2014_aarch64.whl (540 kB)
|████████████████████████████████| 540 kB 12.8 MB/s
Collecting numexpr>=2.6.2
Downloading numexpr-2.7.3-cp36-cp36m-manylinux2014_aarch64.whl (481 kB)
|████████████████████████████████| 481 kB 15.4 MB/s
Using legacy 'setup.py install' for py-cpuinfo, since package 'wheel' is not installed.
Using legacy 'setup.py install' for tables, since package 'wheel' is not installed.
Installing collected packages: six, param, numpy, ruamel.yaml.clib, pytz, python-dateutil, pyct, numexpr, tqdm, tables, ruamel.yaml, py-cpuinfo, pillow, pandas, colorcet, deeplabcut-live
Running setup.py install for tables ... error
ERROR: Command errored out with exit status -4:
command: /home/dario/dlc-live/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-p9_amnr6/tables_db97b7244bac4c15a79f2d6898028816/setup.py'"'"'; __file__='"'"'/tmp/pip-install-p9_amnr6/tables_db97b7244bac4c15a79f2d6898028816/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-wtlkb57w/install-record.txt --single-version-externally-managed --compile --install-headers /home/dario/dlc-live/include/site/python3.6/tables
cwd: /tmp/pip-install-p9_amnr6/tables_db97b7244bac4c15a79f2d6898028816/
Complete output (188 lines):
* Using Python 3.6.9 (default, Jan 26 2021, 15:33:00)
* USE_PKGCONFIG: True
* pkg-config header dirs for HDF5: /usr/include/hdf5/serial
* pkg-config library dirs for HDF5: /usr/lib/aarch64-linux-gnu/hdf5/serial
* Found HDF5 headers at /usr/include/hdf5/serial, library at /usr/lib/aarch64-linux-gnu/hdf5/serial.
.. WARNING:: Could not find the HDF5 runtime.
The HDF5 shared library was not found in the default library
paths. In case of runtime problems, please remember to install it.
/tmp/lzo_version_datesb8__dcy.c:1:1: warning: return type defaults to ‘int’ [-Wimplicit-int]
main (int argc, char **argv) {
^~~~
/tmp/lzo_version_datesb8__dcy.c: In function ‘main’:
/tmp/lzo_version_datesb8__dcy.c:2:5: warning: implicit declaration of function ‘lzo_version_date’ [-Wimplicit-function-declaration]
lzo_version_date();
^~~~~~~~~~~~~~~~
/usr/bin/ld: cannot find -llzo2
collect2: error: ld returned 1 exit status
* Could not find LZO 2 headers and library; disabling support for it.
/tmp/lzo_version_datewqm9562s.c:1:1: warning: return type defaults to ‘int’ [-Wimplicit-int]
main (int argc, char **argv) {
^~~~
/tmp/lzo_version_datewqm9562s.c: In function ‘main’:
/tmp/lzo_version_datewqm9562s.c:2:5: warning: implicit declaration of function ‘lzo_version_date’ [-Wimplicit-function-declaration]
lzo_version_date();
^~~~~~~~~~~~~~~~
/usr/bin/ld: cannot find -llzo
collect2: error: ld returned 1 exit status
* Could not find LZO 1 headers and library; disabling support for it.
/tmp/BZ2_bzlibVersionr8ji2103.c:1:1: warning: return type defaults to ‘int’ [-Wimplicit-int]
main (int argc, char **argv) {
^~~~
/tmp/BZ2_bzlibVersionr8ji2103.c: In function ‘main’:
/tmp/BZ2_bzlibVersionr8ji2103.c:2:5: warning: implicit declaration of function ‘BZ2_bzlibVersion’ [-Wimplicit-function-declaration]
BZ2_bzlibVersion();
^~~~~~~~~~~~~~~~
/usr/bin/ld: cannot find -lbz2
collect2: error: ld returned 1 exit status
* Could not find bzip2 headers and library; disabling support for it.
/tmp/blosc_list_compressorsv4to05p7.c:1:1: warning: return type defaults to ‘int’ [-Wimplicit-int]
main (int argc, char **argv) {
^~~~
/tmp/blosc_list_compressorsv4to05p7.c: In function ‘main’:
/tmp/blosc_list_compressorsv4to05p7.c:2:5: warning: implicit declaration of function ‘blosc_list_compressors’ [-Wimplicit-function-declaration]
blosc_list_compressors();
^~~~~~~~~~~~~~~~~~~~~~
/usr/bin/ld: cannot find -lblosc
collect2: error: ld returned 1 exit status
* Could not find blosc headers and library; using internal sources.
/usr/lib/python3.6/distutils/dist.py:261: UserWarning: Unknown distribution option: 'extra_require'
warnings.warn(msg)
running install
running build
running build_py
creating build
creating build/lib.linux-aarch64-3.6
creating build/lib.linux-aarch64-3.6/tables
copying tables/idxutils.py -> build/lib.linux-aarch64-3.6/tables
copying tables/leaf.py -> build/lib.linux-aarch64-3.6/tables
copying tables/vlarray.py -> build/lib.linux-aarch64-3.6/tables
copying tables/node.py -> build/lib.linux-aarch64-3.6/tables
copying tables/flavor.py -> build/lib.linux-aarch64-3.6/tables
copying tables/filters.py -> build/lib.linux-aarch64-3.6/tables
copying tables/carray.py -> build/lib.linux-aarch64-3.6/tables
copying tables/atom.py -> build/lib.linux-aarch64-3.6/tables
copying tables/conditions.py -> build/lib.linux-aarch64-3.6/tables
copying tables/parameters.py -> build/lib.linux-aarch64-3.6/tables
copying tables/utils.py -> build/lib.linux-aarch64-3.6/tables
copying tables/file.py -> build/lib.linux-aarch64-3.6/tables
copying tables/registry.py -> build/lib.linux-aarch64-3.6/tables
copying tables/indexes.py -> build/lib.linux-aarch64-3.6/tables
copying tables/link.py -> build/lib.linux-aarch64-3.6/tables
copying tables/__init__.py -> build/lib.linux-aarch64-3.6/tables
copying tables/index.py -> build/lib.linux-aarch64-3.6/tables
copying tables/earray.py -> build/lib.linux-aarch64-3.6/tables
copying tables/array.py -> build/lib.linux-aarch64-3.6/tables
copying tables/description.py -> build/lib.linux-aarch64-3.6/tables
copying tables/attributeset.py -> build/lib.linux-aarch64-3.6/tables
copying tables/exceptions.py -> build/lib.linux-aarch64-3.6/tables
copying tables/table.py -> build/lib.linux-aarch64-3.6/tables
copying tables/group.py -> build/lib.linux-aarch64-3.6/tables
copying tables/expression.py -> build/lib.linux-aarch64-3.6/tables
copying tables/unimplemented.py -> build/lib.linux-aarch64-3.6/tables
copying tables/undoredo.py -> build/lib.linux-aarch64-3.6/tables
copying tables/req_versions.py -> build/lib.linux-aarch64-3.6/tables
copying tables/path.py -> build/lib.linux-aarch64-3.6/tables
creating build/lib.linux-aarch64-3.6/tables/nodes
copying tables/nodes/filenode.py -> build/lib.linux-aarch64-3.6/tables/nodes
copying tables/nodes/__init__.py -> build/lib.linux-aarch64-3.6/tables/nodes
creating build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_expression.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_indexvalues.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/create_backcompat_indexes.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_tablesMD.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/common.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_timestamps.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_do_undo.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_timetype.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_suite.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_enum.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_hdf5compat.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_carray.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_attributes.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_garbage.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_earray.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_basics.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_tables.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_indexes.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_aux.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_types.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/__init__.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_queries.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_array.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_utils.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_create.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_nestedtypes.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_lists.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_index_backcompat.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_backcompat.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_numpy.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_vlarray.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/check_leaks.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_tree.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_links.py -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_all.py -> build/lib.linux-aarch64-3.6/tables/tests
creating build/lib.linux-aarch64-3.6/tables/misc
copying tables/misc/proxydict.py -> build/lib.linux-aarch64-3.6/tables/misc
copying tables/misc/enum.py -> build/lib.linux-aarch64-3.6/tables/misc
copying tables/misc/__init__.py -> build/lib.linux-aarch64-3.6/tables/misc
creating build/lib.linux-aarch64-3.6/tables/scripts
copying tables/scripts/pttree.py -> build/lib.linux-aarch64-3.6/tables/scripts
copying tables/scripts/ptrepack.py -> build/lib.linux-aarch64-3.6/tables/scripts
copying tables/scripts/ptdump.py -> build/lib.linux-aarch64-3.6/tables/scripts
copying tables/scripts/__init__.py -> build/lib.linux-aarch64-3.6/tables/scripts
copying tables/scripts/pt2to3.py -> build/lib.linux-aarch64-3.6/tables/scripts
creating build/lib.linux-aarch64-3.6/tables/nodes/tests
copying tables/nodes/tests/test_filenode.py -> build/lib.linux-aarch64-3.6/tables/nodes/tests
copying tables/nodes/tests/__init__.py -> build/lib.linux-aarch64-3.6/tables/nodes/tests
copying tables/tests/itemsize.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/vlstr_attr.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/elink.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/bug-idx.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/idx-std-1.x.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/zerodim-attrs-1.3.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/indexes_2_0.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/ex-noattr.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/float.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/smpl_i64le.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/smpl_unsupptype.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/Tables_lzo1_shuffle.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/times-nested-be.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/Tables_lzo2.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/smpl_i32be.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/Tables_lzo2_shuffle.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/vlunicode_endian.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/blosc_bigendian.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/Table2_1_lzo_nrv2e_shuffle.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/issue_368.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/time-table-vlarray-1_x.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/nested-type-with-gaps.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/smpl_enum.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/python2.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/non-chunked-table.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/oldflavor_numeric.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/smpl_f64le.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/array_mdatom.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/smpl_f64be.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_szip.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/scalar.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/slink.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/smpl_SDSextendible.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/smpl_i64be.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/issue_560.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/smpl_compound_chunked.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/indexes_2_1.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/Tables_lzo1.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/flavored_vlarrays-format1.6.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/smpl_i32le.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/attr-u16.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/elink2.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/python3.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/zerodim-attrs-1.4.h5 -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_ref_array2.mat -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/matlab_file.mat -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/tests/test_ref_array1.mat -> build/lib.linux-aarch64-3.6/tables/tests
copying tables/nodes/tests/test_filenode.dat -> build/lib.linux-aarch64-3.6/tables/nodes/tests
copying tables/nodes/tests/test_filenode.xbm -> build/lib.linux-aarch64-3.6/tables/nodes/tests
copying tables/nodes/tests/test_filenode_v1.h5 -> build/lib.linux-aarch64-3.6/tables/nodes/tests
running build_ext
----------------------------------------
ERROR: Command errored out with exit status -4: /home/dario/dlc-live/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-p9_amnr6/tables_db97b7244bac4c15a79f2d6898028816/setup.py'"'"'; __file__='"'"'/tmp/pip-install-p9_amnr6/tables_db97b7244bac4c15a79f2d6898028816/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-wtlkb57w/install-record.txt --single-version-externally-managed --compile --install-headers /home/dario/dlc-live/include/site/python3.6/tables Check the logs for full command output.
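The build log above shows the linker failing to find lzo2, bz2, and blosc, plus a warning that the HDF5 runtime is missing, before the `tables` build dies. A hedged way to check which shared libraries are actually visible before retrying (on Ubuntu-based JetPack the missing pieces would typically come from packages such as libhdf5-dev, liblzo2-dev, and libbz2-dev; the package names are assumptions):

```python
# Hedged diagnostic: list which of the compression/storage libraries the
# PyTables build probes for are findable on this system.
import ctypes.util

def probe_libraries(names):
    return {name: ctypes.util.find_library(name) for name in names}

for name, path in probe_libraries(["hdf5", "lzo2", "bz2", "blosc"]).items():
    print(f"{name}: {path or 'NOT FOUND'}")
```

Installing the corresponding dev packages (and `pip install wheel`, since the log falls back to legacy `setup.py install`) before rerunning `pip install deeplabcut-live` should give the tables build a better chance of succeeding.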

Some pre-trained models don't work with dlclivegui

I have downloaded all eight pretrained models from the DeepLabCut Model Zoo and tried to use them with a webcam.

First I tried the primate_face model to mark key points on my face, and it worked like a charm. However, when I tried the human_fullbody model, I got some error messages.

(dlc-live) C:\Users\dell>dlclivegui
[ WARN:0] global C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-oduouqig\opencv\modules\videoio\src\cap_msmf.cpp (666) CvCapture_MSMF::initStream Failed to reset streams
[ WARN:0] global C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-oduouqig\opencv\modules\videoio\src\cap_msmf.cpp (666) CvCapture_MSMF::initStream Failed to reset streams
[ WARN:0] global C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-oduouqig\opencv\modules\videoio\src\cap_msmf.cpp (666) CvCapture_MSMF::initStream Failed to reset streams
[ WARN:0] global C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-oduouqig\opencv\modules\videoio\src\cap_msmf.cpp (666) CvCapture_MSMF::initStream Failed to reset streams
[ WARN:0] global C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-oduouqig\opencv\modules\videoio\src\cap_msmf.cpp (666) CvCapture_MSMF::initStream Failed to reset streams
[ WARN:0] global C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-oduouqig\opencv\modules\videoio\src\cap_msmf.cpp (666) CvCapture_MSMF::initStream Failed to reset streams
c:\users\dell\anaconda3\envs\dlc-live\lib\site-packages\tensorflow\python\framework\dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
c:\users\dell\anaconda3\envs\dlc-live\lib\site-packages\tensorflow\python\framework\dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
c:\users\dell\anaconda3\envs\dlc-live\lib\site-packages\tensorflow\python\framework\dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
c:\users\dell\anaconda3\envs\dlc-live\lib\site-packages\tensorflow\python\framework\dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
c:\users\dell\anaconda3\envs\dlc-live\lib\site-packages\tensorflow\python\framework\dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
c:\users\dell\anaconda3\envs\dlc-live\lib\site-packages\tensorflow\python\framework\dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
Process SpawnProcess-2:
Traceback (most recent call last):
  File "c:\users\dell\anaconda3\envs\dlc-live\lib\site-packages\tensorflow\python\framework\importer.py", line 426, in import_graph_def
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef mentions attr 'explicit_paddings' not in Op<name=Conv2D; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_HALF, DT_BFLOAT16, DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=use_cudnn_on_gpu:bool,default=true; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]; attr=dilations:list(int),default=[1, 1, 1, 1]>; NodeDef: {{node DLC/resnet_v1_101/conv1/Conv2D}}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:\users\dell\anaconda3\envs\dlc-live\lib\site-packages\multiprocess\process.py", line 297, in _bootstrap
    self.run()
  File "c:\users\dell\anaconda3\envs\dlc-live\lib\site-packages\multiprocess\process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "c:\users\dell\anaconda3\envs\dlc-live\lib\site-packages\dlclivegui\pose_process.py", line 73, in _run_pose
    ret = self._open_dlc_live(dlc_params)
  File "c:\users\dell\anaconda3\envs\dlc-live\lib\site-packages\dlclivegui\pose_process.py", line 96, in _open_dlc_live
    self.frame, frame_time=self.frame_time[0], record=False
  File "c:\users\dell\anaconda3\envs\dlc-live\lib\site-packages\dlclive\dlclive.py", line 289, in init_inference
    graph = finalize_graph(graph_def)
  File "c:\users\dell\anaconda3\envs\dlc-live\lib\site-packages\dlclive\graph.py", line 52, in finalize_graph
    tf.import_graph_def(graph_def, name="DLC")
  File "c:\users\dell\anaconda3\envs\dlc-live\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "c:\users\dell\anaconda3\envs\dlc-live\lib\site-packages\tensorflow\python\framework\importer.py", line 430, in import_graph_def
    raise ValueError(str(e))
ValueError: NodeDef mentions attr 'explicit_paddings' not in Op<name=Conv2D; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_HALF, DT_BFLOAT16, DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=use_cudnn_on_gpu:bool,default=true; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]; attr=dilations:list(int),default=[1, 1, 1, 1]>; NodeDef: {{node DLC/resnet_v1_101/conv1/Conv2D}}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
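This error usually means the graph was exported by a newer TensorFlow than the one importing it: the `explicit_paddings` attribute on `Conv2D` appeared around TF 1.14, so an older runtime rejects a graph that carries it. A hedged version check (the cutoff is an assumption from TF release history, not something DLC-Live verifies):

```python
# Hedged sketch: compare the installed TF version against the first release
# whose Conv2D op carries the 'explicit_paddings' attribute (~1.14).
def version_tuple(v):
    return tuple(int(p) for p in v.split(".")[:2])

def graph_importable(installed, required="1.14.0"):
    return version_tuple(installed) >= version_tuple(required)

print(graph_importable("1.13.1"))   # → False: upgrade TF or re-export the model
print(graph_importable("1.14.0"))   # → True
```

So if primate_face loads but human_fullbody does not, the two graphs were likely exported with different TF versions, and upgrading the runtime (or re-exporting the failing model with the installed TF) should resolve it.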

Below are the messages I got when I used the primate_face model.

(dlc-live) C:\Users\dell>dlclivegui
[ WARN:0] global C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-oduouqig\opencv\modules\videoio\src\cap_msmf.cpp (666) CvCapture_MSMF::initStream Failed to reset streams
[ WARN:0] global C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-oduouqig\opencv\modules\videoio\src\cap_msmf.cpp (666) CvCapture_MSMF::initStream Failed to reset streams
[ WARN:0] global C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-oduouqig\opencv\modules\videoio\src\cap_msmf.cpp (666) CvCapture_MSMF::initStream Failed to reset streams
[ WARN:0] global C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-oduouqig\opencv\modules\videoio\src\cap_msmf.cpp (666) CvCapture_MSMF::initStream Failed to reset streams
[ WARN:0] global C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-oduouqig\opencv\modules\videoio\src\cap_msmf.cpp (666) CvCapture_MSMF::initStream Failed to reset streams
[ WARN:0] global C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-oduouqig\opencv\modules\videoio\src\cap_msmf.cpp (666) CvCapture_MSMF::initStream Failed to reset streams
c:\users\dell\anaconda3\envs\dlc-live\lib\site-packages\tensorflow\python\framework\dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
c:\users\dell\anaconda3\envs\dlc-live\lib\site-packages\tensorflow\python\framework\dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
c:\users\dell\anaconda3\envs\dlc-live\lib\site-packages\tensorflow\python\framework\dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
c:\users\dell\anaconda3\envs\dlc-live\lib\site-packages\tensorflow\python\framework\dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
c:\users\dell\anaconda3\envs\dlc-live\lib\site-packages\tensorflow\python\framework\dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
c:\users\dell\anaconda3\envs\dlc-live\lib\site-packages\tensorflow\python\framework\dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
2021-01-30 08:19:01.758240: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2021-01-30 08:19:01.962996: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
pciBusID: 0000:65:00.0
totalMemory: 11.00GiB freeMemory: 9.90GiB
2021-01-30 08:19:01.963161: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2021-01-30 08:19:02.486712: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-01-30 08:19:02.486873: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0
2021-01-30 08:19:02.488392: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N
2021-01-30 08:19:02.489246: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9541 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:65:00.0, compute capability: 7.5)

Cannot start

Hello,
I finished the installation and passed the dlc-live-test; it worked well. But when I start the GUI, something goes wrong:

[screenshot]

Blank "processor_args" causes GUI error

Hi,

We ran into a problem when setting the processor.

When we click the "set proc" button in the GUI, the code in lines 1006-1010 fails to execute because self.cfg['processor_args'] is None, as shown in the variable panel. The issue comes from the config file, where "processor_args" was left blank (as shown in the top VS Code panel). After commenting out lines 1006-1010, the error no longer appears. Is it acceptable to leave these lines commented out, or should we initialize processor_args in the JSON file from the start?

Thank you!


[screenshot]
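Rather than commenting the lines out, the usual fix is to make the lookup tolerant of a blank field. A hedged sketch of such a guard (the key name and config shape are assumed from the report, not taken from the dlclivegui source):

```python
# Hedged sketch: a blank JSON field deserializes to None, so fall back to an
# empty dict instead of indexing into None.
def get_processor_args(cfg):
    return cfg.get("processor_args") or {}

print(get_processor_args({"processor_args": None}))           # → {}
print(get_processor_args({}))                                 # → {}
print(get_processor_args({"processor_args": {"fps": 30}}))    # → {'fps': 30}
```

Initializing `"processor_args": {}` in the JSON file works too, but the guard is more robust because it also covers configs written before the key existed.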
