
stytra's Introduction

Stytra

A modular package to control stimulation and track behavior in zebrafish experiments.


If you are using Stytra for your own research, please cite us!

Stytra is divided into independent modules which can be assembled depending on the experimental requirements. For a complete description, look at the full documentation.

Instructions to create your first experiment in Stytra and usage examples can be found in the example gallery.

Quick installation guide

Stytra relies on opencv for some of its fish tracking functions. If you don't have it installed, open the Anaconda prompt and type:

pip install opencv-python

If you are using Windows, git (used for tracking software versions) might not be installed. Git can also be easily installed with conda:

conda install git

This should be everything you need before installing Stytra.

> PyQt5 is not listed as an explicit requirement because it should come with the Anaconda package. If you are not using Anaconda, make sure you have it installed and updated before installing Stytra!

The simplest way to install Stytra is with pip:

pip install stytra

You can verify the installation by running one of the examples in the stytra examples folder. To run a simple looming stimulus experiment, type:

python -m stytra.examples.looming_exp

If the GUI opens correctly and pressing the play button starts the stimulus: congratulations, installation was successful! If it crashes, check if you have all dependencies correctly installed. If it still does not work, open an issue on the Stytra github page.

Editable installation

If instead you want to modify the internals of Stytra or use unreleased features, clone or download Stytra from GitHub and install it with:

pip install path_to_stytra/stytra

If you want to be able to change the stytra code and use the changed version, install using the -e argument:

pip install -e path_to_stytra/stytra

Now you can have a look at the stytra example gallery, or you can start configuring a computer for Stytra experiments. In the second case, you might want to have a look at the camera APIs section below first.

Note

Stytra might raise an error after quitting because of a bug in the current version of pyqtgraph (a package we use for online plotting). If you are annoyed by the error messages when closing the program, you can install the development version of pyqtgraph from their GitHub repository. The problem will be resolved once the next pyqtgraph version is released.

For further details on the installation, please consult the relevant documentation page.

stytra's People

Contributors

anki-xyz, elenadragomir, ericthomson, fedem-p, goraj, hlavian, jmmanley, kkoetter, nvbln, otprat, vigji, vilim, vpalieri, ywkure


stytra's Issues

Using another filename convention

Thank you for all your work regarding the last update! It is great that video files are not overwritten anymore.

However, I have now found that one of my own changes to the code no longer works as I intended, probably because I implemented it wrongly. What I would like is for the name of every generated file to consist not only of the timestamp of recording start, but also the date and animal number: basically, adding the name of the parent folder to the filename itself, like so:

C:\Users\zuidinga\Data\protocol_name\220331_f60\220331_f60_110908_video.mp4 instead of
C:\Users\zuidinga\Data\protocol_name\220331_f60\110908_video.mp4

(this change used to be enough for it to work for all files, since the write.py of the video process also used self.filename_base: birtezuidinga@7984edd)

Now that different methods are used in write.py to get the filename_base, I cannot get it to work anymore for the video and video_times files. I tried changing the __generate_filename code to the following:

def __generate_filename(self, filename: Path) -> str:
    """
    Generate the output filename from the given base path and the extension.

    Parameters
    ----------
    filename : Path
        base path (directory plus unique prefix) used to build the video
        filename; note this is a pathlib.Path, not a str, since .parts,
        .name and .parent are used below.
    """
    # prepend the parent folder name (e.g. "220331_f60") to the prefix
    detailed_name = str(filename.parts[-2]) + "_" + str(filename.name)
    directory = filename.parent
    new_filename_base = directory / detailed_name

    return str(new_filename_base) + "video." + self._extension

Sometimes, it works correctly, and other times it gives this error:

Process :
Traceback (most recent call last):
  File "C:\Users\zuidinga\Miniconda3\envs\stytra_env_github\lib\multiprocessing\process.py", line 297, in _bootstrap
    self.run()
  File "C:\Users\zuidinga\PycharmProjects\stytra_birte\stytra\stytra\hardware\video\write.py", line 91, in run
    self._configure(current_frame.shape)
  File "C:\Users\zuidinga\PycharmProjects\stytra_birte\stytra\stytra\hardware\video\write.py", line 295, in _configure
    self._container = av.open(self.__container_filename, mode="w")
  File "av\container\core.pyx", line 364, in av.container.core.open
  File "av\container\core.pyx", line 146, in av.container.core.Container.__cinit__
ValueError: Could not determine output format

Do you have any idea how I could fix this? I probably don't understand the filename queue correctly; I think I'd need to change some default string there, but I cannot find it.
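For reference, PyAV infers the output container format from the file extension, so a malformed output name can trigger exactly this ValueError. Below is a small standalone sketch of the filename logic; build_video_filename is a hypothetical helper, not Stytra's actual API, and it assumes the base arrives as a path-like string with the timestamp prefix as its last component:

```python
from pathlib import Path

def build_video_filename(filename_base: str, extension: str = "mp4") -> str:
    """Hypothetical helper (not Stytra's code): prepend the parent folder
    name to the file stem and guarantee a well-formed 'video.<ext>' suffix,
    so the container library can always infer the output format."""
    base = Path(filename_base)
    # e.g. "220331_f60" + "_" + "110908_" -> "220331_f60_110908_"
    detailed = base.parent / (base.parent.name + "_" + base.name)
    return str(detailed) + "video." + extension
```

For example, `build_video_filename("Data/protocol/220331_f60/110908_")` yields `Data/protocol/220331_f60/220331_f60_110908_video.mp4`, matching the naming scheme described above. Note that if the incoming value is ever a plain str rather than a Path, attribute accesses like `.parts` fail, which could explain intermittent behaviour, though that is only a guess.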

Replaying stimulus

It would be great if one could re-load an experiment and see exactly the stimulus that was actually shown to the fish, reconstructed from the saved log files, including the effect of closed-loop behavior on the stimulus.

Data and metadata saving

Hello,
I'm looking at the files included in the package but I can't seem to find where the data and metadata are saved. I assume that it would be a csv file in the folder where camera captures are saved, but somehow I can't seem to find any csv files saved during test runs at all.
I appreciate your help.

Wrong import in combined_exp.py

In the first line, you import Stimulus Combiner from stytra.stimulation.stimuli.visual, but it is not there.
I solved the problem by importing CombinerStimulus from stytra.stimulation.stimuli instead.

Problems with offline tracking

Hi! I'm interested in using this software for offline tracking of larval fish behavior. I've successfully installed Stytra on my Mac through Anaconda (OS 10.13.6), and I have been able to load in videos for free behaving fish (selecting "fish" for tracking). At this point though, I've run into a few problems:

  1. it only sometimes plays through the video, with no apparent progress bar or way to pause it or jump to a period of interest. Am I missing something? If the movie plays, is there a way to restart it once it reaches the end?
  2. I've opened the Tracking Controls menu, but I'm not clear on what all the parameters are to optimize the tracking - is this listed/described somewhere? For example, I have a video where the fish is tracked reasonably well when in the middle of the frame, but near the edges it is sometimes lost - I can't figure out how to focus on these time periods and then to optimize the tracking (if possible)
  3. None of the plots in the GUI Plot panel seem to display anything ever, even when a fish seems to be identified with a red dot and blue tail and is moving.
  4. I can see information on the csv file - is there a clear description of what each parameter is somewhere? For example, is the fish head orientation one of these parameters? Cumulative tail curvature? What does "biggest area" mean? It seems like the csv file gets generated starting right as the video has been loaded - is there a way to make the entire csv file info match tracking of the entire video if I adjust some of the tracking controls "midstream" to optimize the tracking while the video is actively playing?

I apologize if these are basic questions.
Thank you for your help and clarification!
~Roshan

Cleanest way to log additional stimulus information.

Hi,

I would like to log additional information for an experiment. A good example would be the stimulus color and direction that is presented to the fish.

I tried adding two dynamic columns to dynamic_parameters in my stimulus class, so that I can use these columns to store information as floats in df_param.

Unfortunately, this does not seem to work correctly, and it also seems not very clean to me, as the values end up in np.interp. It would be great if you could suggest the best way to log information/conditions like this.

I attached a small example that tries to log color and direction using the df_param.

from collections import namedtuple

from stytra import Stytra, Protocol
from stytra.stimulation.stimuli.visual import Pause, FullFieldVisualStimulus, PaintGratingStimulus
from stytra.stimulation.stimuli import MovingGratingStimulus, InterpolatedStimulus
from lightparam import Param
import pandas as pd
import numpy as np
from pathlib import Path

from enum import IntEnum
class Direction(IntEnum):
    LEFT = 0
    RIGHT = 1

class Color(IntEnum):
    MILDGREY = 0
    GREY = 1
    SUPERGREY = 2

class MovingGratingStimulusTest(PaintGratingStimulus, InterpolatedStimulus):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.dynamic_parameters.append("x")

        self.dynamic_parameters.append("direction")
        self.dynamic_parameters.append("stimulus_color")


# 1. Define a protocol subclass
class GratingsProtocol(Protocol):
    name = "gratings_protocol"

    video_path = str(Path(r'C:\Users\user\PycharmProjects\stytra-private\stytra\examples\assets\fish_compressed.h5'))
    print(video_path)
    stytra_config = dict(
        tracking=dict(method="tail", estimator="vigor"),
        camera=dict(
            video_file=video_path
        ),
        # Replace this example file with the desired camera config, such as
        # camera_config = dict(type="ximea")
        # for a ximea camera, etc. Not needed if the setup already has the
        # # stytra_setup_config.json file
        # camera_config=dict(
        #     video_file=r"J:\_Shared\stytra\fish_tail_anki.h5"
        # ),
    )

    def __init__(self):
        super().__init__()
        self.t_pre = Param(1.)  # time of still gratings before they move
        self.t_move = Param(1.)  # time of gratings movement
        self.grating_vel = Param(-10.)  # gratings velocity
        self.grating_period = Param(10)  # grating spatial period
        self.grating_angle_deg = Param(90.)  # grating orientation
        self.num_repetitions = Param(1)  # number of repetitions


    def get_stim_sequence(self):
        # Use six points to specify the velocity step to be interpolated:
        t_phase = np.array([
            0,
            self.t_pre,
            self.t_pre,
            self.t_pre + self.t_move,
            self.t_pre + self.t_move,
            2 * self.t_pre + self.t_move,
        ])
        t = []

        # create list with time points extended by length num.repetition
        for iteration in range(self.num_repetitions):
            t.extend(
                list(
                    t_phase + t_phase[-1] * iteration
                )
            )

        vel_phase = [0, 0, self.grating_vel, self.grating_vel, 0, 0]
        vel = []
        vel.extend(vel_phase * self.num_repetitions)
        print(f't: {t}')
        print(f'vel: {vel}')
        #col = [(255, 255, 255)] * len(t)

        df = pd.DataFrame(dict(t=t, vel_x=vel))
        stimulus_sequence = []

        stimulus_param = namedtuple(
            'StimulusParam',
            [
                'color',
                'grating_angle',
                'stimulus_color',
                'direction',
            ],
            # note: namedtuple's verbose kwarg was removed in Python 3.9
        )



        parameters = [
            stimulus_param(
                stimulus_color=Color.MILDGREY,
                direction=Direction.RIGHT,
                color=(40,) * 3,
                grating_angle=90
            ),
            stimulus_param(
                stimulus_color=Color.GREY,
                direction=Direction.LEFT,
                color=(100,) * 3,
                grating_angle=270
            ),
            stimulus_param(
                stimulus_color=Color.SUPERGREY,
                direction=Direction.LEFT,
                color=(155,) * 3,
                grating_angle=270
            ),
        ]
        for param in parameters:
            # copy the frame so each stimulus gets its own parameter values;
            # mutating a shared DataFrame would leave every stimulus with the
            # values of the last iteration
            df_stim = df.copy()
            df_stim['direction'] = float(param.direction)
            df_stim['stimulus_color'] = float(param.stimulus_color)

            stimulus_sequence.append(
                MovingGratingStimulusTest(
                    df_param=df_stim,
                    grating_col_1=param.color,
                    grating_angle=param.grating_angle * np.pi / 180,
                    grating_period=self.grating_period,
                ),
            )
        return stimulus_sequence


if __name__ == "__main__":
    # This is the line that actually opens stytra with the new protocol.
    st = Stytra(protocol=GratingsProtocol())

ModuleNotFoundError: No module named 'PyQt5'

Following the install instructions gives the error in the title. Perhaps add pyqt5 to setup.py install_requires?

Edit: just saw that PyQt5 is not listed as an explicit requirement because it should come with the Anaconda package. In any case, pip install pyqt5 worked for me, despite posts from 2017 suggesting that it does not.

Camera settings GUI

It would be nice to have a GUI for the camera settings that cannot be changed online (visually selectable ROI, rotation). Actually, rotation should be moved to the online settings; I don't think there is a good reason it has to be fixed at initialization.

Support units

There should be a way of specifying the units of the log files that get saved. This could be a per-column list of units saved in the experiment metadata JSON tree.
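A sketch of what such a per-column units list could look like in the metadata tree; the key names and column names here are illustrative only, not Stytra's actual schema:

```python
import json

# Hypothetical metadata fragment: one unit entry per log-file column,
# kept alongside the column names so the lists stay in sync.
metadata = {
    "tracking_log": {
        "columns": ["t", "tail_sum", "theta_00"],
        "units": ["s", "rad", "rad"],
    },
    "stimulus_log": {
        "columns": ["t", "vel_x"],
        "units": ["s", "mm/s"],
    },
}

# the fragment serializes directly into the experiment metadata JSON tree
serialized = json.dumps(metadata, indent=2)
```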

Running on cameras with low fps

I have a point grey camera with max 30fps sampling, and find it useful to test things at home with it. I've been trying to run a simple script to just display the camera output using Stytra, adapted from display_opencv_cam, called display_spinnaker_cam:

from stytra import Stytra
from stytra.stimulation.stimuli import Pause
from stytra.stimulation import Protocol

class Nostim(Protocol):
    name = "empty_protocol"
    stytra_config = dict(camera=dict(type="spinnaker"))
    def get_stim_sequence(self):
        return [Pause(duration=10)]  # protocol does not do anything

if __name__ == "__main__":
    s = Stytra(protocol=Nostim())

The Stytra GUI says that the Spinnaker API camera was successfully opened, but that the FPS is too high:

Error: Spinnaker: GenICam::OutOfRangeException= Value 150.000000 must be smaller than or equal 30.003332. : OutOfRangeException thrown in node 'AcquisitionFrameRate' while calling 'AcquisitionFrameRate.SetValue()' (file 'FloatT.h', line 85) [-2002]
File "C:\Users\Eric\Dropbox\Programming\stytra_dev\stytra\stytra\collectors\accumulators.py", line 316, in update_list
t, fps = self.queue.get(timeout=0.001)

I went into hardware/video/__init__.py and modified CameraControlParameters(), but still got the same error, though at a different point in the flow:

SpinnakerCamera.set() error: Spinnaker: GenICam::OutOfRangeException= Value 150.000000 must be smaller than or equal 30.003332. : OutOfRangeException thrown in node 'AcquisitionFrameRate' while calling 'AcquisitionFrameRate.SetValue()' (file 'FloatT.h', line 85) [-2002]
File "C:\Users\Eric\Dropbox\Programming\stytra_dev\stytra\stytra\gui\status_display.py", line 88, in refresh
self.addMessage(msg)

It seems there are a few places where it is trying to enforce a frame rate lower limit. I wonder if there is a simple workaround for testing with lower frame rates?
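As a stopgap for testing at home, the requested frame rate could be clamped to the camera's reported maximum before it is handed to the GenICam AcquisitionFrameRate node. A minimal sketch of that idea, not Stytra's actual camera code:

```python
def safe_framerate(requested_fps: float, camera_max_fps: float) -> float:
    """Clamp the requested frame rate to what the camera reports it can do,
    so AcquisitionFrameRate.SetValue() never receives an out-of-range value
    (e.g. 150.0 on a camera capped at ~30 fps)."""
    return min(requested_fps, camera_max_fps)
```

With Spinnaker cameras, the maximum could plausibly be read from the device before setting the rate, but where exactly to hook this into Stytra's camera classes is the open question above.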

stytra.examples.looming_exp crashes after ~3 seconds

I am able to get the GUI running, and when I click play the experiment starts, but the program crashes quickly. Sometimes immediately; other times I see a white flash on screen that disappears like a moving bar, and the program crashes a second later. I installed using pip install stytra on python-3.7.4 installed with Anaconda on CentOS 8 (gnome, wayland).

$ python -m stytra.examples.looming_exp
PyAv not installed, writing videos in formats other than H5 not possible.
Warning: Ignoring XDG_SESSION_TYPE=wayland on Gnome. Use QT_QPA_PLATFORM=wayland to run on Wayland anyway.
Traceback (most recent call last):
  File "/home/tyler/code/stytra/stytra/stimulation/__init__.py", line 191, in timestep
    if self.i_current_stimulus >= len(self.stimuli) - 1:
TypeError: '>=' not supported between instances of 'NoneType' and 'int'
Aborted (core dumped)
$ uname -r
4.18.0-80.11.2.el8_0.x86_64

edit: Same behavior with stytra-0.8.24 and latest master

stytra crashes on linux (segmentation fault)

Hi,

I manually installed stytra in a conda environment where I had previously installed all the required packages (including git, which for some reason was not installed). I am using Python 3.7 and conda 4.7.12.
stytra-0.8.26 was successfully installed with all the requirements satisfied.
When I'm running
python -m stytra.examples.looming_exp
I get a window opening and closing immediately. An error message shows up:
Segmentation fault (core dumped)

I also tried to run another function:
python -m stytra.offline.track_video
The window asking to select the file pops up, I can select it and click on Start Stytra.
But then again, a window tries to open up but is closed immediately and the same error message Segmentation fault appears.

Do you have any idea what's wrong?

Thanks for your answer! (and thanks a lot for this amazing project!)

Bug fix for saving "video_times" file

In the stytra/hardware/video/write.py file, line 83, the command should be pd.DataFrame(self.times, columns=["t"]) to save the video_times dataframe correctly. The current version does not have the square brackets, so the program does not save the file. I do not really know how to submit a pull request, but I hope this works too.
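The difference is easy to reproduce in isolation: pandas expects columns to be a collection, so the bare string fails, while the bracketed form builds the expected one-column frame. A standalone check, with a stand-in list replacing self.times:

```python
import pandas as pd

times = [0.01, 0.02, 0.03]  # stand-in for self.times

# columns must be list-like; a bare string raises in recent pandas versions
try:
    pd.DataFrame(times, columns="t")
    failed = False
except (TypeError, ValueError):
    failed = True

# the bracketed form builds the expected single-column dataframe
df = pd.DataFrame(times, columns=["t"])
```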

ImportError: Bad git executable

I believe I successfully installed Stytra on a Windows system (using manual installation) but am unable to open looming_exp to test my installation. The error message I receive is this:

**ImportError: Bad git executable.
The git executable must be specified in one of the following ways:

  • be included in your $PATH
  • be set via $GIT_PYTHON_GIT_EXECUTABLE
  • explicitly set via git.refresh()

All git commands will error until this is rectified.**

Another, possibly related, issue I had during installation is that the version of PyQt5 I installed seemed to be incompatible with Stytra, and I finally had to use an administrator terminal to install Stytra.

Please use layman's terms and provide command-line syntax if possible in your response as I am very much a beginner.

session_id flow in Experiment

The current flow of the session_id in the Experiment class has two parts: the session_id itself and the timestamp it contains. It is updated as follows:

  1. Start of Stytra: first timestamp is defined in the __init__ call, session_id is set to None.
  2. Start of experiment: session_id is updated to the first timestamp, the timestamp variable stays the same.
  3. Start of the protocol: nothing happens.
  4. End of the protocol: set_id is called, but session_id does not get updated because we still use the first timestamp. After this, the timestamp gets updated. So session_id reflects the first timestamp, but the timestamp itself is already the second timestamp.
  5. Start of the second protocol run: nothing happens.
  6. End of the second protocol run: set_id is called, it gets updated to the second timestamp. After this, the timestamp gets updated. So session_id reflects the second timestamp, but the timestamp itself is already the third timestamp.

So between 3 and 4, the first (and correct) timestamp is used. Between 5 and 6, the first (and incorrect) timestamp is used. And between a potential 7 and 8, the second (and incorrect) timestamp will be used.

The resulting effect (at least for me) follows from the dependency session_id -> filename_prefix -> filename_base: from the second protocol onward, the filename during a protocol run reflects the previous protocol, and only once the protocol has ended does it get updated. This usually poses no problem, since most files are written after that point. However, I don't think this is a logical flow for session_id, and it does cause problems, for example when writing the video file, which needs a file to write to from the onset of the protocol.

Since this code comes from the Experiment class I do not know how much code is reliant on the current behaviour. It would likely be good to test/check the code that makes use of this before pushing a fix.

Saving stimuli videos breaks on MacOS High Sierra

In the record_stimuli.py test, the MP4 video writing fails with the FFMPEG error 'Unknown encoder libx264'. The version of FFMPEG installed via pip or conda is built with the libx264 flag disabled, but installing FFMPEG from the conda-forge channel provides a build with all dependencies enabled.

Hangs on offline "Track Video"

Had immediate success with seeing live plots with offline tail tracking; thanks for the great interface! However, I'm seeing the program hang when I click "Track video", regardless of output format:

  • Tracking popup immediately jumps to 100%, and hangs around
  • The camera frame rate is still reported (~90)
  • The tracking frame rate does not change (stuck at 80.0)
  • looking at htop shows stytra using ~200% CPU and ffmpeg using ~200% CPU
  • once these processes finish I see python3.7 -s -c from multiprocessing.spawn import spawn_main; spawn_main(tracker_fd=25, pipe_handle=52) --multiprocessing-fork using ~17% for quite a while
  • the GUI freezes entirely and stops responding

Here's the shell output before I ^C it:

> python -m stytra.offline.track_video
QSettings::value: Empty key passed
QSettings::value: Empty key passed
/nix/store/4ffsiqql2lf9jlysdcljzlhz8igph3gs-python3-3.7.5-env/lib/python3.7/site-packages/numpy/lib/function_base.py:1519: RuntimeWarning: invalid value encountered in remainder
  ddmod = mod(dd + pi, 2*pi) - pi
/nix/store/4ffsiqql2lf9jlysdcljzlhz8igph3gs-python3-3.7.5-env/lib/python3.7/site-packages/numpy/lib/function_base.py:1520: RuntimeWarning: invalid value encountered in greater
  _nx.copyto(ddmod, pi, where=(ddmod == -pi) & (dd > 0))
/nix/store/4ffsiqql2lf9jlysdcljzlhz8igph3gs-python3-3.7.5-env/lib/python3.7/site-packages/numpy/lib/function_base.py:1522: RuntimeWarning: invalid value encountered in less
  _nx.copyto(ph_correct, 0, where=abs(dd) < discont)
/nix/store/4ffsiqql2lf9jlysdcljzlhz8igph3gs-python3-3.7.5-env/lib/python3.7/site-packages/numpy/lib/function_base.py:1519: RuntimeWarning: invalid value encountered in remainder
  ddmod = mod(dd + pi, 2*pi) - pi
/nix/store/4ffsiqql2lf9jlysdcljzlhz8igph3gs-python3-3.7.5-env/lib/python3.7/site-packages/numpy/lib/function_base.py:1520: RuntimeWarning: invalid value encountered in greater
  _nx.copyto(ddmod, pi, where=(ddmod == -pi) & (dd > 0))
/nix/store/4ffsiqql2lf9jlysdcljzlhz8igph3gs-python3-3.7.5-env/lib/python3.7/site-packages/numpy/lib/function_base.py:1522: RuntimeWarning: invalid value encountered in less
  _nx.copyto(ph_correct, 0, where=abs(dd) < discont)
^C  File "/nix/store/4ffsiqql2lf9jlysdcljzlhz8igph3gs-python3-3.7.5-env/lib/python3.7/site-packages/stytra/gui/container_windows.py", line 194, in closeEvent
    self.experiment.wrap_up()
  File "/nix/store/4ffsiqql2lf9jlysdcljzlhz8igph3gs-python3-3.7.5-env/lib/python3.7/site-packages/stytra/experiments/tracking_experiments.py", line 424, in wrap_up
    super().wrap_up(*args, **kwargs)
  File "/nix/store/4ffsiqql2lf9jlysdcljzlhz8igph3gs-python3-3.7.5-env/lib/python3.7/site-packages/stytra/experiments/tracking_experiments.py", line 137, in wrap_up
    self.camera.join()
  File "/nix/store/5dbidajmgrvskv7kj6bwrkza8szxkgar-python3-3.7.5/lib/python3.7/multiprocessing/process.py", line 140, in join
    res = self._popen.wait(timeout)
  File "/nix/store/5dbidajmgrvskv7kj6bwrkza8szxkgar-python3-3.7.5/lib/python3.7/multiprocessing/popen_fork.py", line 48, in wait
    return self.poll(os.WNOHANG if timeout == 0.0 else 0)
  File "/nix/store/5dbidajmgrvskv7kj6bwrkza8szxkgar-python3-3.7.5/lib/python3.7/multiprocessing/popen_fork.py", line 28, in poll
    pid, sts = os.waitpid(self.pid, flag)
<class 'KeyboardInterrupt'>: 
^CError in sys.excepthook:
Process camera:
Traceback (most recent call last):
  File "/nix/store/5dbidajmgrvskv7kj6bwrkza8szxkgar-python3-3.7.5/lib/python3.7/multiprocessing/popen_fork.py", line 28, in poll
    pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt

Original exception was:
Traceback (most recent call last):
  File "/nix/store/4ffsiqql2lf9jlysdcljzlhz8igph3gs-python3-3.7.5-env/lib/python3.7/site-packages/stytra/gui/container_windows.py", line 194, in closeEvent
    self.experiment.wrap_up()
  File "/nix/store/4ffsiqql2lf9jlysdcljzlhz8igph3gs-python3-3.7.5-env/lib/python3.7/site-packages/stytra/experiments/tracking_experiments.py", line 424, in wrap_up
    super().wrap_up(*args, **kwargs)
  File "/nix/store/4ffsiqql2lf9jlysdcljzlhz8igph3gs-python3-3.7.5-env/lib/python3.7/site-packages/stytra/experiments/tracking_experiments.py", line 137, in wrap_up
    self.camera.join()
  File "/nix/store/5dbidajmgrvskv7kj6bwrkza8szxkgar-python3-3.7.5/lib/python3.7/multiprocessing/process.py", line 140, in join
    res = self._popen.wait(timeout)
  File "/nix/store/5dbidajmgrvskv7kj6bwrkza8szxkgar-python3-3.7.5/lib/python3.7/multiprocessing/popen_fork.py", line 48, in wait
    return self.poll(os.WNOHANG if timeout == 0.0 else 0)
  File "/nix/store/5dbidajmgrvskv7kj6bwrkza8szxkgar-python3-3.7.5/lib/python3.7/multiprocessing/popen_fork.py", line 28, in poll
    pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt
Traceback (most recent call last):
  File "/nix/store/5dbidajmgrvskv7kj6bwrkza8szxkgar-python3-3.7.5/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/nix/store/4ffsiqql2lf9jlysdcljzlhz8igph3gs-python3-3.7.5-env/lib/python3.7/site-packages/stytra/hardware/video/__init__.py", line 351, in run
    time.sleep(extrat)
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/nix/store/5dbidajmgrvskv7kj6bwrkza8szxkgar-python3-3.7.5/lib/python3.7/multiprocessing/process.py", line 300, in _bootstrap
    util._exit_function()
  File "/nix/store/5dbidajmgrvskv7kj6bwrkza8szxkgar-python3-3.7.5/lib/python3.7/multiprocessing/util.py", line 337, in _exit_function
    _run_finalizers()
  File "/nix/store/5dbidajmgrvskv7kj6bwrkza8szxkgar-python3-3.7.5/lib/python3.7/multiprocessing/util.py", line 277, in _run_finalizers
    finalizer()
  File "/nix/store/5dbidajmgrvskv7kj6bwrkza8szxkgar-python3-3.7.5/lib/python3.7/multiprocessing/util.py", line 201, in __call__
    res = self._callback(*self._args, **self._kwargs)
  File "/nix/store/5dbidajmgrvskv7kj6bwrkza8szxkgar-python3-3.7.5/lib/python3.7/multiprocessing/queues.py", line 192, in _finalize_join
    thread.join()
  File "/nix/store/5dbidajmgrvskv7kj6bwrkza8szxkgar-python3-3.7.5/lib/python3.7/threading.py", line 1044, in join
    self._wait_for_tstate_lock()
  File "/nix/store/5dbidajmgrvskv7kj6bwrkza8szxkgar-python3-3.7.5/lib/python3.7/threading.py", line 1060, in _wait_for_tstate_lock
    elif lock.acquire(block, timeout):
KeyboardInterrupt
  File "/nix/store/4ffsiqql2lf9jlysdcljzlhz8igph3gs-python3-3.7.5-env/lib/python3.7/site-packages/pyqtgraph/__init__.py", line 312, in cleanup
    if isinstance(o, QtGui.QGraphicsItem) and isQObjectAlive(o) and o.scene() is None:
<class 'ReferenceError'>: weakly-referenced object no longer exists
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/nix/store/4ffsiqql2lf9jlysdcljzlhz8igph3gs-python3-3.7.5-env/lib/python3.7/site-packages/pyqtgraph/__init__.py", line 312, in cleanup
    if isinstance(o, QtGui.QGraphicsItem) and isQObjectAlive(o) and o.scene() is None:
ReferenceError: weakly-referenced object no longer exists
^CError in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/nix/store/5dbidajmgrvskv7kj6bwrkza8szxkgar-python3-3.7.5/lib/python3.7/multiprocessing/util.py", line 277, in _run_finalizers
    finalizer()
  File "/nix/store/5dbidajmgrvskv7kj6bwrkza8szxkgar-python3-3.7.5/lib/python3.7/multiprocessing/util.py", line 201, in __call__
    res = self._callback(*self._args, **self._kwargs)
  File "/nix/store/5dbidajmgrvskv7kj6bwrkza8szxkgar-python3-3.7.5/lib/python3.7/multiprocessing/queues.py", line 192, in _finalize_join
    thread.join()
  File "/nix/store/5dbidajmgrvskv7kj6bwrkza8szxkgar-python3-3.7.5/lib/python3.7/threading.py", line 1044, in join
    self._wait_for_tstate_lock()
  File "/nix/store/5dbidajmgrvskv7kj6bwrkza8szxkgar-python3-3.7.5/lib/python3.7/threading.py", line 1060, in _wait_for_tstate_lock
    elif lock.acquire(block, timeout):
KeyboardInterrupt

Edit: The hang is still accurate, but I was wrong: csv and hdf5 files are created. The csv file looks good, but in the hdf5 file, I cannot open any of the data arrays, likely as the file was not closed properly

Edit 2: seems like I can read the data with tables in python but not hdfview

Edit 3: My video is 346548 frames, but the hdf5 & csv files are only 346507 frames; I wonder if the final 41 frames are being dropped?

Second video in same GUI session overwrites first video

I am trying to use Stytra to record behavioral videos. At this point I am not yet implementing online tracking, but to be able to save the videos I set the tracking to "tail", because otherwise the videos are not saved.

I found out that, after starting the GUI, I can record one video, which seems to be saved correctly. However, presenting a second stimulus in the same GUI session results in the previous 'video' and 'video_times' files being overwritten with the new data (the names stay the same). Curiously, the log files and img file for the new experiment are created with the (correct) new name. Also, after this second stimulus, closing the GUI results in the message that Python is not responding, and the program has to be force-closed.

I assume that this problem arises from the write.py file in the hardware/video folder. In line 133, self.filename_base seems to not be updated with a new stimulus presentation.
line 133: filename = self.filename_base + "video." + self.extension

Maybe the configure function is only called when starting the GUI, when it should actually be called every time an experiment starts? Could you help me solve this problem?
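A toy model of the suspected caching problem (hypothetical names, not Stytra's actual classes): if the writer computes its output path only once at GUI startup, every later run reuses it; recomputing the path from the current filename_base whenever a recording starts would give each run its own file.

```python
class ToyVideoWriter:
    """Toy model of a video writer, for illustration only."""

    def __init__(self, extension: str = "mp4"):
        self.extension = extension
        self.filename_base = None  # updated by the experiment before each run

    def configure(self) -> str:
        # Rebuild the target path from the *current* filename_base every
        # time a recording starts, instead of caching it once at startup.
        return self.filename_base + "video." + self.extension

writer = ToyVideoWriter()
writer.filename_base = "run1_"
first = writer.configure()     # first recording
writer.filename_base = "run2_"
second = writer.configure()    # second recording gets a distinct name
```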

MovingGrating stimulus and colors

I have to compliment you on the great work you've done with this program. It is quite intuitive to install and use!

With my very scarce knowledge of programming, I am trying to generate a moving grating stimulus that features two different colors. In this regard I have two questions for you:

  • I cannot find a way to make the color choice an editable parameter, nor can I find the correct syntax in the documentation to do so. Is there a way, or should I go back to my script any time I want to change things?
  • The PaintGrating stimulus, on which the MovingGrating stimulus is based, features (at least code-wise) the possibility to change both stripe colors. However, when defining them with grating_col_1 and grating_col_2, the stimulus shows only one color, with black stripes where the second color should be. Is this because, with the movement, the software cannot actually show two different colors? Is there a way to do this?

I hope I have been clear enough with my questions. Again congrats for this very nice program!
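Independently of Stytra's QPainter-based drawing, the intended result can be sketched with NumPy: a frame in which stripes alternate between two arbitrary RGB colors, animated by shifting a phase offset each frame. The function name and stripe geometry are illustrative only, not Stytra's API.

```python
import numpy as np

def two_color_grating(height, width, period, col_1, col_2, phase=0):
    """Return an RGB frame (height x width x 3) with horizontal stripes
    alternating between col_1 and col_2 every `period` pixels.
    Incrementing `phase` on successive frames moves the grating."""
    # 0 for col_1 stripes, 1 for col_2 stripes, row by row
    rows = (np.arange(height) + phase) // period % 2
    frame = np.empty((height, width, 3), dtype=np.uint8)
    frame[rows == 0] = col_1  # broadcast the RGB triple over those rows
    frame[rows == 1] = col_2
    return frame

# Red/blue grating with 10-px stripes; no stripe should come out black.
frame = two_color_grating(40, 60, 10, (255, 0, 0), (0, 0, 255))
```

If the second color renders as black in Stytra, it may be worth checking whether grating_col_2 is actually being passed through to the drawing call, rather than a limitation of moving stimuli per se.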

Fix glitch for movie recording

Currently only a TrackingExperiment implements the option of saving the video, while this should be possible for any CameraExperiment (see also #57).

YOLO

Hello:-),

Thank you so much for the wonderful library!

Does Stytra use the YOLO deep-learning algorithm anywhere internally? If not, would there be interest in developing a module with YOLO?

Maria

Issue with launching stytra after installation

Hi,
I'm encountering an issue after installing stytra. I'm using Anaconda and ran "conda env create -f environment.yml" to install stytra; this step worked fine. However, when I try "conda activate stytra_env", it just says "Note: You may need to restart the kernel to use updated packages". I've restarted the kernel and the computer multiple times, but I'm still encountering this error. Does anyone have a suggestion on how to get stytra started?

I also tried running the test for the installation "python -m stytra.examples.looming_exp", but that only gives me a syntax error.

[screenshot attached: IMG_7274]

Tracking Configuration Param for Online Experiments

Hi,
I'm currently testing out Stytra with zebrafish larvae. I'm using an IR light as well - with that, I was wondering what the parameter bounds are to track the fish: specifically, Bg downsample, Bg dif threshold, Threshold eyes.

  1. Am I supposed to configure this with an IR light?
  2. Whenever I adjust these values, the background difference isn't substantial enough for the fish to appear fully white or to be tracked. I saw your previous answer about this and tried adjusting for different images, but I still can't seem to track the fish because I don't know the bounds for the parameters.
    Is there a video tutorial?

Looking forward to hearing from you,
Rhoshini

add Stytra to open-neuroscience.com

Hello!

We are reaching out because we would love to have your project listed on Open Neuroscience, and also share information about this project:

Open Neuroscience is a community-run project, where we are curating and highlighting open source projects related to neurosciences!

Briefly, we have a website where short descriptions of projects are listed, with links to the projects themselves and their authors, together with images and other links.

Once a new entry is made, we make a quick check for spam, and publish it.

Once published, we make people aware of the new entry via Twitter and a Facebook group.

To add information about their project, developers only need to fill out this form

In the form, people can add subfields and tags to their entries, so that projects are filterable and searchable on the website!

The reason why we have the form system is that it makes it open for everyone to contribute to the website and allows developers themselves to describe their projects!

Also, there are so many amazing projects coming out in Neurosciences that it would be impossible for us to keep track and log them all!

Please get in touch if you have any questions or would like to collaborate!

Tracking rate slower than camera acquisition above 80Hz

I am running a simple protocol to track fish, without showing any stimuli. Below 80 Hz, tracking and acquisition are in sync; when I set video capture above 80 Hz, the tracking rate stays at 80. Someone else in our lab uses Stytra with both in sync at 150 Hz.

My CPU seems to be running pretty hot during this: I have a Xeon Quad Core W-2102 (2.9 GHz). Am I hitting a known CPU-induced limit, or perhaps doing something silly in my code?

Code:

from stytra import Stytra
from stytra.stimulation.stimuli import Pause
from stytra.stimulation import Protocol


class FreeTrackingProtocol(Protocol):
    name = "fishtrack"
    stytra_config = dict(
        display=dict(min_framerate=50),
        tracking=dict(method="fish", embedded=False, estimator="position"),
        camera=dict(
            type="spinnaker",
            max_buffer_length=5,
            rotation=2,
            roi=[400, 400, 1300, 1300],
        ),
        min_framerate=50,
    )

    def __init__(self):
        super().__init__()

    def get_stim_sequence(self):
        return [Pause(duration=10)]  # protocol does not do anything

if __name__ == "__main__":
    s = Stytra(protocol=FreeTrackingProtocol())

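One way to check whether per-frame tracking cost alone explains the 80 Hz ceiling is to time the tracking function on a representative frame: the inverse of the mean per-frame time is an upper bound on the single-process tracking rate. This is a self-contained sketch; `dummy_track` is a hypothetical stand-in for the real fish-tracking function, and the ROI size matches the config above.

```python
import time
import numpy as np

def benchmark_tracking(track_fn, frame, n_iter=200):
    """Time `track_fn` on one frame; 1 / (mean per-frame time) is an
    upper bound on the achievable tracking rate on one core."""
    t0 = time.perf_counter()
    for _ in range(n_iter):
        track_fn(frame)
    per_frame = (time.perf_counter() - t0) / n_iter
    return 1.0 / per_frame  # maximum achievable frames per second

# Hypothetical stand-in for the real tracking function:
def dummy_track(frame):
    return frame.mean()

# Frame sized like the ROI in the config above (900 x 900 px).
frame = np.zeros((1300 - 400, 1300 - 400), dtype=np.uint8)
max_fps = benchmark_tracking(dummy_track, frame)
```

If the real tracking function benchmarks well below the camera rate on this CPU, the 80 Hz ceiling would be CPU-bound; if it benchmarks much faster, the bottleneck is more likely elsewhere (e.g. frame transfer or buffering).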
Red / monochromatic theme for UI

For microscope rooms where light of certain wavelengths is to be avoided, it would be good to have a monochromatic theme that can be switched on.

opencv installation tip

Hi,

This is just a suggestion to edit the main Stytra website and change the recommended instructions for installing opencv from the current "conda install opencv" to "pip install opencv-python". I tried the conda install version on both a Mac and a Windows machine and did not manage to get it to work, but pip worked quickly on both machines.

Thanks!

Error when loading a video and trying to track eyes

Hello:-)

Thank you so much for the wonderful software!

I installed stytra and was trying to open a movie for detecting fish eyes; however, I got an error message that I can't make sense of. Below are links to the error and a frame of the movie.

https://github.com/mariakesa/Neural-Data-Analysis/blob/master/ZebraFish/Stytra/image-movie-foreyestracking.png
https://github.com/mariakesa/Neural-Data-Analysis/blob/master/ZebraFish/Stytra/errorstytra.png

Thanks so much and any help would be appreciated!

Offline tracking crashes

Hello there, amazing job you did with the program!!
I'm trying to use the offline tracking feature (tail) and I'm facing an error. The GUI loads fine and so does the video to track, but no tracking happens, and after a while the console shows:

(stytra) acer@swift:~$
/home/acer/anaconda3/envs/stytra/lib/python3.7/site-packages/numpy/lib/function_base.py:1520: RuntimeWarning: invalid value encountered in greater
  _nx.copyto(ddmod, pi, where=(ddmod == -pi) & (dd > 0))
/home/acer/anaconda3/envs/stytra/lib/python3.7/site-packages/numpy/lib/function_base.py:1522: RuntimeWarning: invalid value encountered in less
  _nx.copyto(ph_correct, 0, where=abs(dd) < discont)

I'm on Ubuntu 16.04 and 19.04 (same error on both versions), with all conda packages up to date.

Thanks!

--EDIT--
I'm getting a new error in one of my installations.

QOpenGLShaderProgram: could not create shader program
Vertex shader for simpleShaderProg (MainVertexShader & PositionOnlyVertexShader) failed to compile
QOpenGLShader: could not create shader
Fragment shader for simpleShaderProg (MainFragmentShader & ShockingPinkSrcFragmentShader) failed to compile
Errors linking simple shader:
QOpenGLShaderProgram: could not create shader program
Vertex shader for blitShaderProg (MainWithTexCoordsVertexShader & UntransformedPositionVertexShader) failed to compile
QOpenGLShader: could not create shader
Fragment shader for blitShaderProg (MainFragmentShader & ImageSrcFragmentShader) failed to compile
Errors linking blit shader:
QOpenGLShaderProgram: could not create shader program
Warning: "" failed to compile!
/home/martin.carbotano/anaconda3/envs/stytra/lib/python3.7/site-packages/numpy/lib/function_base.py:1520: RuntimeWarning: invalid value encountered in greater
  _nx.copyto(ddmod, pi, where=(ddmod == -pi) & (dd > 0))
/home/martin.carbotano/anaconda3/envs/stytra/lib/python3.7/site-packages/numpy/lib/function_base.py:1522: RuntimeWarning: invalid value encountered in less
  _nx.copyto(ph_correct, 0, where=abs(dd) < discont)

--

Angular units for stimulus parameters

Many people work with visual field angles instead of linear units. It would be a very nice feature to offer an alternative calibration based on the distance from the monitor, so that stimuli can be specified in visual field angles.

environment.yml does not install Stytra

I tried installing using the environment.yml file, but it does not include stytra in the site-packages. When I added the following lines to the environment.yml, it installed:

  - pip:
    - stytra
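For context, the pip section sits inside the conda `dependencies:` list. A sketch of a complete environment.yml with the fix applied follows; the environment name, channel, and the non-stytra dependencies are illustrative, not the repository's actual file.

```yaml
# Illustrative environment.yml including Stytra via pip;
# only the pip block is taken from the fix above.
name: stytra_env
channels:
  - defaults
dependencies:
  - python=3.7
  - pip
  - pip:
      - stytra
```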

How to record stimuli movies?

I am sorry, this is probably a trivial question, but I cannot figure out how to use the record_stimuli.py code to save a video of a certain stimulus.

When I just run the record_stimuli.py file, it finishes without errors, but I cannot find any videos on my computer. Do I need to run it differently? Should I define an output directory somewhere?

Thank you in advance!

Recording and saving video data

I am trying to record and save the video data during stimulus presentation, in this case in the looming_exp.py example experiment.

After the line:

name = "looming_protocol"

I added this configuration:

stytra_config = dict(
    camera=dict(type="spinnaker"),
    recording=dict(extension="mp4"),
    dir_save=r"c:\Users\zuidinga\Data",
)

However, I do not see a saved video in the folder where the metadata and stimulus_log are saved. Another thing I found out is that 'capture an image' does save a png file to the correct folder, but it is completely black and provokes the warning message:

c:\Users\zuidinga\Miniconda3\envs\stytra_env\lib\site-packages\stytra\gui\camera_display.py:209: UserWarning: (path_to_image) is a low contrast image
  imsave(name, self.image_item.image)

It is not the case that the camera is not working at all; streaming the camera view in the Stytra GUI seems to work.

Custom experiment: Wrong overlay of fish plot mirrored in 2 axes

The fish plot overlayed on the camera GUI is displayed mirrored in 2 axes. The diagnosis windows show the fish is correctly recognized and the data is correctly saved. Maybe a problem with the CameraViewFish class interaction and custom experiments.

To replicate (stytra custom experiment, e.g.):
if __name__ == "__main__":
    # Here we do not use the Stytra constructor but we instantiate an experiment
    # and we start it in the script. Even though this is an internal Experiment
    # subtype, a user can define a new Experiment subclass and start it this way.
    app = QApplication([])
    app.setStyleSheet(qdarkstyle.load_stylesheet_pyqt5())
    protocol = FlashProtocol()
    exp = TrackingExperiment(
        protocol=protocol,
        app=app,
        display=dict(full_screen=True),
        tracking=dict(method="fish"),
        camera=dict(
            video_file=str(
                Path(r"C:\Users\portugueslab\python_code\stytra\stytra\examples\assets\fish_compressed.h5")
            )
        ),
    )
    exp.start_experiment()
    app.exec_()

Example image:
[screenshot attached: Capture]

Strange error when trying to run 'python -m stytra.offline.track_video'

When I try to run 'python -m stytra.offline.track_video' I get the following error. Any help would be appreciated!

objc[56402]: Class QMacAutoReleasePoolTracker is implemented in both /anaconda3/envs/zebrafish/lib/python3.6/site-packages/PyQt5/Qt/lib/QtCore.framework/Versions/5/QtCore (0x13b25d0f8) and /anaconda3/envs/zebrafish/lib/python3.6/site-packages/cv2/.dylibs/QtCore (0x143659700). One of the two will be used. Which one is undefined.
objc[56402]: Class QT_ROOT_LEVEL_POOL__THESE_OBJECTS_WILL_BE_RELEASED_WHEN_QAPP_GOES_OUT_OF_SCOPE is implemented in both /anaconda3/envs/zebrafish/lib/python3.6/site-packages/PyQt5/Qt/lib/QtCore.framework/Versions/5/QtCore (0x13b25d170) and /anaconda3/envs/zebrafish/lib/python3.6/site-packages/cv2/.dylibs/QtCore (0x143659778). One of the two will be used. Which one is undefined.
objc[56402]: Class KeyValueObserver is implemented in both /anaconda3/envs/zebrafish/lib/python3.6/site-packages/PyQt5/Qt/lib/QtCore.framework/Versions/5/QtCore (0x13b25d198) and /anaconda3/envs/zebrafish/lib/python3.6/site-packages/cv2/.dylibs/QtCore (0x1436597a0). One of the two will be used. Which one is undefined.
objc[56402]: Class RunLoopModeTracker is implemented in both /anaconda3/envs/zebrafish/lib/python3.6/site-packages/PyQt5/Qt/lib/QtCore.framework/Versions/5/QtCore (0x13b25d1e8) and /anaconda3/envs/zebrafish/lib/python3.6/site-packages/cv2/.dylibs/QtCore (0x1436597f0). One of the two will be used. Which one is undefined.
QObject::moveToThread: Current thread (0x7fade14860f0) is not the object's thread (0x7fade541a5a0).
Cannot move to target thread (0x7fade14860f0)

You might be loading two sets of Qt binaries into the same process. Check that all plugins are compiled against the right Qt binaries. Export DYLD_PRINT_LIBRARIES=1 and check that only one set of binaries are being loaded.
qt.qpa.plugin: Could not load the Qt platform plugin "cocoa" in "/anaconda3/envs/zebrafish/lib/python3.6/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: cocoa, minimal, offscreen, webgl.

[1] 56402 abort python -m stytra.offline.track_video

Thank you!

Maria

Video example

Hi,
is it possible to have a video example, in order to try Stytra and compare it with our video quality?
Thanks
