jfkominsky / pyhab
Looking time and stimulus presentation system for PsychoPy.
License: GNU General Public License v3.0
For the new EEG habituation mode, having a way to continuously combine off-time from consecutive trials would be useful.
Habituation with fixed trial length based on consecutive look-away time: e.g., habituation is met after three consecutive fixed-length trials, each containing 3s of consecutive look-away time.
For the threshold-based habituation criterion, add an option to end habituation if the criterion is not met by a certain trial.
For gaze-contingent trials, have a minimum trial duration that is separate from minimum on-time and all other factors that control trial duration: e.g., present the first X seconds of the trial regardless of whether the infant is looking, and only then let the gaze-contingent conditions kick in.
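Several of the requests above involve counting consecutive look-away time within fixed-length trials. Below is a minimal sketch of that logic, assuming gaze is coded as a boolean sample stream at a known sample rate; the function names are illustrative, not PyHab's actual API:

```python
def longest_consecutive_lookaway(gaze_samples, sample_rate=60):
    """Longest run of consecutive look-away samples, in seconds.
    gaze_samples: sequence of booleans, True = looking at the screen."""
    longest = current = 0
    for looking in gaze_samples:
        current = 0 if looking else current + 1
        longest = max(longest, current)
    return longest / sample_rate

def habituation_met(trials, threshold=3.0, n_consecutive=3, sample_rate=60):
    """True if the last n_consecutive fixed-length trials each contain at
    least `threshold` seconds of consecutive look-away time."""
    if len(trials) < n_consecutive:
        return False
    return all(longest_consecutive_lookaway(t, sample_rate) >= threshold
               for t in trials[-n_consecutive:])
```

In a real implementation the per-trial gaze record would come from the coder keypresses rather than a raw boolean stream, but the criterion check itself reduces to something like the above.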
I tried to run the demo, but "Fatal Python error: (pygame parachute) Segmentation Fault" kept popping up and it wouldn't run. I can't figure out why :(
For some reason the path updating seems to skip the location of files used in attention-getters that are not movie files.
Hi @jfkominsky
I am using PyHab to build a head-turn preference experiment, but it crashes when I try to open the stimulus settings window. This problem started when I upgraded to Big Sur; before that, it was fine.
It says: AttributeError: module 'pyglet.window' has no attribute 'get_platform'. I searched online but did not find anything useful.
File "/Applications/PyHab-0.8.2/PyHab/PyHabBuilder.py", line 1725, in stimSettingsDlg
defDisp = pyglet.window.get_platform().get_default_display()
AttributeError: module 'pyglet.window' has no attribute 'get_platform'
Could you please let me know how I can solve this issue?
Many thanks.
Best,
Zehra
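For reference, pyglet removed `window.get_platform()` in version 1.4, which is why the call in PyHabBuilder.stimSettingsDlg fails on newer installs; `pyglet.canvas.get_display()` is the replacement. A version-tolerant sketch follows (the module parameter exists only so the snippet can be exercised without pyglet installed; in PyHab you would pass the real `pyglet` module):

```python
def get_default_display(pyglet_module):
    """Version-tolerant lookup of the default display.
    pyglet < 1.4 used window.get_platform().get_default_display();
    pyglet >= 1.4 exposes canvas.get_display() instead."""
    window = getattr(pyglet_module, "window", None)
    if hasattr(window, "get_platform"):                    # pyglet < 1.4
        return window.get_platform().get_default_display()
    return pyglet_module.canvas.get_display()              # pyglet >= 1.4
```

Concretely, on pyglet 1.4 or later the line at PyHabBuilder.py:1725 would become `defDisp = pyglet.canvas.get_display()` (or a try/except covering both APIs).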
Hi Jonathan. First time trying this out. I was able to download Selenium, move the relevant drivers into bin, and link up my stimuli with slide numbers, but I keep getting stuck when I actually try to run the experiment.
OSError: [Errno 86] Bad CPU type in executable: 'chromedriver'
Traceback (most recent call last):
File "/Users/ShariLiu/Dropbox (MIT)/Research/Studies/_NIRS/Goals-Simple/goals-simple/goals-simpleLauncher.py", line 78, in <module>
run()
File "/Users/ShariLiu/Dropbox (MIT)/Research/Studies/_NIRS/Goals-Simple/goals-simple/goals-simpleLauncher.py", line 40, in run
experiment.run()
File "/Users/ShariLiu/Dropbox (MIT)/Research/Studies/_NIRS/Goals-Simple/goals-simple/PyHab/PyHabClass.py", line 2311, in run
self.SetupWindow()
File "/Users/ShariLiu/Dropbox (MIT)/Research/Studies/_NIRS/Goals-Simple/goals-simple/PyHab/PyHabClass.py", line 2415, in SetupWindow
self.nextArrow = self.browser.find_element_by_css_selector("button.navigate-right")
File "/Applications/PsychoPy.app/Contents/Resources/lib/python3.6/selenium/webdriver/remote/webdriver.py", line 598, in find_element_by_css_selector
return self.find_element(by=By.CSS_SELECTOR, value=css_selector)
File "/Applications/PsychoPy.app/Contents/Resources/lib/python3.6/selenium/webdriver/remote/webdriver.py", line 978, in find_element
'value': value})['value']
File "/Applications/PsychoPy.app/Contents/Resources/lib/python3.6/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/Applications/PsychoPy.app/Contents/Resources/lib/python3.6/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message:
Decided to use issues in case this is useful for others in the future. If you have any suggestions for things I should try, let me know!
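Errno 86 ("Bad CPU type in executable") on macOS typically means the chromedriver binary was built for a different CPU architecture than the host, e.g. a 32-bit driver on Catalina or later, or a mismatched build on Apple Silicon. A hedged sketch for diagnosing this is below; the helper names are my own, and `file` is the standard macOS/BSD utility:

```python
import platform
import subprocess

def binary_matches_host(file_output, host_arch=None):
    """True if the architectures macOS `file` reports for a binary include
    the host architecture (e.g. 'x86_64' or 'arm64').
    file_output: text like 'chromedriver: Mach-O 64-bit executable x86_64'."""
    host_arch = host_arch or platform.machine()
    return host_arch in file_output

def check_chromedriver(path="chromedriver"):
    """Run `file` on the driver and print a warning on mismatch (macOS only)."""
    out = subprocess.run(["file", path], capture_output=True, text=True).stdout
    if not binary_matches_host(out):
        print(f"Architecture mismatch: host is {platform.machine()}, "
              f"but `file` reports: {out.strip()}")
```

If there is a mismatch, downloading the chromedriver build that matches the Mac's architecture (and the installed Chrome version) is usually the fix.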
The online version is very out of date relative to the in-person version with respect to things like the different habituation modes, blocks, etc.
This will require a large-scale refactoring.
Attention-getters currently only play on the "center" screen. Need to add screen selection.
0.10.2
Just needs an extra set of checks in DoExperiment around line 1455-1465 to not do anything if redo is pressed before the first trial has started.
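A sketch of the kind of guard described, using hypothetical names rather than PyHab's actual variables:

```python
def redo_target(current_trial, any_trial_started):
    """Illustrative guard for the redo key in DoExperiment: a redo pressed
    before the first trial has started should be a no-op. Names here are
    assumptions, not PyHab's actual variables."""
    if not any_trial_started:
        return None       # redo pressed before the first trial: ignore it
    return current_trial  # otherwise, re-run the trial that just ran
```

The caller would simply skip the redo branch whenever this returns None.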
Currently attention-getter duration is set by file duration alone. Change to make it so there is a setting where the duration of an attention-getter can be configured in the builder.
Add image/audio pair to stimulus library in an experiment with only one trial type. If you then try to immediately add it to the trial type, it freezes while trying to pull up the list of trials. If you do anything else in between, like add a thing to the study flow or create a new trial type, it seems to be fine.
QTgui issue?
Hi @jfkominsky
We are building a looking-time experiment using PsychoPy 2021.2.3 and PyHab 0.9.3 on macOS Monterey 12.0.1.
We were able to build the experiment and run it using two monitors, but it would crash sometime between trials 8 and 15. Additionally, every time we tried to open stimulus settings, it crashed and gave us the error that has already been addressed on this GitHub: AttributeError: module 'pyglet.window' has no attribute 'get_platform'.
However, we downgraded pyglet on the computer (pip3 install pyglet==1.3.2), yet got the same error. We also attempted to run the same experiment launcher on PsychoPy version 2020.2.8, and still got the pyglet attribute error. We began changing the source code within PyHabBuilder according to what we found online about pyglet attributes, including importing packages and calling canvas.window instead. We also attempted changing the preferred window type from pyglet to pygame in user preferences, but this still raised an error upon opening stimulus settings. After all of these errors, we changed the code back to the default defDisp setup. We even contacted the computational team here at UCSD for help with this issue; they recommended multiple version changes, which we attempted as well with no success.
Are there any other suggestions you can think of? We believe it's due to incompatible versions of our downloaded PyHab, PsychoPy, Python/pyglet, and macOS.
If you try to insert a habituation trial after habituation has been met, while doing offline coding (i.e., no stimulus presentation), three bad things happen:
@sabrinapiccolo and I are currently using the latest online version of PyHab to set up a VOE experiment. I have found that if I have the 'automatically redo trial if min on-time criterion not met' box checked in the trial settings, then as soon as I indicate an off look, the control window disappears and I get the following console output. If this box is unchecked, the experiment runs fine. Dropbox link to project: https://www.dropbox.com/sh/w4u9aj21cy3ph49/AABHC_L3XA68EiHwJarZcI8-a?dl=0
2021-11-05 17:39:18.308 python[8692:274783] ApplePersistenceIgnoreState: Existing state will not be touched. New state will be written to /var/folders/18/nbn3vbdx2jv900brph6lq3180000gn/T/org.opensciencetools.psychopy.savedState
Traceback (most recent call last):
File "/Users/ShariLiu/Dropbox (MIT)/Research/Studies/_NIRS/Goals-Simple/github:goal-simple/goal-simpleLauncher.py", line 78, in <module>
run()
File "/Users/ShariLiu/Dropbox (MIT)/Research/Studies/_NIRS/Goals-Simple/github:goal-simple/goal-simpleLauncher.py", line 45, in run
run()
File "/Users/ShariLiu/Dropbox (MIT)/Research/Studies/_NIRS/Goals-Simple/github:goal-simple/goal-simpleLauncher.py", line 40, in run
experiment.run()
File "/Users/ShariLiu/Dropbox (MIT)/Research/Studies/_NIRS/Goals-Simple/github:goal-simple/PyHab/PyHabClass.py", line 2323, in run
self.SetupWindow()
File "/Users/ShariLiu/Dropbox (MIT)/Research/Studies/_NIRS/Goals-Simple/github:goal-simple/PyHab/PyHabClass.py", line 2467, in SetupWindow
self.doExperiment() # Get this show on the road!
File "/Users/ShariLiu/Dropbox (MIT)/Research/Studies/_NIRS/Goals-Simple/github:goal-simple/PyHab/PyHabClass.py", line 1223, in doExperiment
x = self.doTrial(trialNum, self.actualTrialOrder[trialNum - 1], disMovie) # the actual trial, returning one of four status values at the end
File "/Users/ShariLiu/Dropbox (MIT)/Research/Studies/_NIRS/Goals-Simple/github:goal-simple/PyHab/PyHabClass.py", line 1454, in doTrial
if localType in self.autoRedo and nowOff >= self.onTimeDeadline[localType] and not deadlineChecked:
KeyError: 'fam'
##### Experiment ended. #####
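One defensive rewrite of the failing condition would be to test for the key before indexing. This is a sketch of the idea, not PyHab's actual fix; the function parameters stand in for the corresponding attributes of PyHabClass:

```python
def should_flag_deadline(local_type, auto_redo, on_time_deadline, now_off,
                         deadline_checked):
    """Defensive version of the condition at PyHabClass.py line 1454 that
    raised KeyError: 'fam'. Only index on_time_deadline when the trial
    type actually has an entry in it."""
    return (local_type in auto_redo
            and local_type in on_time_deadline
            and now_off >= on_time_deadline[local_type]
            and not deadline_checked)
```

The underlying cause is presumably that the 'fam' trial type is listed in autoRedo but never gets an onTimeDeadline entry, so a fix could also populate that dictionary instead.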
HPP condition interface currently only supports a limited number of trials (14), and further modifications have to be made to the condition file directly. Paginate and make it suitable for an unbounded number of trials.
Hi! @sabrinapiccolo @shariliu and I were wondering if it's possible to add a feature to PyHab that would extract the trial onset / offset timestamps with respect to the start of the experiment (in addition to trialDuration and other data included in the PyHab output currently). We are trying to run our videos through an automated looking time pipeline, but we need to provide the trial onsets and offsets to the pipeline as calibration.
If this is possible, it would be a great help. Thanks!
Hello!
We are running an online study, and I am a little confused about the trialTiming.csv output. Of the four event conditions, "startAttnGetter", "endAttnGetter", "startTrial", and "endTrial", the event "endAttnGetter" seems to line up consistently with the start of the trial (in other words, the trial begins immediately after the attention-getter ends). That is fine, but what does "startTrial" mark, then? In most trials the difference is only about half a second, but in our first trial (the calibration trial) there is a stark 3-second gap between "endAttnGetter" and "startTrial". An example is shown below:
trialNum,trialType,event,time
1,calibration-short,startAttnGetter,0.02680308325216174
1,calibration-short,endAttnGetter,3.560075916815549
1,calibration-short,startTrial,6.960292249917984
1,calibration-short,endTrial,25.966494167223573
2,train,startAttnGetter,26.983967083040625
2,train,endAttnGetter,30.07727766688913
2,train,startTrial,30.693748833145946
2,train,endTrial,69.93557758303359
Thank you for the help. I'm still pretty new to PyHab, so any help is very much appreciated!
Sam
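One way to quantify the gap Sam describes is to compute startTrial minus endAttnGetter per trial directly from the trialTiming.csv rows (the data below are copied from the example above):

```python
import csv
import io

TRIAL_TIMING = """trialNum,trialType,event,time
1,calibration-short,startAttnGetter,0.02680308325216174
1,calibration-short,endAttnGetter,3.560075916815549
1,calibration-short,startTrial,6.960292249917984
1,calibration-short,endTrial,25.966494167223573
2,train,startAttnGetter,26.983967083040625
2,train,endAttnGetter,30.07727766688913
2,train,startTrial,30.693748833145946
2,train,endTrial,69.93557758303359
"""

def attn_to_trial_gaps(csv_text):
    """Return {trialNum: startTrial - endAttnGetter} for each trial that
    has both events in a trialTiming.csv-shaped text."""
    events = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        events.setdefault(row["trialNum"], {})[row["event"]] = float(row["time"])
    return {num: ev["startTrial"] - ev["endAttnGetter"]
            for num, ev in events.items()
            if "startTrial" in ev and "endAttnGetter" in ev}
```

Running this on the sample rows gives roughly 3.40 s for trial 1 and 0.62 s for trial 2, which matches the asymmetry described; whether that first-trial gap reflects stimulus loading or something else is the open question here.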