Introduction to PsychoPy#

Important prerequisites#
In order to get the best and most helpful experience out of this session, there are two important prerequisites everyone should have. If you have any problems concerning this, please just let one of the instructors know.
I - solid understanding of python#

II - working PsychoPy installation#
You should have downloaded the standalone version specific for your OS from https://www.psychopy.org/download.html (it should automatically suggest the fitting version via the big blue button) and subsequently installed it.
You should be able to see the PsychoPy application in your Applications/Programs folder. Click on it to open it!

Computational environments#
While we could set up and use a computational environment as discussed in the school prerequisites and RDM session, we will use the standalone version of PsychoPy for this session.

The reasons are outlined below, but in short: the manual installation tends to result in differences between operating systems concerning drivers, etc., and we unfortunately don’t have time to address these potential problems during the session. However, please note that the manual installation usually works perfectly fine and you can use either in your local setup.

Outline#
- Running experiments (using Python)
- Introduction to PsychoPy
  - basic overview
  - basic working principles
- PsychoPy
  - stimuli & trials
  - data output
  - a very simple experiment
- PsychoPy advanced
  - Eye-Tracking and neuroimaging
  - online experiments
- Outro/Q&A
Running experiments (using Python)#

Experiment software and builders#

One argument that is routinely brought up goes something like: “Clearly, open source can’t match the performance of proprietary software.” While performance is definitely an important aspect, claiming that open source software is generally worse isn’t very accurate, and also not very scientific. How about having a precise look at it and comparing different software packages and settings?
That’s exactly what The timing mega-study: comparing a range of experiment generators, both lab-based and online did:
- compared prominently used software across OS, local & online
- PsychoPy showed performance comparable to proprietary software
- prominent differences between OS: ubuntu > windows > macOS
- also prominent differences between browsers
- online versions less precise, independent of software, but PsychoPy overall showed great performance


Introduction to PsychoPy#
basic overview#

PsychoPy resources#
- PsychoPy website
- PsychoPy documentation
- PsychoPy discourse forum
RDM - Experiments#
Starting new experiments follows the same guidelines as starting new projects in general:
- create and store everything in a dedicated place on your machine
- use the (standalone version of) PsychoPy specific for your OS
- document everything, or at least as much as possible
- test and save things in very short intervals, basically after every change

PsychoPy components#
The standalone version of PsychoPy comes with three distinct windows and respective functionalities.

PsychoPy Experiment files#
As mentioned before, we will save and store everything in a dedicated place on our machines. In PsychoPy, experiments that are created/managed via the Builder are saved as .psyexp files, so let’s check this. After you opened PsychoPy, ie the Builder, you can either save it via File -> Save as or the little 💾 icon. In both cases you should select the code/experiment directory we created.

What’s one of the first things we always have to do when utilizing python?
That’s right: thinking about the modules/functions we need.
PsychoPy modules#
- core: various basic functions, including timing & experiment termination
- gui: creation/management of dialog boxes allowing user input
- event: handling of keyboard/mouse/other input from the user
- visual & sound: presentation of stimuli of various types (e.g. images, sounds, etc.)
- data: handling of condition parameters, response registration, trial order, etc.
- many more we unfortunately can’t check out due to time constraints
Question for y’all
Which one do we most likely need?
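As a hedged pointer (module names as in the PsychoPy documentation), importing the ones relevant for a session like this could look as follows:

```python
# modules we will touch throughout this session
from psychopy import core, gui, event, visual, sound, data
```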
A general experiment workflow#

basic working principles#
After this brief overview of PsychoPy and its parts, we will explore some of its basic working principles, situated along the above-outlined experiment workflow.
Dialog box(es) to get user input#
Many experiments start with a GUI dialog box that allows users/participants to input certain information, for example participant id, session, group, data storage path, etc.
We can implement this crucial aspect via the psychopy.gui module, with the respective settings being accessible via the ⚙️ icon in the top menu bar.
It will open the Experiment Properties window, through which we can set a variety of important experiment characteristics, including Experiment info, which will become our dialog box.
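Before we set this up via the Builder, here is a minimal hedged sketch of the same idea in pure python code (field names are example values; the compiled Builder script is more elaborate):

```python
from psychopy import core, gui

# fields (and defaults) we want to ask for at the start of the experiment
exp_info = {'participant': '', 'session': '001'}
dlg = gui.DlgFromDict(dictionary=exp_info, title='crtt_exp')

# end the experiment gracefully if "Cancel" was clicked
if not dlg.OK:
    core.quit()
```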

Let’s add a few fields to the dialog box. How about participant ID, session, age, handedness, etc.?

That’s actually all we need to test our GUI dialog box!
In order to do that, we need to run/execute our experiment. This is achieved via clicking the ▶️ icon in the top menu bar. This will run/execute the experiment via the python version installed in the standalone application.
To bring in some form of computing environment management, we have to set the Use PsychoPy version field in the Experiment Properties window -> please select 2023.2.3.

Testing the GUI dialog box#
If everything works/is set correctly, you should see a GUI dialog box appearing on your screen, asking for the information we indicated in our crtt_exp.py python script (chances are the layout on your end looks a bit different than mine, that’s no biggie).

After entering all requested information and clicking OK, the GUI dialog box should close, a frame rate measurement should start and no errors should appear.
If you click Cancel, the same thing should happen.
You can also check the logs via the Runner.

Wow, this actually works for everyone (hopefully)!
While this is kinda cool, it’s super detached from what we really wanted to do: exploring how we can run experiments using Python by building on top of all the things you already learned…
One way to still get something of that is to use PsychoPy’s builder magic…

Converting experiments to python code & scripts#
PsychoPy’s builder allows you to automatically convert experiments you build via the GUI to python code and scripts.
This is done via clicking the python icon in the top menu bar, which should result in a new python script called crtt_exp.py that opens in the editor window of the Coder and is saved to your code/experiment folder.

But what’s in this python code? Let’s check it out in more detail and look for things we defined/set, as well as classic python aspects.

The builder components and routines - adding instructions#
After having set this crucial aspect of our experiment, it’s time to actually start it.
Quite often, experiments start with several instruction messages that explain the experiment to the participant. Thus, we will add a few here as well, starting with a common “welcome” text message.
To display things in general, but also text, the psychopy.visual module is the way to go. Regarding this we, however, need to talk about the general outline/functional principles of the builder first…

For example, if we want to add some instructions, we need to create a respective Routine.
To add one, click on “Insert Routine” and select “new” (should you already have an empty Routine, please make sure to delete it via right-clicking on it and selecting “remove”).

Finally, we’re going to give this new routine an informative name, ie “welcome”.

Within this routine, click on Text in the components window, which should open a text properties window.
This allows us to set basically everything we want/need regarding the message we want to display; once everything is set, click on OK and the presentation of the instruction is added to the Routine.

But what does all of this mean? Let’s have a closer look at the inputs and settings!

Now we need to add a Component to the Routine that will allow us to continue with the experiment once the space bar was pressed.
This is achieved via the Keyboard Component, which will bring up the Keyboard Response Properties window. As with any other Components Properties window, we can set certain characteristics to attain a certain behavior.
Here, we set the Name of the Component, Start, Stop and that the Keyboard Response should end the Routine, i.e. allow us to continue to the next part of the experiment, and specifically only via the allowed key “space”.

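As a rough hedged sketch, the corresponding behavior in pure python code boils down to waiting for one allowed key (the compiled Builder script uses a more elaborate keyboard handler):

```python
from psychopy import event

# block until the space bar is pressed; all other keys are ignored
keys = event.waitKeys(keyList=['space'])
```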
With that, our first ever routine is done and looks like this:

Testing the welcome routine#
Let’s give it a try via ▶️!
If everything works/is set correctly, you should see the GUI dialog box and, after clicking OK, the text we defined as a welcome message should appear; after pressing the space bar, the Experiment should end without any errors.

We are also going to update our python script again, via the python icon, to check what changed.


windows#
We came across one of PsychoPy’s core working principles: we need a general experiment window, i.e. a place we can display/present something on.
You can define a variety of different windows based on different screens/monitors, which should, however, be adapted to the setup and experiment at hand (e.g., size, background color, etc.). You will get to the respective property window via the little screen icon next to the ⚙️ icon.

Basically, all experiments you will set up will require you to define a general experiment window, as without it, no visual stimuli (e.g. images, text, movies, etc.) can be displayed/presented or, as PsychoPy would say it: drawn.
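In python code, defining such a general experiment window could look like this minimal hedged sketch (size, color and units are example values):

```python
from psychopy import visual

# a general experiment window; everything will be drawn/presented in here
win = visual.Window(size=(1024, 768), color='grey', fullscr=False, units='height')
```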

Speaking of which: the next core working principle we are going to see and explore is the difference between drawing something and showing it.
draw & flip#
In PsychoPy (and many other comparable software packages), there’s a big difference between drawing and showing something.
We need to draw something on/in a window, but that alone won’t actually show it.
This is because PsychoPy internally uses “two screens”: one background or buffer screen, which is not seen (yet), and one front screen, which is (currently) seen.

When you draw something, it’s always going to be drawn on the background/buffer screen, thus “invisible”, and you need to flip it to the front screen for it to be “visible”.
Let’s see how that looks in python code:
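Here is a minimal hedged sketch, re-using the example window from above:

```python
from psychopy import core, visual

win = visual.Window(size=(1024, 768), color='grey', units='height')

# create a text stimulus and draw it: this only renders to the buffer screen
message = visual.TextStim(win, text='Welcome to the experiment!')
message.draw()

# nothing is visible yet; flipping swaps the buffer to the front screen
win.flip()

core.wait(2.0)  # keep the message visible for two seconds
win.close()
core.quit()
```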

Why does PsychoPy (and other comparable software) work like that?
- the idea/aim is always the same: increase performance and minimize delays (as addressed in the introduction)
- drawing something might take a long time, depending on the stimulus at hand, but flipping something already drawn from the buffer to the front screen is fast(er)
- this can ensure better and more precise timing
- it works comparably for images, sounds, movies, etc., where things are set/drawn/pre-loaded and presented exactly when needed

adding more instructions#
With these things set up, let’s add a few more messages to our experiment.
One will be presented right after the welcome message and explain very generally what will happen in the experiment.
Another one will be presented at the end of the experiment and display a general “that’s it, thanks for taking part” message.

The outline for creating, drawing and presenting these messages is identical to the one we just explored: we need to create respective Routines.
We’re going to start with some general instructions and information and add a new routine called instructions_general via Insert Routine -> (new).

After clicking OK, a white dot will appear, which indicates where the new routine should be added.
Please make sure to add it after the welcome screen by moving the dot there and then clicking on it.

After that, the new routine should appear in our experiment, together with a respective new routine window.

Task for y’all!
Now it’s time to add the actual general instructions. Please use the above-outlined approach and components to do the following:
- add this text: “In this task, you will make decisions as to which stimulus you have seen. There are three versions of the task. In the first, you have to decide which shape you have seen, in the second which image and in the third which sound you’ve heard. Before each task, you will get a set of specific instructions and a short practice period. Please press the space bar to continue.”
- add a component that advances the experiment, ie continues with the next screen, once participants pressed the space bar
If you get stuck, make sure to check the “welcome” routine again. You have 5 min and please let us know if you have questions/run into problems!
Answer
Step 1: Add a Text component displaying the instructions
- click on Text in the components window, set the stop to condition and remove any number in the respective value field
- add the instructions in the Text window, adding some line breaks for readability
- click OK

Step 2: Add a Keyboard response component that ends the routine
- click on Keyboard in the components window, name it and remove all but the ‘space’ key from the list of Allowed keys
- click OK

Your new general instruction routine should now look something like this.

Task for y’all!
We also have to add the general end screen. Please use the above-outlined approach and components to do the following:
- add a new routine called end_screen
- add this text: “You have reached the end of the experiment, thank you very much for participating. Please press the space bar to finish.”
- add a component that ends the experiment once participants pressed the space bar
If you get stuck, make sure to check the “welcome” or prior routine again. You have 5 min and please let us know if you have questions/run into problems!
Answer
Step 1: Add a new routine
- click on “Insert Routine” -> (new) and name it “end_screen”, placing it behind the instructions_general routine

Step 2: Add a Text component displaying the message
- click on Text in the components window, set the stop to condition and remove any number in the respective value field
- add the message in the Text window, adding some line breaks for readability
- click OK

Step 3: Add a Keyboard response component that ends the routine
- click on Keyboard in the components window, name it and remove all but the ‘space’ key from the list of Allowed keys
- click OK

Your new end screen routine should now look something like this.

Task for y’all!
We still need to update our respective python script. Please use the compile-to-python functionality to convert our experiment again and thus update the corresponding python script. After that, please scroll through the python script to find and briefly evaluate the newly added routines and components.
If you get stuck, make sure to check the prior sections again. You have 5 min and please let us know if you have questions/run into problems!
Answer
Step 1: Convert the experiment to python
- click on “Compile to Python script” and open/look at the coder window
Step 2: Check the newly added routines and components

Testing the updated experiment#
Let’s give it a try via ▶️!
If everything works/is set correctly, you should see the GUI dialog box and, after clicking OK, the text we defined as a welcome message should appear, followed by the general instruction message and finally the end message.
Having this rough frame of our experiment, it’s actually time to add the experiment itself: the “Choice Reaction Time Task”.

PsychoPy#
stimuli & trials#
Quick reminder: our experiment should collect responses from participants regarding their perception of various stimuli: shapes, images and sounds.
Specifically, they should indicate which shape/image/sound was presented in a given trial via a dedicated button press.
Thus, we need to add/implement three new aspects in our experiment: the presentation of stimuli, trials and responses.
We’re going to explain these concepts and their implementation based on the shape task.
adding further instructions - shape task#
At first, we need to add further instructions that are specific to the subsequent task: as mentioned above, before each task (shape, image, sound), a short explanation should provide further information.
Adding a respective new routine and components is simply done as before.
Here are the instructions for copy-paste:
In this task you will make a decision as to which shape you have seen.
Press C or click cross for a cross. Press V or click square for a square. Press B or click plus for a plus.
First, we will have a quick practice.
Push space bar or click / touch one of the buttons to begin.
Your instructions for the shape task routine should now look something like this.

To support the familiarization with the task, we will also include some visual reminders concerning which button should be pressed when.
stimuli - the Image component#
To implement this, we need a new component: Image. Once you clicked on it, a window like the following should appear.

Comparable to the ones we’ve seen before, it allows us to specify important aspects of the Component, here specifically the Image, ie a visual stimulus.
There’s however something new: the Image field, which allows us to set a path to the image we want to display.
The images and stimuli we want to use are actually part of the materials you downloaded; they can be found under school/materials/psychopy/stimuli.
At this point, we have to go back to RDM…
Following best practice RDM, where should the stimuli be stored?
They should be placed within the stimuli directory of our project/dataset directory.
choice_rtt/
code/
stimuli/
Please go ahead and move the shape stimuli to this directory.
choice_rtt/
code/
stimuli/
shapes/
black_square.jpg
blank_square.jpg
...
Now that we have the stimuli in place, we can set them in the Image Component. Importantly, we will create one image component per visual aid we want to display, ie three in total.
Starting with the first, we will select “response_square.jpg”.

Given that we want to display multiple visual reminders and thus stimuli, we have to arrange and place them respectively in our window on the screen.
To achieve this, we can use the “Layout” tab of the Image component window, which lets us set the Size and Position of the image at hand (among other things).
We are going to set (0.2, 0.1) for Size and (0, -0.25) for Position and leave the rest as is.

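As a hedged aside, the same visual reminder could be created directly in python code along these lines (the relative path is an assumption based on the stimuli layout introduced above):

```python
from psychopy import visual

win = visual.Window(units='height')

# one ImageStim per visual reminder, sized and positioned as in the Layout tab
reminder_square = visual.ImageStim(
    win,
    image='../../stimuli/shapes/response_square.jpg',  # assumed relative path
    size=(0.2, 0.1),
    pos=(0, -0.25),
)
```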
What python data type was used to set the size and position?
A tuple, denoting width and height concerning Size, and x and y coordinates concerning Position.
What do the other tabs entail?
Please have a quick look at the other tabs and briefly summarize what you can do with them.
While you’re at it, go back to the instruction message and change the Position under Layout to (0, 0.1) and the Letter Height under Formatting to 0.035.
Your updated shape task instruction routine should now look something like this.

Let’s give it a try via ▶️!
If everything works/is set correctly, you should see the visual reminder, ie the image, on the bottom of the window, and the text should be smaller.

Fantastic! We will now add two more visual reminders for the other stimuli and set the following component properties:
- Image component, stimulus: response_plus.jpg, Size: (0.2, 0.1), Position: (0.25, -0.25)
- Image component, stimulus: response_cross.jpg, Size: (0.2, 0.1), Position: (-0.25, -0.25)
The remaining settings (e.g. duration, etc.) should be identical to the first Image component.
Your updated shape task instruction routine should now look something like this.

When running the experiment via ▶️ now, the instructions for the shape task should look something like this:

One thing we haven’t done in a while is to update our python script via the Compile Python script option and check the respective changes. Let’s do that!
If you scroll through the python script, you should be able to see the newly added images, ie visual stimuli:

implementing trials - shape task#
A lot of experiments, ie most of them, include a few task-specific practice trials, and the experiment at hand is no exception to that, especially considering that we have multiple sub-tasks. Thus, let’s add practice trials, and while doing so, we are going to explore how to implement one core aspect of experiments in PsychoPy: trials.
(practice) trials#
Trials start like any other aspect of the experiment we’ve explored so far: by creating a new Routine. This one is going to be a bit special, though, as we want to utilize the same basic structure for practice and actual experiment trials, ie we will be using this routine at multiple points in our experiment.
The first point to do so is right after the instructions_shape routine.

Starting easy, we will add the visual cues as we did before. However, instead of creating them from scratch, we will make use of the copy-paste functionality of components. In more detail, we will go back to the instructions_shape routine, right-click on the component we want to copy, e.g. instruct_shape_square, select copy, go to the trial routine, right-click, select paste and give the component a new name, e.g. visual_reminder_square.

Our newly created trial routine should now look something like this:

Task for y’all
Please apply the same approach to the other visual reminders, renaming them to visual_reminder_plus/cross respectively.
This is our trial routine after having added all visual reminders:

Great, that worked like a charm. Within the Choice Reaction Time Task, the to-be-evaluated/recognized stimulus can appear at different positions on the screen. Here, we are going with 4 different positions and thus will add 4 tiles at the respective positions the stimuli can appear at/in. Concerning this, we will keep it simple, utilize a respective graphic called white_square.png and insert it as an Image at the desired positions. Thus, we need to add 4 new Image components to our trial routine.
The first one is going to be called “left_far_tile”, utilize the aforementioned white_square.png and be positioned at (-0.375, 0) with a size of (0.22, 0.22). Importantly, we will set its start to 0.5 so that it’s going to be presented shortly after the trial started.

Task for y’all
Please apply the same approach to the other tiles as follows:
- name: left_mid_tile, position: (-0.125, 0)
- name: right_mid_tile, position: (0.125, 0)
- name: right_far_tile, position: (0.375, 0)
Your updated trial routine should now look something like this.

Having set up the cues and tiles, we should start thinking about presenting the actual target, ie the stimulus participants have to evaluate and make decisions on. Obviously, we need another Image component, and if you have a closer look at the stimuli directory, you’ll see a couple of files named target_. Given that 1+1=2, we know what to do: create a new Image component and set one of the target_ files as the image that should be presented. As it should appear slightly after the cues and tiles are already there, we will set its Start time to e.g. 0.7, present it for 0.2s (Stop duration (s)) and choose one of the tile position coordinates to present it at:
- name: target_image
- Start time (s): 0.7
- Image: target_square.jpg
- Size: (0.2, 0.2)
- Position: (-0.125, 0)

Great job! Now, we only have to add a Keyboard Component to record the response of the participants. Comparable to the ones before, we set the Start time (s) to 0 and leave Stop duration (s) empty. In contrast to the Keyboard Components we set prior, we will change the Allowed keys to b, c, v with regard to the task and visual cues: b if a plus was presented, c if a cross and v if a square. This will also allow us to compute task performance, ie accuracies, later on.

That should do the trick. Let’s give it a try via ▶️.
You should now see the welcome message, followed by the general instructions, the task-specific instructions and then the first practice trial, which remains on screen until you’ve responded via one of the set keys. Finally, you should see the end screen and the experiment should end without errors.
Looking back at our experiment flow, we can see that we basically already got the core aspects implemented. Nice!

Wait: there’s a slight problem/inconvenience with this approach…but what?
While the approach worked nicely for a single stimulus, we usually want to show lots of them and most likely different ones. So, do we have to set a new component for every single stimulus?
What would be a great way of utilizing python control flow to address this?
That’s right, we could use a for-loop to iterate over a list of stimuli, ie setting a new Image in the component with every trial iteration.
for-loops#
Implementing for-loops in PsychoPy is a three-stage endeavor (at least within the builder):
1. preparing a file that entails the list of stimuli we want to iterate over
2. creating/adapting a component to work iteratively
3. inserting the for loop in the experiment flow
Stimuli lists#
While there are other ways to create stimuli lists (e.g. via coding, etc.), we will keep it brief and simple for this session and use a spreadsheet application to do so.
We only need to provide an informative column name, e.g. TargetImage, and then provide the paths to the images below, one per row.
An example is shown below: we will call the file shapes_targets.csv, define one column called TargetImage and then a list of stimuli, one per row. NB: you can use relative or absolute paths, just make sure that the path(s) is/are correct. Here, we used relative paths, pointing to the choice_rtt/stimuli/shapes directory relative to where the stimulus list is/will be saved, ie choice_rtt/code/experiment.

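For reference, the contents of shapes_targets.csv could look roughly like this (a sketch with assumed rows; the exact file names and repetitions depend on the stimuli you use):

```
TargetImage
../../stimuli/shapes/target_square.jpg
../../stimuli/shapes/target_cross.jpg
../../stimuli/shapes/target_plus.jpg
../../stimuli/shapes/target_square.jpg
../../stimuli/shapes/target_cross.jpg
../../stimuli/shapes/target_plus.jpg
```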
NB: If you don’t want to create this file, we also included it in the materials you downloaded, ie under psychopy/choice_rtt/code/experiment. Just make sure to move it to the respective directory you saved the PsychoPy experiment in.
Using components iteratively#
With the stimuli list in place, we can continue with adapting our trial component to work iteratively.
Luckily, this is relatively straightforward, as we “only” need to change the Image parameter of the component concerning two aspects:
- instead of providing a path to one single stimulus, we will enter the name of the column that denotes the list of stimuli in our shapes_targets.csv, ie TargetImage, and add a $ in the beginning to indicate that the paths and thus stimuli provided in a given row should be used when iterating
- instead of setting the presentation to constant, we will change it to set every repeat, ie indicating that we want to update the image with every iteration of the for-loop

This approach is generally how creating/adapting components that work iteratively is done, e.g. also for other types of stimuli, etc.
Inserting a for loop#
The last step entails adding the actual for loop. However, instead of adding it directly to the/a component, for loops are added “around” whatever is supposed to be “looped”, ie iteratively updated. In our example, the for loop should be added around the trial routine.
Here’s how this is done:
- click on “Insert Loop” in the Flow window
- select the start position of the for loop by placing the dot before the routine that should be looped over
- select the end position of the for loop by placing the dot after the routine that should be looped over

The for loop properties window should appear, within which we can set the “behavior” of the for loop:
- we will set Name to PracticeLoop
- we will set loopType to random, indicating that the stimuli list should be iterated over in a random fashion
- we will set nReps to 1, indicating that the stimuli list should be iterated over once
- we will set the Conditions file to be our shapes_targets.csv
If we did everything right, it should already state how many conditions and parameters are in our stimuli list. Here, it should be 6 conditions, as we have 6 rows, ie stimuli, and 1 parameter, as we have one column, ie TargetImage.

After clicking OK, the for loop should be added to our experiment flow, enclosing the trial routine. This should look something like this.
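Under the hood, the compiled script handles such a loop via PsychoPy’s data module; here is a rough hedged sketch of the equivalent python code:

```python
from psychopy import data

# read the conditions file and set up a randomized loop with one repetition
practice_loop = data.TrialHandler(
    trialList=data.importConditions('shapes_targets.csv'),
    nReps=1,
    method='random',
    name='PracticeLoop',
)

# each iteration yields one row of the conditions file
for this_trial in practice_loop:
    target_path = this_trial['TargetImage']
    # ... update the Image component with target_path and run the routine ...
```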

Before we are going to try it…
What would be the expected behavior of our experiment?
- We see the welcome screen.
- We see the general instructions.
- We see the task-specific instructions.
- We get 6 practice trials, across which the target is changed each iteration.
This seems to have worked out great. However…
What are the two other problems with the way we are presenting stimuli?
- We set a fixed onset, i.e., the target will always appear at the same time within the trial.
- We set a fixed position, i.e., the target will always appear at the same location within the window/on the screen.
Both aspects are not ideal concerning fundamentals of experimentation (e.g. regarding fatigue and expectation, etc.) and are usually addressed via introducing a jitter, so that the onset and position (in our example) are not constant but change between trials.
adding python code via Custom components#
While we could theoretically add more columns to our stimuli list file, ie jitter_onset and jitter_position, we will use this opportunity to explore another great feature of PsychoPy: adding python code via Custom components.
We already know that we can use the Builder to Compile a corresponding python script, but we can also go the other way around and address some of the Builder’s limits (or just be faster). The Custom component allows us to add python code that directly interacts with our routines and components. Here, we will use it to introduce jitters to the onset and position of the targets.
As with other components, we start by selecting it from the Components window, specifically Code.

This should open up the Code properties window, within which we set some properties and also enter our python code.
Initially, we are going to name it “jitterOnsetPosition” and select Begin Routine to indicate that the code should be run at the beginning of our routine, ie with each iteration of the for loop.

Now to the actual python code. What we need is:
- a list of possible onsets called list_onsets
- a list of possible positions called list_positions
- picking one randomly for a given trial and assigning them to variables called onset_trial and position_trial
How would you implement this in python?
There are of course many different ways to do it, but here’s one.
```python
# list of possible onsets for the target
list_onsets = [1, 1.2, 1.4, 1.6, 1.8]
# randomize these onsets
shuffle(list_onsets)
# pick the first value from the list
onset_trial = list_onsets[0]

# list of possible positions for the target
list_positions = [-0.375, -0.125, 0.125, 0.375]
# randomize these positions
shuffle(list_positions)
# pick the first value from the list
position_trial = list_positions[0]
```
What other options can you think of?
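One alternative hedged sketch would be to draw values directly via numpy’s choice instead of shuffling full lists (numpy is typically available in Builder Code components; otherwise import it):

```python
from numpy.random import choice

# draw one onset and one position directly for this trial
onset_trial = float(choice([1, 1.2, 1.4, 1.6, 1.8]))
position_trial = float(choice([-0.375, -0.125, 0.125, 0.375]))
```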
With our python code ready, we simply copy-paste/write it in the left column of the Custom Code window.

After clicking OK, the component should appear in our routine.

So far, so good. However, to actually change the onset and position of the target at each iteration, we have to set this in the respective Image component properties, ie Start time (s) and Position [x,y].
In more detail, we have to replace the fixed values with the variables we’re assigning in our python code:
- Start time (s): onset_trial
- Position [x,y]: (position_trial, 0), set every repeat

As mentioned before, the python code and Builder directly interact with one another and thus are aware of the respective variables. In other words, at the beginning of every trial (routine) iteration, the python code will be run, generating the variables onset_trial and position_trial, which are then used by the Image component to set the onset and position of the target, now in a jittered manner (based on the shuffle() in our python code).
One thing we need to do, however, is to move the Custom Code component to the top of the components, in order to ensure that it’s run first, ie generating the onset and position so that they can be used in the Image component.
We can do that via right-clicking on the Custom Code component and then selecting move to top.


When running the experiment via ▶️ now, the onset and position of the targets should change every trial.
providing participant feedback#
Another important aspect of practice trials is to provide participants with feedback concerning their performance, to ensure that they understood the task and perform respectively. Here, we are going to provide feedback regarding response accuracy and reaction time.
In order to do this in PsychoPy (at least in our experiment), we need to do the following:
1. add functionality to compute accuracy and reaction times on the fly
2. add a new routine that presents the computed feedback
Starting with 1., we initially need to adapt our Custom Code component jitterOnsetPosition and also indicate which response, ie button press, is correct for a given image, so that we can utilize this information later on.
Here’s the python code we need to add:

```python
# TargetImage is the variable in the Image field of the target_image component,
# ie the path of the target image shown in the current trial
if TargetImage == '../../stimuli/shapes/target_square.jpg':
    # set the key press that will be the correct answer
    corrAns = 'v'
elif TargetImage == '../../stimuli/shapes/target_cross.jpg':
    corrAns = 'c'
elif TargetImage == '../../stimuli/shapes/target_plus.jpg':
    corrAns = 'b'
```
Thus, our Custom Code Component should now look like this:

Given that we now have indicated what the correct response for a given stimulus would be, we can utilize this information within the keyboard_response component.
Under the Data tab we can actually set what the correct response, or here Correct answer, would be and whether it should be stored.
Here, we will set $corrAns and thus forward the information set in the Custom Code Component.
Additionally, we have to deselect the Save onset/offset times and Sync timing with screen fields in order to compute the respective reaction time.
Last but not least, we should change the Allowed keys field to set every repeat, as we’re iterating over different stimuli.

This basically already assesses whether a response was correct, by comparing the response to the set correct response of the respective stimulus. However, we still need to utilize this information and will do so by adding another Custom Code Component.
Within it, we will programmatically access the keyboard_response Component and its content. Let’s call it compRTandAcc and copy-paste/write the following python code in the End Routine tab, to indicate that the code should be run at the end of the routine, ie each trial iteration.
```python
# this code records the reaction time and accuracy of the trial
thisRoutineDuration = t  # how long this trial lasted
# keyboard_response.rt is the time at which a key was pressed
# thisRecRT - how long it took participants to respond after onset of target_image
# thisAcc - whether or not the response was correct

# compute RT based on the onset of the target
thisRecRT = keyboard_response.rt - onset_trial
# check if the response was correct
if keyboard_response.corr == 1:
    # if it was correct, assign the 'correct' value
    thisAcc = 'correct'
else:
    # if not, assign the 'incorrect' value
    thisAcc = 'incorrect'
# record the actual response time of each trial
thisExp.addData('trialRespTimes', thisRecRT)
```

As you can see, we directly access the keyboard_response Component, specifically keyboard_response.rt and keyboard_response.corr. While the latter entails the comparison between the response provided by the participant and the response we set as correct (ie if the correct key was pressed a 1 is assigned and if not a 0), the former entails the reaction time in terms of the first key press within a given trial. Thus, we have to subtract the onset_trial value from it to get the actual reaction time from target presentation to key press (Reminder: onset_trial is the variable assigned by the jitterOnsetPosition Custom Code Component.).
After clicking “OK”, the new Component should appear in the Routines window:

That concludes 1., so let’s continue with 2.: adding a new routine that presents the computed feedback.
At first, you’ve guessed it, we need to add a new routine via “Insert Routine”; we’re going to name it “practice_feedback” and place it behind the trial routine, but still within the for loop, as we want to present feedback after every practice trial.


As we want to present the feedback visually, ie via text, we will add a new Text Component and call it feedback_text. As with the other Text Components, we will set Start time (s) to 0.0 and leave Stop duration (s) empty.
The new functionality comes now: we have to once more interact with the Custom Code Component. This time, we need information from compRTandAcc, specifically the reaction time, ie thisRecRT, and the accuracy, ie thisAcc.
We are going to use these variables within the Text field of the Component like so:
$"The last recorded RT was: " + str(round(thisRecRT, 3)) + " \nThe response was: " + thisAcc + " \n\nPress the space bar to continue."
In more detail, we’re making use of python strings and string formatting, via directly adding the variables to the string, ie text, that should be presented.
The leading $, in combination with setting the Text to set every repeat, indicates that the variables and thus the string should be updated at each iteration, ie practice trial.
What are the \n denoting?
They are used to denote a line break.
The complete Text Component should look like this:

We will also add a Keyboard Component, calling it keyboard_response_feedback and setting the Allowed keys to 'space', in order to allow participants to advance to the next (practice) trial.
And that’s our routine to present the feedback computed by the compRTandAcc Custom Code Component.

If you now run the experiment again, you should be presented with a feedback screen after each practice trial, displaying your reaction time and whether your response was correct.

To get the full experience, please make sure to go through the entire experiment once (we will also need the data in a few minutes)!
Great job everyone!

Now that we have collected our first set of responses, we can have a look at another core aspect of PsychoPy (and comparable software): storing and handling acquired data.
data output#
One thing we haven’t checked out yet is what kind of output, ie data, is actually generated by PsychoPy and, specifically, by our experiment. Let’s start this topic with two quick questions.
Here’s the good news: PsychoPy is once again highly customizable and can provide many different types of output/data across many levels. As we didn’t set anything specific yet, we should start with having a look at the default output we get by running our (or any other) experiment.
default outputs#
By default, you should have a directory called data within the experiment directory, within which all data acquired by running the experiment is stored.
choice_rtt/
    code/
        experiment/
            data/
                01_crtt_exp_2024-02-02.csv
                01_crtt_exp_2024-02-02.log
                01_crtt_exp_2024-02-02.psydat
As you can see, the file names should be a combination of the participant ID and the date (+ the time). Furthermore, there should be 3 different kinds of output/data files: .csv, .log and .psydat, each containing information about the experiment run at a different level of detail. We will have a quick look at them now.
01_crtt_exp_2024-02-02.csv
The .csv file contains information, i.e. data, in a summarized way for each routine and the trials therein, which is (kinda) intelligible and easy to grasp. Usually, this is the file you want to use/work with wrt analyses, etc.
While we definitely get a lot of data, it’s provided in a way we can make sense of and easily utilize in subsequent steps: we get data concerning the presented routine (e.g. start/stop), trials (e.g. start/stop), stimuli (e.g. file, position in loop) and components (e.g. text, stimuli, responses), and we get the data entered in the GUI dialog box within the last columns. We also get the reaction time and accuracy we computed on the fly. Basically, everything we need.
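Since it’s a plain .csv file, inspecting it programmatically is straightforward; a small hedged sketch using pandas (file name as generated above; trialRespTimes is the column we added via thisExp.addData, keyboard_response.corr the stored accuracy):

```python
import pandas as pd

# load the summary output of one experiment run
df = pd.read_csv('data/01_crtt_exp_2024-02-02.csv')

# mean reaction time and accuracy across trials
print(df['trialRespTimes'].mean())
print(df['keyboard_response.corr'].mean())
```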
01_crtt_exp_2024-02-02.log
The .log file contains information, i.e. data, in a detailed way for each and every frame, which is (kinda) intelligible but rather hard to grasp. Usually, these are the files you want to use/work with wrt precise/detailed quality control and testing, etc., but not wrt analyses.

Here, we get each frame (left column), what “event type” happened during it (middle column) and what happened within the “event type” (right column).
01_crtt_exp_2024-02-02.psydat
The .psydat file is comparable to the .log file, as it contains information, i.e. data, in a detailed way for each and every frame. In order to view and interact with the data, you need to use PsychoPy’s python module.
At first, you need to load the data:
```python
from psychopy.misc import fromFile

# set the psydat file
psydatFile = "/Users/peerherholz/Desktop/choice_rtt/code/experiment/data/01_crtt_exp_2024-02-01_09h33.45.473.psydat"
# load the psydat file
psydatFile_load = fromFile(psydatFile)
```
Which you can then view:
print(dir(psydatFile_load))
['__class__', '__del__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_getAllParamNames', '_getExtraInfo', '_getLoopInfo', '_guessPriority', '_paramNamesSoFar', '_status', 'abort', 'addAnnotation', 'addData', 'addLoop', 'appendFiles', 'autoLog', 'close', 'columnPriority', 'currentLoop', 'dataFileName', 'dataNames', 'entries', 'extraInfo', 'getAllEntries', 'getJSON', 'getPriority', 'loopEnded', 'loops', 'loopsUnfinished', 'name', 'nextEntry', 'originPath', 'pause', 'resume', 'runtimeInfo', 'saveAsPickle', 'saveAsWideText', 'savePickle', 'saveWideText', 'setPriority', 'sortColumns', 'status', 'stop', 'thisEntry', 'timestampOnFlip', 'version']
or access/interact with:
```python
trialHandler = psydatFile_load.loops[0]
trialHandlerParams = dir(psydatFile_load.loops[0])
print(trialHandlerParams)
print(trialHandler.data)
print(trialHandler.data['ran'])
print(trialHandler.data['order'])
print(trialHandler.data['keyboard_response_feedback.rt'])
```
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__next__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_createOutputArray', '_createOutputArrayData', '_createSequence', '_exp', '_makeIndices', '_terminate', 'addData', 'autoLog', 'data', 'extraInfo', 'finished', 'getCurrentTrial', 'getEarlierTrial', 'getExp', 'getFutureTrial', 'getOriginPathAndFile', 'method', 'nRemaining', 'nReps', 'nTotal', 'name', 'next', 'origin', 'originPath', 'printAsText', 'saveAsExcel', 'saveAsJson', 'saveAsPickle', 'saveAsText', 'saveAsWideText', 'seed', 'sequenceIndices', 'setExp', 'thisIndex', 'thisN', 'thisRepN', 'thisTrial', 'thisTrialN', 'trialList']
{'ran': masked_array(
data=[[1.0],
[1.0],
[1.0],
[1.0],
[1.0],
[1.0]],
mask=[[False],
[False],
[False],
[False],
[False],
[False]],
fill_value=1e+20,
dtype=float32), 'order': masked_array(
data=[[1.0],
[5.0],
[4.0],
[3.0],
[0.0],
[2.0]],
mask=[[False],
[False],
[False],
[False],
[False],
[False]],
fill_value=1e+20,
dtype=float32), 'keyboard_response.keys': array([['b'],
['c'],
['v'],
['c'],
['b'],
['c']], dtype=object), 'keyboard_response.corr': masked_array(
data=[[1.0],
[1.0],
[1.0],
[1.0],
[1.0],
[0.0]],
mask=[[False],
[False],
[False],
[False],
[False],
[False]],
fill_value=1e+20,
dtype=float32), 'keyboard_response.rt': masked_array(
data=[[2.702897787094116],
[2.077255964279175],
[2.052070140838623],
[2.1845827102661133],
[2.8191428184509277],
[2.8643736839294434]],
mask=[[False],
[False],
[False],
[False],
[False],
[False]],
fill_value=1e+20,
dtype=float32), 'keyboard_response.duration': array([[None],
[None],
[None],
[None],
[None],
[None]], dtype=object), 'keyboard_response_feedback.keys': array([['space'],
['space'],
['space'],
['space'],
['space'],
['space']], dtype=object), 'keyboard_response_feedback.rt': masked_array(
data=[[1.059588074684143],
[0.6677191853523254],
[0.7012380361557007],
[0.6207000017166138],
[2.571624517440796],
[1.0015552043914795]],
mask=[[False],
[False],
[False],
[False],
[False],
[False]],
fill_value=1e+20,
dtype=float32), 'keyboard_response_feedback.duration': array([[None],
[None],
[None],
[None],
[None],
[None]], dtype=object)}
[[1.0]
[1.0]
[1.0]
[1.0]
[1.0]
[1.0]]
[[1.0]
[5.0]
[4.0]
[3.0]
[0.0]
[2.0]]
[[1.059588074684143]
[0.6677191853523254]
[0.7012380361557007]
[0.6207000017166138]
[2.571624517440796]
[1.0015552043914795]]
Which one you use further is definitely up to you, but starting with the .csv is definitely a good default. While these three files are already pretty cool and useful, PsychoPy actually has quite a few more options concerning data output, which we will explore next.
Output data properties#
One way to change the output data provided by PsychoPy, in terms of formatting, level of detail, etc., is to adjust the settings in the Data tab of the experiment Properties window, which is accessible via the ⚙️ icon in the top menu bar.

Based on the settings we can see here, the generation of the output data files we checked before makes more sense:
- the Data filename is created by using experiment variables (participant ID, experiment name and experiment date/time) and string formatting
- the different output files are generated by Save log file, Save csv file (summaries) and Save psydat file
As you can see, we could actually set more properties, e.g. the Data file delimiter, changing the column arrangement, change the Logging level and save further output files, ie Save Excel file, Save csv file (trial-by-trial) and Save hdf5 file.
We can also add basically any data obtained via Custom Code Components, as seen before:

```python
# record the actual response times of each trial
thisExp.addData('trialRespTimes', thisRecRT)
```
For now, we will keep things as they are, with one exception: the Data filename.
Changing data output files#
Going back to the RDM session, and specifically Project and Data Organization, we learned that utilizing a dedicated directory structure and file identifier will go a long way concerning FAIR-ness.
Obviously, we want to apply these principles to our experiment as well, and therefore we are going to change the Data filename in two aspects:
1. we will add the session identifier to the file name string
2. we will add a path variable to the file name so that we can choose a dedicated directory to save the data output files in
The first aspect is comparably easy and straightforward, as we already have the session identifier through our GUI dialog box and only need to add it to the file name string.
How would we do this?
We simply need to do the following:

```python
u'data/%s_%s_%s_%s' % (expInfo['participant'], expInfo['session'], expName, expInfo['date'])
```

where expInfo['session'] is the value of the session field in our GUI dialog box.
The second aspect requires slightly more work, as we so far didn’t include an option to specify an output path. However, this is easily added to our GUI dialog box, via clicking the + icon to add a field, which we will name “output-path”.

Following the same approach as above…
How would you add this to the file name?
We need to exchange the data string with the value of the output-path variable obtained from the GUI dialog box via string formatting:

```python
u'%s/%s_%s_%s_%s' % (expInfo['output-path'], expInfo['participant'], expInfo['session'], expName, expInfo['date'])
```
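As a side note, the same file name could also be built with a modern f-string; a small hedged sketch equivalent to the %-formatting above:

```python
filename = f"{expInfo['output-path']}/{expInfo['participant']}_{expInfo['session']}_{expName}_{expInfo['date']}"
```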
Fantastic, we’re almost there. One last thing we should think about and incorporate is the directory structure. As learned in the Project and Data Organization part of the RDM session, we should keep different versions of our data strictly separate to avoid confusion, data loss and provenance problems. Specifically, this refers to an example directory structure with the following aspects (applied to our experiment):
choice_rtt/
    code/
        experiment/
            crtt_exp.py
    stimuli/
        shapes/
    sourcedata/
        sub-ID/
    sub-ID/
    derivatives/
        pipeline/
            sub-ID/
Do you know what each aspect should entail?
Let’s have a look at the directories!
choice_rtt/
choice_rtt is our root directory, within which we will store everything else, ie the highest level.
choice_rtt/
    code/
        experiment/
            crtt_exp.py
Within code, we are going to store every piece of code related to the project/dataset, for example code used to acquire data (as in our example the experiment directory), for data conversion or for data analyses.
choice_rtt/
    stimuli/
        shapes/
The stimuli directory will store all stimuli used during data acquisition, ie within the experiment.
choice_rtt/
    sourcedata/
        sub-ID/
All acquired data in its “source” form, ie before conversion and/or standardization, will be stored under sourcedata. Importantly, each participant will get their own directory in there.
choice_rtt/
    sub-ID/
Raw data, ie data after conversion and standardization, should be placed within a participant-specific directory called sub-ID, where ID is the participant identifier. All respective directories are stored directly at root.
choice_rtt/
    derivatives/
        pipelineID/
            sub-ID/
The derivatives directory will store all derivatives obtained through applying some form of data processing to the raw data, ie to sub-ID. Within it, there should be one directory for each processing pipeline, e.g. preprocessing, statistics, etc., and within those usually one directory per participant in the form of sub-ID and/or group, in case results are aggregated across participants.
Importantly, if you don’t have these directories yet, please make sure to create them via:

```bash
mkdir path/to/choice_rtt/sourcedata
mkdir path/to/choice_rtt/derivatives
```
Only adapting the string of the Data filename to

```python
u'%s/sub-%s/ses-%s/%s_%s_%s_%s' % (expInfo['output-path'], expInfo['participant'], expInfo['session'], expInfo['participant'], expInfo['session'], expName, expInfo['date'])
```

unfortunately won’t be enough, as we have to make sure that the respective directories for a given participant and session are created before we attempt to save files there.
But no worries: we can just add a Custom Code Component that does this for us. Let’s place it at the beginning of the welcome routine, so that it immediately creates the directory structure we need. Here’s the respective python code:

```python
import os

# construct the path with placeholders replaced by actual values
dir_path = expInfo['output-path'] + "/sub-" + expInfo['participant'] + "/ses-" + expInfo['session']

# check if the path exists
if not os.path.exists(dir_path):
    # create the directory, including all intermediate directories
    os.makedirs(dir_path)
    print(f"Directory '{dir_path}' was created.")
else:
    print(f"Directory '{dir_path}' already exists.")
```
How would you add this component to the welcome routine?
- open the welcome routine
- select the Custom Code Component
- copy-paste/write the python code from above within the Begin Routine tab
- click OK

With that in place, we adapt the Data filename string as indicated above:

```python
u'%s/sub-%s/ses-%s/%s_%s_%s_%s' % (expInfo['output-path'], expInfo['participant'], expInfo['session'], expInfo['participant'], expInfo['session'], expName, expInfo['date'])
```
When running the experiment now, you should be able to specify a path in the GUI dialog box, which should be /path/to/choice_rtt/sourcedata, and the data output files should be stored under this directory in the aforementioned structure: /path/to/choice_rtt/sourcedata/sub-ID/ses-ID/.
Give it a try!
That seems to have worked out great, awesome! This concludes our data output adventure and brings us to the next chapter.
A very simple experiment#
By now, we have already explored quite a lot of PsychoPy’s functionality, even though our experiment is quite simple. Speaking of which: we actually didn’t implement the experiment trials yet, only the practice ones.
In order to evaluate if we did a good job of teaching you the respective PsychoPy aspects, and to find potential gaps/problems, we kindly ask you to add the experiment trials to the experiment. Below you find a bit more information on how to do it, as well as the answer. However, it would be cool if you could give it an honest try first. After 30 min, we will go through the corresponding steps together.
Task 1 - Adding instructions for the actual trials
Similar to the instructions we provided for the practice trials, we should provide some for the actual trials.
Thus, please add/do something to achieve the following:
- the instructions should appear after the practice trials and before the actual trials
- the following text should be displayed: “Well done, you have finished the practice! Now for the main experiment. Press space to continue!”
- there should be line breaks between the sentences
- participants should be able to end the instructions and start the experiment by pressing the space bar, and only the space bar
Answer
Step 1: Copy-paste the instructions_shape Routine
- the instructions_shape Routine almost has all the things we need, except for the text, which we can simply replace
- select the instructions_shape Routine in the Routine window
- click on “Experiment” in the top menu bar and select Copy Routine
- click on “Experiment” in the top menu bar and select Paste Routine
- rename it, e.g. to “instructions_shape_experiment”

Step 2: Remove unnecessary Components
- remove all Image Components via right-clicking on them and selecting “remove”

Step 3: Adapt the Text
- open the Text Component and exchange the Text for the one provided above, adding line breaks between sentences
- optional: assign the Component a new name

Step 4: Insert the Routine
- select “Insert Routine” and choose instructions_shape_experiment
- place it after the practice trial for loop and click on the dot

Done!
NB: While you also could have simply created a new Routine and added the respective Components, we wanted to show you the Copy-Paste Routine functionality. Here, it’s actually more work and takes longer, but for more complex and heavy Routines, this functionality comes in handy!
Great job, y’all! Let’s step it up with the next task.
Task 2 - Adding the actual trials
Now it’s time to add the actual experiment trials.
Thus, please add/do something to achieve the following:
- the experiment trials should appear after the experiment instructions and before the end screen
- they should be identical to the practice trials in terms of cues, targets, keyboard responses, etc.
- every stimulus in the stimuli list should appear 2 times
Answer
Given that the actual trials should be identical to the practice trials (in terms of their structure, etc.), we can simply make use of the trial Routine again and then add a new for loop.
Step 1: Re-use the trial Routine
- the trial Routine has all the things we need
- click on Insert Routine and select trial
- place it between the instructions_shape_experiment and end_screen Routines
- click on the dot
Step 2: Add a for loop
- click on Insert Loop
- place the first dot before the just added trial Routine and the second one behind it
- name it, e.g. “exp_loop”
- set nReps to 2
- for Conditions, select the stimuli list file, shapes_targets.csv
- click OK
Done!
NB: As in Task 1, we could have created a new Routine from scratch, but in this case re-using an already existing one was definitely faster.
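For the curious: such a loop roughly corresponds to PsychoPy’s TrialHandler in the Coder. A minimal sketch, assuming shapes_targets.csv sits next to the script; the trial presentation itself is left as a placeholder:
from psychopy import data

# read the conditions file; each row becomes one trial's parameters
conditions = data.importConditions('shapes_targets.csv')

# nReps=2 repeats every row twice, just like the loop we set up above
trials = data.TrialHandler(trialList=conditions, nReps=2, method='random')

for trial in trials:
    pass  # present cue/target and collect the keyboard response here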
Fantastic work, everyone! However, there’s one last thing you need to do.
Task 3 - Run the experiment
After all this work, you should check what you have created and of course also test things. Thus, please go through the entire experiment at least once and afterwards have a look at the data!
That concludes our adventure into the basics of PsychoPy. While this was a lot already, we still want to talk about some advanced topics.
PsychoPy advanced#
After exploring a lot of PsychoPy’s basic functionalities, we will also have a look at more advanced topics that are, however, common and important for everyday research work. Specifically, we will briefly talk about how to integrate Eye-Tracking, EEG and fMRI into PsychoPy, as well as have a look at running online studies based on PsychoPy.
ET, EEG and fMRI#
It should come as no surprise that PsychoPy supports the integration of several types of external hardware very well. This includes:
- Eye-Tracking
- EEG
- fMRI
- microcontrollers, e.g. Arduinos
- fNIRS
Within this brief exploration, we will focus on Eye-Tracking, EEG and fMRI.
Eye-Tracking#
PsychoPy has several options and functionalities to connect to and communicate with Eye-Tracking systems, directly via the Builder or the Coder.
Overall, adding Eye-Tracking to your PsychoPy experiment is a two-stage approach: Eye-Tracker setup/configuration, followed by integrating Eye-Tracking Components.
Setup/Configuration#
The setup and configuration of an Eye-Tracker is comparably easy and straightforward (we know, we know…we say this all the time), as PsychoPy supports multiple common systems, including SR Research and Tobii Technology.
Initially, you need to access the Eye-Tracking tab of the experiment settings, which you find via the ⚙️ icon in the top menu bar, and select the device you’re using. Depending on the device, you can then set a variety of options and information, such as the model and serial number of your device.
SR Research#
Please make sure to also have a look at the specific requirements outlined here.
Tobii Technology#
Please make sure to also have a look at the specific requirements outlined here.
MouseGaze#
Don’t have an Eye-Tracker ready to test wherever you’re working on your PsychoPy experiment and/or just want to simulate respective Eye-Tracking data? No worries at all, PsychoPy has your back.
You can select MouseGaze. This will allow your mouse cursor to act as a gaze point on your screen, and so allow you to simulate eye movements without using an Eye-Tracker. Then, when you’re ready to use your Eye-Tracker, you can just select it from the Experiment Settings and run your experiment in the same way.
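If you prefer the Coder, the same simulation can be set up via ioHub. A minimal sketch, assuming no real tracker is attached (the device name 'tracker' is our choice):
from psychopy import visual
from psychopy.iohub import launchHubServer

win = visual.Window(fullscr=False)  # the window the gaze is mapped to

# configure ioHub to use the simulated MouseGaze "eye tracker"
iohub_config = {'eyetracker.hw.mouse.EyeTracker': {'name': 'tracker'}}
io = launchHubServer(window=win, **iohub_config)
tracker = io.devices.tracker

tracker.setRecordingState(True)       # start "recording"
gaze = tracker.getLastGazePosition()  # current gaze, i.e. mouse, position
tracker.setRecordingState(False)
io.quit()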
Eye-Tracking Components#
PsychoPy comes with a core set of Components specifically for Eye-Tracking. You can find them in the Component window.
In general, there are two types of Components: calibration/validation and recording.
Calibration/Validation#
When adding Eye-Tracking to your experiment, you usually want to start by adding the Calibration and Validation Components to the beginning of your experiment, i.e. the very first things that should happen. Even though you select them from the Component window, they’re added as standalone Routines to your experiment.
You can then select and set different options for these Routines, e.g. the number of calibration points and their properties (e.g. color, size, etc.).
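If you work in the Coder instead, the corresponding step is typically a single call on the ioHub tracker object; a sketch, reusing the tracker from the MouseGaze example above (real devices take their calibration settings from the device configuration):
tracker.runSetupProcedure()  # runs the device's calibration/validation procedure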
Recording - General#
To actually start recording Eye-Tracking data, you need to add the Eyetracker Record Component, as this starts and stops the Eye-Tracker recording. Usually, you would add this Component to your instructions Routine or something similar, so that your Eye-Tracker starts recording before your trials start, but you can add it wherever makes sense for your experiment. As you can see below, you simply add it to the Routine during which you want to start/stop the Eye-Tracker.
Importantly, this Component does it all: start only, stop only, and start and stop after a certain time.
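For completeness, the Coder equivalent of the Eyetracker Record Component is again a pair of calls on the ioHub tracker object (a sketch, same assumptions as above):
tracker.setRecordingState(True)   # start the Eye-Tracker recording
# ...present instructions and run the trials here...
tracker.setRecordingState(False)  # stop the recording again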
Recording - ROIs#
If you want to record information on gaze position, or you want something to happen when the participant looks at or away from a certain part of the screen, the ROI Component is your best friend. The ROI Component has lots of options - you can choose what should happen given a certain gaze position/pattern, what shape the ROI has, etc. All of these can also be defined in a conditions file (like the one we used for the for loop), just like for any other Component. Simply choose the options that fit the needs of your experiment.
For our experiment, we could for example use the information, i.e. variables, generated in the Custom Code Component that jitters the onset and position of the target to indicate the ROI position, and another Custom Code Component that interacts with the data generated by the ROI Component. In that way, the ROI Component behaves similarly to a visual Component, e.g. Image.
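To make the second part concrete, here is a minimal sketch for the Each Frame tab of such a Custom Code Component; the name target_roi is an assumption for our ROI Component:
# end the Routine as soon as the participant has looked at the ROI
if target_roi.isLookedIn:  # True while the gaze is inside the ROI
    continueRoutine = False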
EEG#
Integrating EEG data acquisition in your PsychoPy experiment is no biggie either, although no dedicated Components exist. However, there are two important prerequisites and caveats you have to address before you start.
Prerequisites and caveats#
First of all, you have to think about problems inherent to EEG data acquisition in general, specifically timing and synchronization issues, as outlined here.
Caveats#
We briefly talked about this at the beginning of this session, when we had a look at the Timing Mega Study by Bridges et al. 2020: there can be issues concerning dropped frames, lag and variability, monitor refresh rate (Poth et al., 2018) and so on.
In order to address these aspects and mitigate potential problems, it’s recommended to:
- use the Builder, as it automatically optimizes the code (e.g. draw and flip) concerning these problems
- repeatedly test your experiment and evaluate timings
This is especially important for EEG, where you have to work with millisecond precision.
Parallel & Serial Ports#
When connecting the machine you run your PsychoPy experiment on and the EEG system you’re using, you need to find out two aspects and set things in your experiment respectively:
1. The port address your EEG acquisition system is connected to.
2. How your EEG acquisition system wants to receive the triggers you send.
Why is this important? When acquiring EEG data via any software, including PsychoPy, you usually want to send triggers to the EEG system to denote what (e.g. conditions, stimuli) happened when (e.g. onset and duration), so that this information is added to the recorded data and available in the acquired EEG data.
Concerning 1., you have to find out if your setup is using a Parallel Port or a Serial Port. Then you can select the respective Component from the Component window, either within EEG or I/O, and place it in the Routine you would like to add triggers to.
Each comes with a range of settings and options, such as:
- Start: when you want the trigger to be sent; this can be at a certain time or frame, or we can set the trigger to be sent when a certain condition is met (more on this later)
- Stop: for how long the trigger should be sent
- Port: refers to the address of the port your device is connected to
- Start data: refers to the value that you want to actually send to your acquisition system - exactly what you’d like to send will depend on what your system wants to receive. This can take a variable just like most other fields, so that you can send different triggers for different stimuli, etc.
Both Components are essentially set up in the same manner, except that the Parallel Port has its properties spread across tabs.
In case you have to add a port address, you can do so via Preferences -> Hardware.
Regarding 2., you have to carefully think about how you want to send triggers, i.e., as addressed before, when and for how long, the trigger itself, and so on.
As mentioned before, you usually want to set the Start to condition so that the trigger is sent when something happens.
To send on the onset of something (e.g., a visual stimulus), set that condition to be (where stimulus is the name of the Component you want to yoke your trigger to, e.g. in our example TargetImage):
stimulus.status == STARTED
To send on a keypress, keep the Start as Condition and set the condition to be (where key_response is the name of your Keyboard Component):
key_response.keys
To send when a stimulus is clicked by a mouse (mouse here refers to the name of your Mouse Component, and stimulus_mouse refers to the stimulus you want to be clicked):
mouse.isPressedIn(stimulus_mouse)
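If you'd rather send triggers from a Custom Code Component than via the Parallel/Serial Out Components, a minimal sketch could look like the following; the port address, the trigger value and the Component name TargetImage are assumptions you would adapt to your setup:
# Begin Experiment tab: open the port
from psychopy.hardware.parallel import ParallelPort
port = ParallelPort(address=0x0378)  # match the address of your Parallel Port
port.setData(0)                      # set all pins low to start

# Begin Routine tab: reset the flag for every trial
trigger_sent = False

# Each Frame tab: send the trigger once the target appears
if TargetImage.status == STARTED and not trigger_sent:
    port.setData(4)      # the value your EEG system expects for this event
    trigger_sent = True  # make sure it is only sent once per trial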
fMRI#
Generally, rather than programming your PsychoPy experiment to send triggers to some hardware in the same way as, e.g., EEG, with fMRI you would want to set up your experiment so that it waits until it has detected that the scanner has sent out a trigger before moving on to present trials.
Comparable to the other external hardware we talked about before, PsychoPy also allows you to run fMRI experiments, and the respective implementation is done in a two-step approach: the MRI scanner setup and trigger handling.
MRI scanner setup#
Before doing anything else, it’s important that you know how the scanner you’ll be using will emit these triggers, and whether they are converted to some other signal such as characters on a serial port or a simulated keypress. In general, there are at least 3 ways a scanner might send a trigger to your experiment:
- emulate a keypress
- via parallel port
- via serial port
Trigger handling#
A Routine to detect fMRI triggers is really simple to set up. Regardless of the method your scanner uses to send the triggers, you’ll just need a Routine that waits until it has detected the trigger before moving on.
A common approach is to create a new Routine and insert a Text Component that says “Waiting for Scanner”.
The scanner simulates a key press#
Insert a Keyboard Component into your “Waiting for Scanner” Routine. In allowed keys, use the key that the scanner will send, e.g. if the scanner sends a 5, the allowed keys will be 5.
Now, when the keypress is detected, the “Waiting for Scanner” screen will end and the next Routine, e.g. the trials, starts. But be careful! PsychoPy doesn’t know the difference between the emulated key presses sent from the scanner and key presses made by a participant! So take care not to type on the keyboard connected to the PsychoPy computer whilst your experiment runs, to avoid your key presses being mistaken for triggers.
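For reference, in the Coder such a waiting screen boils down to one blocking call (a sketch; '5' is the example trigger key from above):
from psychopy import event

# block until the scanner's simulated keypress arrives
event.waitKeys(keyList=['5'])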
The scanner communicates via a port#
Regardless of whether you use a serial or a parallel port, you want to add a Custom Code Component to set up the respective port and check for triggers.
In the Begin Experiment tab of the Code Component, add the following code to set up a Parallel Port:
from psychopy.hardware.parallel import ParallelPort
triggers = ParallelPort(address=0x0378)  # change this address to match the address of the Parallel Port that the device is connected to
pinNumber = 4  # change to match the pin that is receiving the pulse sent by your scanner; set this to None to scan all pins
and for a Serial Port:
from psychopy.hardware.serial import SerialPort
triggers = SerialPort('COM3', baudrate=9600)  # change to match the address of your Serial Port
trigger = '1'  # change to match the expected character sent from your scanner, or set to None for any character
In the Each Frame tab of the same Code Component, add the following code to check for triggers.
For a Parallel Port:
if frameN > 1:  # allow the 'Waiting for Scanner' screen to be displayed first
    trig = triggers.waitTriggers(triggers=[pinNumber], direction=1, maxWait=30)
    if trig is not None:
        continueRoutine = False  # a trigger was detected, so move on
and for a Serial Port:
if trigger in triggers.read(triggers.inWaiting()):  # check the characters received so far
    continueRoutine = False  # our trigger was detected, so move on
Lastly, you have to make sure that your experiment stays in sync with the MRI scanner to avoid timing issues. In more detail, if you only sync the experiment and the MRI scanner at the beginning of the experiment, their timescales might drift apart as the experiment progresses. Thus, it’s a good idea to add a synchronization to certain parts of the experiment, e.g. when a block of trials is over. This can basically be implemented via the approach described above.
Running experiments online - PsychoPy & Pavlovia#
After our brief tour of communicating with and integrating external hardware, we will also have a brief look at running PsychoPy experiments online via its connection to Pavlovia.
Pavlovia#
Pavlovia is a secure server for running experiments and storing data. It’s built upon a git-based version control system and has a huge open-access library of experiments (that you can add to!). Additionally, it’s a place for creating and running Surveys (using Pavlovia Surveys).
Account & costs#
In order to get started, you only need to create a free account using your institutional (or other) Email address. Afterwards, you can immediately start using Pavlovia, e.g. uploading your PsychoPy experiment or creating a survey.
While the account itself is free, acquiring data is not. However, at only ~ €0.28 to store one participant’s data, it’s comparably cheap (e.g. acquiring data from 1000 participants would come to ~ €280, i.e. less than 300 Euro).
Regarding this, you need to buy Pavlovia Credits through the Pavlovia Store.
Experiments library and dashboard#
As mentioned before, Pavlovia has a huge open-access library of experiments that you can browse, re-use and adapt. Furthermore, it allows you to query the library concerning certain properties.
Your experiments are stored in the dashboard, which has the same interface as the open-access library. You can of course choose if your experiment should be public or private.
Launching an experiment#
Pavlovia evolved from the PsychoPy realm and thus the two have a direct connection that you can leverage to launch the experiment you created locally in PsychoPy online via Pavlovia.
When you make an experiment in PsychoPy’s Builder, your experiment can compile to python (for running offline) or JavaScript (which allows it to run in a browser). Pavlovia acts as the way of hosting the JavaScript file online (it handles creating a URL for you to share with your participants, saving the data, etc.).
Initially, you have to make sure your files are set up such that you have one directory that only contains one .psyexp file and the files needed to run the experiment. For example, in our experiment we have:
choice_rtt/
    code/
    experiment/
        crtt_exp.psyexp
        crtt_exp.py
        shapes_targets.csv
Which means that we almost have everything ready to go. We would only need to add the stimuli files so that they’re included as well.
After that’s done, we have to open the experiment in the Builder and log into Pavlovia.
You can now sync your experiment to Pavlovia.
Select project info, then select the hyperlink; that will take you to your Pavlovia project (if you are not logged into Pavlovia.org already in the browser, you may need to log in to view it).
You can then set your experiment to running and, given you have Pavlovia credits, you start acquiring data by sharing the link to the experiment with your participants.
Once your participant completes the experiment, you can view the data in one of two ways: either select "Download Results", or select "View Code" to see all the data in the "data" subfolder of your project.
Integration with recruitment platforms#
Sometimes you might want to connect your experiment with other platforms. For example, you might use a third-party service to help with recruitment (such as Prolific or SONA). In these cases you want your experiment to go through something like the following chain:
1. recruitment website: participants discover your experiment and are issued a participant ID
2. experiment and/or survey: the participant completes the experiment and/or survey
3. recruitment website: mark as complete so participants can be reimbursed if needed
and you want information like “participant ID” as well as other info to be passed forward through the chain.
How does this work? Information is passed to a website using something known as a query string. This is essentially any information that follows a “?” in the URL in your browser. For example, your experiment could have the URL:
run.pavlovia.org/your_user_name/your_experiment_name/
This would likely start up your experiment with a dialogue window where you can type participant and session. You could, however, provide the respective values already within the URL using a query string, like this:
run.pavlovia.org/your_user_name/your_experiment_name/?participant=123&session=1
Here the last part of the URL is a query string, and you will notice that this autocompletes the fields “participant” and “session” in the startup dialogue of your experiment. This is what happens when you daisy-chain with other platforms: platform 1 will send information via a query string to platform 2, then platform 2 sends it to platform 3, and so on…
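To make the mechanics concrete, here is a small python sketch (standard library only, using the example URL from above) of how such a query string decomposes into key-value pairs:
from urllib.parse import urlparse, parse_qs

url = 'https://run.pavlovia.org/your_user_name/your_experiment_name/?participant=123&session=1'
query = urlparse(url).query  # 'participant=123&session=1'
params = parse_qs(query)     # {'participant': ['123'], 'session': ['1']}
print(params['participant'][0], params['session'][0])  # 123 1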
The exact implementation, however, depends on the recruitment platform you are using.
Git-based features#
By now, you might have already noticed that Pavlovia exists on Gitlab. As we have learned in the Git & Gitlab session, Gitlab is pretty feature-rich and gives you a lot of control over your projects:
- Version control: view the version history of your experiment and restore old versions
- Collaboration: add team members or fork projects to groups
- Privacy settings: make your project public or private
You can access the Gitlab repository of your project by selecting “View code” on your project dashboard.