June 1, 2020 at 2:44 pm #5553
In my experiment, participants are given a target word and then listen to an audio file — if they hear the target word, they should press the spacebar as quickly as possible. I’ve collected some data, but I have some questions about how to read it.
I’m logging the start and end of the audio file, as well as each instance of a spacebar press.
// Play audio stimulus and record reaction time
newImage("fixation_cross", "fixation_cross.png")
    .size(300, 300)
,
newCanvas(300, 310)
    .add(0, 10, getImage("fixation_cross"))
    .print()
,
newKey("spacebar_press", " ")
    .log("all")
,
newAudio("stimulus", variable.stimulus)
    .play()
    .log("play")
    .log("end")
    .wait()
I get lines in the results file like the ones below, with a timestamp for the audio start and end, and a value for the length of the audio file itself. However, subtracting the start timestamp from the end timestamp produces a value greater than the length of the audio file.
...,Audio,stimulus,play,0,1590531168158,AC,ambig,1,WATCH,Y,,NULL
...,Key,spacebar_press,PressedKey, ,1590531170861,AC,ambig,1,WATCH,Y,,NULL
...,Audio,stimulus,end,5.014421768707483,1590531173520,AC,ambig,1,WATCH,Y,,NULL
calculated length: 1590531173520-1590531168158 = 5362 ms
recorded length: 5.014s / 5014ms
The difference is generally within 100–400 ms, but sometimes greater than 900 ms.
...,Audio,stimulus,play,0,1590530962868,,np,6,CLASS,Y,J,NULL
...,Key,spacebar_press,PressedKey, ,1590530966759,,np,6,CLASS,Y,J,NULL
...,Audio,stimulus,end,5.250294784580499,1590530969041,,np,6,CLASS,Y,J,NULL
calculated length: 1590530969041-1590530962868 = 6173ms
recorded length: 5.250s / 5250ms
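To make sure I'm reading the numbers right, here is how I'm computing the discrepancy (a plain JavaScript sketch, using the timestamps from the two trials above; the function name is just mine):

```javascript
// Discrepancy between the timestamp-based duration and the duration
// reported by the audio element (recorded length comes in seconds)
function audioDelayMs(playTimestamp, endTimestamp, recordedLengthSec) {
  const calculatedMs = endTimestamp - playTimestamp;
  const recordedMs = Math.round(recordedLengthSec * 1000);
  return calculatedMs - recordedMs;
}

// First trial: 5362 ms calculated vs 5014 ms recorded
console.log(audioDelayMs(1590531168158, 1590531173520, 5.014421768707483)); // 348
// Second trial: 6173 ms calculated vs 5250 ms recorded
console.log(audioDelayMs(1590530962868, 1590530969041, 5.250294784580499)); // 923
```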
Since this experiment looks at reaction time (the difference between the keypress timestamp and the audio start timestamp), I'm not sure whether I should be worried about the differences between the recorded and calculated audio lengths. Could this just be buffering time that can safely be ignored? The documentation says that .log("play") creates a timestamp and an offset, but I didn't notice any offsets in the results file.
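For concreteness, the reaction-time computation I have in mind is just the difference of two logged timestamps (a sketch, using the values from the first trial above):

```javascript
// Naive reaction time: keypress timestamp minus the audio "play"
// timestamp, both taken directly from the results file
function naiveReactionTimeMs(playTimestamp, keyTimestamp) {
  return keyTimestamp - playTimestamp;
}

console.log(naiveReactionTimeMs(1590531168158, 1590531170861)); // 2703
```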
Angelica
June 1, 2020 at 3:02 pm #5554
If you compare the play lines to the end lines, you’ll notice that you have a 0 for play where you have a “value for the length of the audio file” for end. In both cases, that number corresponds to the position in the audio file at the moment the event is detected. So in your case the offset is always 0, which means the play event is triggered before the audio has actually started playing.
I’m afraid the delays you are seeing mean that execution was slow for at least some of your participants. Normally you would see lines in your results file for any buffering that happened, but I just noticed a typo in the code that keeps those lines out of the results file… I’ll fix this in the next release.
If you see that the delays are systematically greater for some participants, that is a good indication that conditions were not optimal for them, either because of a browser slowdown (for example, many tabs open in parallel executing various scripts) or because the audio stream that is normally cached was lost at some point and needed to buffer again.
Jeremy
June 1, 2020 at 6:18 pm #5555
Thanks for the reply! It seems that calculating reaction time by subtracting the audio start timestamp from the keypress timestamp may not be accurate, because any file buffering adds a delay of unknown length. Do you think it would be accurate to calculate reaction time from the opposite end, by subtracting the keypress timestamp from the audio end timestamp?

June 1, 2020 at 6:23 pm #5556
Alternatively, would the difference between the recorded and calculated audio file lengths be the file buffering time, i.e. could I add that time to the calculated reaction time (keypress timestamp – audio start timestamp) to get a more accurate reaction time?

June 1, 2020 at 7:00 pm #5557
We can’t be sure that the delays are due to buffering, since unfortunately the result lines do not give us that information. It could be that no participant ever experienced buffering issues, but only slowdowns due to their browser’s poor performance at the time they took the experiment.
For the same reason, using the end time will not give you more accurate measures, because you don’t know what exactly caused the delay.
I realize this is a really frustrating situation, and I apologize for it—you expect the experimental software that you use to give you accurate measures, and PennController clearly failed to do so in this case. I will do my best to improve PennController’s performance.
Then again, many other factors can impact the quality of a participant’s data, including which browser they use and how they use it. Safari, for example, is known to handle media elements differently from Chrome or Firefox (which I use to develop PennController). If only some participants’ data show this kind of delay, that could indicate that their specific configuration (e.g. a particular browser) contributed to the delays, and you could then decide whether to filter them out for analysis purposes.
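For instance, filtering could be as simple as flagging participants whose average delay exceeds some cutoff. This is only a sketch: the trial object fields and the 300 ms cutoff are placeholders of my own, not something PennController produces directly.

```javascript
// Flag participants whose mean (calculated - recorded) audio delay
// exceeds a cutoff; trialsByParticipant maps participant IDs to arrays
// of {playTimestamp, endTimestamp, recordedLengthSec} objects
// extracted from the results file (field names are hypothetical)
function flagSlowParticipants(trialsByParticipant, cutoffMs = 300) {
  const flagged = [];
  for (const [participant, trials] of Object.entries(trialsByParticipant)) {
    const delays = trials.map(t =>
      (t.endTimestamp - t.playTimestamp) - Math.round(t.recordedLengthSec * 1000)
    );
    const meanDelay = delays.reduce((a, b) => a + b, 0) / delays.length;
    if (meanDelay > cutoffMs) flagged.push(participant);
  }
  return flagged;
}
```

Whether a fixed cutoff is appropriate is a judgment call for your analysis; you could also inspect the per-participant delay distributions first.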
Jeremy
June 3, 2020 at 1:03 pm #5573
I see; the information about the browsers is good to know. Thank you for all the hard work you’ve put into PennController! I really appreciate all of your insightful and incredibly prompt replies 🙂