Eyetracking experiment: Uploading and downloading material to server
March 31, 2023 at 8:17 am #10430
sangheekim
Hi Jeremy,
Just a very quick follow-up question in addition to my previous post —
I’m having trouble getting the Latin square design to work with the current Sequence. I have four conditions for each item in the ‘target’ trials and two conditions for each item in the ‘filler’ trials. Somehow each participant is getting only one condition out of the four in the target trials throughout the whole experiment, and only some of the items in the filler trials. Could you help me locate the issue? Thank you!
Here is the demo link: https://farm.pcibex.net/r/kNBfaC/
Best,
Sanghee
April 3, 2023 at 12:14 pm #10431
Jeremy
Hi Sanghee,
The size of the dots is 48*48px as of PennController 2.0. The calculation starts immediately when the middle dot shows up, and the score is proportional to the average distance from the center of the screen over the (X,Y) coordinates of the estimated gazes:
precision = 100 - (distance / halfWindowHeight * 100)
As far as I remember, and as far as I can tell now, there is no default threshold value: if none is provided, then all calibrations will be successful.
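To make the formula concrete, here is a hypothetical standalone sketch of that calculation (an illustration only, not actual PennController or WebGazer source code):

function calibrationPrecision(averageDistance, windowHeight) {
    // the average gaze-to-center distance is normalized by half the window height
    const halfWindowHeight = windowHeight / 2;
    return 100 - (averageDistance / halfWindowHeight * 100);
}
calibrationPrecision(120, 1000); // 100 - (120 / 500 * 100) = 76

So gazes averaging 120px from the center of a 1000px-tall window would score 76.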
There is no set sampling rate: if memory serves, the WebGazer library will try to run at each update of the visual frame (look up requestAnimationFrame), which will happen more or less frequently depending on the performance of the participant’s browser at the time of the experiment.
Regarding the Latin Square design, this is something you need to code yourself in your CSV. PennController picks one value from the ones listed in the “Group” or “List” column, subsets the table to all and only the rows containing that one value in that column, and generates the trials from those rows. If you want to cycle through conditions across items in Latin Square fashion, you need to design your table accordingly, as illustrated in the advanced tutorial and in the sketch below.
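For instance, with four conditions and four groups, a table along these lines cycles the conditions across items (a minimal hypothetical sketch; the column names other than “Group” and the sentences are placeholders):

Group,Item,Condition,Sentence
A,1,cond1,...
B,1,cond2,...
C,1,cond3,...
D,1,cond4,...
A,2,cond2,...
B,2,cond3,...
C,2,cond4,...
D,2,cond1,...

A participant assigned to group A sees item 1 in cond1 and item 2 in cond2, a participant assigned to group B sees item 1 in cond2 and item 2 in cond3, and so on: every participant sees each item exactly once, and each item appears in all four conditions across the four groups.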
Jeremy
April 5, 2023 at 5:04 pm #10435
sangheekim
Hi Jeremy,
Thanks for your response.
I had a follow-up question about your previous reply on understanding the columns of the eyetracking data. Your reply is copy-pasted here: “The columns starting with _ are named after the elements you added to the EyeTracker element, and report 0 or 1, depending on whether the (X,Y) coordinates that were estimated by the tracker fell within the boundaries of the corresponding element at that time point.”
I was curious to know where the “boundaries of the corresponding element” are defined.
The size of the image in my experiment is set to 20vh*20vh, and I have code in the main.js file that defines the Canvas size of the visual stimuli:

newCanvas("TopLeft", "50vw", "50vh")
    .add("center at 50%", "middle at 50%", newImage(images[0])) // retrieve the first image from the shuffled array
    .print("center at 25%", "middle at 25%")
    .log()
Does this mean that if the participant’s eye gaze falls within the left 50% of the width of the screen and the top 50% of the height of the screen, the data will be marked as 1 in the TopLeft column?
Does the image not matter in this case?
Thank you so much!
Best,
Sanghee
April 12, 2023 at 6:13 am #10440
Jeremy
Hi Sanghee,
The EyeTracker element tracks gazes on the elements that you’ve added to it. Since the elements you’ve added to it are the four quadrant Canvas elements, and since each is 50vw*50vh, those are the boundaries of the corresponding elements; the Image element printed inside a Canvas plays no role in defining them. If the tracker estimates a gaze to fall within the left 50% of the width of the screen and the top 50% of the height of the screen, then it will report 1 for _TopLeft, indeed.
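In case a self-contained picture helps, here is a minimal sketch of such a four-quadrant setup (the element names, coordinates, and trial structure are assumptions patterned on this thread, not Sanghee’s actual main.js):

newTrial("eyetracking-sketch",
    // four quadrant canvases, each covering a quarter of the screen
    newCanvas("TopLeft", "50vw", "50vh").print("center at 25%", "middle at 25%"),
    newCanvas("TopRight", "50vw", "50vh").print("center at 75%", "middle at 25%"),
    newCanvas("BottomLeft", "50vw", "50vh").print("center at 25%", "middle at 75%"),
    newCanvas("BottomRight", "50vw", "50vh").print("center at 75%", "middle at 75%"),
    // track gazes relative to the four canvases: the _TopLeft, _TopRight, etc.
    // columns report 1 whenever the estimated gaze falls within the
    // corresponding 50vw*50vh quadrant, whatever the canvas contains
    newEyeTracker("tracker")
        .add( getCanvas("TopLeft") , getCanvas("TopRight") , getCanvas("BottomLeft") , getCanvas("BottomRight") )
        .start(),
    newKey(" ").wait(),
    getEyeTracker("tracker").stop()
)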
Jeremy