For beginner no previous knowledge design eye tracking study

PennController for IBEX Forums › Support › For beginner no previous knowledge design eye tracking study

Viewing 2 posts - 1 through 2 (of 2 total)
  • #8320

    I am a student, and for my student job I need to design a pilot online eye-tracking study, but I don't have any background in this. Think of someone starting from zero: I am starting with almost no knowledge and I don't know where to begin. I need something that walks me through this step by step. I need to design a study with 5 items, run it, and read the data in R. (It is up to me to decide on the items; I think it can be with or without sound. This is an example I need to create for a job.) I read the instructions in “Collecting eye tracking data”, but that is overwhelming for me. I am going to start reading the advanced tutorials from number 8, but I don't know if this is the right place to start. I was also looking at the script that is available in the “eye-tracking starter experiment”. I understand that the script is already written and that I maybe just need to change some of the lines, where I would put my own pictures and words or audio, but that's all I understand. I don't understand where to get or how to create the CSS files, and the .csv files look like they have the elements, but I am not sure how to create them.
    Thank you for your answers.



    Unfortunately I don’t know of any resources for learning how to design online eye-tracking experiments from a beginner’s perspective. What prompted me to develop the EyeTracker element in PennController, besides finding out about the webgazer.js library developed by Papoutsaki et al., was to allow experimenters who had run experiments with a physical eye tracker to port them over to a browser using a webcam.

    One thing to keep in mind here is that the eye-tracking capacities of a webcam are limited, and depending on what design you are implementing, using a webcam eye-tracker might not be a realistic option at the moment: you should probably only consider it if all you are interested in is whether the participant is looking left vs. right, with a time resolution of ~100 ms at best (unless you run the experiment in the lab and make sure the browser’s performance allows for a finer time resolution).

    This is what the eye-tracking starter experiment does, although it may already be a little too ambitious: it creates four Canvas elements, one in each quadrant of the page, each covering 40% of the width and height of the page (so there is ~10% of empty space between the Canvas elements), and prints content inside those Canvas elements. The EyeTracker element then tracks the Canvas elements (= the four quadrants): this way, when the participant looks at content, even if the gaze estimates are off because of the relatively poor performance of the webcam-based eye-tracker, they should still mostly fall in the correct quadrant.
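    To give an idea of what that quadrant setup looks like in a PennController script, here is a minimal, hypothetical sketch (it only runs inside a PennController/IBEX experiment, not standalone; the element names, the server URL passed to EyeTrackerURL, and the exact sizes/coordinates are illustrative assumptions, not the starter experiment’s actual code):

    ```javascript
    PennController.ResetPrefix(null)

    // Assumption: URL of your own PHP script that collects the eye-tracking data
    EyeTrackerURL("https://your.server/collect_et.php")

    newTrial("quadrants",
        // Calibrate the webcam eye-tracker first (threshold is illustrative)
        newEyeTracker("tracker").calibrate(50)
        ,
        // Four Canvas elements, one per quadrant, each 40% of the page,
        // leaving ~10% of empty space between them
        newCanvas("topLeft", "40vw", "40vh").print("5vw", "5vh")
        ,
        newCanvas("topRight", "40vw", "40vh").print("55vw", "5vh")
        ,
        newCanvas("bottomLeft", "40vw", "40vh").print("5vw", "55vh")
        ,
        newCanvas("bottomRight", "40vw", "40vh").print("55vw", "55vh")
        ,
        // Track looks to the four quadrants and start recording
        getEyeTracker("tracker")
            .add( getCanvas("topLeft"), getCanvas("topRight"),
                  getCanvas("bottomLeft"), getCanvas("bottomRight") )
            .start()
        ,
        // Let the participant look around for 3 s, then stop tracking
        newTimer("look", 3000).start().wait()
        ,
        getEyeTracker("tracker").stop()
    )
    ```

    In the actual starter experiment you would also print your pictures or words inside the Canvas elements, typically drawing the item names from a .csv table loaded with Template, so changing the experiment mostly amounts to editing that table.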

    Regarding how to design an experiment with PennController more generally, and how to understand the code in the eye-tracking template more specifically, you can find the tutorial here and a recording of a workshop we gave at CUNY 2021 here.

