Forum Replies Created
AuthorPosts
multanip
Participant
But I am wondering if there is a way to write the setup above (one item, four images) into the eye-tracking script of the cards example that you have? I played around with the script but no luck: at one point it showed one image from each row, but no logging.
multanip
Participant
Thank you. I will make these changes and then get back to you. But I saw something being logged on our server from a different link; I am assuming that was you. Our setup is different from the regular eye-tracking script, which is why I originally struggled to change the cards-example eye-tracking script from PCIbex and asked for help last year.
Thank you and let me get back to you.
multanip
Participant
The study is published and the data is being sent to the URL in the link. I am sure the URL is correct because there is data being sent (a white-paper logo in front of individual lines in the data) whose timestamp on the server matches when I ran the experiment. But it mostly only logs when I insert the eyetracker start() before the newTimer("pre-trial0", ...) line, and then it only shows the first item, or one picture from each row, depending on how I experiment with the script.
For the PHP, I will double-check, but it is the same PHP file we used for another eye-tracker study that logged data. My understanding of how to make the PHP file is to just copy what you have on your website into a text file and name it name.php, correct? I believe data is being sent to it, because there is an individual yellow folder with the same name as the data-collection link, and files are individually created with timestamps.
multanip
Participant
Data collection link:
https://farm.pcibex.net/p/KdyWqP/multanip
Demo link:
https://farm.pcibex.net/r/doDwEu/multanip
This is an eye-tracking script and study.
I went back and moved the eyetracker start in different parts of the script; all the items run with audio, but there is still no logging on the server. Below is the newer script.

Template("eng_01_geo_List_tense_old.csv", row =>
  newTrial("eyetracking",
    // all newImage commands will automatically print the image centered when run
    defaultImage.center().print(),
    defaultImage.size(1280,720).print()
    ,
    newEyeTracker("tracker",1).callback(function (x,y) {
      if (this != getEyeTracker("tracker")._element.elements[0]) return;
      // The callback command lets us log the X and Y coordinates of the
      // estimated gaze locations at each recorded moment in time
      getEyeTracker("tracker")._element.counts._Xs.push(x);
      getEyeTracker("tracker")._element.counts._Ys.push(y);
    })
    ,
    newFunction(()=>{
      getEyeTracker("tracker")._element.counts._Xs = [];
      getEyeTracker("tracker")._element.counts._Ys = [];
    }).call()
    ,
    getEyeTracker("tracker")
      .calibrate(5)   // make sure that the tracker is still calibrated
      .log()          // log the calibration test index
    // ,
    // getEyeTracker("tracker")
    //   .log()    // if this line is missing, the eye-tracking data won't be sent to the server
    //   .start()
    ,
    newTimer("pre-trial0", 500).start().wait()
    ,
    newImage("image0", row.Static_image1),  // the first static image
    newTimer("displayTime0", 2000).start().wait(),
    getImage("image0").remove().log()
    ,
    newTimer("pre-trial1", 500).start().wait()
    ,
    newImage("image1", row.action_image),   // the first action image is set in design.csv
    newTimer("displayTime1", 2500).start().wait(),
    getImage("image1").remove().log()
    ,
    newTimer("pre-trial2", 500).start().wait()
    ,
    newImage("image2", row.Static_image2),  // the middle static image
    newAudio("audio", row.wav_file).play(), // the audio file is set in design.csv
    getAudio("audio").wait("first"),        // wait until the Audio element has finished playing back
    newTimer("displayTime2", 400).start().wait(),
    getImage("image2").remove().log()
    ,
    newTimer("pre-trial3", 500).start().wait()
    ,
    newImage("image3", row.action_image2),  // the third image is set in design.csv
    newTimer("displayTime3", 2500).start().wait(),
    getImage("image3").remove().log()
    ,
    getEyeTracker("tracker")
      .add(getImage("image0"), getImage("image1"), getImage("image2"), getImage("image3"))
      .log()    // if this line is missing, the eye-tracking data won't be sent to the server
      .start(),
    getEyeTracker("tracker").stop()  // stop now to prevent collecting unnecessary data
  )
  .noHeader()
  .log("Subject", getVar("Subject"))
  .log("Static_image1", row.Static_image1)
  .log("action1_image", row.action_image)
  .log("ID_No", row.random)
  .log("Static_image2", row.Static_image2)
  .log("wav_file", row.wav_file)
  .log("action_image2", row.action_image2)
  .log("ViewportWidth", window.innerWidth)    // screen size: width
  .log("ViewportHeight", window.innerHeight)  // screen size: height
)

SendResults("Send")
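[Editor's note] One likely cause of the missing data in a script like the one above is ordering: the tracker's .add()/.log()/.start() commands only run after all the images have already been displayed and removed, so nothing is recorded while the stimuli are on screen. The following is a minimal sketch, not a drop-in replacement, of the ordering the EyeTracker element expects; element names like "image0" are placeholders.

```javascript
// Sketch only: register the image, log, and start the tracker BEFORE
// showing the stimulus; stop it after the stimulus is removed.
Template("eng_01_geo_List_tense_old.csv", row =>
  newTrial("eyetracking",
    newEyeTracker("tracker", 1),
    getEyeTracker("tracker").calibrate(5).log(),  // re-check calibration, log scores
    newImage("image0", row.Static_image1),        // created but not yet printed
    getEyeTracker("tracker")
      .add(getImage("image0"))  // register the image as an area of interest
      .log()                    // without .log(), no data reaches the server
      .start(),                 // start recording before the image appears
    getImage("image0").center().print(),
    newTimer("displayTime0", 2000).start().wait(),
    getImage("image0").remove(),
    getEyeTracker("tracker").stop()  // stop once the trial content is gone
  )
)
```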
multanip
Participant
And the "please wait 1 min to load" screen is still a problem.
multanip
Participant
group,Condition,main_picture_video,middle_picture_s_video,wav_file,end_picture_video,ID_No
A,2,01_1_a_test_slow.mp4,01_1_a_test_slow.png,02_1_a_test.wav,01_1_b_test_slow.mp4,1
A,2,02_1_a_test_slow.mp4,02_1_a_test_slow.png,02_1_b_test.wav,02_1_b_test_slow.mp4,2
A,2,03_1_a_test_slow.mp4,03_1_a_test_slow.png,02_2_a_test.wav,03_1_b_test_slow.mp4,3
A,2,04_1_a_test_slow.mp4,04_1_a_test_slow.png,02_2_b_test.wav,04_1_b_test_slow.mp4,4
A,2,05_1_a_test_slow.mp4,05_1_a_test_slow.png,03_1_a_test.wav,05_1_b_test_slow.mp4,5
A,2,06_1_a_test_slow.mp4,06_1_a_test_slow.png,03_1_b_test.wav,06_1_b_test_slow.mp4,6
A,2,07_1_a_test_slow.mp4,07_1_a_test_slow.png,03_2_a_test.wav,07_1_b_test_slow.mp4,7
A,2,08_1_a_test_slow.mp4,08_1_a_test_slow.png,03_2_b_test.wav,08_1_b_test_slow.mp4,8
A,2,09_1_a_test_slow.mp4,09_1_a_test_slow.png,04_1_a_test.wav,09_1_b_test_slow.mp4,9
A,2,10_1_a_test_slow.mp4,10_1_a_test_slow.png,04_1_b_test.wav,10_1_b_test_slow.mp4,10
multanip
Participant
UPDATED script. I have used the same script before for two videos and a picture in the middle with audio, but this time I have added many more videos, pictures, and audio files. Each condition has ten different videos, pictures, and audio files,
so there are 20 different videos, 10 pictures, and 10 audio files, all uploaded to a server. The intended sequence is: a video plays to the end, then after 1000 ms and a 400 ms gap a static picture appears with audio, then a 400 ms gap, and then a second video, which again needs to finish playing. I have uploaded all 35 videos, 15 audio files, and 35 pictures to the server (and will be adding more).
My script and my csv file are below. But when I run the script and I get to the main Template which containes the experiment and it goes to the first video before the middle one. It just shows black screen with play button that u cannot press play and it is stuck at that canvas with the black screen.PennController.ResetPrefix(null) // Keep this here PennController.DebugOff() // PreloadZip("https://psyli-lab.research-zas.de/Eye-tracking_eng_Priya/Eyetracker_eng_resources/Audio.zip") PreloadZip("https://psyli-lab.research-zas.de/Eye-tracking_eng_Priya/Eyetracker_eng_resources/pic_video.zip") PreloadZip("https://psyli-lab.research-zas.de/Eye-tracking_eng_Priya/Eyetracker_eng_resources/new_video_mute.zip") // EyeTrackerURL("https://psyli-lab.research-zas.de/Eye-tracking_eng_Priya/Eyetracker_eng_results/Eng_eyetracker_priya.php") // var showProgressBar = false; //Sequence of the experiment Sequence("Preload","Welcome","Webcam_Check","ChromeCheck","L1Check","Introduction","Consent","QuestionairePage","WebcamSetUp","AudioSetUp","Instructions","Counter",randomize("eyetracking"),"Send","Exit") // CheckPreloaded CheckPreloaded("Preload") //Welcome Message newTrial("Welcome", newVar("Subject", randomnumber = Math.floor(Math.random()*1000000)) .global() .log() , newText("WelcomeMessage", "<b>Hello and Thank you very much for participating in our eye-tracking experiment.</b><br><br>Before we proceed, we have three quick questions.</b> <br><br>Please press the <b>SPACEBAR</b> to continue.") , newCanvas("InstructionsCanvas", "80vw" , "40vh") .add(0,0, getText("WelcomeMessage"))//https://farm.pcibex.net/r/wqCsWk/experiment.html?test=true .print("center at 50%", "top at 25%") , newKey("next", " ") .wait() ) //Asking Participants whether they give permission to the Webcam, reminder to use chrome, and English language skill L1. If answer 'no' to any of these questiosn, they cannot participate in the experiment. 
newTrial("Webcam_Check", newText("Permission_Webcam", "Question 1:<br><br> In order to be able to record your line of sight on the computer screen, we ask for permission to access your webcam. We will <b>not</b> record a video of you or collect any other data that allows conclusions to be drawn about your identity. Do you give your permission for us to access your webcam?") , newText("No_Permission", "<p><b>No, I don't give permission.</b><br>Click the 'N' on the keyboard</p>") , newText("Yes_Permission", "<p><b>Yes, I give permission.</b><br>Click the 'Y' on the keyboard</p>") , newCanvas("ChecksCanvas", "60vw" , "20vh") .add("center at 50%", "top at 10%", getText("Permission_Webcam")) .add("center at 20%", "top at 80%", getText("Yes_Permission")) .add("center at 80%", "top at 80%", getText("No_Permission")) .print("center at 50%","top at 25%") , newKey("yes_no", "NY") .wait() , getKey("yes_no") .test.pressed("Y") .failure( getCanvas("ChecksCanvas") .remove() , newCanvas("No_Permission", "60vw" , "20vh") .add("center at 50%", "top at 10%", newText("Unfortunately, you cannot participate in this study. Please close this window by ending the browser session, possible pop-up windows can be ignored.")) .print("center at 50%", "top at 25%") , newButton("waitforever") .wait() ) ) //Chrome newTrial("ChromeCheck", newText("ChromeCheckText", "Question 2:<br><br>The display and the course of the experiment will only work without problems if you use the Google Chrome browser on a laptop or desktop computer (not on a smartphone or tablet). 
Are you currently using Google Chrome?</p>") , newText("NoChrome", "<p><b>No, I'm not currently using any of the options</b><br>Click the 'N' on the keyboard</p>") , newText("YesChrome", "<p><b>Yes, I'm currently using Chrome Browser</b><br>Click the 'Y' on the keyboard</p>") , newCanvas("ChecksCanvas", "60vw" , "20vh") .add("center at 50%", "top at 10%", getText("ChromeCheckText")) .add("center at 20%", "top at 80%", getText("YesChrome")) .add("center at 80%", "top at 80%", getText("NoChrome")) .print("center at 50%", "top at 25%") , newKey("yesno", "NY") .wait() , getKey("yesno") .test.pressed("Y") .failure( getCanvas("ChecksCanvas") .remove() , newCanvas("NoChrome", "60vw" , "20vh") .add("center at 50%", "top at 10%", newText("Unfortunately, the experiment only works with Google Chrome (which can be downloaded for free). Please close this window by ending the browser session (possible pop-up windows can be ignored) and open the link to the experiment again with Google Chrome.")) .print("center at 50%", "top at 25%") , newButton("waitforever") .wait() ) ) //Language check newTrial("L1Check", newText("L1CheckText","Question 3:<br><br> In order to take part in this study, you must speak <b>English as your mother tongue</b>. Is English your mother tongue?</p>") , newText("NoL1","<p><b>No, English is not my mother tongue</b><br>Click 'N' on the keyboard<b><p>") , newText("YesL1","<p><b>Yes, English is my mother tongue</b><br>Click 'Y' on the keyboard</b></p>") , newCanvas("ChecksCanvas", "60vw" , "20vh") .add("center at 50%", "top at 10%", getText("L1CheckText")) .add("center at 20%", "top at 80%", getText("YesL1")) .add("center at 80%", "top at 80%", getText("NoL1")) .print("center at 50%", "top at 25%") , newKey("yesno", "NY") .wait() , getKey("yesno") .test.pressed("Y") .failure( getCanvas("ChecksCanvas") .remove() , newCanvas("NoL1", "60vw" , "20vh") .add("center at 50%", "top at 10%", newText("Unfortunately, you are not eligible to participate in this study. 
Please close this window by ending your browser session (any pop-up windows can be ignored.)")) .print("center at 50%", "top at 25%") , newButton("waitforever") .wait() ) ) //Introduction newTrial("Introduction", newText("IntroductionText","<b>Thank you for answering the questions about the system requirements!</b><br><br>The eye-tracking experiment will be very simple and will take about 10-15 minutes. Look at the three pictures and listen to the sentence for the middle picture. <br><br>Your task is to look at the images and the content of the spoken sentences as closely as possible. Press SPACE to continue in between pictures and trials. During the entire experiment, please try to sit as still as possible but comfortably and never take your eyes off the computer screen.<br><br>We will <b>not</b> record a video of you or collect any other data that allows conclusions to be drawn about your identity. We will only collect data related to your eye movements on the computer screen.<br><br>It is important that you are in a well-lit and quiet environment, otherwise the webcam will not be able to detect your eye movements. Please turn off any music and other applications and websites (such as cell phones, email notifications, etc.) that could distract you during the experiment.<br><br>The next pages will be displayed in full screen mode. 
Please do not close the full screen for the remainder of the experiment.<br><br>Click the <b>SPACEBAR</b> to continue.") , newCanvas("InstructionsCanvas", "60vw" , "20vh") .add(0,0, getText("IntroductionText")) .css("front-size","25px") .print("center at 50%", "top at 25%") , newKey("next", " ") .wait() , fullscreen() ) //.setOption("hideProgressBar",true) //Consent Form newTrial("Consent", newHtml("consent_form","consent_pilot_eng.html") .center() .cssContainer({"width":"720px"}) .checkboxWarning("You must give your consent to continue.") .print() , newButton("continue", "Click here to continue.") .center() .print() .wait(getHtml("consent_form").test.complete() .failure(getHtml("consent_form").warn()) ) ) //Participant Questionaire newTrial("QuestionairePage", newHtml("Questionnaire","questionnaire_pilot_eng.html") .center() .cssContainer({"width":"720px"}) .checkboxWarning("You must give your consent to continue.") .print() , newButton("continue","Click here to continue.") .center() .print() .wait(getHtml("Questionnaire").test.complete() .failure(getHtml("Questionnaire").warn()) ) ) //Set up the webcam:need calibrtion, the resources will preload at the same time. newTrial("WebcamSetUp", newText("WebcamSetUpText", "The next few pages will help you set up the webcam and audio player. The webcam is set up through a simple calibration process. You will see video of the webcam recording during the calibration process. As previously mentioned, we will not store any footage of these webcam recordings. Please make sure your face is fully visible and that you are centered in front of your webcam.<br><br>You can start the calibration process when the box is GREEN by clicking the start button that will appear in the center of the computer screen.<br><br>During the calibration process, you will see eight dots on the screen. Please click on all these points and follow the mouse pointer closely with your eyes. 
Once you have clicked all the dots, a new dot will appear in the middle of the screen. Please click on this dot and <b>look at it for three seconds</b> so that the algorithm can check if the calibration process was successful.<br><br>If the calibration fails, the last step repeated again. <br><br> Press <b>SPACEBAR</b> to continue.") .center() .print() , newKey("next", " ") .wait( newEyeTracker("tracker").test.ready()) , fullscreen() , // Start calibrating the eye-tracker, allow for up to 3 attempts // 50 means that calibration succeeds when 50% of the estimates match the click coordinates // Increase the threshold for better accuracy, but more risks of losing participants getEyeTracker("tracker").calibrate(5,3) ) .noHeader() //.setOption("hideProgressBar", true) // Audio set-up newTrial("AudioSetUp", newText("AudioInstructions", "The webcam is now calibrated so that the audio player can be set up in the next step. In this experiment, you will hear several sentences. You can play a sample sentence that will appear in this study by pressing the play button. Please use the audio recording to adjust the volume as well. You can play this sample set as many times as you like. Once you're ready, you can go to the next page.") , newAudio("cocktail","02_1_b_test.wav") ///ADDD EXAMPLE WAV TO CHECK , newCanvas( "myCanvas", "60vw" , "60vh") .settings.add(0,0, getText("AudioInstructions")) .settings.add("center at 50%", "top at 20%", getAudio("cocktail")) .print("center at 50%", "top at 25%") , newButton("Next Page") .center() .print("center at 50%", "top at 70%") .wait() ) // Experiment instructions// newTrial("Instructions", newText("TaskInstructions", "<p>We are ready to start the experiment! The experiment round will start immediately. 
<b>PLEASE keep your gaze FIXED</b> on the computer at ALL TIMES and head must remain still.<br><br>In experiment round, <b>DO NOT</b> scroll or move up and down or sideways, <b>ONLY</b> or <b>DO as Instructed</b>.<br><br>BEFORE each section, a green dot will appear in the middle of the screen for web re-check. Just look at it <b>for THREE SECONDS</b>. IF the camera is still calibrated, it will continue, otherwise the computer will recalibrate. In Experiment, you will see a action video followed by static picture with audio and a final action video and <b>DO NOT TOUCH THE MOUSE DURING THIS SCREEN</b>. You <b>MUST QUICKLY</b> only look at the picture and video.</b> <br><br> We will now start with the experimental run. It should take approximately 15-20 minutes.") , newCanvas("myCanvas", 800 , 300) .settings.add(0,0, getText("TaskInstructions")) .print("center at 50%", "top at 25%") , newButton("Begin the Experiment") .center() .print("center at 50%", "top at 70%") .wait() ) SetCounter("Counter", "inc", 1); // Template("Video_EyeTracking_english_mp4.csv", row=> // all newText commands will automatically print the text centered when run newTrial("eyetracking", defaultImage.center().print(), // all newImage commands will automatically print the image centered when run defaultImage.size(950,650).print(), defaultVideo.center().print(), // all newImage commands will automatically print the image centered when run defaultVideo.size(1000,620).print()//, .print("center at 50vw", "center at 50vh") //defaultText.center().print() , newEyeTracker("tracker",1).callback( function (x,y) { if (this != getEyeTracker("tracker")._element.elements[0]) return; getEyeTracker("tracker")._element.counts._Xs.push(x); getEyeTracker("tracker")._element.counts._Ys.push(y); }) , newFunction(()=>{ getEyeTracker("tracker")._element.counts._Xs = []; getEyeTracker("tracker")._element.counts._Ys = []; }).call() , getEyeTracker("tracker") .calibrate(5) // Make sure that the tracker is still calibrated 
.log() // log the calibration scores , newTimer("pre-trial", 400).start().wait() , newVideo("image1", row.main_picture_video).print().play(), getVideo("image1").wait("first"), newTimer("displayTime", 1000).start().wait(), //.start().wait() // wait 400ms before moving on to the next image getVideo("image1").remove() .log() , newTimer("trial1", 400).start().wait() , newImage("image2", row.middle_picture_s_video),// we always use middleImage.png as the middle image newAudio("audio", row.wav_file).play(), // the audio file is set in design.csv getAudio("audio").wait("first"), newTimer("displayTime", 400).start().wait(), //.start(),//.wait() //wait 400ms befoere moving to the last iamge getImage("image2").remove() // wait until the Audio element has finished playing back .log() , newTimer("trial1", 400).start().wait() , newVideo("image3", row.end_picture_video).print().play(), getVideo("image3").wait("first"), // the third image is set in design.csv, newTimer("displayTime", 750).start().wait(), getVideo("image3").remove() .log() ) .noHeader() .log("group" ,row.group) .log("Condition" ,row.Condition) .log("ID_No" ,row.ID_No) .log("main_video" ,row.main_picture_video) .log("end_video" ,row.end_picture_video) .log("Neutral_picture_video" ,row.middle_picture_s_video) .log("ViewportWidth" ,window.innerWidth ) // Screensize: width .log("ViewportHeight" ,window.innerHeight ) // Screensize: Height ) // SendResults("Send"); //Exit newTrial("Exit", exitFullscreen() , newText("Final","Thank you. This is the end of the experiment, you can now close this window. Thank you!") , newCanvas("myCanvas", "60vw" , "60vh") .settings.add(0,0, getText("Final")) .css("font-size", "1.1em") .print("center at 50%", "middle at 50%") , newButton("waitforever") .wait() // Not printed: wait on this page forever )
multanip
Participant
Do you have a solution for the preloading problem?
multanip
Participant
NEVER MIND, THE VIDEOS WORKED ALL THE WAY!
BUT the "preloading will take a minute" screen is still a problem, and I need to add more videos and trials to this, so I am concerned it will become a problem. It does this with BOTH my picture and my video eye-tracking projects.
THANK YOU!
multanip
Participant
OK, I have a new script. The two videos and the picture with audio are working, but the first and last videos don't finish playing before it moves on.
https://farm.pcibex.net/r/OFgwcs/experiment.html?test=true
The "screen will take 1 minute to load" message is still showing up.
PennController.ResetPrefix(null) // Keep this here
PennController.DebugOff()
// The experiment is designed for an English-based study
// CREDIT: the pictures and audio are credited to the Psychling lab and Dr. Dato
// Credit: the script was learned and borrowed from the PCIbex website
PreloadZip("https://psyli-lab.research-zas.de/Eye-tracking_eng_Priya/Eyetracker_eng_resources/Audio.zip")
PreloadZip("https://psyli-lab.research-zas.de/Eye-tracking_eng_Priya/Eyetracker_eng_resources/Neutral.zip")
PreloadZip("https://psyli-lab.research-zas.de/Eye-tracking_eng_Priya/Eyetracker_eng_resources/Videos.zip")
//
EyeTrackerURL("https://psyli-lab.research-zas.de/Eye-tracking_eng_Priya/Eyetracker_eng_results/Eng_eyetracker_priya.php")
//
// var showProgressBar = false;
//
Sequence("Preload","Counter",randomize("eyetracking"),"Exit")

// CheckPreloaded
CheckPreloaded("Preload")

//
SetCounter("Counter", "inc", 1);

//
Template("Video_EyeTracking_english.csv", row =>
  newTrial("eyetracking",
    // all newImage/newVideo commands will automatically print centered when run
    defaultImage.center().print(),
    defaultImage.size(950,650).print(),
    defaultVideo.center().print(),
    defaultVideo.size(1000,620).print()
    ,
    newEyeTracker("tracker",1).callback(function (x,y) {
      if (this != getEyeTracker("tracker")._element.elements[0]) return;
      getEyeTracker("tracker")._element.counts._Xs.push(x);
      getEyeTracker("tracker")._element.counts._Ys.push(y);
    })
    ,
    newFunction(()=>{
      getEyeTracker("tracker")._element.counts._Xs = [];
      getEyeTracker("tracker")._element.counts._Ys = [];
    }).call()
    ,
    getEyeTracker("tracker")
      .calibrate(5)  // make sure that the tracker is still calibrated
      .log()         // log the calibration scores
    ,
    newTimer("pre-trial", 400).start().wait()
    ,
    newVideo("image1", row.main_picture_video), // the first video is set in design.csv
    getVideo("image1").print().play(),
    newTimer("displayTime", 3000).start().wait(),
    getVideo("image1").remove().log()
    ,
    newTimer("trial1", 400).start().wait()
    ,
    newImage("video2", row.middle_picture_s_video), // the middle static picture
    newAudio("audio", row.wav_file).play(),         // the audio file is set in design.csv
    getAudio("audio").wait("first"),                // wait until the audio has finished playing back
    newTimer("displayTime", 400).start().wait(),
    getImage("video2").remove().log()
    ,
    newTimer("trial1", 400).start().wait()
    ,
    newVideo("image3", row.end_picture_video),
    getVideo("image3").print().play(), // the final video is set in design.csv
    newTimer("displayTime", 3000).start().wait(),
    getVideo("image3").remove().log()
  )
  .noHeader()
  .log("group", row.group)
  .log("Condition", row.Condition)
  .log("ID_No", row.ID_No)
  .log("main_video", row.main_picture_video)
  .log("end_video", row.end_picture_video)
  .log("Neutral_picture_video", row.middle_picture_s_video)
  .log("ViewportWidth", window.innerWidth)    // screen size: width
  .log("ViewportHeight", window.innerHeight)  // screen size: height
)

// SendResults("Send");

// Exit
newTrial("Exit",
  exitFullscreen()
  ,
  newText("Final","Thank you. This is the end of the experiment, you can now close this window. Thank you!")
  ,
  newCanvas("myCanvas", "60vw", "60vh")
    .settings.add(0,0, getText("Final"))
    .css("font-size", "1.1em")
    .print("center at 50%", "middle at 50%")
  ,
  newButton("waitforever").wait() // not printed: wait on this page forever
)
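[Editor's note] If the issue is that the first and last videos are cut off by the fixed-duration timers, one option is to wait on the Video element itself, which resolves only when playback ends. This is a hedged sketch, assuming the element names from the script above are kept:

```javascript
// Sketch only: replace the fixed 3000 ms timer with a wait on the video,
// so the trial advances exactly when playback finishes.
newVideo("image1", row.main_picture_video).print().play(),
getVideo("image1").wait(),            // resolves when the video has finished playing
getVideo("image1").remove().log(),
newTimer("gap1", 400).start().wait()  // then the 400 ms gap before the next stimulus
```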
multanip
Participant
ALSO, how can I get rid of the "loading will take 1 min" screen?
This is for an eye-tracking experiment, and that screen could possibly interfere with it.
August 29, 2022 at 2:02 pm in reply to: How to go automatically from one image to the next within a row or from one row to #8379
multanip
Participant
I will try this, thank you.
August 29, 2022 at 1:27 pm in reply to: How to go automatically from one image to the next within a row or from one row to #8377
multanip
Participant
OR, I think my supervisor wants the first image to remain for 1000 ms, then a 400 ms gap, then the picture with audio until the audio finishes, then a 400 ms gap, then the last picture for 700 ms, and then moving on to the next item, with calibration in between. BUT EITHER WAY, I need to learn to do BOTH for future experiments, and I don't know if the coding is really that dramatically different for on-screen time vs. transition time. BUT I will start with what you wrote.
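[Editor's note] The timeline described above could be sketched roughly like this inside a PennController trial. The element and column names (image1, row.wav_file, etc.) are assumptions carried over from the earlier scripts, not part of any working code:

```javascript
// Sketch of the supervisor's timeline: 1000 ms image, 400 ms gap,
// picture + audio until the audio ends, 400 ms gap, 700 ms final picture.
newImage("image1", row.action_image).center().print(),
newTimer("show1", 1000).start().wait(),   // first image stays for 1000 ms
getImage("image1").remove(),
newTimer("gap1", 400).start().wait(),     // 400 ms blank gap
newImage("image2", row.Static_image2).center().print(),
newAudio("audio", row.wav_file).play(),
getAudio("audio").wait(),                 // picture stays until the audio finishes
getImage("image2").remove(),
newTimer("gap2", 400).start().wait(),     // 400 ms blank gap
newImage("image3", row.action_image2).center().print(),
newTimer("show3", 700).start().wait(),    // last picture for 700 ms
getImage("image3").remove()
```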
thanks.