video not showing up

    #8386
    multanip
    Participant

    I have designed a study very similar to this one: https://farm.pcibex.net/r/nkHbDP/

    My experiment:
    https://farm.pcibex.net/experiments/OFgwcs/edit
    https://farm.pcibex.net/r/OFgwcs/experiment.html?test=true

    Right now I have the csv file written so that it retrieves the same first video, the same second image (with different audio), and the same third video for all the items in all the sets.

    I have created one with pictures and audio on the middle picture, just like the one above, and it worked. But now I want to change the first and the third picture to videos; the code and csv file are below. IT WORKS for the first video and the second picture with audio, but it gets stuck on the second picture after the audio finishes and doesn’t move on to the third item (the video) in the set (there are error messages). It needs to loop over the 4 items in each group.

    So it’s VIDEO –> static picture with audio –> VIDEO. Everything except the csv file is uploaded to the server from which it retrieves the videos, along with the image and audio. I can manually move to the 1st picture of the next item, but once it goes to the picture with audio it is stuck again.

    These are the error messages.
    [0:23:30] Attempted to get an invalid element;;Image (newTrial: 1-eyetracking)
    [0:23:30] Attempted to get an invalid element;;Image (newTrial: 1-eyetracking)

    My script

    PennController.ResetPrefix(null) // Keep this here
    PennController.DebugOff()
    //The experiment is designed for an English-based study
    //CREDIT: pictures and audio courtesy of the Psychling lab and Dr. Dato
    //Credit: the script is adapted from the PCIbex website
    PreloadZip("https://psyli-lab.research-zas.de/Eye-tracking_eng_Priya/Eyetracker_eng_resources/Audio.zip")
    PreloadZip("https://psyli-lab.research-zas.de/Eye-tracking_eng_Priya/Eyetracker_eng_resources/Neutral.zip")
    PreloadZip("https://psyli-lab.research-zas.de/Eye-tracking_eng_Priya/Eyetracker_eng_resources/Videos.zip")
    //
    EyeTrackerURL("https://psyli-lab.research-zas.de/Eye-tracking_eng_Priya/Eyetracker_eng_results/Eng_eyetracker_priya.php")
    //
    Sequence("Preload","eyetracking")
    //
    var showProgressBar = false;
    //
    CheckPreloaded ("Preload")
    //
    Template("Video_EyeTracking_english.csv", row=>  // all newText commands will automatically print the text centered when run
        newTrial("eyetracking",
            defaultImage.center().print(),     // all newImage commands will automatically print the image centered when run
            defaultImage.size(400,300).print(),
            defaultVideo.center().print(),     // all newVideo commands will automatically print the video centered when run
            defaultVideo.size(1180,720).print()//,
            .print("center at 50vw", "center at 50vh")
            //defaultText.center().print()
        ,
        newEyeTracker("tracker",1).callback( function (x,y) {
            if (this != getEyeTracker("tracker")._element.elements[0]) return;
            getEyeTracker("tracker")._element.counts._Xs.push(x);
            getEyeTracker("tracker")._element.counts._Ys.push(y); 
            })
        ,
        newFunction(()=>{
            getEyeTracker("tracker")._element.counts._Xs = [];
            getEyeTracker("tracker")._element.counts._Ys = [];
        }).call()
        ,
            getEyeTracker("tracker")
                .calibrate(5)  // Make sure that the tracker is still calibrated
                .log()  // log the calibration scores
        ,
        newTimer("pre-trial", 400).start().wait()
        ,
        newVideo("image1", row.main_picture_video),// the first image is set in design.csv
        getVideo("image1").print().play(),
        newTimer("displayTime", 1000).start().wait(), //.start().wait() // wait 400ms before moving on to the next image
        getVideo("image1").remove()
        .log()
        ,
        newTimer("trial1", 400).start().wait()
        ,
        newImage("video2", row.middle_picture_s_video),// we always use middleImage.png as the middle image
        newAudio("audio", row.wav_file).play(), // the audio file is set in design.csv
        getAudio("audio").wait("first"), // wait until the Audio element has finished playing back
        newTimer("displayTime", 400).start().wait(), //.start(),//.wait() //wait 400ms befoere moving to the last iamge
        getImage("").remove()
        .log()
        ,
        newTimer("trial1", 400).start().wait()
        ,
        newVideo("image3", row.end_picture_video),
        getVideo("image3").print().play(),// the third image is set in design.csv,
        newTimer("displayTime", 1000).start().wait(),
        getVideo("image3").remove()
        .log()
        )
    .noHeader()
      .log("group"                  ,row.group)
      .log("Condition"              ,row.Condition)
      .log("ID_No"                  ,row.ID_No)
      .log("main_video"             ,row.main_picture_video)
      .log("end_video"              ,row.end_picture_video)
      .log("Neutral_picture_video"  ,row.middle_picture_s_video)
      .log("ViewportWidth" 	    	,window.innerWidth	 		) // Screensize: width
      .log("ViewportHeight"	    	,window.innerHeight 		) // Screensize: Height
    )
    //
    SendResults("Send");
    
    //Exit
    newTrial("Exit",
        exitFullscreen()
        ,
        newText("Final","Thank you. This is the end of the experiment, you can now close this window. Thank you!")
        ,
         newCanvas("myCanvas", "60vw" , "60vh")
            .settings.add(0,0, getText("Final"))
            .css("font-size", "1.1em")
            .print("center at 50%", "middle at 50%")
        ,
        newButton("waitforever")
            .wait() // Not printed: wait on this page forever
    )
    
    My csv file
    
    group,Condition,Sentence,main_picture_video,middle_picture_s_video,wav_file,end_picture_video,ID_No
    A,2,The experiment will next prepare the milkshake,02_1_a_test_slow_mute.mp4,Neutral.jpg,02_1_a_test.wav,02_1_b_test_slow_mute.mp4,1
    A,2,The experimenter has recently prepared the milkshake,02_1_a_test_slow_mute.mp4,Neutral.jpg,02_2_a_test.wav,02_1_b_test_slow_mute.mp4,3
    A,2,The experiment will next prepare the cocktail,02_1_a_test_slow_mute.mp4,Neutral.jpg,02_1_b_test.wav,02_1_b_test_slow_mute.mp4,2
    A,2,The experimenter has recently prepared the cocktail,02_1_a_test_slow_mute.mp4,Neutral.jpg,02_2_b_test.wav,02_1_b_test_slow_mute.mp4,4
    B,3,The exprimenter will now butter the crossiant,02_1_a_test_slow_mute.mp4,Neutral.jpg,03_1_a_test.wav,02_1_b_test_slow_mute.mp4,5
    B,3,The exprimenter has just buttered the crossiant,02_1_a_test_slow_mute.mp4,Neutral.jpg,03_2_a_test.wav,02_1_b_test_slow_mute.mp4,7
    B,3,The exprimenter will now butter the bread slice,02_1_a_test_slow_mute.mp4,Neutral.jpg,03_1_b_test.wav,02_1_b_test_slow_mute.mp4,6
    B,3,The experimenter has just buttered the bread slice,02_1_a_test_slow_mute.mp4,Neutral.jpg,03_2_b_test.wav,02_1_b_test_slow_mute.mp4,8
    C,4,The experimenter will immediately water the sprouts,02_1_a_test_slow_mute.mp4,Neutral.jpg,04_1_a_test.wav,02_1_b_test_slow_mute.mp4,9
    C,4,The experimenter has previously watered the sprouts,02_1_a_test_slow_mute.mp4,Neutral.jpg,04_2_a_test.wav,02_1_b_test_slow_mute.mp4,11
    C,4,The experimenter will immediately water the tulips,02_1_a_test_slow_mute.mp4,Neutral.jpg,04_1_b_test.wav,02_1_b_test_slow_mute.mp4,10
    C,4,The experimenter has previously watered the tulips,02_1_a_test_slow_mute.mp4,Neutral.jpg,04_2_b_test.wav,02_1_b_test_slow_mute.mp4,12
    D,5,The experimeter will soon polish the candle holder,02_1_a_test_slow_mute.mp4,Neutral.jpg,05_1_a_test.wav,02_1_b_test_slow_mute.mp4,13
    D,5,The experimeter has already polished the candle holder,02_1_a_test_slow_mute.mp4,Neutral.jpg,05_2_a_test.wav,02_1_b_test_slow_mute.mp4,15
    D,5,The experimeter will soon polish the glasses,02_1_a_test_slow_mute.mp4,Neutral.jpg,05_1_b_test.wav,02_1_b_test_slow_mute.mp4,14
    D,5,The experimeter has already polished the glasses,02_1_a_test_slow_mute.mp4,Neutral.jpg,05_2_b_test.wav,02_1_b_test_slow_mute.mp4,16
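
    The two “Attempted to get an invalid element;;Image” messages quoted above appear to come from the middle block of the trial: the Image element is created under the name “video2”, but it is later retrieved with getImage(""), i.e. with an empty name that matches no element. A sketch of that block with the reference written out, reusing the element names from the script above (the next script in this thread makes the same change):

    newImage("video2", row.middle_picture_s_video), // static middle picture; printed by the defaultImage commands at the top of the trial
    newAudio("audio", row.wav_file).play(),         // sentence audio from the csv file
    getAudio("audio").wait("first"),                // wait until the audio has finished playing
    newTimer("displayTime", 400).start().wait(),    // keep the picture up for another 400ms
    getImage("video2")                              // must use the same name given to newImage;
        .remove()                                   // getImage("") cannot resolve to any element and
        .log()                                      // triggers "Attempted to get an invalid element"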
    #8387
    multanip
    Participant

    ALSO, HOW can I get rid of the “loading will take a 1 min” screen?

    This is for an eye-tracking experiment, and this screen could possibly interfere with the experiment.

    #8388
    multanip
    Participant

    OK, I have a new script. The two videos and the picture with audio are working, but the first and last videos don’t finish playing before it moves on (see the sketch after the script below).
    https://farm.pcibex.net/r/OFgwcs/experiment.html?test=true

    The “it will take a 1 minute to load” screen is still showing up.

    PennController.ResetPrefix(null) // Keep this here
    PennController.DebugOff()
    //The experiment is designed for an English-based study
    //CREDIT: pictures and audio courtesy of the Psychling lab and Dr. Dato
    //Credit: the script is adapted from the PCIbex website
    PreloadZip("https://psyli-lab.research-zas.de/Eye-tracking_eng_Priya/Eyetracker_eng_resources/Audio.zip")
    PreloadZip("https://psyli-lab.research-zas.de/Eye-tracking_eng_Priya/Eyetracker_eng_resources/Neutral.zip")
    PreloadZip("https://psyli-lab.research-zas.de/Eye-tracking_eng_Priya/Eyetracker_eng_resources/Videos.zip")
    //
    EyeTrackerURL("https://psyli-lab.research-zas.de/Eye-tracking_eng_Priya/Eyetracker_eng_results/Eng_eyetracker_priya.php")
    //
    var showProgressBar = false;
    //
    Sequence("Preload","Counter",randomize("eyetracking"),"Exit")
    //
    CheckPreloaded ("Preload")
    //
    SetCounter("Counter", "inc", 1);
    //
    Template("Video_EyeTracking_english.csv", row=>  // all newText commands will automatically print the text centered when run
        newTrial("eyetracking",
            defaultImage.center().print(),     // all newImage commands will automatically print the image centered when run
            defaultImage.size(950,650).print(),
            defaultVideo.center().print(),     // all newVideo commands will automatically print the video centered when run
            defaultVideo.size(1000,620).print()//,
            .print("center at 50vw", "center at 50vh")
            //defaultText.center().print()
        ,
        newEyeTracker("tracker",1).callback( function (x,y) {
            if (this != getEyeTracker("tracker")._element.elements[0]) return;
            getEyeTracker("tracker")._element.counts._Xs.push(x);
            getEyeTracker("tracker")._element.counts._Ys.push(y); 
            })
        ,
        newFunction(()=>{
            getEyeTracker("tracker")._element.counts._Xs = [];
            getEyeTracker("tracker")._element.counts._Ys = [];
        }).call()
        ,
            getEyeTracker("tracker")
                .calibrate(5)  // Make sure that the tracker is still calibrated
                .log()  // log the calibration scores
        ,
        newTimer("pre-trial", 400).start().wait()
        ,
        newVideo("image1", row.main_picture_video),// the first image is set in design.csv
        getVideo("image1").print().play(),
        newTimer("displayTime", 3000).start().wait(), //.start().wait() // wait 400ms before moving on to the next image
        getVideo("image1").remove()
        .log()
        ,
        newTimer("trial1", 400).start().wait()
        ,
        newImage("video2", row.middle_picture_s_video),// we always use middleImage.png as the middle image
        newAudio("audio", row.wav_file).play(), // the audio file is set in design.csv
        getAudio("audio").wait("first"), // wait until the Audio element has finished playing back
        newTimer("displayTime", 400).start().wait(), //.start(),//.wait() //wait 400ms befoere moving to the last iamge
        getImage("video2").remove()
        .log()
        ,
        newTimer("trial1", 400).start().wait()
        ,
        newVideo("image3", row.end_picture_video),
        getVideo("image3").print().play(),// the third image is set in design.csv,
        newTimer("displayTime", 3000).start().wait(),
        getVideo("image3").remove()
        .log()
        )
    .noHeader()
      .log("group"                  ,row.group)
      .log("Condition"              ,row.Condition)
      .log("ID_No"                  ,row.ID_No)
      .log("main_video"             ,row.main_picture_video)
      .log("end_video"              ,row.end_picture_video)
      .log("Neutral_picture_video"  ,row.middle_picture_s_video)
      .log("ViewportWidth" 	    	,window.innerWidth	 		) // Screensize: width
      .log("ViewportHeight"	    	,window.innerHeight 		) // Screensize: Height
    )
    //
    SendResults("Send");
    
    //Exit
    newTrial("Exit",
        exitFullscreen()
        ,
        newText("Final","Thank you. This is the end of the experiment, you can now close this window. Thank you!")
        ,
         newCanvas("myCanvas", "60vw" , "60vh")
            .settings.add(0,0, getText("Final"))
            .css("font-size", "1.1em")
            .print("center at 50%", "middle at 50%")
        ,
        newButton("waitforever")
            .wait() // Not printed: wait on this page forever
    )
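
    The cut-off videos follow from the timing in the script above: each video is removed after a fixed newTimer("displayTime", 3000) rather than after playback has ended, so any video longer than three seconds is interrupted. Below is a minimal sketch of a first-video block that instead waits for the end of playback, using the same getVideo(...).wait("first") idiom that appears in the next script in this thread; the trial label "video-sketch", the element name "firstVideo" and the timer name "afterVideo" are illustrative only:

    Template("Video_EyeTracking_english.csv", row =>
        newTrial("video-sketch",
            newVideo("firstVideo", row.main_picture_video)
                .size(1000, 620)
                .print()
                .play()
            ,
            getVideo("firstVideo")
                .wait("first")               // resolves once the video has finished playing
            ,
            newTimer("afterVideo", 1000)
                .start()
                .wait()                      // keep the final frame on screen for another second
            ,
            getVideo("firstVideo")
                .remove()
                .log()
        )
    )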
    #8389
    multanip
    Participant

    NEVER MIND, THE VIDEOS WORKED ALL THE WAY!!!!!!!!!!!!!!

    BUT, the “preloading will take a minute” screen is still a problem, and I need to add more videos and trials to this, so I am concerned it will become a bigger problem. IT does this with BOTH my picture and my video eye-tracking projects.

    THANK YOU!

    #8392
    multanip
    Participant

    DO you have a solution for the preloading problem?

    #8394
    multanip
    Participant

    UPDATE!!!! New script. I have used the same script for 2 videos and a picture in the middle with audio, but this time I have added a lot more videos, audio files and middle pictures. Each one has ten different videos, pictures and audio files,
    so there would be 20 different videos, 10 pictures and 10 audio files, all uploaded to a server. A video should play to the end and stay on screen for 1000 ms, then a 400 ms gap, then the static picture with audio, then another 400 ms gap, and then the second video, which again needs to finish playing. I have uploaded all 35 videos, 15 audio files and 35 pictures to a server (and will be adding more).
    My script and my csv file are below. But when I run the script and get to the main Template that contains the experiment, it goes to the first video before the middle one. It just shows a black screen with a play button that you cannot press, and it is stuck on that canvas with the black screen.

    PennController.ResetPrefix(null) // Keep this here
    PennController.DebugOff()
    //
    PreloadZip("https://psyli-lab.research-zas.de/Eye-tracking_eng_Priya/Eyetracker_eng_resources/Audio.zip")
    PreloadZip("https://psyli-lab.research-zas.de/Eye-tracking_eng_Priya/Eyetracker_eng_resources/pic_video.zip")
    PreloadZip("https://psyli-lab.research-zas.de/Eye-tracking_eng_Priya/Eyetracker_eng_resources/new_video_mute.zip")
    //
    EyeTrackerURL("https://psyli-lab.research-zas.de/Eye-tracking_eng_Priya/Eyetracker_eng_results/Eng_eyetracker_priya.php")
    //
    var showProgressBar = false;
    
    //Sequence of the experiment
    Sequence("Preload","Welcome","Webcam_Check","ChromeCheck","L1Check","Introduction","Consent","QuestionairePage","WebcamSetUp","AudioSetUp","Instructions","Counter",randomize("eyetracking"),"Send","Exit")
    
    // CheckPreloaded
    CheckPreloaded("Preload")
    
    //Welcome Message
    newTrial("Welcome",
        newVar("Subject", randomnumber = Math.floor(Math.random()*1000000))
            .global()
            .log()
        ,
        newText("WelcomeMessage", "<b>Hello and Thank you very much for participating in our eye-tracking experiment.</b><br><br>Before we proceed, we have three quick questions.</b> <br><br>Please press the <b>SPACEBAR</b> to continue.")
        ,
        newCanvas("InstructionsCanvas", "80vw" , "40vh")
            .add(0,0, getText("WelcomeMessage"))//https://farm.pcibex.net/r/wqCsWk/experiment.html?test=true
            .print("center at 50%", "top at 25%")
        ,
        newKey("next", " ")
            .wait()
    )
    
    //Ask participants whether they give permission to use the webcam, remind them to use Chrome, and check that English is their L1. If they answer 'no' to any of these questions, they cannot participate in the experiment.
    newTrial("Webcam_Check",
        newText("Permission_Webcam", "Question 1:<br><br> In order to be able to record your line of sight on the computer screen, we ask for permission to access your webcam. We will <b>not</b> record a video of you or collect any other data that allows conclusions to be drawn about your identity. Do you give your permission for us to access your webcam?")
        ,
        newText("No_Permission", "<p><b>No, I don't give permission.</b><br>Click the 'N' on the keyboard</p>")
        ,
        newText("Yes_Permission", "<p><b>Yes, I give permission.</b><br>Click the 'Y' on the keyboard</p>")
        ,
        newCanvas("ChecksCanvas", "60vw" , "20vh") 
            .add("center at 50%", "top at 10%", getText("Permission_Webcam"))
            .add("center at 20%", "top at 80%", getText("Yes_Permission"))
            .add("center at 80%", "top at 80%", getText("No_Permission"))
            .print("center at 50%","top at 25%")
        , 
        newKey("yes_no", "NY")
            .wait()
        ,
        getKey("yes_no")
            .test.pressed("Y")
            .failure(
                getCanvas("ChecksCanvas")
                    .remove()
                ,
                newCanvas("No_Permission", "60vw" , "20vh")
                    .add("center at 50%", "top at 10%", newText("Unfortunately, you cannot participate in this study. Please close this window by ending the browser session, possible pop-up windows can be ignored."))
                    .print("center at 50%", "top at 25%")
                ,
                newButton("waitforever")
                .wait()
            )
    )
    //Chrome
    newTrial("ChromeCheck",
        newText("ChromeCheckText", "Question 2:<br><br>The display and the course of the experiment will only work without problems if you use the Google Chrome browser on a laptop or desktop computer (not on a smartphone or tablet). Are you currently using Google Chrome?</p>")
        ,
        newText("NoChrome", "<p><b>No, I'm not currently using any of the options</b><br>Click the 'N' on the keyboard</p>")
        ,
        newText("YesChrome", "<p><b>Yes, I'm currently using Chrome Browser</b><br>Click the 'Y' on the keyboard</p>")
        ,
        newCanvas("ChecksCanvas", "60vw" , "20vh")
            .add("center at 50%", "top at 10%", getText("ChromeCheckText"))
            .add("center at 20%", "top at 80%", getText("YesChrome"))
            .add("center at 80%", "top at 80%", getText("NoChrome"))
            .print("center at 50%", "top at 25%")
        ,
        newKey("yesno", "NY")
            .wait()
        , 
        getKey("yesno")
            .test.pressed("Y")
                .failure(
                getCanvas("ChecksCanvas")
                    .remove()
                ,
                newCanvas("NoChrome", "60vw" , "20vh")
                    .add("center at 50%", "top at 10%", newText("Unfortunately, the experiment only works with Google Chrome (which can be downloaded for free). Please close this window by ending the browser session (possible pop-up windows can be ignored) and open the link to the experiment again with Google Chrome."))
                    .print("center at 50%", "top at 25%")
                ,
                newButton("waitforever")
                    .wait()
            )
    )
    //Language check
    newTrial("L1Check",
        newText("L1CheckText","Question 3:<br><br> In order to take part in this study, you must speak <b>English as your mother tongue</b>. Is English your mother tongue?</p>")
        ,
        newText("NoL1","<p><b>No, English is not my mother tongue</b><br>Click 'N' on the keyboard<b><p>")
        ,
        newText("YesL1","<p><b>Yes, English is my mother tongue</b><br>Click 'Y' on the keyboard</b></p>")
        ,
        newCanvas("ChecksCanvas", "60vw" , "20vh")
            .add("center at 50%", "top at 10%", getText("L1CheckText"))
            .add("center at 20%", "top at 80%", getText("YesL1"))
            .add("center at 80%", "top at 80%", getText("NoL1"))
            .print("center at 50%", "top at 25%")
        ,
        newKey("yesno", "NY")
            .wait()
        ,
        getKey("yesno")
            .test.pressed("Y")
                .failure(
                getCanvas("ChecksCanvas")
                    .remove()
                ,
                newCanvas("NoL1", "60vw" , "20vh")
                    .add("center at 50%", "top at 10%", newText("Unfortunately, you are not eligible to participate in this study. Please close this window by ending your browser session (any pop-up windows can be ignored.)"))
                    .print("center at 50%", "top at 25%")
                ,
                newButton("waitforever")
                    .wait()
            )
    )
    //Introduction
    newTrial("Introduction",
        newText("IntroductionText","<b>Thank you for answering the questions about the system requirements!</b><br><br>The eye-tracking experiment will be very simple and will take about 10-15 minutes. Look at the three pictures and listen to the sentence for the middle picture. <br><br>Your task is to look at the images and the content of the spoken sentences as closely as possible. Press SPACE to continue in between pictures and trials. During the entire experiment, please try to sit as still as possible but comfortably and never take your eyes off the computer screen.<br><br>We will <b>not</b> record a video of you or collect any other data that allows conclusions to be drawn about your identity. We will only collect data related to your eye movements on the computer screen.<br><br>It is important that you are in a well-lit and quiet environment, otherwise the webcam will not be able to detect your eye movements. Please turn off any music and other applications and websites (such as cell phones, email notifications, etc.) that could distract you during the experiment.<br><br>The next pages will be displayed in full screen mode. Please do not close the full screen for the remainder of the experiment.<br><br>Click the <b>SPACEBAR</b> to continue.")
        ,
        newCanvas("InstructionsCanvas", "60vw" , "20vh")
            .add(0,0, getText("IntroductionText"))
            .css("front-size","25px")
            .print("center at 50%", "top at 25%")
        ,
        newKey("next", " ")
            .wait()
        ,
        fullscreen()
    )
    //.setOption("hideProgressBar",true)
    
    //Consent Form
    newTrial("Consent",
        newHtml("consent_form","consent_pilot_eng.html")
            .center()
            .cssContainer({"width":"720px"})
            .checkboxWarning("You must give your consent to continue.")
            .print()
        ,
         newButton("continue", "Click here to continue.")
            .center()
            .print()
            .wait(getHtml("consent_form").test.complete()
                      .failure(getHtml("consent_form").warn())
            )
    )
    //Participant Questionaire
    newTrial("QuestionairePage",
        newHtml("Questionnaire","questionnaire_pilot_eng.html")
            .center()
            .cssContainer({"width":"720px"})
            .checkboxWarning("You must give your consent to continue.")
            .print()
        ,
        newButton("continue","Click here to continue.")
            .center()
            .print()
            .wait(getHtml("Questionnaire").test.complete()
                      .failure(getHtml("Questionnaire").warn())
            )
    ) 
    //Set up the webcam: needs calibration; the resources will preload at the same time.
    newTrial("WebcamSetUp",
        newText("WebcamSetUpText", "The next few pages will help you set up the webcam and audio player. The webcam is set up through a simple calibration process. You will see video of the webcam recording during the calibration process. As previously mentioned, we will not store any footage of these webcam recordings. Please make sure your face is fully visible and that you are centered in front of your webcam.<br><br>You can start the calibration process when the box is GREEN by clicking the start button that will appear in the center of the computer screen.<br><br>During the calibration process, you will see eight dots on the screen. Please click on all these points and follow the mouse pointer closely with your eyes. Once you have clicked all the dots, a new dot will appear in the middle of the screen. Please click on this dot and <b>look at it for three seconds</b> so that the algorithm can check if the calibration process was successful.<br><br>If the calibration fails, the last step repeated again. <br><br> Press <b>SPACEBAR</b> to continue.")
            .center()
            .print()
        ,
        newKey("next", " ")
            .wait( newEyeTracker("tracker").test.ready())
        ,
        fullscreen()
        ,
        // Start calibrating the eye-tracker, allowing for up to 3 attempts
        // 5 means that calibration succeeds when 5% of the estimates match the click coordinates
        // Increase the threshold for better accuracy, at a greater risk of losing participants
        getEyeTracker("tracker").calibrate(5,3)
      )
      .noHeader()
      //.setOption("hideProgressBar", true)
    
    // Audio set-up
    newTrial("AudioSetUp",
        newText("AudioInstructions", "The webcam is now calibrated so that the audio player can be set up in the next step. In this experiment, you will hear several sentences. You can play a sample sentence that will appear in this study by pressing the play button. Please use the audio recording to adjust the volume as well. You can play this sample set as many times as you like. Once you're ready, you can go to the next page.")
        , 
        newAudio("cocktail","02_1_b_test.wav") ///ADDD EXAMPLE WAV TO CHECK 
        ,
        newCanvas( "myCanvas", "60vw" , "60vh")
            .settings.add(0,0, getText("AudioInstructions"))
            .settings.add("center at 50%", "top at 20%", getAudio("cocktail"))
            .print("center at 50%", "top at 25%")
        ,
        newButton("Next Page")
            .center()
            .print("center at 50%", "top at 70%")
            .wait()
    )
        
    // Experiment instructions//
    newTrial("Instructions",
        newText("TaskInstructions", "<p>We are ready to start the experiment! The experiment round will start immediately. <b>PLEASE keep your gaze FIXED</b> on the computer at ALL TIMES and head must remain still.<br><br>In experiment round, <b>DO NOT</b> scroll or move up and down or sideways, <b>ONLY</b> or <b>DO as Instructed</b>.<br><br>BEFORE each section, a green dot will appear in the middle of the screen for web re-check. Just look at it <b>for THREE SECONDS</b>. IF the camera is still calibrated, it will continue, otherwise the computer will recalibrate. In Experiment, you will see a action video followed by static picture with audio and a final action video and <b>DO NOT TOUCH THE MOUSE DURING THIS SCREEN</b>. You <b>MUST QUICKLY</b> only look at the picture and video.</b> <br><br> We will now start with the experimental run. It should take approximately 15-20 minutes.")
        ,
        newCanvas("myCanvas", 800 , 300)
            .settings.add(0,0, getText("TaskInstructions"))
            .print("center at 50%", "top at 25%")
        ,
        newButton("Begin the Experiment")
            .center()
            .print("center at 50%", "top at 70%")
            .wait()
    )
    
    SetCounter("Counter", "inc", 1);
    //
    Template("Video_EyeTracking_english_mp4.csv", row=>  // all newText commands will automatically print the text centered when run
        newTrial("eyetracking",
            defaultImage.center().print(),     // all newImage commands will automatically print the image centered when run
            defaultImage.size(950,650).print(),
            defaultVideo.center().print(),     // all newVideo commands will automatically print the video centered when run
            defaultVideo.size(1000,620).print()//,
            .print("center at 50vw", "center at 50vh")
            //defaultText.center().print()
        ,
        newEyeTracker("tracker",1).callback( function (x,y) {
            if (this != getEyeTracker("tracker")._element.elements[0]) return;
            getEyeTracker("tracker")._element.counts._Xs.push(x);
            getEyeTracker("tracker")._element.counts._Ys.push(y); 
            })
        ,
        newFunction(()=>{
            getEyeTracker("tracker")._element.counts._Xs = [];
            getEyeTracker("tracker")._element.counts._Ys = [];
        }).call()
        ,
            getEyeTracker("tracker")
                .calibrate(5)  // Make sure that the tracker is still calibrated
                .log()  // log the calibration scores
        ,
        newTimer("pre-trial", 400).start().wait()
        ,
        newVideo("image1", row.main_picture_video).print().play(),
        getVideo("image1").wait("first"),
        newTimer("displayTime", 1000).start().wait(), //.start().wait() // wait 400ms before moving on to the next image
        getVideo("image1").remove()
        .log()
        ,
        newTimer("trial1", 400).start().wait()
        ,
        newImage("image2", row.middle_picture_s_video),// we always use middleImage.png as the middle image
        newAudio("audio", row.wav_file).play(), // the audio file is set in design.csv
        getAudio("audio").wait("first"),
        newTimer("displayTime", 400).start().wait(), //.start(),//.wait() //wait 400ms befoere moving to the last iamge
        getImage("image2").remove() // wait until the Audio element has finished playing back
        .log()
        ,
        newTimer("trial1", 400).start().wait()
        ,
        newVideo("image3", row.end_picture_video).print().play(),
        getVideo("image3").wait("first"), // the third image is set in design.csv,
        newTimer("displayTime", 750).start().wait(),
        getVideo("image3").remove()
        .log()
        )
    .noHeader()
      .log("group"                  ,row.group)
      .log("Condition"              ,row.Condition)
      .log("ID_No"                  ,row.ID_No)
      .log("main_video"             ,row.main_picture_video)
      .log("end_video"              ,row.end_picture_video)
      .log("Neutral_picture_video"  ,row.middle_picture_s_video)
      .log("ViewportWidth" 	    	,window.innerWidth	 		) // Screensize: width
      .log("ViewportHeight"	    	,window.innerHeight 		) // Screensize: Height
    )
    //
    SendResults("Send");
    
    //Exit
    newTrial("Exit",
        exitFullscreen()
        ,
        newText("Final","Thank you. This is the end of the experiment, you can now close this window. Thank you!")
        ,
         newCanvas("myCanvas", "60vw" , "60vh")
            .settings.add(0,0, getText("Final"))
            .css("font-size", "1.1em")
            .print("center at 50%", "middle at 50%")
        ,
        newButton("waitforever")
            .wait() // Not printed: wait on this page forever
    )
    #8395
    multanip
    Participant
    group,Condition,main_picture_video,middle_picture_s_video,wav_file,end_picture_video,ID_No
    A,2,01_1_a_test_slow.mp4,01_1_a_test_slow.png,02_1_a_test.wav,01_1_b_test_slow.mp4,1
    A,2,02_1_a_test_slow.mp4,02_1_a_test_slow.png,02_1_b_test.wav,02_1_b_test_slow.mp4,2
    A,2,03_1_a_test_slow.mp4,03_1_a_test_slow.png,02_2_a_test.wav,03_1_b_test_slow.mp4,3
    A,2,04_1_a_test_slow.mp4,04_1_a_test_slow.png,02_2_b_test.wav,04_1_b_test_slow.mp4,4
    A,2,05_1_a_test_slow.mp4,05_1_a_test_slow.png,03_1_a_test.wav,05_1_b_test_slow.mp4,5
    A,2,06_1_a_test_slow.mp4,06_1_a_test_slow.png,03_1_b_test.wav,06_1_b_test_slow.mp4,6
    A,2,07_1_a_test_slow.mp4,07_1_a_test_slow.png,03_2_a_test.wav,07_1_b_test_slow.mp4,7
    A,2,08_1_a_test_slow.mp4,08_1_a_test_slow.png,03_2_b_test.wav,08_1_b_test_slow.mp4,8
    A,2,09_1_a_test_slow.mp4,09_1_a_test_slow.png,04_1_a_test.wav,09_1_b_test_slow.mp4,9
    A,2,10_1_a_test_slow.mp4,10_1_a_test_slow.png,04_1_b_test.wav,10_1_b_test_slow.mp4,10
    #8396
    multanip
    Participant

    And the “please wait 1 min to load” screen is still a problem.

    #8416
    Jeremy
    Keymaster

    Hello,

    Apologies for the late reply: I was away from the office for the past two weeks and only catching up with messages now.

    it goes to the first video before the middle one. It just shows a black screen with a play button that you cannot press, and it is stuck on that canvas with the black screen.

    This happens when the referenced video is not found: the Video element is printed with its interface, but because no stream can be found, clicking the play button just won’t do anything (since there’s no stream to play). Your script waits until the video has fully played (getVideo("image1").wait("first")) before it reaches the line that prints the next Image element (newImage("image2", row.middle_picture_s_video)), which is why the trial is stuck with a blank screen: the video will never have fully played, so the script will be stuck on that line (getVideo("image1").wait("first")).
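
    In code terms (a sketch annotating the relevant commands from the posted script, with explanatory comments added; the element names are the script’s own):

    newVideo("image1", row.main_picture_video).print().play(),
        // if the file named in row.main_picture_video is missing from the preloaded zips,
        // the Video element is printed with its interface but there is no stream to play
    getVideo("image1").wait("first"),
        // resolves only once the video has fully played; with no stream, it never
        // resolves, so the trial stays stuck on this line
    newImage("image2", row.middle_picture_s_video)
        // never reached in that case, which is why only the black screen is shown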

    I’m afraid I cannot help more at this point, since the script at https://farm.pcibex.net/r/OFgwcs/ does not match the one from your most recent message, and the URLs referenced in the various PreloadZip commands are no longer valid.

    Note that you will see a preloading screen (“please wait 1 min”) as long as your experiment uses multimedia objects (images, videos, audio) that take a long time to preload or simply fail to preload (as in the case just mentioned, where no video stream can be found). This is not a bug; it is a feature.
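
    One way to make that preloading screen less likely to appear (a sketch only, not tested against this particular experiment): the zips listed in PreloadZip start downloading as soon as the experiment opens, and the CheckPreloaded pause only takes effect at the position of its “Preload” label in the Sequence. Moving that label from the very start of the Sequence to just before the randomized trials lets the downloads run in the background while the participant goes through the welcome, consent and calibration screens, so the waiting message only appears if the downloads still have not finished by then. The label-based usage below mirrors the scripts above:

    Sequence("Welcome", "Webcam_Check", "ChromeCheck", "L1Check", "Introduction",
             "Consent", "QuestionairePage", "WebcamSetUp", "AudioSetUp",
             "Instructions", "Counter", "Preload", randomize("eyetracking"),
             "Send", "Exit")
    //
    CheckPreloaded("Preload") // the experiment pauses here until the preloaded resources are ready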

    Jeremy
