multanip

Forum Replies Created

Viewing 15 posts - 1 through 15 (of 35 total)
  • in reply to: saving results in a server #10984
    multanip
    Participant

    demo link
    https://farm.pcibex.net/r/aTXifq/

    PLEASE do not change the script; just tell me what is wrong, and I can try to fix it myself.

    in reply to: saving results in a server #10983
    multanip
    Participant

    CAN SOMEONE PLEASE ANSWER AND HELP ME WITH THIS! I sent an email a while back to the admin at the PCIbex email address but got no answer. My PCIbex experiment pulls items from the server just fine, but sending results is not working properly. This is NOT an eye-tracking study: I can get items but cannot send results.

    I have been waiting over a week for an answer.

    Thank you

    in reply to: saving results in a server #10977
    multanip
    Participant

    I am able to pull the items (images, audio) needed for the experiment from this server.

    in reply to: saving results in a server #10975
    multanip
    Participant

    I would share my script, but it won't let me post it.

    in reply to: Eyetracking R script variations #10661
    multanip
    Participant

    ADDITIONAL NOTE: the pennElementName and order-number-of-item columns are difficult to use because, within the same list, the same item gets different numbers. For example:

    In list A, for item16c.wav (c16c): participant 1 has PennElement no. = 59 and order no. = 60, a 2nd participant has PennElement no. = 60 and order no. = 60 for the same item, and a 3rd participant has PennElement no. = 47 and order no. = 47 for the same item.

    I need to see how participants reacted to each item at the average onset of the third sentence in each item. Using these numbers for that is difficult.
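
    A minimal sketch of one workaround: join on the item name instead of the element/order numbers. The column names here are assumptions for illustration only (an Item column holding the audio file name, e.g. item16c.wav, and a sentence3_onset column in ms; neither exists in the scripts below):

    library(dplyr)
    # mean gaze per item in the first second after the third-sentence onset,
    # grouped by item name rather than by PennElementName or order number
    ETdata %>%
      ungroup() %>%
      filter(bin >= sentence3_onset, bin < sentence3_onset + 1000) %>%
      group_by(Item) %>%
      summarise(meanLeft = mean(Left), meanRight = mean(Right))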

    in reply to: Eyetracking R script variations #10658
    multanip
    Participant
    # run the apply-section lines first to get ETdata with trial numbers and the data check; add lines after this point to get ETdata and the data check indexed by item name instead of trial number
    
    # Bin the data into 100ms bins, since samples come in every few tens of milliseconds
    ETdata$bin <- BIN_DURATION*floor(ETdata$times/BIN_DURATION)
    ETdata <- ETdata %>% group_by(Participant,trial,bin) %>% mutate(
      Right=mean(X_Right),
      #Middle=mean(X_Middle),
      Left=mean(X_Left),
    )
    
    # Some transformations before plotting
    #  - only keep first row for each bin per participant+trial
    ETdata_toplot <- ETdata %>% group_by(Participant,trial,bin) %>% filter(row_number()==1)
    #  - from wide to long (see http://www.cookbook-r.com/Manipulating_data/Converting_data_between_wide_and_long_format/)
    ETdata_toplot <- gather(ETdata_toplot, focus, gaze, Right:Left)
    
    #Save the csv files below and edit them to get the average onset for the third sentence
    #write.csv(ETdata, file = 'ETdata_fullGer_RStudio.csv',
              #row.names = FALSE)
    #write.csv(ETdata_toplot, file = 'ETdata_topplot_fullGer_RStudio.csv',
              #row.names = FALSE)
    
    # Plot the results
    ger_plot = ggplot(ETdata_toplot, aes(x=bin,y=gaze,color=focus)) +
      geom_line(stat="summary",fun="mean")
    ger_plot + xlim(7242.4875,18000) +
      geom_vline(xintercept=c(7242.4875,12806.8875,15975.90), linetype="dashed") +
      annotate("text", x=7355, y=.14, label="X=7242.4875  Aver. 2nd sentence onset", angle=90) +
      annotate("text", x=13000, y=.14, label="X=12806.8875  Aver. 3rd sentence onset", angle=90) +
      annotate("text", x=16170, y=.14, label="X=15975.90  Aver. 3rd sentence completed", angle=90)
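
    A follow-up sketch (not part of the original script): instead of exporting the csv files above and editing them by hand, the mean gaze per side within the third-sentence window can be computed directly, reusing the window bounds from the plot:

    third_sentence <- ETdata_toplot %>%
      ungroup() %>%
      filter(bin >= 12806.8875, bin < 15975.90) %>%
      group_by(focus) %>%
      summarise(mean_gaze = mean(gaze, na.rm = TRUE))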
    in reply to: Eyetracking R script variations #10655
    multanip
    Participant

    If this is too confusing, I can send it through email.

    read.pcibex <- function(filepath, auto.colnames=TRUE, fun.col=function(col,cols){cols[cols==col]<-paste(col,"Ibex",sep=".");return(cols)}) {
      n.cols <- max(count.fields(filepath,sep=",",quote=NULL),na.rm=TRUE)
      if (auto.colnames){
        cols <- c()
        con <- file(filepath, "r")
        while ( TRUE ) {
          line <- readLines(con, n = 1, warn=FALSE)
          if ( length(line) == 0) {
            break
          }
          m <- regmatches(line,regexec("^# (\\d+)\\. (.+)\\.$",line))[[1]]
          if (length(m) == 3) {
            index <- as.numeric(m[2])
            value <- m[3]
            if (is.function(fun.col)){
              cols <- fun.col(value,cols)
            }
            cols[index] <- value
            if (index == n.cols){
              break
            }
          }
        }
        close(con)
        return(read.csv(filepath, comment.char="#", header=FALSE, col.names=cols))
      }
      else{
        return(read.csv(filepath, comment.char="#", header=FALSE, col.names=seq(1:n.cols)))
      }
    }
    require("dplyr")
    require("ggplot2")
    require("tidyr")
    # The URL where the data is stored; note the ?experiment= at the end
    ETURL = "https://psyli-lab.research-zas.de/eye-tracking_full_ger/eye-tracking_full_ger_results/php_full_ger.php?experiment="
    # Time-window to bin the looks
    BIN_DURATION = 100
    # We'll use Reception time to identify individual sessions
    results <- read.pcibex("~/Results online study/full_ger_pcibex_june/results.csv")
    names(results)[1] <- 'Participant'
    #write.csv(results, file = 'results_clean_RStudio2.csv',
             # row.names = FALSE)
    # Read ET data file for each session and append output to ETdata (first 5 rows are corrupt)
    ETdata = data.frame()
    filesDF_bak <- subset(results, Parameter=="Filename"&displayID=="p10a") # tried changing the displayID to the sentence, but it still showed the p10a displayID for all rows
    filesDF <- filesDF_bak[6:nrow(filesDF_bak),] 
    apply(filesDF, 1, function(row) {
      data <- read.csv(paste(ETURL,as.character(row[['Value']]),sep=''))
      data$Participant <- row[['Participant']]
      data$displayID <- row[['displayID']] ## added this later; it logs the display ID, but it is p10a for all rows
      datacheck <<- data
      ETdata <<- rbind(ETdata,data)
    })
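
    A minimal sketch of the change hinted at in the comment in the post above (indexing ETdata by something other than trial number), assuming each trial's Value entry uniquely identifies its recording; the ETfile column name is made up for illustration:

    apply(filesDF, 1, function(row) {
      data <- read.csv(paste(ETURL,as.character(row[['Value']]),sep=''))
      data$Participant <- row[['Participant']]
      data$ETfile <- row[['Value']] # hypothetical per-trial identifier column
      ETdata <<- rbind(ETdata,data)
    })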
    • This reply was modified 11 months, 1 week ago by multanip.
    in reply to: Results Eyetracking Audio results Issue #10649
    multanip
    Participant

    Let me get back to you on this. I need to ask my supervisor whether I am able to share anything at all.

    in reply to: Results Eyetracking Audio results Issue #10625
    multanip
    Participant

    For question 1 (first post), I was trying to ask about the event timestamp numbers. For example, for the same participant as above:
    (item no., event start from the results, event end from the results, original total = audio duration + 1000ms added before audio onset + 400ms added after)

    item18b.wav, 1648122657466, 1648122678169, 17683ms

    The results say the total time is 20703ms (end_results - start_results), but my total time with the audio plus the added before and after padding is 17683ms. This gap between the original item duration and the time reported by PCIbex is really big.

    Other items are similar as well. How should we interpret these numbers?
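
    Worked out with the numbers quoted in this post (plain arithmetic, no assumptions beyond the values above):

    start <- 1648122657466  # event start timestamp (Unix time, ms)
    end   <- 1648122678169  # event end timestamp (Unix time, ms)
    end - start             # 20703 ms, the duration reported in the results
    (end - start) - 17683   # 3020 ms not covered by audio + padding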

    in reply to: Results Eyetracking Audio results Issue #10622
    multanip
    Participant

    But this participant, for example, has an item that starts playing at 0.001168s (1.168ms) and ends at 14.689637s. The same participant also has different items that start playing at 0.002114s and 0.022991s, and other participants have times like this too. Should this be concerning? The spread seems really big. How do we understand these numbers?
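
    A sketch of one way to quantify that spread, assuming a hypothetical data frame audio_events with columns Participant and onset_s (the play-start time in seconds, extracted from the results):

    library(dplyr)
    # per-participant spread of audio play-start latencies
    audio_events %>%
      group_by(Participant) %>%
      summarise(min_s = min(onset_s), median_s = median(onset_s), max_s = max(onset_s))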

    in reply to: Results Eyetracking Audio results Issue #10621
    multanip
    Participant

    The files are in .wav format.

    in reply to: Results Eyetracking Audio results Issue #10620
    multanip
    Participant

    Sorry, that was a typo: it is another audio file for the same participant. One participant with two different audio files.

    in reply to: inserting getEyetracker start and stop into the code #10388
    multanip
    Participant

    OK. We have success with running all the images and logging the way I wanted. I need to run a longer CSV list, so I will let you know if there are other issues.

    Even though it works, I am posting the main script below; please take a look and let me know whether I did this correctly. Otherwise, the demo links above show the script.

    Template( "eng_01_geo_List_tense_old.csv", row=> 
        newTrial("eyetracking",
            newCanvas("imageContainer", 1280, 720)
            //defaultImage.size(1280,720).(0,0,getCanvas("imageContainer"))
            //defaultImage.size(1280,720).print()//,
            //.print("center at 50vw", "center at 50vh")
            //defaultText.center().print()
        ,
        newEyeTracker("tracker",1).callback( function (x,y) {
            if (this != getEyeTracker("tracker")._element.elements[0]) return; // the callback command lets us log the X and Y coordinates of the estimated gaze locations at each recorded moment in time
            getEyeTracker("tracker")._element.counts._Xs.push(x);
            getEyeTracker("tracker")._element.counts._Ys.push(y);
            })
        ,
        newFunction(()=>{
            getEyeTracker("tracker")._element.counts._Xs = [];
            getEyeTracker("tracker")._element.counts._Ys = [];
        }).call()
        ,
            getEyeTracker("tracker")
                .calibrate(5)  // Make sure that the tracker is still calibrated
                .log() // log the calibration test
            ,
            getCanvas("imageContainer").print("center at 50vw", "middle at 50vh")
            ,
            getEyeTracker("tracker")
                .add(
                    getCanvas("imageContainer"))
                .log()  
                .start()
            ,
        newTimer("pre-trial0", 500).start().wait()
        ,
        newImage("image0", row.Static_image1).print(0,0,getCanvas("imageContainer")),
        newTimer("displayTime0", 2000).start().wait(), 
        getImage("image0").remove()
        .log()
        ,
        newTimer("pre-trial1", 500).start().wait()
        ,
        newImage("image1", row.action_image).print(0,0,getCanvas("imageContainer")),
        newTimer("displayTime1", 2500).start().wait(), 
        getImage("image1").remove()
        .log()
        ,
        newTimer("pre-trial2", 500).start().wait()
        ,
        newImage("image2", row.Static_image2).print(0,0,getCanvas("imageContainer")),    
        newAudio("audio", row.wav_file).play(), // the audio file is set in design.csv
        getAudio("audio").wait("first"), 
        newTimer("displayTime2", 400).start().wait(), 
        getImage("image2").remove()
        .log()
        ,
        newTimer("pre-trial3", 500).start().wait()//remain for 1000 ms on screen
        ,
        newImage("image3", row.action_image2).print(0,0,getCanvas("imageContainer")),
        newTimer("displayTime3", 2500).start().wait(),
        getImage("image3").remove()
        .log()
        ,
        // Stop now to prevent collecting unnecessary data
        getEyeTracker("tracker")
            .stop()
        )
    .noHeader()
      .log("Subject"              , getVar("Subject")  )
      .log("Static_image1"          ,row.Static_image1)
      .log("action1_image"          ,row.action_image)
      .log("ID_No"                  ,row.random)
      .log("Static_image2"          ,row.Static_image2)
      .log("wav_file"               ,row.wav_file)
      .log("action_image2"          ,row.action_image2)
      .log("ViewportWidth" 	    	,window.innerWidth	 		) // 
      .log("ViewportHeight"	    	,window.innerHeight 		) // Screensize: 
    )

    I have other questions, but I will post them later.

    in reply to: inserting getEyetracker start and stop into the code #10386
    multanip
    Participant

    Like this:

    newCanvas("imageContainer", 1280, 720),
            //defaultImage.size(1280,720).(0,0,getCanvas("imageContainer")),
            newImage("image0", row.Static_image1).print(0,0,getCanvas("imageContainer")),
            newImage("image1", row.action_image).print(0,0,getCanvas("imageContainer")),
            newImage("image2", row.Static_image2).print(0,0,getCanvas("imageContainer")),
            newImage("image3", row.action_image2).print(0,0,getCanvas("imageContainer"))

    because now I am getting error messages in the yellow window.

    [12:30:5] Ambiguous use of getImage("image3"): more than one elements were created with that name-- getImage("image3") will refer to the first one (newTrial: 15)
    [12:30:5] Ambiguous use of getImage("image2"): more than one elements were created with that name-- getImage("image2") will refer to the first one (newTrial: 15)
    [12:30:5] Ambiguous use of getImage("image1"): more than one elements were created with that name-- getImage("image1") will refer to the first one (newTrial: 15)
    [12:30:5] Ambiguous use of getImage("image0"): more than one elements were created with that name-- getImage("image0") will refer to the first one (newTrial: 15)
    [12:30:5] Ambiguous use of getImage("image3"): more than one elements were created with that name-- getImage("image3") will refer to the first one (newTrial: 14)
    [12:30:5] Ambiguous use of getImage("image2"): more than one elements were created with that name-- getImage("image2") will refer to the first one (newTrial: 14)
    [12:30:5] Ambiguous use of getImage("image1"): more than one elements were created with that name-- getImage("image1") will refer to the first one (newTrial: 14)
    [12:30:5] Ambiguous use of getImage("image0"): more than one elements were created with that name-- getImage("image0") will refer to the first one (newTrial: 14)
    [12:30:5] Ambiguous use of getImage("image3"): more than one elements were created with that name-- getImage("image3") will refer to the first one (newTrial: 13)
    [12:30:5] Ambiguous use of getImage("image2"): more than one elements were created with that name-- getImage("image2") will refer to the first one (newTrial: 13)
    [12:30:5] Ambiguous use of getImage("image1"): more than one elements were created with that name-- getImage("image1") will refer to the first one (newTrial: 13)
    [12:30:5] Ambiguous use of getImage("image0"): more than one elements were created with that name-- getImage("image0") will refer to the first one (newTrial: 13)
    [12:30:5] Ambiguous use of getImage("image3"): more than one elements were created with that name-- getImage("image3") will refer to the first one (newTrial: 12)
    [12:30:5] Ambiguous use of getImage("image2"): more than one elements were created with that name-- getImage("image2") will refer to the first one (newTrial: 12)
    [12:30:5] Ambiguous use of getImage("image1"): more than one elements were created with that name-- getImage("image1") will refer to the first one (newTrial: 12)
    [12:30:5] Ambiguous use of getImage("image0"): more than one elements were created with that name-- getImage("image0") will refer to the first one (newTrial: 12)
    in reply to: inserting getEyetracker start and stop into the code #10383
    multanip
    Participant

    Hello, thank you for the changes. I believe I made the changes correctly, and it logs and runs through all the images.

    But I am having another issue that was not there before. The image timing is all off. There is a 500ms blank white flashing gap between image0 and image1, but after that there is no 500ms blank gap between image1 and image2 or between image2 and image3. It flashes the first action image (image1) before the last image (image3); this was there before as well, and it is not supposed to be that way.

    The setup is: 500ms preview time, image0 on the screen for 2500ms (guy looking straight ahead), a 500ms flashing white screen gap, image1 the action image (small thermos being touched), a 500ms flashing white screen, image2 the same as image0 plus audio, a 500ms flashing gap, image3 a different action (big thermos being touched).

    I wanted an experiment similar to the setup you gave me: https://farm.pcibex.net/r/nkHbDP/ Only in between the newImage commands I would add another newTimer. There you can see a small white screen before and after the middle picture, and then it moves on to the next item. I no longer have that.
