Pre-processing self-paced reading data in R

#5873
    rosa303
    Participant

    Hi Jeremy,

    I’ve written a script for a self-paced reading experiment, and I want to make sure I can analyze the output before I recruit participants. Right now I’m struggling with writing an R script that can process my results file. Is there an existing R script or template I can adapt to pre-process PCIbex results for a self-paced reading study? I’ve read the documentation for “data analysis in R,” but I’m still having difficulty, for example, removing outliers based on criteria like:
- remove participants with an accuracy rate below 75% on the comprehension Qs for the experimental items
- remove items with incorrect responses to comprehension Qs
- remove RTs exceeding a threshold of 3000 ms

    This might be more related to R coding rather than PCIbex, but I wasn’t sure where else to look for help, and it’d be really helpful if you could refer me to any sources. Thank you!

    – Rosa

#5875
Jeremy
Keymaster

    Hi Rosa,

EDIT: well, I read your message too fast and didn’t realize you were asking about self-paced reading specifically. I’d be happy to adapt the example in this message to self-paced reading trials if it helps; there’s a rough sketch of the idea at the end of this message.

    For this example, I’ll be working from an extremely minimal trial structure:

    newTrial( "experimental" ,
        newScale("comprehensionanswer", "Yes", "No")
            .print()
            .wait()
            .log()
    )
    .log("id", GetURLParameter("id"))
    .log("correct", "Yes")
    .log("itemnumber" , 1 )

    I’m assuming all experimental trials are labeled experimental and that itemnumber uniquely identifies your trials. Let’s first load the results in a table:

results <- read.pcibex("results_comprehension")
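read.pcibex here is the helper function defined on the “data analysis in R” documentation page, so paste its definition into your R session first. If you’d rather not, a plain read.csv call can stand in for it, as long as the column names match your file; the names below are only a guess based on the trial structure above, so copy the real ones from the “# N. …” comment lines at the top of your own results file:

# Fallback sketch: skip PCIbex's comment lines and name the columns by hand.
# Check the "# N. ..." comments in your results file and adjust these names!
results <- read.csv("results_comprehension", comment.char="#", header=FALSE,
                    fill=TRUE,
                    col.names=c("ReceptionTime","MD5","Controller","Order",
                                "ElementNumber","Label","Group",
                                "PennElementType","PennElementName",
                                "Parameter","Value","EventTime",
                                "id","correct","itemnumber","Comments"))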

    We’ll be comparing Value and correct a lot, so we’ll de-factorize those columns:

    results$Value <- as.character(results$Value)
    results$correct <- as.character(results$correct)
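One more conversion that can matter depending on your R version (before R 4.0, strings were read in as factors by default): EventTime needs to be numeric before we subtract timestamps below. This is harmless if it already is:

results$EventTime <- as.numeric(as.character(results$EventTime))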

    Now let’s load dplyr and do our magic:

    library("dplyr")
    
results <- results %>%
  # Per-participant accuracy on the experimental comprehension questions
  group_by(id) %>%
  mutate(accuracy = mean(Value[Label=="experimental" & Parameter=="Choice"] ==
                         correct[Label=="experimental" & Parameter=="Choice"])) %>%
  # Per-trial decision time: choice timestamp minus trial-start timestamp
  group_by(id, itemnumber) %>%
  mutate(RT = EventTime[Parameter=="Choice"] -
              EventTime[Parameter=="_Trial_" & Value=="Start"])

The first mutate compares Value against correct for the rows of the experimental trials where Parameter is “Choice” (= rows reporting which option was selected on the scale) and outputs the mean for each participant (see group_by(id)).
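If you want to double-check that step, there should now be exactly one accuracy value per participant; a quick, optional way to eyeball it (ungroup first, since the table is still grouped):

# One row per participant with their comprehension accuracy
results %>% ungroup() %>% distinct(id, accuracy)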

    The second mutate simply subtracts the EventTime corresponding to the start of the trial from the EventTime corresponding to the choice on the scale, for each trial for each participant (see group_by(id,itemnumber)).
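Before committing to a cutoff, it can also be worth glancing at the distribution of those decision times; another quick, optional check:

# Summarize one RT per trial (the Choice rows carry the decision)
results %>% filter(Label=="experimental", Parameter=="Choice") %>%
  pull(RT) %>% summary()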

    Now that we have added the accuracy column which reports the proportion of correct answers to the experimental trials for each participant, and the RT column which reports how long they took to make a decision for each trial, we can proceed to the filtering:

    results_filtered <- results %>% filter(Label=="experimental" & accuracy>=3/4 & Value==correct & RT<=3000)
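And here’s the rough self-paced reading sketch promised in the edit above. It rests on assumptions you should check against your own results file: each word of a sentence is logged as its own row, rows within a trial come out in presentation order, and EventTime stamps the keypress that revealed the next word. Per-word reading times are then the successive differences of EventTime within each trial. Note that the Value==correct condition above would drop the word rows, so for self-paced reading you would apply the exclusions at the trial level instead before running this:

# Untested sketch: per-word RTs as successive EventTime differences per trial
spr <- results %>%
  group_by(id, itemnumber) %>%
  arrange(EventTime, .by_group=TRUE) %>%
  mutate(wordRT = EventTime - lag(EventTime))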

Let me know if you have questions.

    Jeremy
