Forum Replies Created
Jeremy
Keymaster

Can you create git repos (e.g. on GitHub or GitLab)? If so, you could create one that only contains a folder named js_includes, in which you place your edited copy of PennController.js; syncing your project with that git repo will then overwrite your project's file named PennController.js with the one from the repo.
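To illustrate, the repo could contain nothing but this (the repo name is made up for the example):

pennc-override/
└── js_includes/
    └── PennController.js    (your edited copy)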
Jeremy
Jeremy
Keymaster

Calling remove on any element should remove it from the screen, however it was added. Example:
newTrial(
    newCanvas("container", 400, 400)
        .css('background-color','lightblue')
        .add( "center at 50%" , "middle at 50%" , newText("my text") )
        .print()
    ,
    newButton("remove text").print().wait().remove(),
    getText("my text").remove()
    ,
    newButton("remove canvas").print().wait().remove(),
    getCanvas("container").remove()
    ,
    newButton().wait()
)
Note that removing a Canvas will remove the elements it contains along with it.

If you added a border to the Canvas through CSS, you can simply overwrite its border property using .css("border", "none") on the Canvas element.
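For example, reusing the Canvas named "container" from the script above:

getCanvas("container").css("border", "none")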
Jeremy
Jeremy
Keymaster

There's a problem with the code handling SendResults used as an in-trial command. Edit your file PennController.js and replace the occurrences of e=window.items.indexOf(n); (there should be 2) with e=window.items&&window.items.indexOf(n);
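To illustrate why the guard helps (n here stands for whatever argument the surrounding code passes in):

// Unguarded: throws a TypeError whenever window.items is undefined,
// which seems to be the case when SendResults runs as an in-trial command
e = window.items.indexOf(n);

// Guarded: short-circuits to undefined instead of throwing
e = window.items && window.items.indexOf(n);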
Also, the audio replays if I type 'r' in my feedback comments. I'm not sure how to prevent that as a general behavior, but it would involve either disabling the Key element that triggers the replay, or running a test inside the Key element's callback.
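Here is a sketch of the callback option, assuming the replay is bound to a Key element, the comments are typed in a TextInput element (which renders as a textarea), and using made-up element names throughout:

newKey("replay", "R")
    .callback(
        // Only replay when no text box currently has keyboard focus
        newFunction( () => document.activeElement.tagName != "TEXTAREA" )
            .test.is( true )
            .success( getAudio("stimulus").play() )
    )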
Jeremy
Jeremy
Keymaster

Hi Nickolas,
I don't think I've experienced this problem before; would you mind sending me a link to your experiment?
Jeremy
Jeremy
Keymaster

Hi Sander,
Use .setOption on the PennController trials for which you want to hide the progress bar, e.g.:
newTrial("intro",
    newHtml("intro.html").print()
    ,
    newButton("Next").print().wait()
)
.setOption("hideProgressBar", true)

Template( row =>
    newTrial( "test" ,
        newButton(row.Text).print().wait()
    )
)

newTrial("end" ,
    newHtml("end.html").print()
    ,
    newButton().wait() // wait here forever
)
.setOption("hideProgressBar", true)
.setOption("countsForProgressBar", false)
Jeremy
Jeremy
Keymaster

Hi,
Yes, you can recreate the core of the results file using the raw results file. One option is to create a new empty experiment that will serve as the host for the restored results file and pass each line from raw_results to it, effectively simulating submissions.
Here's what I just did: I parsed raw_results to delete all the lines starting with #, added a comma at the end of each of the remaining lines, and added var lines = [ at the very beginning and ]; at the very end of the whole document. Then I opened the dummy experiment as if to participate in it, opened the javascript console, and pasted the parsed content. Then I typed this in the console:

for (let i = 0; i < lines.length; i++)
    $.ajax({
        url: __server_py_script_name__,
        cache: false,
        contentType: "text/html; charset=UTF-8",
        data: JSON.stringify(lines[i]),
        type: "POST",
        success: m => console.log("success"),
        error: e => console.log("error", e)
    });
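If you would rather script the parsing step, here is a minimal Node.js sketch of the same transformation (the file names are made up):

const fs = require("fs");

// Drop comment lines, join the rest with commas, wrap in "var lines = [ ... ];"
const body = fs.readFileSync("raw_results", "utf8")
    .split("\n")
    .filter(line => line.length > 0 && !line.startsWith("#"))
    .join(",\n");
fs.writeFileSync("parsed_results.js", "var lines = [\n" + body + "\n];\n");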
I refreshed the Results section of my dummy experiment and I had my results file. The MD5 hashes were different, of course, as was the information in the comments, because I made the submissions myself, but other than that the lines looked ok. I just had one submission rejected, but I identified it as line #4 (starting with 0) and was able to add it back in manually:
$.ajax({
    url: __server_py_script_name__,
    cache: false,
    contentType: "text/html; charset=UTF-8",
    data: JSON.stringify(lines[4]),
    type: "POST",
    success: m => console.log("success"),
    error: e => console.log("error", e)
});
Let me know if you need assistance
Jeremy
Jeremy
Keymaster

Hi Peiyao,
Yes, use the UploadRecordings command to create a trial that will send the recordings. You can pass "noblock" as the second parameter if you don’t want each upload request to slow down your experiment (take a look at the example script). Just make sure you’re using at least PennController 1.8, as the command was introduced with that release.
Also, using "noblock" will most likely have the requests generated by the UploadRecordings trials complete during a subsequent trial, so the lines added to the results file once upload is complete will be associated with whatever trial is currently running, instead of with their corresponding UploadRecordings trial.
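For reference, the command itself is a one-liner; a minimal sketch (the label "sendRecordings" is arbitrary):

// Creates a trial that uploads the pending recordings without
// holding up the rest of the experiment
UploadRecordings("sendRecordings", "noblock")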
Jeremy
July 31, 2020 at 1:34 pm in reply to: DashedSentence or DashedCustom for self-paced reading with cumulative window? #5879
Jeremy
Keymaster

Hi Gabi,
If by DashedCustom you mean a copy of the DashedSentence controller that you’ve manually edited, yes, it’s very straightforward: just comment out lines 192-193 from DashedSentence.js (or your copy of it) and the preceding words should no longer disappear.
Jeremy
Jeremy
Keymaster

Hi Rosa,
EDIT: well, I read your message too fast and didn't realize you were asking about self-paced reading specifically; I'd be happy to adapt the example in this message to self-paced reading trials if it helps.
For this example, I’ll be working from an extremely minimal trial structure:
newTrial( "experimental" ,
    newScale("comprehensionanswer", "Yes", "No")
        .print()
        .wait()
        .log()
)
.log("id", GetURLParameter("id"))
.log("correct", "Yes")
.log("itemnumber", 1)
I'm assuming all experimental trials are labeled experimental and that itemnumber uniquely identifies your trials. Let's first load the results into a table:
results <- read.pcibex( "results_comprehension" )
We’ll be comparing Value and correct a lot, so we’ll de-factorize those columns:
results$Value <- as.character(results$Value)
results$correct <- as.character(results$correct)
Now let’s load dplyr and do our magic:
library("dplyr")

results <- results %>%
    group_by(id) %>%
    mutate(accuracy = mean(Value[Label=="experimental" & Parameter=="Choice"] ==
                           correct[Label=="experimental" & Parameter=="Choice"])) %>%
    group_by(id, itemnumber) %>%
    mutate(RT = EventTime[Parameter=="Choice"] -
                EventTime[Parameter=="_Trial_" & Value=="Start"])
The first mutate compares Value against correct for the rows of the experimental trials where Parameter is "Choice" (= rows reporting which option was selected on the scale) and outputs the mean for each participant (see group_by(id)). The second mutate simply subtracts the EventTime corresponding to the start of the trial from the EventTime corresponding to the choice on the scale, for each trial for each participant (see group_by(id, itemnumber)).

Now that we have added the accuracy column, which reports the proportion of correct answers to the experimental trials for each participant, and the RT column, which reports how long they took to make a decision for each trial, we can proceed to the filtering:
results_filtered <- results %>%
    filter(Label=="experimental" & accuracy >= 3/4 & Value == correct & RT <= 3000)
Let me know if you have questions
Jeremy
Jeremy
Keymaster

Hi Noe,
Differences of a few milliseconds are to be expected: executing commands from the script takes time, and sometimes browsers experience light slowdowns, resulting in slight delays between a log command being executed and the creation/assignment of the Var element targeting the event.
The command shuffle regularly intersperses trials from two sets of trials, while randomize randomly reorders all the trials in a given set. You can always replace shuffle(randomize("..."), randomize("..."), ...) with the shorthand rshuffle("...", "...", ...). But since in your case you only have one set of trials ("experimento"), you can just use randomize, as you tried at first.
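For instance, a minimal sketch (the "intro" and "end" labels are hypothetical):

Sequence( "intro" , randomize("experimento") , "end" )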
Try a hard refresh on your experiment's page and you should see a new order (sometimes it takes several tries if you only have a couple of trials in the set, because there are only so many possible permutations).
Jeremy
Jeremy
Keymaster

Hi,
You cannot directly send messages to the Logs (or Errors) tab of the Debug window. Your console.log commands will send messages to the javascript console.
Note that neither console.log nor a line appearing in the Logs tab of the Debug window means that the relevant event/value will be logged in the results file.
If you want to save totalCorrect in the results file, you will need to sort of "hack" into how PennController-native elements are handled. In practice, it basically just means adding ._runPromises() after the series of PennController commands (as the code below already does with the dummy Button element):
newTrial("headphonecheck" ,
    newButton("check", "Start Headphone Check")
        .print()
    ,
    // This Canvas will contain the test itself
    newCanvas("headphonecheck", 500, 500)
        .print()
    ,
    // The HeadphoneCheck module fills the element whose id is "hc-container"
    newFunction( () => getCanvas("headphonecheck")._element.jQueryElement.attr("id", "hc-container") )
        .call()
    ,
    getButton("check")
        .wait()
        .remove()
    ,
    // Create this Text element, but don't print it just yet
    newText("failure", "Sorry, you failed the headphone check")
    ,
    newVar("totalCorrect")
        .global()
    ,
    // This is where it all happens
    newFunction( () => {
        $(document).on('hcHeadphoneCheckEnd', function(event, data) {
            getCanvas("headphonecheck").remove()._runPromises();
            getButton("dummy").click()._runPromises();
            getVar("totalCorrect").set( data.data.totalCorrect )._runPromises();
        });
        HeadphoneCheck.runHeadphoneCheck({
            totalTrials: 1,
            trialsPerPage: 1,
            doCalibration: false // we definitely want to change this for the real one
        });
    }).call()
    ,
    // This is an invisible button that's clicked in the function above upon success
    newButton("dummy").wait()
)
.log( "totalCorrect" , getVar("totalCorrect") )
Note that you won’t get any reports of totalCorrect in your results file for participants who fail the headphone check, as the experiment will never move to the next trial, so nothing will get sent to the server.
Jeremy
Jeremy
Keymaster

Hi Elise,
Visual rendering can quickly become challenging when factoring in the variety of displays used to visit a webpage. The ideal solution will depend on what you are trying to accomplish: do you want constant spacing between elements, so that narrower resolutions might have them overflow the screen, or do you want to (try to) force all your elements to fit on the screen, in which case the spacing between the elements and/or their sizes cannot be constant?
Modifying the CSS file is the most powerful, and I would say the optimal, solution in such cases, but CSS can be quite confusing (at least in my experience). The absolute and relative position settings are not always intuitive: an absolutely positioned element, for instance, is placed relative to its closest positioned ancestor, so it's not always truly absolute. I tried to make Canvas elements a little simpler (I don't know if I succeeded). For example, if you want to show one rectangle on each quarter of the page, no matter the resolution, you can do this:

newCanvas("page", "100vw", "100vh")
    .add( "center at 25%" , "middle at 25%" , newCanvas().css('background-color', 'red').size("20vw","20vh") )
    .add( "center at 25%" , "middle at 75%" , newCanvas().css('background-color', 'green').size("20vw","20vh") )
    .add( "center at 75%" , "middle at 25%" , newCanvas().css('background-color', 'yellow').size("20vw","20vh") )
    .add( "center at 75%" , "middle at 75%" , newCanvas().css('background-color', 'blue').size("20vw","20vh") )
    .print( "center at 50vw" , "middle at 50vh" )
Feel free to give more detail about what visual rendering you’re aiming for
Jeremy
Jeremy
Keymaster

Hi Sam,
PennController elements are not meant to be used outside a newTrial command. Just check the value of the Type column to pass the appropriate parameter to your newKey command:
Template( "statTableRandom.csv" , row =>
    newTrial( addItem(row.Type) ,
        newText(row.Word).print()
        ,
        newKey( (row.Type=="Train" ? " " : "yn") ).wait()
    )
)
Jeremy
Jeremy
Keymaster

Yes, it looks fine to me, but the best way to know is always to try it out and check the results file.
Jeremy
Jeremy
Keymaster

You can get plain RT using a global Var element in which you compute the time difference, like this:
newTrial("experimento" ,
    newText("*")
        .print()
    ,
    newKey(" ")
        .wait()
        .log("all")
        .disable()
    ,
    getText("*")
        .remove()
    ,
    newText(variable.oracion)
        .print()
    ,
    newVar("RT")
        .global()
        .set( v => Date.now() )
    ,
    newKey(" ")
        .wait()
        .log("all")
    ,
    getVar("RT")
        .set( v => Date.now() - v )
    ,
    getText(variable.oracion)
        .remove()
)
.log( "ReadingTime" , getVar("RT") )
The timestamps correspond to the number of milliseconds that have elapsed since January 1, 1970. They tell you when the event happened (e.g. 1595564332806 corresponds to July 24, 2020, 9:38am GMT-0400) but are most useful for subtraction, like we’re doing here. The expression Date.now() in the script above returns such timestamps.
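The same arithmetic in plain javascript, with an illustrative timestamp:

// Timestamp taken when the target sentence is printed
const start = Date.now();                 // e.g. 1595564332806
// ... participant reads, then presses the spacebar ...
const readingTime = Date.now() - start;   // elapsed time in milliseconds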
Jeremy