Jeremy

Forum Replies Created

Viewing 15 posts - 61 through 75 (of 1,522 total)

    in reply to: Scale feedback and timeout #10766
    Jeremy
    Keymaster

    Hi,

    Yes, you can calculate response time at runtime, or simply use the EventTime column to compute it during your analyses. The same goes for accuracy: you can either set a Var element in success/failure to report accuracy directly in the results file, or just log the expected answer in an additional column using newTrial().log and compare it to the selection during your analyses
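
    For instance, a minimal sketch (the element names, the "experiment" label and the "Yes" expected answer are just placeholders here):

    newTrial("experiment",
      newScale("answer", "Yes", "No")
        .log()                                   // logs the selection along with its EventTime
        .print()
        .wait()
      ,
      newVar("accuracy", 0).global()
      ,
      getScale("answer")
        .test.selected("Yes")                    // placeholder expected answer
        .success( getVar("accuracy").set(1) )
    )
    .log( "expected" , "Yes" )                   // extra column to compare offline
    .log( "accuracy" , getVar("accuracy") )      // or report accuracy directly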

    Jeremy

    in reply to: Safari and Firefox preloading problems #10757
    Jeremy
    Keymaster

    Hi Mete,

    Would you mind sharing a link to your experiment, either here or at support@pcibex.net, so we can take a look at the problem? Thanks

    Jeremy

    in reply to: Scale feedback and timeout #10753
    Jeremy
    Keymaster

    Hi,

    There is no test command specific to the Question controller, because the Controller element can inject any IBEX controller in a PennController trial, including custom ones. The Question controller can very easily be coded manually, so I suggest you do that to get better control over the trial’s structure:

    newTrial(
      newText("Is this a question?").center().print()
      ,
      newTimer("timeout", 3000).start()             // participants have 3 seconds to answer
      ,
      newScale("answer", "Yes", "No")
        .keys("Y","N")
        .button()
        .callback( getTimer("timeout").stop() )     // stop the timer as soon as an answer is given
        .center()
        .print()
      ,
      getTimer("timeout").wait()
      ,
      getScale("answer")
        .test.selected()                            // was any answer selected before the timer ran out?
        .success( getScale("answer").test.selected("Yes").failure( newText("Wrong answer!").print() ) )
        .failure( newText("You didn't answer in time!").print() )
      ,
      newTimer(2000).start().wait()                 // leave the feedback on screen for 2 seconds
    )

    Jeremy

    Jeremy
    Keymaster

    Hi Kate,

    This is the error message you get from the debugger: “Unrecognized expression ‘myCustomTrialFunction’ (line inside PennController.Template)”

    If you look at your code just above the definition of myCustomTrialFunction, you’ll notice a lone chunk of code:

    newText([t.context_adj,t.target_adj,t2[0],t2[1]].join("<br>")).print()
            .settings.css("font-size", "25px")
            .center()
            .print()
            .log()
    

    This prevents the code that comes after it from being interpreted properly. Just delete that extra chunk of code and your experiment will run again

    Jeremy

    in reply to: Get the previous row of a column #10747
    Jeremy
    Keymaster

    Hi Larissa,

    There are two issues with getText in the code from your message (the same points apply to the “wait” button): first, you use getText("failure") before you even create it, which can cause a reference error in PennController. Second, upon a click with no text in the input box, you print() a Text element named “failure” (which you create at that point, with newText): calling print() with empty parentheses always appends the content of the element (if any) below the most recently print()-ed element, so in your case the text will appear below the (also just print()-ed) button

    What you want is something like this (ignoring CSS for simplicity):

    newCanvas("my-canvas", 950, 625)
      .add(180,160, getText("welcome-researcher-msg"))
      .add(150,210, getText("type-ID-msg"))
      .add(225,240, getTextInput("inputID"))
      .add(320,320, newButton("wait", "START EXPERIMENT 👉"))
      .add(270,350, newText("failure","Please, type your participant's ID above 👆").hidden() )
    // ...
    getButton("wait")
      .wait( 
        getTextInput("inputID").testNot.text("") 
        .failure( getText("failure").visible() )
      )
    

    Regarding the fullscreen issue: you are not waiting for the button, so the browser tries to go fullscreen as soon as the experiment starts, which most browsers won’t allow for security/user-experience reasons
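
    For instance, reusing the element names from above, something like this (a sketch; it assumes you request fullscreen with PennController’s fullscreen() command in the same trial):

    getButton("wait")
      .wait( 
        getTextInput("inputID").testNot.text("") 
        .failure( getText("failure").visible() )
      )
    ,
    fullscreen()   // the request now happens after the click, so the browser will allow it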

    Jeremy

    in reply to: Delay in making changes #10746
    Jeremy
    Keymaster

    Hello Mete,

    Unfortunately this is a common issue that I haven’t gotten around to fixing yet. The workaround is still the same: copy the content of the problematic file, paste it into a new file, and then delete the problematic one. My apologies for the inconvenience

    Best,
    Jeremy

    Jeremy
    Keymaster

    Hi Kate,

    You can use the same function to generate all your trials:

    Template("dummy", () => {
        const targetKeys = Object.keys(targets);
        fisherYates(targetKeys);   // shuffle the references to the pairs
        let new_targets = []; // this will contain half the items (only POS or only NEG for each pair)
        for (let i=0; i<targetKeys.length; i++) // keep the POS rows for the first half, the NEG rows for the second half
            new_targets.push( ...targets[targetKeys[i]][ i<targetKeys.length/2 ? "posi" : "neg"] );
        // Create three items per row that we kept
        new_targets = new_targets.map(t=>
            [ {contextsetter: t['contextcomparative'], contextquestion: t['comparativequestion']},
              {contextsetter: t['contextequative'], contextquestion: t['equativequestion']},
              {contextsetter: t['contextquestion'], contextquestion: t['questionquestion']}
            ].map(row => ["experiment_"+t.pair,"PennController", myCustomTrialFunction(row)] ) // this map returns an array of 3 trials
        ).flat(); // flatten the array to have all the trials at the root, instead of having a series of arrays of 3 trials
        // Shuffle new_targets as long as we can find three items in a row that come from the same pair
        while (new_targets.find( (v,i)=>i<(new_targets.length-2) && v[0].split('_')[1]==new_targets[i+1][0].split('_')[1] && v[0].split('_')[1]==new_targets[i+2][0].split('_')[1] ))
            fisherYates(new_targets);
        window.items = new_targets; // now add the trials to the experiment's items
        return {}; // we added the items manually above: return an empty object from Template
    })

    This way you necessarily get the same rendering for all your trials

    Jeremy

    in reply to: Media Recorder Technical Details #10735
    Jeremy
    Keymaster

    Hello,

    (1) The upload requests come directly from the participants’ browsers, so they will come from their own IP addresses (the request will include an Origin header indicating where the experiment is being run, i.e. farm.pcibex.net if you run your experiment on our farm)

    (2) The address will only be constant as long as all the participants take the experiment using the same connection, which won’t be the case unless you control how you recruit them (for example by having them all come to a lab and use a computer there)

    Jeremy

    in reply to: Timeout in Filled Inputs with failure test #10734
    Jeremy
    Keymaster

    Hi,

    Nice solution. A slightly simpler alternative would be to stop the “hurry” Timer in the TextInput element’s callback, instead of using a dummy Timer element to accomplish the same thing:

    newTimer("hurry", 3000).start()
    ,
    newTextInput("answer")
        .before(getText("Preamble"))
        .log("validate")
        .lines(1)
        .cssContainer("display", "flex")
        .print()
        .callback( 
            getTextInput("answer").test.text(/^(.{10,500})$/)
            .success( getTimer("hurry").stop() )
            .failure( newText("<b>Please write more.</b>").color("red").print() )
        )
    ,
    getTimer("hurry").wait()

    Jeremy

    in reply to: Branching Structure for filtering participants #10733
    Jeremy
    Keymaster

    Hi,

    I think there’s a bug with calling SendResults from within Header (and you wouldn’t need any Timer or callback there anyway). One alternative would be to create a SendResults and a final trial specifically for the too-many-errors scenario, placed after the regular ones and only accessed from that failure command. Here’s a basic illustration of the idea, which you can adapt to your needs:

    Sequence( randomize("experiment"), "normalSend", "normalEnd", "errorSend", "errorEnd")
    
    SendResults("normalSend")
    newTrial("normalEnd", newText("Congrats, you did it!").print(), newButton().wait() )
    
    SendResults("errorSend")
    newTrial("errorEnd", newText("Sorry, too many mistakes. The end.").print(), newButton().wait() )
    
    Header(
        newVar("error_count",0) // set the initial value to 0
            .global()           // make sure the Var element is global
            .test.is(v=>v<3)    // the value should be below 3
            .failure( jump("errorSend") , getVar("error_count").set(0) , end() )
    )

    Jeremy

    in reply to: Does automatically generated text appear only in English? #10732
    Jeremy
    Keymaster

    Hi,

    The sendingResultsMessage and progressBarText variables need to be set to the text you want to display, as explained in the IBEX documentation, e.g.:

    var sendingResultsMessage = "Envoi des résultats en cours...";
    var progressBarText = "Progression";

    Jeremy

    Jeremy
    Keymaster

    Hello Darby,

    Some messages can be customized, for example those coming from the original IBEX engine. For other texts with no pre-set method to edit them, you can use the trick described in this post

    Jeremy

    Jeremy
    Keymaster

    Hi,

    Do you know what I should do in order to randomize the presentation of fillers and the new items generated from the Javascript dictionary?

    A common solution is to use rshuffle, e.g. rshuffle("experiment",startsWith("experiment_"))
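
    For instance, a minimal sketch of a Sequence using it (the other labels are placeholders):

    Sequence( "instructions", rshuffle("experiment", startsWith("experiment_")), SendResults(), "end" )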

    should I/can I use the same Javascript dictionary (aka, duplicate) to generate and randomize three trials from the N/A items?

    You cannot use the part that handles polarity, since your NA rows won’t fit there. Unlike the non-NA target rows, of which you only want to keep half (either the posi ones or the neg ones), it seems that you’re OK with generating three trials for every single NA row. That wouldn’t require any special code, as long as you list one trial per row instead of combining three in a single row, e.g.:

    Table sample

    adjective number,adjective class,pair,context_adjective,target_adjective,target_polarity,subject property,contextsetter,targetsentence,construction
    1,minimum_partial,NA,healthy,sick,NA,inanimate,I know that the apple tree and the pear tree are both healthy.,Which one is sicker than the other?,comparative
    1,minimum_partial,NA,healthy,sick,NA,inanimate,I know that the tomato plant and the raspberry bush  are both healthy.,Which plant is as sick as these two?,equative
    1,minimum_partial,NA,healthy,sick,NA,inanimate,I know that the orchid and the cactus are both healthy.,How sick are they?,question
    2,minimum_partial,NA,dry,wet,NA,inanimate,I know that the blue towel and the green towel are both dry.,Which one is wetter than the other?,comparative
    2,minimum_partial,NA,dry,wet,NA,inanimate,I know that the white mop and the yellow mop are both dry.,Which mop is as wet as these two?,equative
    2,minimum_partial,NA,dry,wet,NA,inanimate,I know that Patrick's backpack and his hat are both dry.,How wet are they?,question

    Code

    Template("minimumstandard.csv", myCustomTrialFunction)

    Jeremy

    Jeremy
    Keymaster

    The debugger reports this error: TypeError: targets[row.pair][row.target_polarity] is undefined

    Your CSV file does not contain the same values in the target_polarity column as the former CSV file did. The former CSV file had POS and NEG, hence the recurrent references to POS and NEG in the code. In this file, you have posi, neg and NA in that column. I would suggest you replace the occurrences of POS and NEG with posi and neg, respectively, but you’ll still be left with unhandled NAs (which also affect the “pair” column, by the way). It seems to me that those NA rows are not of the same nature as the posi and neg ones, in that they won’t fall under the same crossing/distribution of conditions that you describe in your initial post, and that they should therefore live in a separate CSV file, referenced somewhere else in your code
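
    For instance, if the targets dictionary is built along these lines (a purely hypothetical sketch of that part of your code), its keys have to use the same strings as the CSV so that targets[row.pair][row.target_polarity] is defined for every non-NA row:

    // hypothetical sketch: the keys must match the CSV's values
    if (targets[row.pair] === undefined)
        targets[row.pair] = { posi: [] , neg: [] };   // previously { POS: [], NEG: [] }
    targets[row.pair][row.target_polarity].push( row );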

    Regarding your other question, just edit the newTrial in the code to generate trials that suit you

    Jeremy

    Jeremy
    Keymaster

    Hi Kate,

    You’re not actually inserting the items in your experiment. The items created by the code above are labeled following the format “experiment_*”. If you replace "experiment", with startsWith("experiment"), in your Sequence command, then you’ll see the items at the end of your sequence
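
    For instance, a before/after sketch (the other labels are placeholders):

    // before (a guess at your current command)
    Sequence( "instructions", "experiment", SendResults() )
    // after: pick up all the items labeled "experiment_*"
    Sequence( "instructions", startsWith("experiment"), SendResults() )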

    Jeremy
