Jeremy

Forum Replies Created

Viewing 15 posts - 586 through 600 (of 1,522 total)
    in reply to: Audioupload #7716
    Jeremy
    Keymaster

    Hi Jones,

    When I visit https://amor.cms.hu-berlin.de/~anamjoya/phonologie/audiorecordings/audiorecordings.php I just get a blank page, when I should at least get a PermissionDeniedError message. Plus, when I visit https://amor.cms.hu-berlin.de/~anamjoya/phonologie/audiorecordings/ (the parent directory) the file size is 0, suggesting that the file is still blank. Did you double-check that you indeed edited and saved the online version of your PHP file?

    By blocking upload trial, do you mean something like UploadRecordings(“label”,”block”)? Also, would you recommend including an Upload trial after every recording?

    By default PennController inserts a blocking upload trial before sending the results, so in the absence of any explicit UploadRecordings in your script you would already have a blocking upload trial. But if you want to control when that happens, yes, using UploadRecordings("label") is the way to go (the second parameter is optional, it will default to "block"). If you have many recordings, it would be a good idea to intersperse non-blocking upload trials over the course of your experiment too, so your participants don’t have to wait a long time for upload to complete at the end. If you just have one or two short recordings, it might not be worth it
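    For example, here is a minimal sketch of what that could look like (the trial labels here are just placeholders; adapt them to your own script):

    Sequence(
        "instructions",
        randomize("recordings-part1"),
        "upload-midway",
        randomize("recordings-part2"),
        "upload-final",
        SendResults(),
        "end"
    )

    // Non-blocking: participants move on while the first batch uploads in the background
    UploadRecordings("upload-midway", "noblock")
    // Blocking (the default): waits until every recording has been uploaded,
    // which is why it comes before SendResults
    UploadRecordings("upload-final")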

    Jeremy

    in reply to: Undo changes in the code editor panel #7711
    Jeremy
    Keymaster

    Hello Deborah,

    Thank you for reporting this. It is a known issue and I’ll address it next time I refresh the farm’s code

    Jeremy

    in reply to: Audioupload #7710
    Jeremy
    Keymaster

    Hi Jones,

    The code you need to write in the PHP script is reported at step 4 here: https://doc.pcibex.net/how-to-guides/recording-multimedia/#server-setup

    Also, make sure you have at least one blocking upload trial before the results are sent to your server, otherwise the results could be sent before the upload completes, and your results file would have no lines reporting the filename(s) of the uploaded file(s)

    Jeremy

    in reply to: DebugOff() does not work #7706
    Jeremy
    Keymaster

    Hi Elise,

    Did you try using DebugOff (make sure you only include one g) in your experiment’s code on an external server, and the debugger still showed up? Did you make sure to either place DebugOff below PennController.ResetPrefix, or to use the prefixed form, i.e. PennController.DebugOff()?
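    For reference, a minimal sketch of the two options:

    PennController.ResetPrefix(null) // shortens command prefixes
    DebugOff()                       // works here because it comes below ResetPrefix

    // or, without relying on ResetPrefix:
    // PennController.DebugOff()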

    I didn’t change anything about that function in any PennController version, I only set things up on the PCIbex Farm so that the debugger would only (and always) show up at the demonstration link, and not at the data-collection link

    Jeremy

    in reply to: Timers #7704
    Jeremy
    Keymaster

    Hi Elias,

    Like I said, in the absence of a Sequence command, the trials are run in the order in which they are defined. Most projects use a single .js script file, so that would mean that in the absence of a Sequence command, the trials would be run in the top-down order in which they appear in that single script file. In your case, you happen to have more than one .js script file, but that’s orthogonal to including HTML content in your trials

    The next page of the advanced tutorial which I linked to in my previous message, titled Collecting participant information, illustrates how to inject the content of an HTML file from your project’s Resources folder into a newTrial. Then you simply use the Sequence command to control the order in which your trials are run
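    As a minimal sketch of what that looks like (the file name, trial labels and button text here are just placeholders):

    newTrial("participant-info",
        newHtml("info-form", "participant_info.html")
            .print()
        ,
        newButton("Continue")
            .print()
            .wait()
    )

    Sequence("participant-info", randomize("experiment"), SendResults())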

    Jeremy

    in reply to: Stand-alone server: image resources fail to load #7703
    Jeremy
    Keymaster

    I think you should be able to make the python script serve multimedia files from the chunk_includes folder: at lines 1573-1574 you have this:

                       if fname.endswith(".wav") or fname.endswith(".mp3") or fname.endswith("m4a"):
                            continue

    This prevents the script from returning a 500 error when you request a wav/mp3/m4a file. You can allow more extensions, like this:

                        if fname.endswith(".wav") or fname.endswith(".mp3") or fname.endswith("m4a") or fname.endswith(".ogg"):
                            continue
                        if fname.endswith(".png") or fname.endswith(".jpg") or fname.endswith(".bmp"):
                            continue
                        if fname.endswith(".mp4") or fname.endswith(".webm") or fname.endswith(".ogv"):
                            continue

    Then you could extend line 1615 to actually serve the content of those files when requested:

                    if qs_hash['resource'][0].endswith(".wav") or qs_hash['resource'][0].endswith(".mp3") or qs_hash['resource'][0].endswith(".m4a") or qs_hash['resource'][0].endswith(".ogg"):
                        start_response('200 OK', [('Content-Type', 'audio/*'), ('Content-Length', stats.st_size)])
                    elif qs_hash['resource'][0].endswith(".png") or qs_hash['resource'][0].endswith(".jpg") or qs_hash['resource'][0].endswith(".bmp"):
                        start_response('200 OK', [('Content-Type', 'image/*'), ('Content-Length', stats.st_size)])
                    elif qs_hash['resource'][0].endswith(".mp4") or qs_hash['resource'][0].endswith(".webm") or qs_hash['resource'][0].endswith(".ogv"):
                        start_response('200 OK', [('Content-Type', 'video/*'), ('Content-Length', stats.st_size)])

    This hack is fine as long as you run your experiment locally, but doing this for experiments that you actually run publicly on a webserver will likely cause memory overloads (we tried it on expt.pcibex.net at the time and it didn’t go well)

    Jeremy

    in reply to: Neutral answer on test.selected #7700
    Jeremy
    Keymaster

    Hi,

    You can use the same approach, with minimal tweaks:

    newTrial(
        newFunction(function(){ 
            this.scales = [];
            $("body").click(e=>e.target.type=="range"&&this.scales.push(e.target.parentElement.classList[1].replace(/PennController-/,'')));
        }).call()
        ,
        newScale("test1", 100).size("75vw").slider().print()
        ,
        newScale("test2", 100).size("75vw").slider().print()
        ,
        newButton("Validate")
            .print()
            .wait( newFunction(function(){return ["test1","test2"].filter(v=>!this.scales.includes(v)).length}).test.is(0) )
    )

    You just need to list all the names of the Scale elements you want to validate as an array in place of ["test1","test2"]

    Jeremy

    in reply to: Stand-alone server: image resources fail to load #7699
    Jeremy
    Keymaster

    Hi Emiel,

    That is not a link to a standalone experiment, it’s a link to an experiment on the PCIbex Farm. The resources (two images) seem to preload fine there

    One thing that came to mind in the meantime: the exchange you referred to concerns running your study on a dedicated webserver. If you are running the experiment locally on your own device (e.g. by typing python server.py in your terminal) then chances are you have no webserver running, just the server.py python script, which will not serve multimedia files from any folder. In that case, one solution would be to set up a XAMPP environment (or another webserver solution) so you can serve content at localhost:3000 (if you serve content elsewhere/on another port, then make sure to include the full URI to your files in your experiment)
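    For instance, if your webserver ends up serving the files on a different port, referencing them with their full URI (the host, port and file name below are hypothetical) makes sure the experiment fetches them from the right place:

    newTrial(
        newImage("probe", "http://localhost:8080/images/probe.png")
            .print()
        ,
        newButton("Next").print().wait()
    )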

    Jeremy

    in reply to: Separator feedback #7694
    Jeremy
    Keymaster

    Hi,

    I’m not sure whether this should qualify as a bug: the newController command injects, inside the current trial (newTrial), the content you’d get when creating a dedicated trial using the corresponding controller. When you inject two controllers with newController within the same trial, you are not moving from one trial to the next after completing the first Controller element: this is why the success/failure of your Separator controller tracks the preceding trial, not the preceding Controller element

    If you want to use native-Ibex controllers, you could simply design a native-Ibex experiment (note that you can use Template to generate regular Ibex items). Alternatively, you could design your task within a full-PennController framework:

    defaultText.center()
    ,
    newText( "<p>"+row.question+"</p>" ).print()
    ,
    newText("Yes").print(),newText("No").print()
    ,
    newSelector("answer")
        .add(getText("Yes"),getText("No"))
        .keys("F","J")
        .log()
        .wait()
    ,
    clear()
    ,
    getSelector("answer")
        .test.selected( getText(row.answer) )
        .success( newText("Correct").print() ) 
        .failure( newText("Incorrect").print() )
    ,
    newTimer(1000).start().wait()

    NB: the Controller element’s log command takes no arguments

    Let me know if you have questions

    Jeremy

    in reply to: Logging of wording of Comprehension Question #7691
    Jeremy
    Keymaster

    Hello Silke,

    Including the Question controller as you do in the code you shared, but validating the trial by waiting for a Key element, only uses the controller to display the question: unless the participant goes to the trouble of clicking on one of the two answers (which has no visible effect for them), the Question controller never detects an answer; only the Key element detects that something happened. This is why you won’t see a line in the results file for the Question controller (unless you click on an answer)

    If you’d like to stick to the original Ibex controller, you could do that instead, using the options described in the Ibex manual:

    Template("items.csv", row =>
        newTrial("experiment",
            newTimer("break", 1000)
                .start()
                .wait()
            ,
            newController("DashedSentence", {s:row.sentence})
                .print()
                .log()
                .wait()
                .remove()
            ,
            newController("Question", {
                q: row.question,
                as: ["Ja", "Nein"],
                autoFirstChar:true,
                hasCorrect:row.answer,
                randomOrder:false
            })
                .print()
                .log()
                .wait()
        )
        .log("group", row.group) 
        .log("item", row.item)
        .log("condition", row.condition)
        .log("accurate_answer", row.answer)
    )

    You’d need to use “Ja” and “Nein” in your answer column so that that cell matches one of the two possible answers, and you’ll get a 0 or 1 in the tenth column (IIRC) indicating whether the answer was correct

    Another option would be to get rid of the Question controller altogether, in which case it would be easier to implement a timeout feature:

    Template("items.csv", row =>
        newTrial("experiment",
            newTimer("break", 1000)
                .start()
                .wait()
            ,
            newController("DashedSentence", {s:row.sentence})
                .print()
                .log()
                .wait()
                .remove()
            ,
            newText("Question", row.question).center().print()
            ,
            newText("<p>1. Ja<br>2. Nein</>>").center().print()
            ,
            newKey("Answer", "JN")
                .once()
                .callback( getTimer("delay").stop() )
                .log("last")
            ,
            newTimer("delay", 5000).start().wait()
            ,
            newVar("isCorrect").global()
            ,
            getKey("Answer")
                .test.pressed( row.answer )
                .success( getVar("isCorrect").set(1) )
                .failure( getVar("isCorrect").set(0) )
        )
        .log("group", row.group) 
        .log("item", row.item)
        .log("condition", row.condition)
        .log("question",row.question)
        .log("accurate_answer", row.answer)
        .log("correct", getVar("isCorrect"))
    )

    Note that because this tests the key that was pressed, and not which answer was chosen, row.answer should either be J or N, just like it currently is in your csv file

    I’m sorry you experienced issues when exporting the xls file to a csv file. I wasn’t aware that Excel used semi-colons as a default separator in that operation: csv stands for comma-separated values, and semi-colons are not a standard when it comes to those types of files. Tabs are another common separator; they have a dedicated tsv extension but are sometimes also found in place of commas in csv files (PennController will accept tsv files too)

    Jeremy

    in reply to: Neutral answer on test.selected #7689
    Jeremy
    Keymaster

    Hello Matthias,

    You are right, this is a problem I should fix. You can technically validate a score of 50 but you need to set the cursor to a different value first and release the mouse button, then select the value of 50 again

    As a workaround, you can use Function elements and variables to keep track of whether the scale was clicked. Here is a minimal example you can adapt to your case:

    newTrial(
        newScale("test", 100).size("75vw").slider().print()
        ,
        newFunction(function(){ getScale("test")._element.jQueryElement.click(()=>this.scaleClicked=true) }).call()
        ,
        newButton("Validate")
            .print()
            .wait( newFunction(function(){return this.scaleClicked;}).test.is(true) )
    )

    Let me know if you have questions

    Jeremy

    in reply to: Timers #7687
    Jeremy
    Keymaster

    Hi Elias,

    Demonstration links give access to all of the project’s code by a simple click on the “edit” link at the top of the page. If you’re talking about live-sharing, as in Google Docs for example, where one can see someone else’s edits in real time, it’s not currently possible on the PCIbex Farm

    Note that, if we’re talking about the project at https://farm.pcibex.net/r/rWbWmp/, you have 3 .js files in the Scripts folder (phase 3.js appears twice, which must be a bug; make sure you save a copy of your code, then delete the file and recreate it). All files are executed in alphanumeric order, so when you run your experiment, Phase 1.js will be executed first, then Phase2.js, and finally phase 3.js. By “executed” here I mean that the global commands of each file take effect in that linear order (the commands inside the newTrials will be executed later, when each trial is run; the trials themselves are all created when the files are read)

    In particular, you have four SendResults commands in Phase2.js and another SendResults command in phase 3.js. Because there is no Sequence command anywhere in those js files, the trials are simply run in the order in which they are created, which means: all the trials from Phase 1.js first, then all the trials from Phase2.js, and finally all the trials from phase 3.js. You can confirm this by looking at the list of trials in the “Sequence” tab of the debugger when you test your experiment

    All that being said, you have an unlabeled trial at the end of Phase2.js which ends with newButton().wait(), which means that the experiment will hang there, and never actually run the trials from phase 3.js (even though you can see them in the debugger’s “Sequence” tab starting at trial #66)

    At the end of the day, what you should remember is that all your js files are executed and their trials created, so if you have a command Sequence( randomize("experimental-trial") ) in any of those files, your experiment will only run trials labeled “experimental-trial”, but importantly, trials from all three files (because you create trials labeled “experimental-trial” in all three of your files)
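    As a minimal sketch, a single command like the one below, placed in any one of your three files, would do exactly that (the "instructions" and "end" labels are just placeholders):

    Sequence(
        "instructions",
        randomize("experimental-trial"), // picks up the trials labeled "experimental-trial" from all three files
        SendResults(),
        "end"
    )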

    I hope these comments brought some clarity as to how trials are created and run in PennController

    Jeremy

    in reply to: Stand-alone server: image resources fail to load #7684
    Jeremy
    Keymaster

    Dear Emiel,

    Nothing comes to mind right off the bat. Feel free to share the link to your experiment either here, or at support@pcibex.net so I can take a closer look

    Jeremy

    in reply to: Missing data (Failed submission) #7681
    Jeremy
    Keymaster

    Hi Merel,

    The database shows that your experiment did receive 10 submissions, with between 1918 and 1922 rows for each of them, so you should see all your submissions and the corresponding rows in the results file. Maybe you tried to generate the results file before all the incoming data had finished being processed by the server?

    Let me know if the problem persists

    Jeremy

    in reply to: Pick combined with randomizeNoMoreThan #7680
    Jeremy
    Keymaster

    Hi Aliona,

    The Sequence command and the related functions only care about the trials’ labels: if your yes-answer and no-answer trials share the same labels, you won’t be able to control their distribution. I suggest you include the yes/no bit of information in the trials’ labels, e.g. "critical-yes"/"critical-no" and "filler-yes"/"filler-no"

    The pick function will pick the next N trials from a set: when you do pick(critical,8) and critical was set to randomize("critical"), it will pick 8 trials from a randomized set of all the trials labeled “critical”. Then you pass that to rshuffle, so those 8 trials will be interspersed with the trials labeled “filler” in the order in which they were picked from critical

    You could do that, in which case two critical-yes trials would always be separated by a critical-no, a filler-yes and a filler-no trial:

    criticalyes = randomize("critical-yes")
    criticalno = randomize("critical-no")
    fillersyes = randomize("filler-yes")
    fillersno = randomize("filler-no")
    
    Sequence("demographics","etc",
             rshuffle(pick(criticalyes,4),pick(fillersyes,6),pick(criticalno,4),pick(fillersno,5)),"break",
             rshuffle(pick(criticalyes,4),pick(fillersyes,5),pick(criticalno,4),pick(fillersno,6)),"break",
             rshuffle(pick(criticalyes,4),pick(fillersyes,6),pick(criticalno,4),pick(fillersno,5)),
             "etc")

    Don’t forget to add -yes/-no to your trials’ labels, and make sure the numbers passed to pick add up to the number of trials you have of each type (I went with 17 filler-yes trials and 16 filler-no trials, for a total of 33 filler trials)

    Jeremy
