Jeremy

Forum Replies Created

Viewing 15 posts - 556 through 570 (of 1,522 total)
    in reply to: Resources Not Loading – Random Points in Experiment #7795
    Jeremy
    Keymaster

    Hello Kelly,

    Your experiment fetches many resources from GitHub’s servers: your code points to the GitLink_* columns of the tables pre.csv, post.csv and training.csv, which represent a total of 265+216+216 = 697 unique URLs (training.csv contains redundant references). When I opened your demonstration link and checked my browser’s console, the Network tab listed a total of 406 successful requests to raw.githubusercontent.com. Besides those, there were 812 (= 406*2) other requests to GitHub’s servers, which occur upstream of the 406 aforementioned ones. In total, then, there were 1218 requests to GitHub, and it took about 4 minutes to complete them all

    At the end of the day, I was still missing 291 requests. I am not sure what happened to those, as my Network tab does not show any unsuccessful request, but most likely they were dropped because of the sheer number of requests that were sent. Usually, the more requests you send to a server within a short time, the slower they will be processed, and some of them might even be blocked. PennController tries to prevent flooding the distant server with requests by only allowing 4 concurrent requests (new requests are sent once older ones have successfully resolved). That being said, in your case, it could be that GitHub blocks an experiment session’s requests after a few hundred, and/or that PennController fails to free new slots
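PennController’s internal request queue is not exposed, but the general technique it uses — capping concurrency at 4 and freeing a slot as each request resolves — can be sketched in plain JavaScript (this is an illustration of the idea, not PennController’s actual code):

```javascript
// Minimal sketch of a concurrency limiter: at most `maxConcurrent`
// tasks run at once; queued tasks start as running ones resolve
function makeLimiter(maxConcurrent) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= maxConcurrent || queue.length === 0) return;
    active++;
    const { task, resolve, reject } = queue.shift();
    task().then(resolve, reject).finally(() => { active--; next(); });
  };
  // Wrap each task (e.g. () => fetch(url)) in the returned function
  return task => new Promise((resolve, reject) => {
    queue.push({ task, resolve, reject });
    next();
  });
}

// Usage sketch:
// const limit = makeLimiter(4);
// const downloads = urls.map(url => limit(() => fetch(url)));
```

With such a limiter, no matter how many URLs your tables reference, only 4 requests are ever in flight at the same time.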

    My suggestion is that you consolidate your resources into zip files (you could have separate zip files for different sets of trials if you’d like) and host them on a dedicated webserver or an S3 bucket, for example. Then you’ll have just as many requests as you have zip files, making it much more likely that they all succeed, and the zip files will contain all your resources, so once they’ve been downloaded you don’t run the risk of missing one resource here and there
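If you go the zip-file route, PennController’s PreloadZip command lets you point the experiment at your archive(s); the URL below is a placeholder for wherever you end up hosting them:

```javascript
// Hypothetical URLs — replace with the address of your own
// webserver or S3 bucket hosting the zipped resources
PreloadZip("https://my-bucket.example.com/pre_resources.zip")
PreloadZip("https://my-bucket.example.com/post_resources.zip")
```

Each zip file then counts as a single request, and the resources it contains are all available locally once it has downloaded.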

    If that’s not an option you can pursue, I would at least replace the GitHub URLs from your tables with direct URLs pointing to the raw.githubusercontent.com domain, so you don’t generate three times as many requests as you really need. You could also try to balance the requests to GitHub’s servers and to the farm’s server more evenly, in the hope that no server would block the requests

    Let me know if you need assistance

    Jeremy

    in reply to: Questions about eye tracking php script generated file #7790
    Jeremy
    Keymaster

    I see: you do get the right format (one header line followed by multiple lines for each trial) but, except for the first line of each trial, the lines have gibberish where they should have 1s and 0s

    I’m not sure what is happening, I will need to troubleshoot this. I know that other users have successfully set up eye-tracking experiments, if anyone has experienced a similar issue, it would be great if they could share some feedback with us

    Apologies for the inconvenience

    Jeremy

    in reply to: Questions about eye tracking php script generated file #7788
    Jeremy
    Keymaster

    Hi again,

    This line from the demo results file, for example, has a key like https://domain.of.my.experiment/path.to.my.experiment/vEry-l0ng-uniQu3-1dentif1er:

    1590427228,32334b38c1c127fdaabbdb0580507c91,PennController,2,0,Item-1,NULL,EyeTracker,tracker,Filename,http://localhost:3000/ce5ee9ca-1765-9a4d-c0be-621f52addbb1,1590426844245,NULL

    Look for the EyeTracker lines that report a value for “Filename”

    Jeremy

    in reply to: Questions about eye tracking php script generated file #7786
    Jeremy
    Keymaster

    Hi Tian,

    As mentioned in the guide:

    This script will take care of receiving and storing encoded data lines in subfolders, with one file per participant. It will also output back files where the lines have been decoded. You can directly visit it through your browser and type into the field that you see the “URL” key that was reported in your results file (something like https://domain.of.my.experiment/path.to.my.experiment/vEry-l0ng-uniQu3-1dentif1er). Alternatively, you can directly append key at the end of the PHP script’s URL (replacing key with the value from your results file) to get the output file — this is the method we will use in our analyses.

    The R script in the guide uses the latter method (ETURL = ...EyeTracker.php?experiment=) but you can directly visit the URL that points to your PHP file and enter the key from your results file in the input box to manually download a decrypted version of the file

    Let me know if you have questions

    Jeremy

    in reply to: Customize slider #7781
    Jeremy
    Keymaster

    Hello Matthias,

    It would be easier to assist you with a link to your project. The most reliable way of increasing the width of a Scale element’s slider is to use CSS rules in a CSS file named global_*.css in the project’s Aesthetics folder, as described on this thread. To overwrite the constraint on the slider’s maximum length that PennController sets by default, add max-width: unset !important; to the CSS rules that apply to the Scale element (.PennController-MainSlider in the example above)
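Concretely, the file could look like this (the filename and the 40em width are just examples; the class name follows the thread mentioned above — adapt it to your own element’s name):

```css
/* Aesthetics/global_main.css — any file matching global_*.css works */
.PennController-MainSlider {
  max-width: unset !important; /* lift PennController's default width cap */
  width: 40em;                 /* then set whatever width you need */
}
```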

    Jeremy

    in reply to: Troubleshooting #7780
    Jeremy
    Keymaster

    Hi,

    Glad to see that you have fixed the issue. Indeed, when specifying which table to use with the Template command, the expected format is Template( "table_filename.csv" , row => newTrial(/* trial content */) )

    Jeremy

    in reply to: can't load results #7779
    Jeremy
    Keymaster

    Hi,

    The database says that your project received 35 submissions at the data-collection link; most of them have between 230 and 250 rows, but three of them only have 65 rows

    Of those 35 submissions, 7 were received on February 18 and 5 on February 21, so you wouldn’t see (all of) them when you posted your message. But you should get all 35 submissions in your results file if you try to download it today

    Let me know if the problem persists

    Jeremy

    in reply to: Proceeding to the next trial based on different conditions #7778
    Jeremy
    Keymaster

    Hi Noelia,

    PennController commands are linearly executed in a top-down fashion. In your code, once PennController reaches newTimer("hurry", 1000).start(), it starts a 1s timer and immediately moves on to the next command, which is getTimer("hurry").test.ended().success(newText("lento", "¡Muy lento!").print(), getTextInput("rta_p1")). Because you just started the timer, of course the test will fail, so you will never see the Text element named “lento” printed on the page

    Use a callback command to tell PennController to execute commands not linearly, but upon the occurrence of the relevant event associated with the type of element on which you call callback. And if you want the participant to click “Próxima” to move on to the next trial after the timer has elapsed, just add your Timer element’s test as a disjunct (or) in the Button element’s wait:

    newTimer("hurry", 60000)
      .callback(
        clear() // remove all elements from the page
        ,
        newText("lento", "¡Muy lento!").print() // print your message
        ,
        getButton("prox").print() // re-print the button, below the message
      )
      .start()
    ,
    newButton("prox", "Próxima")
        .css({margin: "1em", "font-size": "medium"})
        .color("blue")
        .center()
        .print()
        .wait(
          getTimer("hurry").test.ended().or( // validate click if the timer has elapsed...
            getTextInput("rta_p1")           
              .test.text(/^\s*\S+(?:\s+\S+)+\s*$/)  // ... or if there are at least two words in the box
              .failure(
                newText("Es necesario completar el recuadro con más de una palabra para pasar a la próxima oración")
                  .center()
                  .color("red")
                  .print()
              )
          )
        )
    

    Jeremy

    in reply to: Target Reading times (priming task) #7769
    Jeremy
    Keymaster

    Yes, you could use a global Var element to compute the time difference between two timepoints in your sequence of in-trial commands on the fly, and report it as an extra column using newTrial().log():

    newTrial(
      newVar("RT").global().set(()=>Date.now())
      ,
      newText("Hello world").print()
      ,
      newKey("FJ").wait()
      ,
      getVar("RT").set(v=>Date.now()-v)
    )
    .log("RT", getVar("RT"))

    Jeremy

    in reply to: Target Reading times (priming task) #7767
    Jeremy
    Keymaster

    Hi,

    I’m not sure what you mean by “consecutive values,” but your code as it is now logs the value of row.item in each of the three columns:

    .log( "item", row.item )
    .log( "prime", row.item )
    .log( "target", row.item )

    So PennController reports in all three columns the value of the “item” cell from the row that was used to generate the current trial, as it’s been told to

    You can find the timestamp corresponding to the keypress in the results line(s) corresponding to the Key element (which PennController knows to log because log is called on the Key element: newKey("answerTarget", "FJ").log().wait() // Proceed upon press on F or J (log it))

    You can then compute response time by subtracting timestamps, as illustrated using R in the advanced tutorial
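The subtraction itself is straightforward once you have the two EventTime values; for instance (the first timestamp below is the one from the sample results line above, the second is made up for illustration):

```javascript
// EventTime values as reported in the results file (milliseconds
// since epoch); the keypress value here is purely illustrative
const displayTime = 1590426844245;  // e.g. when the Text was printed
const keypressTime = 1590426845120; // e.g. when F or J was pressed
const rt = keypressTime - displayTime; // response time in ms
```

The R code in the advanced tutorial does exactly this, pairing each keypress line with the line of the event that started the measurement.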

    Jeremy

    in reply to: Proceeding to the next trial based on different conditions #7765
    Jeremy
    Keymaster

    Hi,

    You can replace newTimer("timeout",5000).start().wait() with newButton("Skip").callback( end() ).print() to print a Button element which, when clicked, will end the trial immediately

    Your current code (re)starts a new Timer element whenever the participant presses Enter from within the input box, and tells PennController to wait for it to elapse before proceeding, regardless of whether a new, correct response is given before its end

    Jeremy

    in reply to: Uploaded recordings corrupted #7763
    Jeremy
    Keymaster

    Hi Nasim,

    It is unlikely that the issue would lie with the server setup, especially since you get no error from uploading, and locally-downloaded zip files are also corrupted. I edited part of the code of the MediaRecorder element, including bits here and there concerning the zipping of the recordings, but I can’t tell whether those edits are directly related to the present issue

    I am glad to read that the latest version of PennController solves the issue though!

    Jeremy

    Jeremy
    Keymaster

    Hi,

    You can put the keys set1, set2, set3 and set4 in an array and shuffle it using Ibex’s fisherYates:

    Template("speecherrors.csv", row =>
        newTrial("repetition",
            defaultText
                .css("p")
                .center()
                .print()
            ,
            newText("Please repeat the following sets of words as fast as you can when they appear on the screen. Press start to display them.")
                .css("p")
            ,
            newButton("Start")
                .log()
                .center()
                .print()
                .wait()
                .remove()
            ,
            newTimer("wait", 1000)
                .log()
                .start()
                .wait()
            ,
            newMediaRecorder("speech_set1", "audio")
                .log()
                .record()
            ,
            newText( "blank line" )
                .hidden()
            ,
            row.sets = ["set1","set2","set3","set4"],  // array of keys
            fisherYates(row.sets)                      // shuffle the array
            ,
            newText("set1", row[row.sets[0]])          // use the first (shuffled) key
                .log()
            ,
            newTimer("wait", 3000)
                .log()
                .callback(getMediaRecorder("speech_set1").stop())
                .start()
                .wait()
            ,
            getText("set1")
                .remove()
            ,
            newTimer("wait", 1000)
                .log()
                .start()
                .wait()
            ,
            newMediaRecorder("speech_set2", "audio")
                .log()
                .record()
            ,
            newText("set2", row[row.sets[1]])          // use the second (shuffled) key
                .log()
            ,
            newTimer("wait", 3000)
                .log()
                .callback(getMediaRecorder("speech_set2").stop())
                .start()
                .wait()
            ,
            getText("set2")
                .remove()
            ,
            newTimer("wait", 1000)
                .log()
                .start()
                .wait()
            ,
            newMediaRecorder("speech_set3", "audio")
                .log()
                .record()
            ,
            newText("set3", row[row.sets[2]])          // use the third (shuffled) key
                .log()
            ,
            newTimer("wait", 3000)
                .log()
                .callback(getMediaRecorder("speech_set3").stop())
                .start()
                .wait()
            ,
            getText("set3")
                .remove()
            ,
            newTimer("wait", 1000)
                .log()
                .start()
                .wait()
            ,
            newMediaRecorder("speech_set4", "audio")
                .log()
                .record()
            ,
            newText("set4", row[row.sets[3]])          // use the fourth (shuffled) key
                .log()
            ,
            newTimer("wait", 3000)
                .log()
                .callback(getMediaRecorder("speech_set4").stop())
                .start()
                .wait()
            ,
            getText("set4")
                .remove()
            ,
            newTimer("wait", 1000)
                .log()
                .start()
                .wait()
            ,
            newMediaRecorder("speech_target", "audio")
                .log()
                .record()
            ,
            newText("target", row.target)
                .log()
            ,
            newTimer("wait", 3000)
                .log()
                .callback(getMediaRecorder("speech_target").stop())
                .start()
                .wait()
            ,
            getText("target")
                .remove()
        )
        .log("first", row[row.sets[0]])   // log the first shuffled key
        .log("second", row[row.sets[1]])  // log the second shuffled key
        .log("third", row[row.sets[2]])   // log the third shuffled key
        .log("fourth", row[row.sets[3]])  // log the fourth shuffled key
        .log("target", row.target)        // log the target word too
    )
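For reference, the in-place shuffle that Ibex’s fisherYates performs on the array of keys amounts to this (a plain-JavaScript sketch of the standard algorithm, not Ibex’s exact source):

```javascript
// Fisher–Yates: walk the array from the end, swapping each element
// with a uniformly random element at or before its position
function fisherYatesShuffle(a) {
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1)); // random index in 0..i
    [a[i], a[j]] = [a[j], a[i]];                   // swap
  }
  return a; // shuffled in place
}
```

Because the shuffle happens in place, reading row.sets[0] through row.sets[3] afterwards gives you the four keys in a random order, which is why the .log calls above record which set ended up in which position.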

    Jeremy

    in reply to: Troubleshooting #7755
    Jeremy
    Keymaster

    Hi,

    I just took a test run of the Advanced Tutorial project, loaded the results file in R using read.pcibex, and ran your code: it works like a charm. Could it be that you’re working in a different R session than three days ago, one that has a different results data frame?

    Jeremy

    in reply to: End experiment between trials #7752
    Jeremy
    Keymaster

    Hi Marisol,

    I am going to keep calling “trials” the things you get with newTrial, to be consistent with PennController’s terminology and avoid confusing potential readers

    If you track accuracy independently for different sets of trials, why don’t you use different Var elements for each set? Like this:

    Template(GetTable("tabla.csv")
        .filter( "set" , /a/ )
        ,
        row => newTrial("setA",
          newVar("accSetA", 0).global()
          ,
          // ...
          getTextInput("intento").test.text(row.correcta)
            .failure( getVar("accSetA").set(v => v+1)) 
            .log()
          ,        
          getVar("accSetA").test.is(3)
            .success( getVar("shouldend").set(true))
          // ...
    

    Etc.

    Jeremy
