Reply To: Some final issues before the experiment



Hi Danil,


I know the recordings of the last trial and of the trial before it are uploaded because I can see the lines


These lines simply report that the UploadRecordings trial was run; they do not report that any file was uploaded. The lines that report successful uploads are the ones you mentioned earlier, which include the filename of the uploaded zip file, like this one:

1613560743,856aafdd008ea714bef357058d6b9c1c,PennController,73,0,experiment,NULL,PennController,UploadRecordings,Filename,,1613560695207,filler2,APN,,карамфил,От градината бяха откъснати засадените от девойката,карамфил,A,STE-044_mono.wav,filler2_APN_карамфил__A,async

It does not seem to be a coding problem: the final upload is asynchronous (the line used in the main file is indeed UploadRecordings("sendAsync", "noblock")) and the next trial begins before the last upload has completed, which in turn explains why each .zip file points to the preceding trial. The upload seems to happen after the execution of the final trial, as the second "sendAsync" line suggests.

Precisely, this is why I suggested you make (at least the last of) your UploadRecordings trials synchronous, so that the final screen is only reached after the upload has completed.
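As a minimal sketch of what I mean (the labels "sendFinal", "intro", "experiment" and "final" are hypothetical, to be adapted to your script; omitting the "noblock" argument is what makes the upload blocking):

```javascript
// Intermediate uploads can stay asynchronous, as in your current script:
UploadRecordings("sendAsync", "noblock");

// Final upload: without "noblock", the experiment waits for the upload
// to complete before moving on to the next trial
UploadRecordings("sendFinal");

// Run an async upload after every experimental trial, and the blocking
// upload right before the final screen:
Sequence("intro", sepWith("sendAsync", randomize("experiment")), "sendFinal", "final");
```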

Contrary to your previous suggestion, I cannot input this information with my materials during the experiment, since I do not have direct control over my participants.

Are you referring to this message? I'm not sure how it is relevant to the current point, which is about checking whether the (algorithmically uniquely named) zip files have been uploaded.

Contrary again to your previous suggestion, I wouldn’t want to reset the value of this counter file, since this file contains important information about the number of participants.

Are you referring to my suggestion to use the SetCounter command to increase the value of the counter early in the experiment? Using that command would not reset the counter: it would simply replace the default behavior of automatically increasing the counter at the end of the experiment, and would increase it wherever you run the SetCounter trial instead (for example, at the very beginning of the experiment, if you make that SetCounter trial the first one to run). If you want to test your experiment in a specific group without editing anything in your project, use the withsquare method described in the tutorial.
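For reference, a sketch of what that looks like (the trial labels here are hypothetical):

```javascript
// Increment the counter as soon as a participant starts the experiment,
// instead of at the end of the experiment:
SetCounter("setcounter");

// Make the SetCounter trial the very first one to run:
Sequence("setcounter", "intro", randomize("experiment"), "final");
```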

Regarding adding participant-identifying info to the filenames of your audio recordings (ie not the uploaded zip files themselves, but the files they contain): for the reasons you describe, two or more participants could run the experiment with the same counter value. Generating another unique ID, in addition to the MD5 hash that's reported at the beginning of every line, and adding it at the end of every line, is not terribly difficult. Just add this to your script:

var id = 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, c=>{
  const r = Math.random() * 16 | 0, v = c == 'x' ? r : (r & 0x3 | 0x8);
  return v.toString(16);
});
Header( /* void */ )
  .log("uniqueID", id)

And then add +id to your MediaRecorder elements’ names, for example newMediaRecorder("test1_recording"+id, "audio") / getMediaRecorder("test1_recording"+id)

3. The first thing you should do, assuming you are hosting your experiment on a secure domain (which is the case on the PCIbex Farm), is to replace your line




In any case, your experiment seems to use many files, each between 250 and 800KB: I wouldn't be surprised if the servers where you host your files just stopped serving some of them after too many requests in a short time window. This is why I reiterate my recommendation to consolidate them into one or a few zip files. You can easily generate unique filenames by prefixing each file's name with that of its containing folder, e.g. Fillers2_STE-001_mono.wav

Regarding some of your participants taking your experiment again: this is something you should avoid, so I think you should rather spend time and effort reducing the likelihood that they would have to take it again. I don't know how you will be recruiting your participants or what resources you have access to, but my policy with online experiments has been to pay each participant only once, so that they have no incentive to take the experiment again after completing it, and to pay them even if they couldn't finish the experiment, as long as they can prove that they tried (usually by describing the content of the experiment and/or sending a screenshot). In my opinion, the counter-based conditional system that you describe is overly complicated, and ultimately unnecessary once the initial problem has been addressed.

Feel free to send me a link to your experiment here or by email at