Statistical Learning Experimenter Package


Download

Amaury Hazan, Piotr Holonowicz, Pompeu Fabra University 2008

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.

Download the software

What is the Module?

The Saffran Experimenter module is a computer simulation of the forced-choice task performed by J. Saffran and colleagues [1]. In the original task, two groups of participants were exposed to random sequences of tone words, each consisting of three tones, drawn from one of two artificial "languages". Then a so-called forced-choice task is performed: the subjects are presented with pairs of tone words (one from the language whose sequence they heard, the other from the other language) and have to choose the one of the two that seems more familiar to them. The module simulates this experiment with the difference that the forced-choice task is performed not only after the exposure but also before it [2], and instead of a human subject, a predictor based on an artificial neural network is used. The results are then compared with the original Saffran data as well as with the data from the repeated experiment by Knast [2]. The graphical front-end allows the user to observe the forced-choice phase step by step and to compare visually the choices made by a human with those made by the machine.
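The exact decision rule used by the predictor is not detailed here; purely as an illustration, the following minimal Python sketch shows one way such a forced choice could be simulated, assuming a hypothetical predictor object with a prediction_error() method, where a lower prediction error is taken to mean higher familiarity:

 # Illustrative sketch only: 'predictor' and 'prediction_error' are hypothetical
 # stand-ins for the package's SRN/FNN expectation models.
 def forced_choice(predictor, word_a, word_b):
     # Familiarity is approximated here by the prediction error made while
     # processing each word: lower error = more familiar.
     error_a = predictor.prediction_error(word_a)
     error_b = predictor.prediction_error(word_b)
     return word_a if error_a <= error_b else word_b

In the simulation such a comparison is run both before and after exposure, so that the predictor's choices can be compared with the human ground truth in both phases.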

What are the inputs and outputs?

The input data are: the number of participants, the type of the experiment (non-words vs. part-words, see [2]), the type of predictor used, the languages (sets of tone words) in symbolic form (constant, hard-wired into the source code), the length of the randomly generated sequence of tone words (constant, hard-wired into the source code) and the type of encoding used for the tone words. The outputs are: a graph with the distribution of the tone words, and graphs comparing the performance of the predictor before and after exposure to the sequences against the human ground-truth data. Additionally, the user can observe the prediction process step by step by running the user-friendly GUI.


What are the parameters?

These are: the number of participants, the type of the experiment (non-words vs. part-words, see [2]), the type of predictor used and the type of encoding used for the tone words.


What are the system requirements?

The tested (and preferred) platform is Kubuntu Linux (from 7.04 Feisty Fawn). The software has also run successfully on Windows XP, with the exception of the graphical front-end, which has not been tested there.

The required packages:

Windows:

python, version >=2.5, http://www.python.org/download/

numpy and scipy (recent version, see http://www.scipy.org/Download)

qt library version >= 4.0.0, Open Source version http://trolltech.com/downloads/opensource

pyqt4 (recent version, use binary packages from http://www.riverbankcomputing.co.uk/software/pyqt/download)

pyYAML (recent version, http://www.pyyaml.org/wiki/PyYAML)

In addition, you need to copy the MSVCP71.DLL library into the directory containing the demo. You can download it from here: MSVCP71.DLL

Linux: use the install script; it compiles and installs all the dependencies. If you have problems with any of them, refer to the script for details.

How to Cite

 [1] Saffran, J. R., Johnson, E. K., Aslin, R. N., & Newport, E. L. (1999). Statistical learning of tone sequences by human infants and adults. Cognition, 70(1), 27-52.

Bibtex:

 @Article{saffran1999,
 title =    "Statistical learning of tone sequences by human infants and adults",
 author =    "Saffran, J. R. and Johnson, E. K. and Aslin, R. N. and Newport, E.L.",
 journal =    "Cognition",
 year =     "1999",
 number =    "1",
 volume =    "70",
 pages =    "27-52"
 }
 [2] Hazan, A., Holonowicz, P., Salselas, I., Herrera, P., Purwins, H., Knast, A., & Durrant, S. (2008). Modeling the acquisition of statistical regularities in tone sequences. Proceedings of the 30th Annual Meeting of the Cognitive Science Society (CogSci 2008).

Bibtex:

  @inproceedings{897,
  title = {Modeling the Acquisition of Statistical Regularities in Tone Sequences},
  booktitle = {30th Annual Meeting of the Cognitive Science Society},
  year = {2008},
  month = {23/07/2008},
  keywords = {statistical learning; computational modeling; music},
  URL = {http://mtg.upf.edu/files/publications/cogsci_camera.pdf},
  author = {Hazan, A. and Holonowicz, P. and Salselas, I. and Herrera, P. and Purwins, H. and Knast, A. and Durrant, S.}
  }

How to Use

Installing


Linux:

The installation script will install the needed packages using 'apt', therefore you will be asked for your password. Additionally, it will compile and install other packages that are not available in the default Ubuntu repositories.

To launch the installation script, run the following command:

sh install


Windows: you have to install all the dependencies manually, then follow the "Running" section.

Running

In order to run the system the user must first enter the demo directory:

cd musicprojector/src

The demos can then be launched using the following commands:

python demoFilename.py # For the python scripts

sh demoFilename.sh # For the bash scripts

In this case, run either './demoSaffranExperimenter.py' or './demoSaffranGUI.py'.

 './demoSaffranExperimenter.py' usage:
 Options:
  -h, --help           show this help message and exit
  -x EXPECTATIONMODEL  expectation model, 'SRN' (Simple Recurrent Network) or
                       'FNN' (Feed-forward Neural Network), default SRN
  -i EXPERIMENTID      experiment ID, 'Exp1' or 'Exp2'. In Exp 1, words from
                       language L1 are non-words in language L2; in Exp 2, words
                       from one language are part-words in the other language
                       (refer to [2] for a more detailed description)
  -e ENCODING          the type of encoding, 'Prob' or 'Discrete'

The difference is illustrated as follows:


The 'Prob' encoding:


 # 'Prob' encoding of tones
      c=  [1,0,0,0,0,0,0,0,0,0,0,0]
      cS= [0,1,0,0,0,0,0,0,0,0,0,0]
      d=  [0,0,1,0,0,0,0,0,0,0,0,0]
      dS= [0,0,0,1,0,0,0,0,0,0,0,0]
      e=  [0,0,0,0,1,0,0,0,0,0,0,0]
      f=  [0,0,0,0,0,1,0,0,0,0,0,0]
      fS= [0,0,0,0,0,0,1,0,0,0,0,0]
      g=  [0,0,0,0,0,0,0,1,0,0,0,0]
      gS= [0,0,0,0,0,0,0,0,1,0,0,0]
      a=  [0,0,0,0,0,0,0,0,0,1,0,0]
      aS= [0,0,0,0,0,0,0,0,0,0,1,0]
      b=  [0,0,0,0,0,0,0,0,0,0,0,1]

where c, cS, d, ... are consecutive notes of the chromatic scale (cS means the tone C#). Each tone is thus assigned a 12-bit binary word whose bits correspond to the nodes of the input layer of the neural network.

The 'Discrete' encoding:

 # Discrete Encoding of tones
      c=  0
      cS=   1
      d=  2
      dS=   3
      e=  4
      f=  5
      fS= 6
      g=  7
      gS= 8
      a=  9
      aS= 10
      b=  11

So each tone is assigned a single integer.
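As an illustration of the two encodings described above, here is a minimal Python sketch (the function names are hypothetical, not the package's actual API):

 # Minimal sketch of the 'Prob' (one-hot) and 'Discrete' encodings described above.
 # The function names are illustrative, not part of the package's API.
 TONES = ['c', 'cS', 'd', 'dS', 'e', 'f', 'fS', 'g', 'gS', 'a', 'aS', 'b']
 def discrete_encode(tone):
     # 'Discrete' encoding: each tone maps to an integer 0..11.
     return TONES.index(tone)
 def prob_encode(tone):
     # 'Prob' encoding: a 12-bit binary word with a single 1 at the tone's
     # position, one bit per input-layer node of the neural network.
     vector = [0] * 12
     vector[TONES.index(tone)] = 1
     return vector
 # Example: discrete_encode('dS') -> 3
 #          prob_encode('dS')     -> [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]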

 -c EXTRAENCODING     extra encoding, selects an additional encoding option:
                      'ContourProb' - pitch contour with 'Prob' encoding: takes two
                      consecutive tones and records whether the second tone is
                      higher (value = +1), the same (value = 0) or lower (value = -1);
                      'ContourDiscrete' - pitch contour with 'Discrete' encoding;
                      'IntervalProb' - encoding of the intervals: takes two consecutive
                      tones and outputs the distance between them (e.g. between 'c'
                      and 'dS' it is +1.5, and between 'e' and 'c' it is -2);
                      a sketch of these encodings follows this list
 -n RUNS              number of runs [default 10] (corresponds to the number of
                      participants in the experiments on humans)
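The contour and interval values can be sketched as follows (illustrative only; the function names are hypothetical, and the interval is assumed to be measured in whole tones, consistent with the examples above):

 # Illustrative sketch of the extra encodings; tones are given by their
 # chromatic index 0..11 (see the 'Discrete' encoding above).
 def contour(prev_tone, tone):
     # Pitch contour: +1 if the tone is higher than the previous one,
     # 0 if it is the same, -1 if it is lower.
     if tone > prev_tone:
         return 1
     if tone < prev_tone:
         return -1
     return 0
 def interval(prev_tone, tone):
     # Interval between consecutive tones in whole tones (half the semitone distance).
     return (tone - prev_tone) / 2.0
 # Examples matching the text: interval(0, 3) i.e. 'c' -> 'dS' gives +1.5,
 #                             interval(4, 0) i.e. 'e' -> 'c'  gives -2.0

For example, a complete run combining the documented options could be launched as 'python demoSaffranExperimenter.py -x FNN -i Exp1 -e Prob -n 10'.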

The outputs of the program are graphs of the distribution of the tone words and of the prediction accuracy, plus log files containing the data used to create the plots, plus snapshot files needed by the GUI to visualize the experiments (these are created only by ./demoSaffranGUI.py).

The log files have the following names: 'results_PredictorType_ExperimentType_Encoding_ExtraEncoding_rRuns.dat', for example 'results_FNN_Exp1_Prob_None_r10.dat'. The snapshot files have the following names: 'snapshot_nRuns_PredictorType_Encoding.snap', e.g. 'snapshot_n10_FNN_Prob.snap'.
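These names are assembled from the run parameters; the following small sketch reproduces the convention (the helper names are hypothetical, the package builds the names internally):

 # Illustrative helpers reproducing the file-naming convention described above.
 def log_filename(predictor, experiment, encoding, extra_encoding, runs):
     return "results_%s_%s_%s_%s_r%d.dat" % (predictor, experiment, encoding, extra_encoding, runs)
 def snapshot_filename(runs, predictor, encoding):
     return "snapshot_n%d_%s_%s.snap" % (runs, predictor, encoding)
 # log_filename('FNN', 'Exp1', 'Prob', 'None', 10) -> 'results_FNN_Exp1_Prob_None_r10.dat'
 # snapshot_filename(10, 'FNN', 'Prob')            -> 'snapshot_n10_FNN_Prob.snap'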

When './demoSaffranGUI.py' is launched:

The GUI asks the user either to generate new data or to use already computed data; until a simulation is loaded, all the remaining GUI elements are inactive. To generate new data, use 'File->New Simulation'; the GUI then asks for the following:

  1. number of runs, corresponds to the '-n' option of './demoSaffranExperimenter.py'
  2. experiment type, corresponds to the '-i' option of './demoSaffranExperimenter.py' ('Type 1' = 'Exp 1', 'Type 2' = 'Exp 2')
  3. type of predictor, corresponds to the '-x' option of './demoSaffranExperimenter.py'
  4. type of encoding, corresponds to the '-e' option of './demoSaffranExperimenter.py'
  5. additional type of encoding, corresponds to the '-c' option of './demoSaffranExperimenter.py'

Generation of the data can take a while, depending on the machine.

Then the user should choose 'File->Load Simulation'. The GUI becomes active and the status bar shows the parameters the file was created with. The following elements are visible on the screen:

Two rectangles with grids, representing a pair of tone words. The vertical axis shows the pitch (so there are 12 pitches) and the horizontal axis represents the position of the tone within a tone word. The small blue hatched rectangles represent tones chosen by humans (the ground truth) and the red ones represent predicted tones; where the two overlap, the grid cell appears pink (or violet). A green frame around one of the rectangles marks the word that 'won' (was chosen by the predictor).

Below there are four buttons: 'Stop' resets the simulation and displays the first pair; '<' moves to the previous pair; '>' moves to the next pair; 'Play' plays both tone words, the left and the right one. To the right of the buttons, a pair counter displays the number of the current pair. Below is a list box for choosing the group of participants and two radio buttons for selecting the phase of the experiment (before or after exposure to the random sequence of tone words). Finally, at the bottom, a slider with the participant index is displayed; move the slider left or right to choose the participant.