Modified Cortical Filters for Auditory Saliency Model


by Susan Denham

Centre for Theoretical and Computational Neuroscience, University of Plymouth


Modified by Amaury Hazan (UPF-MTG) and Martin Coath (UoP)



What is the Module?

This is a modification of the Auditory Saliency Model that uses cortical filters derived from drum sounds. See the original model for further information.

What are the inputs and outputs?

results = auditoryPOnsets(fName,thresh0,div1,doDisplay, filterType)


  • fName name of the sound file containing the stimulus (.wav or .au)
  • thresh0 initial threshold for event detection; default = 1.
  • div1 divisor for use in threshold adaptation (should be >=1); default = 2.
  • doDisplay flag; when set, produces a plot of the saliency trace and detected events, and a soundtrack with the original stimulus superimposed on the detected event track; default = 0.

  • filterType specifies which set of cortical filters will be used:
 'vocal': trained with spoken numbers
 'drumd': trained with dry drum sounds
 'drumw': trained with wet and noisy drum sounds
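The interplay of thresh0 and div1 can be sketched as follows. The model itself is implemented in MATLAB and its exact adaptation rule is not reproduced here; the Python fragment below (the function name pick_onsets and the relaxation rule are illustrative assumptions) shows one plausible adaptive-threshold scheme consistent with the parameter descriptions above: the threshold starts at thresh0, is raised by each detected event, and relaxes back at a rate governed by div1.

```python
import numpy as np

def pick_onsets(saliency, fs, thresh0=1.0, div1=2.0):
    """Illustrative adaptive-threshold onset picker (not the actual
    MATLAB implementation). After each detected event the threshold
    jumps to that event's saliency, then relaxes back toward thresh0;
    each step divides the excess over thresh0 by div1 (div1 >= 1, so
    larger values give slower relaxation)."""
    thresh = thresh0
    onsets = []  # rows of [event time in s, event saliency]
    for n, s in enumerate(saliency):
        if s > thresh:
            onsets.append([n / fs, s])
            thresh = s  # raise threshold to suppress re-triggering
        else:
            # relax the threshold back toward its initial value
            thresh = thresh0 + (thresh - thresh0) / div1
    return np.array(onsets)
```

With div1 = 1 the threshold never relaxes after an event; values well above 1 make the detector recover quickly and accept closely spaced events.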


results is a data structure in which the model responses are returned:

  • .stim stimulus structure, containing:
 .sOrig original stimulus
 .sDet detected event track
 .fs sampling rate
  • .saliency continuous saliency trace
  • .pOnsets discrete perceptual onsets extracted from the saliency using an adaptive decision threshold; stored as a matrix with column 1 containing event times in seconds, and column 2 a measure of event saliency; each row corresponds to a detected event.
  • .eResp response of the cochlear model
  • .tResp response of the transient model
  • .cortResp response of the cortical filters
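The .pOnsets field is a two-column matrix, so post-processing the detected events is straightforward. The results structure itself is a MATLAB struct; the sketch below uses a NumPy array with made-up values standing in for results.pOnsets, to show how the columns are interpreted and how events might be filtered by saliency.

```python
import numpy as np

# Stand-in for results.pOnsets: column 1 = event times (s),
# column 2 = event saliency; one row per detected event.
# The numbers are illustrative, not real model output.
p_onsets = np.array([
    [0.12, 2.4],
    [0.55, 1.1],
    [0.98, 3.0],
])

times = p_onsets[:, 0]       # event times in seconds
saliencies = p_onsets[:, 1]  # corresponding saliency measures

# Example post-processing: keep only events whose saliency
# exceeds the median saliency of all detected events.
strong = p_onsets[saliencies > np.median(saliencies)]
```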

How to use: an example

The function call has the following form:

>> results = auditoryPOnsets(fName,thresh0,div1,doDisplay,filterType)