ACNS 2017: 11th Annual Meeting of the Auditory Cognitive Neuroscience Society


January 6-7, 2017 – Gainesville, Florida

Free and Open to the Public

ACNS Schedule

On January 6th & 7th, the Hearing Research Center at the University of Florida and Dr. Andrew Lotto hosted the 11th annual meeting of the Auditory Cognitive Neuroscience Society (ACNS). Seventeen active researchers presented on the intersection of hearing and cognition. Individual differences among participants were the theme of this year's meeting. Each presenter addressed difficult questions within their respective field of study; these fields include speech, music science, psychoacoustics, auditory neurophysiology, cognitive psychology, linguistics, and computer science.


ACNS is free and open to the public. However, we ask that you register here so we can estimate attendance and plan accordingly.

List of Presenters:

Michael Beauchamp, Baylor College of Medicine: Models and mechanisms of multisensory speech perception
Chris Braun, Hunter College: Language and meaning in the electric communication of weakly-electric fish
Dan Brenner, University of Alberta: Exploring individual differences in auditory lexical decision
Chris Brown, University of Pittsburgh: Binaural hearing en plein air: A veridical paradox in the horizontal plane
Nico Carbonell, University of Florida: Let’s get Flexible! An individual differences approach to examining perceptual flexibility
Bharath Chandrasekaran, University of Texas-Austin: Stability and plasticity in the neural representation of speech categories
Fred Dick, University of London: What can neuroscience learn from my violin teachers?
Reyna Gordon, Vanderbilt University: If you don’t have rhythm, you might not have grammar: From a neurobiological basis of language impairment to music intervention
Lori Holt, Carnegie Mellon University: It’s always sunny in Gainesville: Weather prediction for auditory categories
Ed Lalor, University of Rochester: Decoding how attention and visual input affect the early-stage encoding of natural speech
Miriam Lense, Vanderbilt University: Singing to infants for social engagement: Does the sound matter?
Andrew Lotto, University of Florida: Drain the swamp! Time for a paradigm shift in the study of speech perception
Bob Lutfi, University of Wisconsin: Individual differences in sound source segregation based on simultaneous spatial and spectral cues
Susan Nittrouer, University of Florida: The units of analysis in language processing
Arty Samuel, SUNY – Stony Brook & BCBL, Spain: Some people are more lexical than others
Joe Schlesinger, Vanderbilt University: Turn Down The Volume! Why are auditory medical alarms loud and annoying?
Tiffany Woynaroski, Vanderbilt University: Ain’t nobody got time for that!: Automated alternatives to conventional coding of child vocalizations