
Department of Comparative Language Science


Download the revised program: X-PPL2021-ScheduleRevised

Online view of the program for Day 1 and Day 2.


Keynote speakers

Elena Lieven (University of Manchester):

Why studying acquisition cross-linguistically is essential to building a theory of how children learn language

Any child can learn any language provided that they are typically developing, immersed, and start at a reasonably early age. But languages differ typologically across an enormous range of features, and children learn language in a wide variety of contexts. This provides us with all the ingredients for a natural experiment, though one with a complex set of interacting factors. In this talk, I will briefly summarise why increasing the range of languages studied, and the contexts in which children grow up, is so critically important. I will then outline a number of studies that demonstrate what can be learned from research on comparative language acquisition and what it can tell us about the mechanisms and processes by which children learn language.


Aslı Özyürek (Max Planck Institute for Psycholinguistics, Nijmegen):

Multimodal approaches to cross-linguistic differences in language structures, processing and acquisition

One of the unique aspects of human language is that in face-to-face communication it is universally multimodal (e.g., Holler & Levinson, 2019; Perniss, 2018). All hearing and deaf communities around the world use vocal and/or visual modalities (e.g., hands, body, face) with different affordances for semiotic and linguistic expression (e.g., Goldin-Meadow & Brentari, 2015; Vigliocco et al., 2014; Özyürek & Woll, 2019). Hearing communities use both vocal and visual modalities, combining speech and gesture. Deaf communities can use the visual modality for all aspects of linguistic expression in sign language. Unlike speech, visual articulators in both co-speech gesture and sign have unique affordances for visible iconic, indexical (e.g., pointing), and simultaneous expressions, due to the availability of multiple articulators. Such expressions have traditionally been considered "external" to the language system. I will, however, argue and present evidence that both spoken and sign languages combine such modality-specific expressions with arbitrary, categorical, and sequential expressions in their language structures in cross-linguistically different ways (e.g., Slonimska, Özyürek, & Capirci, 2021; Özyürek, 2018, 2021). Furthermore, such expressions modulate language processing and language acquisition in typologically different languages (e.g., Furman, Küntay, & Özyürek, 2014), suggesting that they are an internal property of a unified multimodal language system. I will end my talk with a discussion of how such a multimodal (rather than unimodal) view can explain how the dynamic, adaptive, and flexible aspects of our language system optimally bridge human biological, cognitive, and learning constraints and the interactive, culturally varying communicative requirements of face-to-face contexts.