Department of Comparative Language Science

Program

A detailed program will be announced when all authors of accepted contributions have confirmed their attendance.

Keynote speakers

Aylin Küntay (Koç University)
Caleb Everett (University of Miami)
T. Florian Jaeger (University of Rochester)
Nikolaus P. Himmelmann (University of Cologne)
Ina Bornkessel-Schlesewesky (University of South Australia)
Silvia Gennari (University of York)

Keynote presentations

Aylin Küntay (Koç University) | Word order and case markers in sentence processing of verb-final languages in early child language: Act-out and eye-tracking studies

Languages differ in how they convey who did what to whom, and young children learn to pay attention to devices such as word order and morphological marking to process sentences and access meaning. Whether language processing is sensitive to language-specific cues from early on or initially relies on more general heuristics is a matter of theoretical debate in language acquisition. Treating the first noun as the agent and relying on the verb to anticipate the semantic and syntactic structure of a construction have both been proposed as children's initial learning heuristics. Work on verb-final languages with flexible word order, such as Turkish, offers crucial empirical data because (a) first nouns are not always agents, (b) verbs do not often appear early in the sentence, and (c) morphology is critical for sentence comprehension.

My talk will focus on work with Turkish-learning children (aged 15 months to 5 years), compared with learners of English, Mandarin, German, and Dutch, in language comprehension paradigms using act-out and eye-tracking methodologies. We will present data on how children of different ages, learning different languages, form and update sentential meaning during spoken sentence comprehension. Common and language-specific strategies will be discussed in light of theoretical debates in language acquisition research.

Caleb Everett (University of Miami) | Does the brain favor certain forms of numerical language? Another look at the typological data

Human numerical cognition is supported by a phylogenetically primitive sense for approximate quantity discrimination (Dehaene 2011). Along with this approximate number sense, we exhibit a native capacity for tracking a small set of objects, and this object-tracking ability facilitates the precise discrimination of quantities smaller than four. The exact discrimination and mental storage of most quantities, however, also relies on symbols, typically verbal ones, for those quantities (Everett 2017). These symbolic representations, numbers, are culturally variable but typically result from similar processes of embodied cognition. This is evidenced by the fact that most number bases are decimal, quinary, or vigesimal. Yet the manual bias reflected in most number systems is not the only crosslinguistically evident influence on how people tend to construct those systems. Furthermore, there is an interesting parallel between another pattern in the crosslinguistic data and some neurobiological data: the latter data reveal that humans’ discrimination of small quantities is privileged by our mental hardware, more specifically a portion of the intraparietal sulcus. The other relevant pattern in the crosslinguistic data, meanwhile, also hints at the hardwired privileging of small quantities: grammatical number systems distinguish 1, 2, and 3 items precisely but only refer to other quantities in a fuzzy manner. Furthermore, small cardinal and ordinal numbers are sometimes formally distinguishable from higher numbers in the same language.

Still, it cannot be said that smaller quantities (1, 2, and 3) are always treated cohesively by the world’s languages. Instead we observe variability with respect to how formally distinct small and large numbers are from each other in a given language. This is perhaps surprising given the native facility humans have for discriminating smaller quantities. Some of the relevant variability in the representation of small quantities is well known, for instance the lack of precise small numbers in some languages. One goal of this talk is to draw attention to lesser-known variability: variability in the usage of small number words. Based on an analysis of 5940 lists of phonetically transcribed words in an online database, I show that words for 1 and 2 tend to be significantly shorter in large populations than in small ones, possibly due to a greater reliance on numbers in larger societies. So it is not the case that cultures vary simply according to whether or not they have words for smaller quantities. They also apparently vary with respect to how often they utilize smaller numbers. The results surveyed in this paper suggest that the neurobiological and typological data are consistent, but perhaps do not dovetail as cleanly as we might predict. Numerical language is in fact constrained by biological factors, a key observation that is dependent on both kinds of data. Yet, despite some commonalities across languages, I suggest that even small number words and grammatical number vary in unexpected ways.
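As a rough illustration of the kind of analysis described above (a minimal sketch with toy data and invented values, not the author's actual dataset or pipeline), one could test for a rank correlation between speaker-population size and the segmental length of a language's word for a small number:

```python
# Illustrative sketch only: toy data and invented values, not the actual analysis.
# The idea: test whether words for small numbers (e.g., "two") tend to be shorter
# in languages with larger speaker populations, using a rank correlation.
from scipy.stats import spearmanr

# (language, speaker population, number of segments in the word for "two")
toy_data = [
    ("A", 120_000_000, 2),
    ("B", 65_000_000, 3),
    ("C", 4_000_000, 4),
    ("D", 250_000, 5),
    ("E", 8_000, 5),
]

populations = [row[1] for row in toy_data]
word_lengths = [row[2] for row in toy_data]

rho, p_value = spearmanr(populations, word_lengths)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A negative rho would be consistent with the reported pattern:
# larger populations tend to have shorter words for small numbers.
```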

The issues discussed in this talk also underscore a methodological point: To truly illuminate the intersection of the brain and numerical language, we must pay attention to data gathered in laboratory settings but, just as crucially, we must continue to explore the diversity of numerical language (including its usage) across cultures.

References
Dehaene, Stanislas. 2011. The Number Sense. Oxford University Press.
Everett, Caleb. 2017. Numbers and the Making of Us. Harvard University Press.

T. Florian Jaeger (University of Rochester) | Seeds of change? Adaptation during language processing and production

All is in flux, nothing stays still. [Heraclitus, as quoted by Plato in Cratylus 402a]

Language exhibits large amounts of variability. The linguistic realization of the same meaning varies across languages, across speakers within a language, and within speakers of a language across time. This variability is central to many branches of the linguistic sciences—albeit at very different time scales (e.g., typology, historical linguistics, sociolinguistics, and psycholinguistics).

In this talk, I aim to illustrate some of the far-reaching consequences and functions of variability during language processing and production, i.e., at the scale of milliseconds. I present studies from my lab that highlight how listeners and speakers navigate this variability, by adapting their interpretations and productions. Critical to understanding how this is achieved, I argue, is the notion of inference under uncertainty. Listeners need to infer linguistic categories (phonemes, words, syntactic structures) incrementally from noisy and ambiguous input. Key to this are generative models of the input, i.e. processes that create probabilistic mappings from categories to input. However, talkers differ in how they map linguistic categories onto the speech signal. Listeners thus also need to infer which generative model to use to interpret the input at any given moment.
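As a minimal sketch of this kind of inference under uncertainty (with invented parameter values, not a claim about the actual models used in these studies; cf. the ideal-adapter framework in Kleinschmidt & Jaeger 2015 below), a listener might categorize an acoustic cue by applying Bayes' rule under a talker-specific generative model:

```python
# Illustrative sketch only (invented parameter values): a listener categorizes an
# acoustic cue (e.g., voice onset time in ms) as /b/ or /p/ by Bayesian inference
# under a talker-specific generative model of how categories map onto the signal.
from math import exp, pi, sqrt

def gaussian_pdf(x, mean, sd):
    return exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * sqrt(2 * pi))

# Each talker maps the categories /b/ and /p/ onto the cue differently:
# (mean, sd) of the cue distribution per category, per talker.
talker_models = {
    "talker_1": {"b": (0.0, 15.0), "p": (60.0, 15.0)},
    "talker_2": {"b": (10.0, 15.0), "p": (40.0, 15.0)},  # shifted /p/ distribution
}

def posterior_over_categories(cue, model, prior=(0.5, 0.5)):
    """P(category | cue) under one talker's generative model."""
    weighted = {
        "b": gaussian_pdf(cue, *model["b"]) * prior[0],
        "p": gaussian_pdf(cue, *model["p"]) * prior[1],
    }
    total = sum(weighted.values())
    return {category: w / total for category, w in weighted.items()}

cue = 30.0  # an ambiguous token
for talker, model in talker_models.items():
    print(talker, posterior_over_categories(cue, model))
# The same acoustic token is interpreted differently depending on which
# talker-specific generative model the listener assumes.
```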

In the first part of the talk, I show how listeners seem to overcome this challenge by adapting to changes in the statistics of the input, exhibiting remarkable flexibility (although within bounds defined by their prior language experience). Time permitting, I present both studies on adaptation to changes in the statistics of known categories and studies on the acquisition of novel (dialectal) categories. In the second half of the talk, I focus on language production and how speakers contribute to robust communication. I show that speakers seem to conduct inferences about the communicative consequences of their articulations, and that they adapt their productions based on causal inferences about the perceived communicative success of previous productions.

Understanding these adaptive processes in both comprehension and production can shed light on how variability in the input can spread across speakers and language communities.

Selected relevant readings from my lab

Buz, E., Tanenhaus, M. K., and Jaeger, T. F. 2016. Dynamically adapted context-specific hyper-articulation: Feedback from interlocutors affects speakers’ subsequent pronunciations. Journal of Memory and Language 89, 68-86. [10.1016/j.jml.2015.12.009]

Kleinschmidt, D. and Jaeger, T. F. 2015. Robust speech perception: Recognizing the familiar, generalizing to the similar, and adapting to the novel. Psychological Review 122(2), 148-203. [10.1037/a0038695]

Qian, T., Jaeger, T. F., and Aslin, R. 2016. Incremental implicit learning of bundles of statistical patterns. Cognition 157, 156-173. [10.1016/j.cognition.2016.09.002]

Weatherholtz, K., Campbell-Kibler, K., and Jaeger, T. F. 2014. Socially-mediated syntactic alignment. Language Variation and Change 26(3), 387-420.

Nikolaus P. Himmelmann (University of Cologne) | Universals of Language 3.0

The hypothesis is proposed that there are universal levels (or aspects) of linguistic structure that are directly derivative of the biological and social infrastructure for communication. Unlike universals of the Greenbergian and Chomskyan type, which typically involve controversial analytical categories such as ‘subject’ or ‘maximal projection’, the universal layer of linguistic structure targeted here is defined by being amenable to direct empirical testing. There may, however, be different ways of providing empirical evidence for a presumed universal of language 3.0. Thus, some proposals for a universal of this type may be investigated with psycholinguistic or neurolinguistic experiments, others with knowledge-free structure discovery algorithms, and still others with kinematic measures of articulatory gestures.

The core examples to be discussed in this presentation are two levels of prosodic structure, i.e., the syllable and the intonation phrase, but Dingemanse et al.’s (2013) proposal for “universal words” is also briefly commented on. Other kinds of phenomena that may be candidates for universals of language 3.0 include so-called information structure, indexical categories (person, demonstratives), and some register distinctions (narrative vs. non-narrative, for example), inter alia.

It is unclear at this point whether it is warranted and useful to subsume the fairly heterogeneous set of phenomena just mentioned under a single category. Importantly, in addition to being amenable to direct empirical falsification, the universal linguistic structures of the type intended here should provide a link between the general biological and social infrastructure for communication and specific, language-particular structures. That is, on the one hand, it should be possible to show precisely and in detail how they are derived from the general biological and social infrastructure for communication, for which Levinson’s (2006) ‘human interaction engine’ serves as the framework in the current argument. On the other hand, it should be possible to show how language-particular forms and constructions are derived from them (via grammaticisation, for example).

This latter point is of particular relevance for the current debate regarding language universals, as it implies a clear separation between an (empirically verifiable) universal (proto-)structure and language-specific instances thereof, which may differ from each other in important respects.

Presentations

The authors of the following contributions have already confirmed their attendance.

Gertraud Fenk-Oczlon. The constant flow of linguistic information: subject first preference in word order and in ambiguity resolution

Johannes Gerwien. Predicting referents based on structural meaning – The case of the Mandarin Chinese bǎ-construction

Muqing Li, Johannes Gerwien and Monique Flecken. First things first: Cross-linguistic analyses of event apprehension

Julia Misersky, Asifa Majid and Tineke Snijders. The effects of grammatical gender on reference processing in German: An ERP study

Idoia Ros, Adam Zawiszewski, Mikel Santesteban and Itziar Laka. The impact of language-specific distributional patterns on the bias for short dependencies: Cross-linguistic evidence from Basque, Polish and Spanish