Piantadosi, Wednesday and Thursday

Steven Piantadosi (Rochester) will be here both Wednesday and Thursday, for a Psychology colloquium and a cognitive seminar, respectively. The colloquium will be on Wednesday from 3:45-5pm in Jordan Hall 420-041 and is titled "A rational approach to language" (abstract below). The cognitive seminar will be on Thursday, in Varian Physics room 102/103, from 10-11:30am, and is titled "A computational perspective on language acquisition and design".

A rational approach to language
Jordan Hall 420-041, Feb 20, 3:45-5pm
I’ll present an overview of my research studying rational models of language form. I’ll argue that specific features of human language, such as the variation in word lengths and the presence of ambiguity, can be understood as information-theoretically efficient solutions to communicative problems. I’ll also discuss current experiments testing this general approach and present evidence that sentence processing mechanisms make sensible communicative inferences in decoding language across a noisy channel. These projects suggest that the cognitive mechanisms supporting human language are well-structured for solving problems of communication.

A computational perspective on language acquisition and design
Varian Physics 102/103, Feb 21, 10-11:30am
I’ll describe my computational and experimental work on language learning. I’ll discuss two primary lines of research, both of which focus on how learners might discover abstract aspects of language, including number words and quantifiers. In each domain, I’ll argue that the key aspects of meaning are not directly observable by learners, and that the inductive challenge this poses is best solved by statistically well-formed models that operate over rich semantic representations. I’ll show how such learning models can solve acquisition problems in theory, accurately describe inferences made by children and adults, and lead to compelling developmental predictions. I’ll then discuss ongoing experiments with infants and toddlers testing the core assumptions of these learning models.