Archive for the ‘Groups’ Category

SMircle Workshop Meeting Monday (2/2) at 3:15PM: Kalivoda & Zyman

Please join the SMircle Workshop next Monday at 3:15 in the Greenberg Room as they hear from Nick Kalivoda and Erik Zyman (UC Santa Cruz) on the syntax of relative clauses in Zapotec.

The title and abstract are given below.

On the Derivation of Relative Clauses in Teotitlán del Valle Zapotec

Much recent work argues that some or all externally headed relative clauses are derived by raising of the head NP from a relative-clause-internal position (Åfarli 1994, Kayne 1994, Bianchi 1999, Bhatt 2002). We present novel data from Teotitlán del Valle Zapotec (TdVZ) which show that relative clauses in this language lack the head-raising derivation entirely. The evidence comes from the failure of reciprocals in RC-heads to reconstruct into their relative clauses for Condition A (despite reciprocals’ regularly reconstructing for Condition A under A′-movement more generally) and from a subtle difference between TdVZ and English with regard to variable-binding possibilities in a particular configuration—a difference unexpected on a head-raising analysis of TdVZ relatives. We discuss two strands of evidence that seem to present challenges to our analysis—one involving apparent relativization of an idiom chunk that preserves the idiomatic reading, the other involving RC-internal interpretations of RC-head modifiers (Bhatt 2002). We argue that this counterevidence is only apparent, and that the only plausible analysis of TdVZ relatives is one on which they are not head-raising structures. This shows that externally headed relative clauses are a cross-linguistically heterogeneous category: superficially similar relativization structures in different languages can have very different derivational histories.

Cognition & Language Workshop Thursday (2/12) at 4PM: Grodner

Mark your calendars! Join the Cognition & Language Workshop in two weeks, as they hear from Dan Grodner (Swarthmore). The title and abstract are given below.

A Bayesian Account of Conversational Inferences

Much, if not most, of the meaning that speakers convey with their words is implicit. The standard account of how perceivers recover implicit content is via a process of rational psychosocial inference: Perceivers appeal to a set of maxims to formulate a generative model of a cooperative speaker (Grice, 1975). This view requires that perceivers reason about the communicative intention behind the speaker’s speech act (the whole utterance). Over the past 10-15 years, a number of researchers have argued that the standard account is inadequate because it cannot account for the existence of so-called local implicatures. These are cases where an inference appears to be generated within an embedded constituent of an utterance (e.g., Chierchia, Fox & Spector, 2012; Chemla & Spector, 2011; Gajewski & Sharvit, 2012). I will describe a probabilistic model that follows from the assumptions of the standard Gricean account (Russell, 2012) and provide experimental evidence that supports it. I will show how this model can explain seemingly local implicatures without appealing to special grammatical operators. In addition to providing a formalization of Gricean reasoning, this approach allows us to preserve the traditional division of labor between semantics and pragmatics. The present approach is similar in spirit to other recent probabilistic approaches (e.g., Goodman & Stuhlmueller, 2013) but covers different empirical territory and differs in its mechanics.
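For readers who want a concrete feel for how Bayesian reasoning over a speaker model can strengthen an utterance's meaning, here is a minimal Python sketch in the general spirit of such probabilistic accounts. It is not the model from the talk (or Russell's); the two-utterance lexicon, uniform priors, and proportional choice rule are illustrative assumptions.

    # A generic rational-listener sketch: a literal listener, a cooperative
    # speaker defined over it, and a pragmatic listener who inverts the
    # speaker by Bayes' rule. Lexicon, worlds, and priors are illustrative.

    WORLDS = ["some-but-not-all", "all"]
    UTTERANCES = ["some", "all"]

    # Literal semantics: which utterances are true in which worlds.
    TRUE_IN = {
        "some": {"some-but-not-all", "all"},
        "all": {"all"},
    }

    def literal_listener(utterance):
        """P(world | utterance) under the literal semantics, uniform prior."""
        consistent = [w for w in WORLDS if w in TRUE_IN[utterance]]
        return {w: (1.0 / len(consistent) if w in consistent else 0.0) for w in WORLDS}

    def speaker(world):
        """P(utterance | world): a cooperative speaker favors utterances that
        lead the literal listener to the true world."""
        scores = {u: literal_listener(u)[world] for u in UTTERANCES}
        total = sum(scores.values())
        return {u: s / total for u, s in scores.items()}

    def pragmatic_listener(utterance):
        """P(world | utterance), inverting the speaker model by Bayes' rule."""
        scores = {w: speaker(w)[utterance] for w in WORLDS}  # uniform world prior
        total = sum(scores.values())
        return {w: s / total for w, s in scores.items()}

    print(pragmatic_listener("some"))
    # -> roughly {"some-but-not-all": 0.75, "all": 0.25}: "some" is strengthened
    #    to "some but not all" by reasoning about the whole utterance, with no
    #    embedded grammatical operator.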

Psycholinguistics Group Meeting Thursday (1/29) at 4PM: Levy

Roger Levy will also speak at the Psycholinguistics Group meeting next Thursday at 4PM in the Greenberg Room.

Is grammatical knowledge probabilistic? Theory and evidence

Since the advent of generative grammar, the dominant characterization of human grammatical knowledge has been as categorical: a collection of rules or constraints determining the sentences in the language. Yet the same tradition has long recognized that acceptability judgments are graded. In this talk I take up the proposal that the reason for this gradedness is that grammatical knowledge is not categorical, but fundamentally probabilistic. Despite the recent proliferation of probabilistic methods in linguistics and related fields, this proposal remains controversial: on a skeptical view, perhaps probability is not part of grammatical knowledge per se, but simply proxies for extra-linguistic knowledge and describes inference under uncertainty in acquisition and processing. Here I argue that the classic criteria of descriptive and explanatory adequacy point towards a role for probability in grammar. I provide new evidence that a key constraint on syntactic coordination, the preference for like conjuncts, cannot be stated in categorical terms that are empirically valid, but has extensive coverage and support when stated probabilistically. When combined with previously adduced theory and data, this work yields the strongest case to date that at least some central components of grammatical knowledge are fundamentally probabilistic.
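As a rough illustration of what it means for a constraint like the like-conjuncts preference to be stated probabilistically rather than categorically, here is a toy Python sketch. The feature, penalty weight, and scoring scheme are hypothetical and are not drawn from the talk or its data.

    import math

    def categorical_ok(cat1, cat2):
        """Categorical statement: coordination is well-formed only if the
        conjuncts share a syntactic category."""
        return cat1 == cat2

    def graded_acceptability(cat1, cat2, weight=2.0):
        """Probabilistic (weighted) statement: unlike conjuncts incur a
        penalty, yielding graded rather than all-or-nothing acceptability."""
        penalty = 0.0 if cat1 == cat2 else weight
        return math.exp(-penalty)  # maps penalties into (0, 1]

    print(categorical_ok("NP", "AP"))        # False: ruled out outright
    print(graded_acceptability("NP", "NP"))  # 1.0
    print(graded_acceptability("NP", "AP"))  # ~0.14: degraded, not impossible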

P-Interest Workshop Meeting Today (1/23) at Noon

Join the P-Interest Workshop Meeting today at noon in the Greenberg Room, as they discuss Daniel Silverman’s 2012 book, Neutralization: Rhyme and Reason in Phonology.

From the book:
The function of language is to transmit information from speakers to listeners. This book investigates an aspect of linguistic sound patterning that has traditionally been assumed to interfere with this function – neutralization, a conditioned limitation on the distribution of a language’s contrastive values. The book provides in-depth, nuanced and critical analyses of many theoretical approaches to neutralization in phonology and argues for a strictly functional characterization of the term: neutralizing alternations are only function-negative to the extent that they derive homophones, and most surprisingly, neutralization is often function-positive, by serving as an aid to parsing. Daniel Silverman encourages the reader to challenge received notions by carefully considering these functional consequences of neutralization.

P-Interest Workshop Meeting Today (1/08) at Noon

Join the P-Interest workshop as they hold a planning meeting at noon in the Greenberg Room. All are welcome!

Janet Pierrehumbert at Cognition & Language Workshop Thursday (1/15) at 4PM

The Cognition & Language Workshop is excited to announce that Janet Pierrehumbert will be presenting for the group next Thursday at 4PM in the Greenberg Room.

Regularization in Language Learning and Change

Abstract: Language systems are highly structured. Yet language learners still encounter inconsistent input. Variation is found both across speakers and within the productions of individual speakers. If learners reproduced all the variation in the input they received, language systems would not be so highly structured; instead, all variation across speakers in a community would eventually be picked up and reproduced by every individual in the community. Explaining the empirically observed level of regularity in languages requires a theory of regularization as a cognitive process.

This talk will present experimental and computational results on regularization. The experiments are artificial language learning experiments using a novel game-like computer interface. The model introduces a novel mathematical treatment of the nonlinear decision process linking input to output in language learning. Together, the results indicate that:
– The nonlinearity involved in regularization is sufficiently weak that it can be detected at the micro level (the level of individual experiments) only with very good statistical power.
– Individual differences in the degree and direction of regularization are considerable.
– Individual differences, as they interact with social connections, play a major role in determining which patterns become entrenched as linguistic norms and which don’t in the course of language change.
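As a point of orientation for what a weakly nonlinear mapping from input to output can look like in this setting, here is a small Python sketch using a standard exponentiated choice rule. This is not the novel mathematical treatment introduced in the talk; the rule and the exponent value are illustrative assumptions.

    def production_probability(input_prop, a=1.5):
        """Probability of producing variant 1 given its proportion in the
        learner's input. a = 1 reproduces the input faithfully (probability
        matching); a > 1 exaggerates the majority variant, i.e. regularizes."""
        p = input_prop
        return p**a / (p**a + (1 - p)**a)

    for p in (0.5, 0.6, 0.7, 0.9):
        print(p, round(production_probability(p), 3))
    # Outputs (about 0.5, 0.65, 0.78, 0.96) are only slightly more extreme than
    # the inputs: a weak nonlinearity of the kind that is hard to detect in a
    # single experiment without very good statistical power.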

P-Interest Meeting Today: Dozat

Join the P-Interest Group today as they hear from Timothy Dozat, who will be presenting his computational phonology QP, which models OT using neural networks. All are welcome!

Modeling OT Constraints Using Artificial Neural Networks

If one assumes that OT is a plausible cognitive model of linguistic production and/or comprehension, then one must take a stance on whether constraint definitions are hardwired into humans’ brains from birth and need only be ranked, are inferred solely from the linguistic data learners are exposed to during acquisition, or some combination of the two. The strong position that all constraints are innate and the learner need only rank them is very difficult to support, suggesting that constraint definitions, as well as constraint rankings, must be at least partially learned. However, previous computational models attempting to show how constraint definitions can be learned from data have faced severe shortcomings, many stemming from the discrete nature of the constraint definitions (e.g., assign a violation of weight w if features a and b are present in the input). I will show that allowing continuous values in constraint definitions (e.g., assign p% of a violation of weight w if feature a is present in the input with weight v and feature b is present in the input with weight u) allows constraints to be represented with artificial neural networks, which can make small changes to constraint definitions without radically changing their behavior or throwing them out entirely. This representation comes with all the perks of standard neural networks: vowel harmony and constraint conjunction, for example, can be modeled with only small changes to the model.
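To make the continuous-constraint idea concrete, here is a minimal Python sketch in which a "constraint" is a small neural unit that assigns fractional violations and candidates compete on weighted violation totals, Harmonic-Grammar style. The features, weights, and candidates below are hypothetical and do not reproduce the QP's actual model.

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def soft_constraint(features, feature_weights, bias):
        """Return a violation in (0, 1) rather than a discrete 0/1 count.
        Small changes to feature_weights change the violation gradually."""
        activation = sum(features[f] * w for f, w in feature_weights.items()) + bias
        return sigmoid(activation)

    def harmony(candidate_features, constraints):
        """Weighted sum of (negative) soft violations, as in Harmonic Grammar."""
        return -sum(w * soft_constraint(candidate_features, fw, b)
                    for (fw, b, w) in constraints)

    # Hypothetical constraint penalizing candidates in which features a and b
    # co-occur: a gradient analogue of a conjoined *[a & b] constraint.
    constraints = [
        ({"a": 4.0, "b": 4.0}, -6.0, 2.0),  # (feature weights, bias, constraint weight)
    ]

    cand1 = {"a": 1.0, "b": 1.0}  # both features present: strong violation
    cand2 = {"a": 1.0, "b": 0.0}  # only one present: weak violation

    print(harmony(cand1, constraints))  # about -1.76: heavily penalized
    print(harmony(cand2, constraints))  # about -0.24: preferred candidate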