Join the P-Interest Group today as they hear from Timothy Dozat, who will be presenting on his computational phonology QP, which models OT using neural networks. All are welcome!
Modeling OT constraints using Artificial Neural Networks
If one assumes that OT is a plausible cognitive model of linguistic production and/or comprehension, then one must take a stance on whether constraint definitions are hardwired into humans' brains from birth and must only be ranked, or are inferred solely from the linguistic data learners are exposed to during acquisition, or some combination of the two. The strong position that all constraints are innate and the learner must only rank them is very difficult to support, suggesting that constraint definitions, as well as constraint rankings, must at least partially be learned. However, previous computational models attempting to show how constraint definitions can be learned from data have faced severe shortcomings, many stemming from the discrete nature of the constraint definitions (e.g. assign a violation of weight w if features a and b are present in the input). I will show that allowing continuous values in constraint definitions (e.g. assign p% of a violation of weight w if feature a is present in the input with weight v and feature b is present in the input with weight u) allows constraints to be represented with artificial neural networks, which can make small changes to constraint definitions without radically changing their behavior or throwing them out entirely. This representation comes with all the perks of standard neural networks, so that vowel harmony and constraint conjunction can be modeled with only small changes to the model.
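The continuous constraint definitions described in the abstract could be sketched roughly as follows. This is an illustrative assumption, not the speaker's actual model: all names (`soft_constraint`, `harmony`), the sigmoid activation, and the Harmonic-Grammar-style scoring are stand-ins chosen to show how a fractional violation can vary smoothly with continuous feature weights.

```python
import math

def soft_constraint(features, feature_weights, bias):
    """Fractional violation in (0, 1): a sigmoid over a weighted sum
    of continuous feature activations. Small changes to the weights
    shift the violation gradually rather than flipping it on or off.
    (Hypothetical sketch, not the presented model.)"""
    z = sum(w * f for w, f in zip(feature_weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def harmony(features, constraints):
    """Score a candidate as the negative weighted sum of its
    fractional violations; higher harmony = better candidate.
    Each constraint is (constraint_weight, feature_weights, bias)."""
    return -sum(cw * soft_constraint(features, fw, b)
                for (cw, fw, b) in constraints)

# A candidate with both (hypothetical) features a and b strongly
# present incurs most of a violation; one with neither incurs little,
# so it receives higher harmony.
constraints = [(3.0, [2.0, 2.0], -1.0)]
print(harmony([1.0, 1.0], constraints))  # strongly violating candidate
print(harmony([0.0, 0.0], constraints))  # weakly violating candidate
```

Because the violation is a differentiable function of the weights, a learner can nudge a constraint definition by gradient steps instead of discarding and replacing it wholesale, which is the property the abstract highlights.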
The Construction of Meaning Workshop presents:
Incremental quantification and the dynamics of pair-list phenomena
New York University
Friday, November 21, 2014, 3:30pm, Margaret Jacks Hall, Rm. 126
Distributive universals are unique among natural language quantifiers in the following three ways: (i) matrix interrogatives that contain them accept pair-list answers; (ii) indefinites and disjunctions in their scope may assume “arbitrary functional” readings; and (iii) they permit sentence-internal interpretations of a wide range of comparative adjectives, like ‘new’ and ‘different’. Because other quantifiers in the same environments do not give rise to these interpretations, the constructions provide a window into the semantic processes that support quantificational distributivity. In fact, both pair-list and internal readings have been independently argued to expose some of the compositional clockwork behind universal quantification, but the mechanisms they have been taken to reveal are entirely distinct. In contrast, I’ll propose that pair-list phenomena and internal readings of comparative adjectives are two sides of the same coin; they are both side effects of incremental quantification. To make this precise, I’ll analyze distributive universal quantifiers in terms of iterated, incremental update, in effect generalizing the sequential conjunction operator of standard dynamic semantics. This approach captures the tight empirical connection between pair-lists and internal adjectives, and at the same time provides a simpler and more robust account of the data than some of the specialized alternatives.
Join the Cognition & Language Workshop as they welcome Bob Slevc (Maryland), who will give a talk at 4PM in the Greenberg Room. All are welcome!
Language, Music, and Cognitive Control
Our impressive abilities to process complex sound and structure may be most evident in language and music. There is growing evidence that linguistic and musical processing draw on shared cognitive and neural processes; however, it remains unclear exactly what these shared processes are. I will discuss some work investigating structural (syntactic) processing in language and music, and suggest that language/music relations reflect, at least in part, a shared reliance on domain-general mechanisms of cognitive control.