Archive for the ‘Events’ Category

LSA practice talks rescheduled for Monday Dec. 15

We have a large number of student talks at the upcoming LSA annual meeting. We had one practice talk session on Wednesday, but Thursday’s scheduled talks were canceled due to inclement weather. Please join us for a makeup session on Monday, December 15, 10:00AM–12:30PM in the Greenberg Room.

  • Robin Melnick, “On the Time-Course of Discourse Linking: Experiments with Wh-In-Situ Islands”
  • Robin Melnick and Eric Acton, “Function Words, Opposition, and Power: A socio-pragmatic ‘deep’ corpus study”
  • Kevin McGowan and Meghan Sumner, “A Phonetic Explanation for the Usefulness of Within-Category Variation”
  • Prerna Nadathur, “Towards an Explanatory Account of Conditional Perfection”
  • Masoud Jasbi, “The Semantics of Differential Object Marking in Persian”

Department Holiday Party Today in the Greenberg Room!

‘Tis the season to relax with colleagues and celebrate the holidays and the end of the quarter. Come by the Department Holiday Party, today from 3:30PM to 5:00PM in the Greenberg Room.

P-Interest Meeting Today: Dozat

Join the P-Interest Group today as they hear from Timothy Dozat, who will be presenting his computational phonology QP, which models OT constraints using neural networks. All are welcome!

Modeling OT Constraints Using Artificial Neural Networks

If one assumes that OT is a plausible cognitive model of linguistic production and/or comprehension, then one must take a stance on whether constraint definitions are hardwired into humans’ brains from birth and need only be ranked, are inferred solely from the linguistic data learners are exposed to during acquisition, or some combination of the two. The strong position that all constraints are innate and the learner must only rank them is very difficult to support, suggesting that constraint definitions, as well as constraint rankings, must be at least partially learned. However, previous computational models attempting to show how constraint definitions can be learned from data have faced severe shortcomings, many stemming from the discrete nature of the constraint definitions (e.g., assign a violation of weight w if features a and b are present in the input). I will show that allowing for continuous values in constraint definitions (e.g., assign p% of a violation of weight w if feature a is present in the input with weight v and feature b is present in the input with weight u) allows constraints to be represented as artificial neural networks, which can make small changes to constraint definitions without radically changing their behavior or throwing them out entirely. This representation comes with all the perks of standard neural networks, so that vowel harmony and constraint conjunction can be modeled with only small changes to the model.
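
The abstract leaves the architecture unspecified, but a minimal sketch of the continuous-constraint idea might look like the following: a “soft” constraint is a single sigmoid unit that assigns a fractional violation p based on weighted input features, scaled by the constraint’s own weight w. This is an illustration of the general technique, not Dozat’s actual model; all names here (SoftConstraint, harmony, the toy features a and b) are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SoftConstraint:
    """A constraint as one neural unit: graded violations instead of 0/1 marks."""
    def __init__(self, feature_weights, bias, w):
        self.feature_weights = np.asarray(feature_weights, dtype=float)  # v, u, ... per feature
        self.bias = bias  # threshold: how strongly features must be present to count
        self.w = w        # the constraint's violation weight

    def penalty(self, features):
        # "p% of a violation of weight w": a smooth activation scaled by w
        p = sigmoid(self.feature_weights @ np.asarray(features, dtype=float) + self.bias)
        return p * self.w

def harmony(candidate_features, constraints):
    # Total weighted penalty; the optimal candidate minimizes this sum,
    # in the style of weighted (Harmonic Grammar-like) constraint interaction.
    return sum(c.penalty(candidate_features) for c in constraints)

# Toy example with two binary features a and b: one constraint softly
# violated when both are present, one violated when a is absent.
both_ab = SoftConstraint(feature_weights=[4.0, 4.0], bias=-6.0, w=2.0)
needs_a = SoftConstraint(feature_weights=[-4.0, 0.0], bias=2.0, w=1.0)
constraints = [both_ab, needs_a]

candidates = {"[a,b]": [1, 1], "[a]": [1, 0], "[b]": [0, 1]}
for name, feats in candidates.items():
    print(name, round(harmony(feats, constraints), 3))
print("winner:", min(candidates, key=lambda k: harmony(candidates[k], constraints)))
```

Because the penalty is smooth in the feature weights v and u, a learner could adjust a constraint’s definition by small gradient steps rather than discarding it wholesale, which appears to be the property the abstract emphasizes over discrete, all-or-nothing violation marks.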