Archive for the ‘Community’ Category

Linguistics apps for fun and profit

Online linguistics tools are getting better all the time. Here are two new ones worth knowing about:

The Great Language Game: test how good you are at distinguishing languages after listening to short samples of them.

US Dialect App: iPhone app (also works on iPads). By Bert Vaux, and based on his New York Times dialect survey that went viral a while back. Answer questions about your lexical choices and pronunciation and it will try to place you on a map. You get to watch the heat map update in real time as it refines its guesses based on your answers. It's an interesting way to learn about high-entropy points of dialect variation, and your responses help improve the system.

(via Will Leben and Bert Vaux)

P-Interest Meeting Today: Dozat

Join the P-Interest Group today as they hear from Timothy Dozat, who will be presenting his computational phonology QP, which models OT using neural networks. All are welcome!

Modeling OT constraints using Artificial Neural Networks

If one is to assume that OT is a plausible cognitive model of linguistic production and/or comprehension, then one must take a stance on whether constraint definitions are hardwired into humans’ brains from birth and must only be ranked, or inferred solely from the linguistic data learners are exposed to during acquisition, or some combination of the two. The strong position that all constraints are innate and the learner must only rank them is very difficult to support, suggesting that constraint definitions, as well as constraint rankings, must at least partially be learned. However, previous computational models attempting to show how constraint definitions can be learned from data have faced severe shortcomings, many stemming from the discrete nature of the constraint definitions (e.g. assign a violation of weight w if features a and b are present in the input). I will show that allowing for continuous values in constraint definitions (e.g. assign p% of a violation of weight w if feature a is present in the input with weight v and feature b is present in the input with weight u) allows for constraints to be represented with artificial neural networks, which can make small changes to constraint definitions without radically changing their behavior or throwing them out entirely. This representation comes with all the perks of standard neural networks, with the result that vowel harmony and constraint conjunction can be modeled with only small changes to the model.
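The contrast between discrete and continuous constraints in the abstract can be sketched in a few lines of code. This is not Dozat's actual model; it is a minimal illustration, with made-up parameter values, of the general idea that a "soft" constraint can assign a fraction of a violation based on continuously weighted input features (here via a sigmoid unit, as in a standard neural network):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_constraint(features, feature_weights, bias, w):
    # Instead of the discrete "assign a violation of weight w if
    # features a and b are present," assign p% of a violation of
    # weight w, where p varies continuously with the feature values.
    p = sigmoid(np.dot(features, feature_weights) + bias)
    return w * p

# Toy candidate outputs described by feature vectors [a, b]
# (feature names and values are purely illustrative).
candidates = {
    "faithful": np.array([1.0, 1.0]),  # both marked features present
    "repaired": np.array([1.0, 0.0]),  # feature b removed
}

# Illustrative parameters: v, u = per-feature input weights,
# w = overall constraint weight.
v_u = np.array([2.0, 2.0])
bias, w = -3.0, 5.0

penalties = {name: soft_constraint(f, v_u, bias, w)
             for name, f in candidates.items()}

# Harmonic-grammar-style selection: the candidate incurring the
# smallest total penalty wins.
winner = min(penalties, key=penalties.get)
print(winner)  # → repaired
```

Because the penalty is a smooth function of the weights, small gradient updates can nudge a constraint's definition rather than replacing it wholesale, which is the property the abstract highlights.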

John Rickford and Sharese King Featured in Stanford Report

Research on bias against speakers of African American Vernacular English in the American justice system, conducted by John Rickford and Sharese King, was featured in the most recent issue of the Stanford Report. Read about their work here!