Meaningful Lunch Tuesday (9/30) at 11:45 AM

Everyone with an interest in semantics is invited to Meaningful Lunch — an intermittent informal lunch meeting for all those at Stanford working on or interested in the study of natural language meaning, broadly construed!

The main purpose of these lunches is to keep everyone informed about current or nascent semantic-y research, scout out possibilities for collaboration, learn about plans for semantics-related events during the academic year, and generally have a good time in the company of your fellow meaning-folks.

This quarter’s lunch will take place on Tuesday, September 30, from 11:45am to 12:50pm in Room 126 on the ground floor of Margaret Jacks Hall. Lunch will be provided.

Do come for some or all of that time!

SMircle Workshop Meeting Monday (9/29) at 3:15 PM: Silveira

SMircle’s first meeting of the year, next Monday at 3:15, is of special interest: Natalia Silveira will talk about Stanford Dependencies, a formalism used by the Stanford NLP Group. Anyone interested in NLP, parsing, or the role of syntax in NLP (and of NLP in syntax) should come. Natalia’s description of the talk is below.

A look at dependency syntax in NLP

This will be a presentation on the work done on dependency syntax in the NLP Group. We’ll start off with some context, talking about the role of syntax more generally in NLP. Then the focus will be on Stanford Dependencies, a formalism for representing syntactic dependencies, and its evolution over the last two years as a large-scale annotation project was carried out. Finally, we’ll discuss some data that proved challenging for annotation, and hopefully get feedback about the analyses we present.
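For those who haven’t seen the formalism, a Stanford Dependencies analysis represents a sentence as a set of binary grammatical relations between words. As a rough illustration (my own toy example, not one from the talk), the sentence “The dog chased the cat” comes out as:

    det(dog-2, The-1)
    nsubj(chased-3, dog-2)
    root(ROOT-0, chased-3)
    det(cat-5, the-4)
    dobj(chased-3, cat-5)

Each triple names a relation, a governor, and a dependent, with word positions attached.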

Cognition & Language Workshop Thursday (10/2) at 4PM: Kleinschmidt

Next Thursday we’ll kick off this year’s Cognition and Language Workshop with a talk by Dave Kleinschmidt (University of Rochester). It will be at 4PM in the GREENBERG ROOM. 

[Note new location relative to last year. All C&L talks this year will be in Greenberg.]

ROBUST LANGUAGE COMPREHENSION: Recognize the familiar, generalize to the similar, and adapt to the novel

Anyone who has used an artificial speech recognition system knows that robust speech perception remains a difficult and unsolved problem, yet one that human listeners solve nearly effortlessly. Speech perception requires that the listener map continuous, variable acoustic cues onto underlying linguistic units like phonetic categories and words. One of the substantial challenges that human listeners have to tackle is the lack of invariance: the fact that there is no single set of acoustic cue values that reliably indicates the presence of a particular linguistic structure. The lack of invariance arises in large part because the relationship between cues and linguistic units varies substantially from one situation to another, owing to differences among individual talkers, registers, dialects, accents, etc.: one talker’s /p/ may be more like another talker’s /b/.

In this talk I will present a computational framework, the ideal adapter, which characterizes the computational problem posed by the lack of invariance and how it might be solved. This framework naturally suggests three ways that listeners might achieve robust speech perception in the face of the lack of invariance: recognition of familiar situations/talkers, generalization to new situations/talkers similar to those encountered before, and rapid adaptation to novel situations/talkers. All three of these strategies have been observed in the empirical literature, bearing out a range of qualitative predictions of the ideal adapter framework, as well as quantitative predictions of an implemented model within this framework.

Finally, this framework provides a unifying perspective on flexibility in language comprehension across different levels, and it ties language comprehension together with other, more general perceptual processes that show similar adaptive properties. These connections point to future directions for investigating how the kinds of computations necessary for robust speech perception might be carried out algorithmically and implemented in neural mechanisms.
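For the computationally inclined, here is a minimal sketch of the kind of inference the abstract describes. It is my own toy illustration, not Kleinschmidt’s model: two phonetic categories are treated as Gaussian distributions over a single acoustic cue (voice onset time, in ms), an incoming cue is classified by Bayes’ rule, and a crude update shifts a category’s mean toward a new talker’s productions.

    import math

    def gaussian(x, mu, sigma):
        # Normal density: the cue likelihood p(cue | category).
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    # Hypothetical category parameters: mean VOT (ms) and spread for /b/ and /p/.
    categories = {"b": {"mu": 0.0, "sigma": 15.0}, "p": {"mu": 50.0, "sigma": 15.0}}

    def classify(vot):
        # Posterior p(category | cue) by Bayes' rule, assuming equal priors.
        like = {c: gaussian(vot, p["mu"], p["sigma"]) for c, p in categories.items()}
        total = sum(like.values())
        return {c: l / total for c, l in like.items()}

    def adapt(category, observed_vots, weight=0.2):
        # Crude adaptation: nudge the category mean toward a talker's observed cues.
        # (A stand-in for a full Bayesian update over category parameters.)
        mean_obs = sum(observed_vots) / len(observed_vots)
        categories[category]["mu"] = ((1 - weight) * categories[category]["mu"]
                                      + weight * mean_obs)

    print(classify(25.0))            # midway between the means: roughly 50/50
    adapt("p", [35.0, 40.0, 38.0])   # this talker's /p/ has unusually short VOT
    print(classify(25.0))            # the same cue now looks more /p/-like

The real framework, as the abstract suggests, maintains beliefs over the cue-to-category mapping itself rather than the point estimates used here; that is what would let a listener recognize, generalize, and adapt.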

Fieldwork Workshop Group Meeting Wednesday (10/1) at 2:30 PM

A quick reminder that Fieldwork Group meetings will now be on Wednesdays at 2:30 in the Ivan Sag room!

At the group’s first meeting, on October 1st, you can hear brief presentations from Daniel Galbraith (Faroe Islands), Sharese King (Bakersfield), and Ignacio Cases (epigraphic fieldwork) on their recent fieldwork experiences.

Each student will give a 10-minute highlight presentation on the data they gathered or their overall fieldwork experience, and there will be a half hour for questions and collaboration.

All are welcome!

VPUE Undergraduate Intern Presentations Friday (10/3)

Come hear from our undergraduate VPUE interns as they present on their summer research projects next Friday in the Greenberg Room. More info to come soon!

Look Who’s Talking!

Dan Jurafsky will be giving a colloquium at UPenn’s Institute for Research in Cognitive Science today: “Extracting Social Meaning from Language: The Computational Linguistics of Food and the Spread of Innovation”.