Issue 2014/01/24

Colloquium Today (Friday, Jan. 24), 3:30 PM: Peter Graff

Peter Graff (Intel Corporation) will give a colloquium today (Friday, Jan. 24) at 3:30 PM in the Greenberg Room, followed by a departmental social.

COMMUNICATIVE EFFICIENCY IN PHONOLOGY

Abstract: In this talk, I present novel typological and behavioral evidence suggesting that phonological patterns derive from communicative efficiency: The cross-linguistic patterning of sounds and words as well as the ways in which speakers produce them are geared towards achieving a high rate of information transmission given the effort invested by the speaker (Lindblom, 1990; Flemming, 1995).

First, I show for the first time that the relative occurrence frequencies of different sounds in 60 languages from 25 major language families may be understood in terms of communicative efficiency. Building on well-known findings about the relative perceptibility of voicing contrasts in different contexts (Raphael, 1981), differences in the effort involved in articulating different voiced stops (Ohala & Riordan, 1979), and information theory in the sense of Shannon (1948), I derive a measure of communicative efficiency for frequency distributions over voiced and voiceless stops in context. I show that the efficiency of natural language frequency distributions over those categories is significantly greater than expected from chance.
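(For readers unfamiliar with the information-theoretic notion the abstract appeals to, here is a minimal sketch, in Python, of how the average information carried by a frequency distribution can be computed in the sense of Shannon (1948). The counts, the function name, and the weighting below are invented purely for illustration; this is not Graff's efficiency measure, which additionally factors in perceptibility and articulatory effort.)

    import math

    def average_information(counts):
        """Shannon (1948) entropy, in bits per symbol, of a frequency distribution."""
        total = sum(counts.values())
        probs = [c / total for c in counts.values() if c > 0]
        return -sum(p * math.log2(p) for p in probs)

    # Hypothetical counts of voiced and voiceless stops in a single context;
    # the numbers are made up for illustration only.
    stop_counts = {"b": 120, "p": 300, "d": 180, "t": 400}
    print(f"{average_information(stop_counts):.3f} bits per stop")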

Next, I present evidence that redundancy in the lexicon is not randomly distributed, but instead exists to supplement distinctions between meaningful linguistic units that are hard to perceive. Specifically, I show that the number of words disambiguated solely by a given contrast (i.e., minimal pairs) decreases as a function of the perceptibility of that contrast, beyond what is expected from the probabilistic patterning of the contrasting sounds. The lexicon as a whole is thus organized in ways that minimize the confusability of words given the effort invested in their production.

Finally, I present behavioral evidence suggesting that language production at the sound level seeks to maximize the rate of information transmission and minimize speaker effort (cf. Aylett & Turk, 2004). I report on a phonetic corpus study of F2-transitions into stops and stop burst durations showing that these acoustic cues to place of articulation stand in a probabilistic trade-off relation: when stop bursts are long, F2-transitions are correspondingly small, and when stop bursts are short, F2-transitions are correspondingly large. This trade-off is expected if the articulatory effort invested in the production of the burst is reduced where formant transitions convey sufficient information for the listener to recover the place of a stop.

Taken together, these results suggest that communicative efficiency shapes the phonology and lexicon of human languages, as well as the ways in which humans use sounds and words to communicate intended meanings.

Dinner will be served following the colloquium.

Spoken Syntax Lab Meeting Today (1/24) at 1 PM

There will be a Spoken Syntax Lab meeting today from 1:00 to 2:30 PM in Cordura 100 at CSLI.

This first meeting of the quarter will revolve around making plans for future meetings and hearing some ideas for ongoing and future research.

All are welcome!

Laura Kalin seminar, Thursday 1/30 at noon

Differential Object Marking: Insight from Neo-Aramaic

Laura Kalin
UCLA

Thursday, January 30, 12 noon
Margaret Jacks Hall, Terrace Room (4th floor)

Differential Object Marking (DOM) is a phenomenon that splits (direct) objects into two classes: in one class are objects that get overtly marked (“prominent”/“non-canonical” objects), and in the other class are ones that do not. On an inclusive conception of DOM, marking may take the form of case, an adposition, agreement, or clitic-doubling. Common factors distinguishing objects are definiteness, specificity, and animacy, with objects ‘high’ on the relevant scale (e.g., more definite) getting marked. Strikingly, DOM tends to be a “parasitic” phenomenon – the overwhelming majority of DOM languages employ a DOM marker that lives a double life, appearing elsewhere without a DOM function (i.e., not appearing based on animacy/definiteness). The most common DOM marker is dative case or a dative adposition, as found in Hindi, Spanish, and certain Neo-Aramaic languages. In the Neo-Aramaic language Telkepe, DOM takes an unusual form: specific objects obligatorily trigger agreement on the verb and are optionally also marked with dative case.

In this talk, I review the different ways that DOM has been accounted for theoretically, and show how some of these accounts fare better (or worse) in Neo-Aramaic; I specifically address how we might account for the tendency for DOM to be parasitic on oblique case. I also propose an account of DOM in the Neo-Aramaic language Telkepe and discuss the obstacles to extending this account to other DOM languages. This is work in progress, and I welcome feedback and suggestions.

Colloquium Friday January 31: Laura Kalin

Please join us for a colloquium by Laura Kalin (UCLA) in the Greenberg Room at 3:30 PM on Friday, January 31, followed by a departmental social.

Aspect and Argument-Licensing in Neo-Aramaic

In this talk, I present two empirical puzzles that involve intriguing interactions between aspect and agreement in Neo-Aramaic languages. Verbs in Neo-Aramaic come in several different ‘base’ forms that are built with root-and-template morphology and encode tense, aspect, or mood. The two verb bases of interest here are the imperfective base, for example, qatl (from the verb root q-t-l, ‘kill’), and the perfective base, for example, qtil. Subject and object agreement appear as suffixes on these bases.

The first puzzle I address is the various aspect-based agreement splits seen across Northeastern Neo-Aramaic: the form and configuration of subject and object agreement reverse depending on the aspect of the verb base, with the subject agreement morpheme of one base looking like the object agreement morpheme of the other, and vice versa. I propose that we can make sense of these aspect splits if we allow imperfective aspect itself to license an argument, with agreement being the overt manifestation of this licensing. The second puzzle is a secondary perfective strategy employed in many of these languages, which makes use of the imperfective verb base with an added prefix (qam-, varying phonologically by language). This secondary perfective verb form takes subject and object agreement as though it were imperfective, rather than perfective. I argue that these data reveal that there are two aspectual projections in the syntax, with only the lower aspectual projection determining the form of the verb base.

Finally, I put the two proposals together: if aspect can license an argument, and there are in fact two aspectual projections in the syntax, then I predict that each aspectual projection should be able to license an argument separately. This is precisely what we find in progressives in the Neo-Aramaic language Senaya. Overall, then, my two proposals (aspect as an argument-licenser and the existence of two aspectual projections) are able to capture a range of empirical phenomena in Neo-Aramaic and add to our understanding of the syntactic options provided by Universal Grammar.

Luc Steels lecture Feb. 3 at 4 PM

Luc Steels is Director of the Artificial Intelligence Laboratory of the Vrije Universiteit Brussel, head of the Sony Computer Science Laboratory in Paris, and a visiting researcher at Pompeu Fabra University in Barcelona. He will give a lecture on Monday, February 3 at 4 PM in Margaret Jacks Hall, Bldg. 460, Room 126; the title and abstract appear below.

CAN ROBOTS INVENT THEIR OWN LANGUAGE?

For more than a decade we have been doing robotic experiments to understand how language could originate in a population of embodied agents. This has resulted in various fundamental mechanisms for the self-organisation of vocabularies, the co-evolution of words and meanings, and the emergence of grammar. It has also led to a number of advances in language processing technology, in particular a new grammar formalism called Fluid Construction Grammar, which attempts to formalise and capture insights from construction grammar, and a new scheme for doing grounded semantics on robots.

This talk gives a (very brief) overview of our approach and discusses some details of the technical spin-offs that have come out of this work. The talk is illustrated with live software demos and videos of robots playing language games.

Look Who’s Talking!

Asya Pereltsvaig will speak about the history and geography of languages at the Festival of Sciences in Rome on January 26, together with Martin Lewis.

A Bit of Late Christmas Humor


Joke courtesy of Language Log!