We just had a fun week grilling Charles Yang over his book, The Price of Linguistic Productivity. We discussed the mechanisms and principles in his framework, and how to apply them to declension class and gender features, and to V2 and verb raising, how to formulate rules, the difference between contexts of application and lists, and a slew of other topics. Here are some conclusions, as suggested by Charles.
1. Everybody should read Aspects.
2. Linguists should avoid overfitting the data: not every linguistic pattern is real, or worth theorizing over.
3. Productivity is categorical, not gradient.
4. There is no shame in achieving good data coverage.
5. A theory of competence needn’t transparently yield a theory of performance — but it would be irresponsible to insist that it cannot.
6. Third factor considerations require direct empirical motivations, not conceptual ones.
7. Some linguistic principles cannot be regressed out of data.
8. “Analogy” should be purged from linguistics.
9. Indirect negative evidence is wrong; Bayesian packaging doesn’t make it better.
10. Inductive/abductive learning of language does not mean that there are no a priori UG hypotheses (i.e., “parameters”).
This week we grill Charles Yang, focusing on his 2016 book ‘The Price of Linguistic Productivity.’ For those who haven’t experienced the Tromsø Grill, it’s basically an hour of talk with seven hours for questions.
Morris Halle has passed away. He was a colossus of linguistic theory, one of the most important linguists ever, and he was also a terrific person. I (Peter Svenonius) have learned an immense amount from his articles and I learned quite a lot from him personally as well. He taught a week-long minicourse here in Tromsø many years back (together with Nigel Fabb) which was terrific, and I had many good conversations with him at MIT when I spent my sabbatical there. He was full of wit and vigor and insight. He was a great discussant especially for issues of phonology and morphophonology but also for linguistic theory in general. He once told me that he believed that functional explanations were always wrong. I understood him to mean that all functional attempts to explain anything in linguistics have failed. I thought it was hyperbole, but I worked hard to come up with convincing counterexamples that weren’t completely trivial and failed.
There is a kind of obituary with some comments and links here:
Morris Halle R.I.P.
Gillian Ramchand’s new Research Council of Norway-funded project on Modality will start up this year with a workshop in June. The project is an innovative approach to exploring the meaning of natural language as manifested in cognition. From the proposal: “The innovative aspect of the research is that we intend to build up a framework for the semantic composition of modal meaning that goes beyond the description of sentential truth conditions, aiming in addition to distinguish competing semantic descriptions on the basis of psychological evidence.”
The project will include two new positions, probably one post-doctoral and one PhD position.
What do you think are the most influential works in linguistics? Peter Svenonius has compiled a list here of the most cited works, according to Google Scholar. Near the top are general works by Saussure, Sapir, and Jespersen, Lakoff’s work on metaphor, Searle on speech acts, Grice on pragmatics, Halliday on functional linguistics, Brown and Levinson on politeness, and a whole bunch of Chomsky.
Other oldies include works by Jakobson, Greenberg, Quine, Labov, and Lenneberg.
Classics in generative linguistics include Ross’ thesis on islands, Heim’s thesis on indefinites, Abney’s thesis on DP structure, Baker’s book on incorporation, and Pollock’s paper on splitting Infl.
More recent entries (from the 90’s on) include Kayne’s antisymmetry book, Cinque’s 1999 book, Rizzi’s left periphery paper, and several of Chomsky’s Minimalist papers.
The conference on Structural and Developmental Aspects of Bidialectalism is nigh! Get ready for ten hours of structured talks and discussion, including international stars and proud local contributions, amply punctuated by breaks for refreshments and meals and informal discussion, preferably performed in alternating dialects.
This week we will be treated to two presentations by Roni Katzir on compression-based learning.
In the LAVA lunch session on Thursday (high noon), he will discuss “an adequate simplicity metric for learning grammars,” going over three alternatives and arguing that compression-based simplicity (roughly, aiming for minimal description length) is the best for induction.
Then, in the colloquium slot (a quarter past two), he will present a talk entitled “Comparing theories of UG using compression-based learning,” in which he will go into more depth on the qualities of compression-based learning. He will discuss two case studies, one from phonology (concerning the Richness of the Base hypothesis) and one from semantics (concerning quantificational determiners).
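The idea behind minimal description length is that the best grammar minimizes the combined cost of stating the grammar and of encoding the data given that grammar. As a toy illustration of this trade-off (not taken from the talk; the rule format, the irregular-verb data, and the character-count cost metric are all illustrative assumptions, with characters standing in crudely for bits):

```python
# Minimal MDL-style sketch: compare two "grammars" for English past tense
# by total description length. All names and the character-count metric
# are illustrative assumptions, not from Katzir's talk.

def description_length(rules, exceptions, data):
    """Total cost = cost of stating the grammar + cost of encoding the data.
    'Cost' here is just character count, a crude stand-in for bits."""
    grammar_cost = sum(len(r) for r in rules)
    grammar_cost += sum(len(stem + past) for stem, past in exceptions.items())
    # Data cost: forms derivable by the "+ed" rule or listed as exceptions
    # cost nothing extra; anything uncovered must be spelled out verbatim.
    data_cost = 0
    for stem, past in data:
        covered = exceptions.get(stem) == past or past == stem + "ed"
        if not covered:
            data_cost += len(stem + past)
    return grammar_cost + data_cost

data = [("walk", "walked"), ("jump", "jumped"), ("play", "played"),
        ("go", "went"), ("sing", "sang")]

# Grammar 1: pure memorization (no rules, every pair listed).
rote = description_length([], dict(data), data)
# Grammar 2: one productive rule plus two listed irregulars.
rule = description_length(["V -> V+ed"], {"go": "went", "sing": "sang"}, data)

print(rote, rule)  # 44 23 — the rule-plus-exceptions grammar compresses better
```

Even on five verbs, the productive rule plus a short exception list beats rote listing, and the gap only grows as regular forms accumulate; that is the sense in which compression favors inductive generalization.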
For abstracts, location details, and more, see the CASTL events page.
On Monday and Tuesday, our international workshop on the emergence of semantically interpretable features will take place, starting at nine sharp in the newly renovated UB auditorium with Anna Papafragou from the University of Delaware. She’ll be speaking on “Semantics and cognition: The case of evidentiality.” The workshop is a collaboration with the OASIS network. Follow the link for the whole program.
CASTL is proud to present the following Friday Colloquium talk on October 6th at 14.15 in E0105:
Roksolana Mykhaylyk, Amazon and Harvard University
Machine Language Acquisition, and Why Speech Technology Needs More Linguists.
It is hard to imagine our life before the advent of speech technology. We now take for granted getting directions from the GPS voice in our cars, searching for information by talking to our smartphones, or communicating with personal smart home assistants. This presentation will focus on the role of linguistic knowledge in Natural Language Understanding, Automatic Speech Recognition, and Text-to-Speech product development. As an example, I will demonstrate how Google Translate and Amazon Echo are able to “speak” and explain in the most basic terms how machines “learn” various languages. I will also suggest some ways in which linguists can enter the IT job market.
Big news: Marit and Terje have just landed a big and prestigious grant to spend a year (2019-2020) at the Center for Advanced Study in Oslo to work on grammatical gender with lots of international collaborators. Congratulations to everybody who was involved in the application process!