Long-range coarticulation
By "long-range" coarticulation, we mean the influence of vowels or consonants in one syllable on those in another syllable, sometimes over a span of several syllables, or even a whole word or phrase.
Studies of long-range coarticulation include:
- The MPhil and DPhil research of Paula West.
- John Coleman's data-mining study of the acoustic correlates of distinctive features, published as Coleman, J. (2003). Discovering the acoustic correlates of phonological contrasts. Journal of Phonetics 31, 351–372.
- A major project by Greg Kochanski, John Coleman and Christina Orphanidou examining acoustic and articulatory data (obtained using Dynamic MRI) on possible long-range coarticulatory aspects of tongue root and tongue dorsum movements.
Articulation and Coarticulation in the Lower Vocal Tract
The aims of this project were as follows:
- To test whether phonological representation of speech sounds using feature bundles (such as [+/- voiced], [+/- ATR], [+/- nasal]) is sufficient to describe articulatory motions. In particular, we aimed to determine whether or not there is articulatory evidence for the [+/- Advanced Tongue Root] feature in Southern British English vowels. (This was done using mathematical models of speech motor control.)
- To test whether models of speech motor control can be fitted to articulatory data. This may help us learn how phonological features are implemented, and perhaps how the articulators are controlled.
- To explore the connection between phonological features and the acoustic (in)variance of sounds. Which speech sounds resist changes in their context, and therefore have an articulatory target, and thus presumably a specified feature? Conversely, sounds that are strongly affected by coarticulation are likely to have unspecified features. In particular, if the [Advanced Tongue Root] feature is empirically supported, we would determine whether it exhibits coarticulation with neighbouring syllables.
- To determine the best methods for MR imaging of speech. We compared two different MRI techniques for monitoring speech production: real-time and stroboscopic (cardiac) imaging.
- To assess whether the repetitive production of a sentence is comparable to a single production of the same sentence, and if not, by how much and in what way they differ. This was a crucial part of our choice of MR techniques: cardiac imaging seems to produce better images, but it requires a subject to produce the same sentence 10 times in succession, so for this to be scientifically useful we need to understand repetitive speech.
- To develop methods for recording speech in the noisy environment of an MRI scanner and for removing noise from those recordings.
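One standard family of techniques for the last aim is spectral subtraction: the scanner's roughly stationary noise spectrum is estimated from a speech-free stretch of the recording and subtracted, frame by frame, from the magnitude spectrum of the noisy signal. The project page does not say which method was actually used, so the sketch below is only a generic illustration of the idea; the function name, frame length, and hop size are our own choices, not the project's.

```python
import numpy as np

def spectral_subtraction(noisy, noise_ref, frame_len=256, hop=128):
    """Reduce roughly stationary noise by subtracting an average noise
    magnitude spectrum from each frame, then resynthesising by
    windowed overlap-add. `noise_ref` is a speech-free recording of
    the same noise source (e.g. the scanner running without speech)."""
    window = np.hanning(frame_len)

    # Average magnitude spectrum of the noise-only reference.
    noise_frames = [noise_ref[i:i + frame_len] * window
                    for i in range(0, len(noise_ref) - frame_len, hop)]
    noise_mag = np.mean([np.abs(np.fft.rfft(f)) for f in noise_frames], axis=0)

    out = np.zeros(len(noisy))
    norm = np.zeros(len(noisy))
    for i in range(0, len(noisy) - frame_len, hop):
        frame = noisy[i:i + frame_len] * window
        spec = np.fft.rfft(frame)
        # Subtract the noise magnitude, flooring at zero; keep the
        # noisy phase (a common simplification).
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        clean = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame_len)
        out[i:i + frame_len] += clean * window
        norm[i:i + frame_len] += window ** 2
    return out / np.maximum(norm, 1e-8)

if __name__ == "__main__":
    # Synthetic demo: a 440 Hz tone buried in white noise.
    rng = np.random.default_rng(0)
    t = np.arange(16000) / 16000.0
    clean = np.sin(2 * np.pi * 440 * t)
    noisy = clean + 0.3 * rng.standard_normal(16000)
    noise_ref = 0.3 * rng.standard_normal(16000)
    denoised = spectral_subtraction(noisy, noise_ref)
```

Real scanner noise is periodic rather than white, which actually favours this approach (its spectrum is concentrated in narrow bands), but simple magnitude subtraction introduces "musical noise" artefacts, so practical systems typically add oversubtraction factors and spectral floors.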
(This project was funded by the ESRC under research award RES-000-23-1094.)