Legacy: Features and Demos
This section of the Haskins Laboratories website features ongoing research along with legacy demos and projects originally created at Haskins and other organizations, starting in the 1950s and continuing to the present time.
Note: The original Haskins website moved from a Haskins server to a Yale University server, and portions of the old site were never fully converted. We have taken the opportunity to update some of these features and demos and will be working on a few others during the coming months, if time permits. The older demos are intended to capture a bit of the history of some of the earlier work at Haskins.
You can get a better feel for the early days of speech research (the 1940s and 1950s) by taking a virtual tour of a talking machine and research tool created at Haskins Laboratories.
• The Adventure Film, 1954
In 1954, Haskins Laboratories was featured in an episode of the CBS television documentary show Adventure hosted by Charles Collingwood. The Adventure film shows Frank Cooper, Al Liberman, and Pierre Delattre demonstrating the use of the Haskins Pattern Playback.
The initial collaboration of Leigh Lisker and Arthur Abramson at Haskins Laboratories resulted in a 1964 paper in the journal Word, “A cross-language study of voicing in initial stops,” which has become one of the most widely cited papers in all of phonetics. In it, they introduced the acoustic measure of voice onset time (VOT) to characterize the nature of stop consonant voicing distinctions.
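As a rough illustration of the measure (a sketch for this page, not drawn from the 1964 paper), VOT is simply the interval between the stop's release burst and the onset of voicing, with negative values indicating voicing lead. The function names and the category thresholds below are illustrative placeholders, not Lisker and Abramson's published values.

```python
def voice_onset_time_ms(burst_release_s, voicing_onset_s):
    """VOT in milliseconds: voicing onset minus the release burst.
    Negative values mean voicing leads the release (prevoicing)."""
    return (voicing_onset_s - burst_release_s) * 1000.0

def vot_category(vot_ms, short_lag_max_ms=30.0):
    """Rough three-way grouping; the 30 ms boundary is an
    illustrative placeholder, not a published threshold."""
    if vot_ms < 0:
        return "voicing lead (prevoiced)"
    if vot_ms <= short_lag_max_ms:
        return "short lag (unaspirated)"
    return "long lag (aspirated)"

# Example: release burst at 0.120 s, voicing onset at 0.185 s.
vot = voice_onset_time_ms(0.120, 0.185)   # 65.0 ms
print(vot, vot_category(vot))             # 65.0 long lag (aspirated)
```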
An interactive demonstration of time-varying SineWave Synthesis and related research. This section lets you take part in an online perception test by listening to the synthesized tokens in the Sentences area.
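For readers curious about what sine-wave synthesis does, here is a minimal sketch (not the Laboratories' code): each formant track is replaced by a single time-varying sinusoid, and the few resulting sinusoids are summed. The input format and function name are assumptions for illustration; real tokens are built from formant tracks estimated from natural utterances.

```python
import numpy as np

def sinewave_speech(tracks, frame_rate=100, sample_rate=16000):
    """Sum one time-varying sinusoid per formant track.

    `tracks` is a list of (freqs_hz, amps) pairs, each an array sampled
    at `frame_rate` frames per second (a hypothetical input format).
    """
    hop = sample_rate // frame_rate
    out = 0.0
    for freqs, amps in tracks:
        n = len(freqs) * hop
        t_frame = np.arange(len(freqs)) * hop            # frame positions in samples
        t_audio = np.arange(n)
        f = np.interp(t_audio, t_frame, freqs)           # smooth frequency contour
        a = np.interp(t_audio, t_frame, amps)            # smooth amplitude contour
        phase = 2 * np.pi * np.cumsum(f) / sample_rate   # integrate frequency to phase
        out = out + a * np.sin(phase)
    return out / (np.max(np.abs(out)) + 1e-12)           # normalize to +/- 1

# Example: a single 300 ms "formant" gliding from 500 Hz to 1500 Hz.
freqs = np.linspace(500, 1500, 30)
amps = np.ones(30)
audio = sinewave_speech([(freqs, amps)])
```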
A computational model of the vocal tract, begun at Bell Laboratories (Mermelstein, 1973) and subsequently refined by Rubin, Baer, and Mermelstein (1981) in the ASY program, used in studies of speech production and speech perception. (Forthcoming …)
• The Gestural Computational Model
Combines the Articulatory Phonology approach of Cathe Browman and Louis Goldstein with the Task Dynamic model of Elliot Saltzman and the Haskins articulatory speech synthesis system (ASY) developed by Philip Rubin and colleagues.
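As a loose illustration of the Task Dynamic idea (a sketch under simplifying assumptions, not the model's actual implementation), each gesture can be pictured as a critically damped point attractor that drives a tract variable, such as lip aperture, toward its target:

```python
import numpy as np

def gesture_trajectory(x0, target, stiffness, duration_s, dt=0.001):
    """One gesture as a critically damped point attractor on a tract
    variable:  x'' + b*x' + k*(x - target) = 0,  with b = 2*sqrt(k).
    (An illustrative sketch only, not the Task Dynamic model's code.)"""
    k = stiffness
    b = 2.0 * np.sqrt(k)                 # critical damping: no overshoot
    x, v = x0, 0.0
    traj = []
    for _ in range(int(duration_s / dt)):
        a = -b * v - k * (x - target)    # restoring force toward the target
        v += a * dt
        x += v * dt                      # simple Euler integration
        traj.append(x)
    return np.array(traj)

# Example: lip aperture closing from 10 mm toward 0 mm over 300 ms.
print(gesture_trajectory(10.0, 0.0, stiffness=400.0, duration_s=0.3)[-1])
# ends close to the 0 mm target
```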
TADA is a software implementation of the Task Dynamic model of inter-articulator speech coordination; it also incorporates a coupled-oscillator model of inter-gestural planning and a gestural-coupling model.
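The coupled-oscillator idea can also be sketched in a few lines (again an illustration under simplifying assumptions, not TADA's planner): two planning oscillators are coupled so that their relative phase settles at a target value, such as in-phase (0) or anti-phase (pi), which in the model governs the relative timing of gestures.

```python
import numpy as np

def settle_relative_phase(omega, coupling, target_phase, steps=20000, dt=0.001):
    """Two coupled phase oscillators:
        dtheta_i/dt = omega_i + coupling * sin(theta_j - theta_i - phi_target)
    With equal frequencies, the relative phase converges to phi_target.
    (Illustrative sketch only; parameter names are assumptions.)"""
    theta = np.array([0.0, 2.0])             # arbitrary initial phases
    for _ in range(steps):
        d = theta[1] - theta[0]               # current relative phase
        dtheta0 = omega[0] + coupling * np.sin(d - target_phase)
        dtheta1 = omega[1] - coupling * np.sin(d - target_phase)
        theta += dt * np.array([dtheta0, dtheta1])
    return (theta[1] - theta[0]) % (2 * np.pi)

# Identical 1 Hz oscillators coupled in-phase: relative phase settles near 0.
print(settle_relative_phase(omega=[2 * np.pi, 2 * np.pi],
                            coupling=5.0, target_phase=0.0))
```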
This project provides an overview of some of the exciting international work in auditory-visual speech and related areas. (Forthcoming …)
In the early 2000s, Douglas Whalen, Khalil Iskarous, and colleagues pioneered the pairing of ultrasound, used to monitor speech articulators that cannot be seen directly, with Optotrak, an opto-electronic position-tracking device used to monitor visible articulators. Within the past decade, there has been interest in the clinical application of ultrasound to individuals with speech disorders.