Multimodal Database of vocal and gestural imitations elicited by sounds.
The database will soon be available for download.
The complete database consists of 3 complementary databases:
- A database (Ircam) of vocal and gestural imitations of 52 referent sounds produced by French lay imitators. The referent sounds are mechanical sounds (12), basic physical interactions (12), and abstract sounds (10). The database also includes video recordings, sensor data, and acoustic descriptors for each imitation. It is structured as directories of families of referent sounds and their associated imitations, classified by imitator.
- A database (Ircam) of vocal imitations made by a subset of the French lay imitators, described in terms of the phonatory and articulatory configurations annotated by experienced phoneticians (KTH). The database is structured as one continuous file of the 52 imitations per imitator, with an associated ELAN annotation file (https://tla.mpi.nl/tools/tla-tools/elan/).
- A multimodal database (KTH) of vocal imitations of 50 referent sounds made by Swedish-speaking professional actors, described in terms of the phonatory and articulatory configurations employed for each sound production. These configurations were assessed and annotated by experienced phoneticians from high-fidelity audio, dual-camera video, and electroglottographic (EGG) recordings.
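The phonetic annotations above are distributed as ELAN files, which are plain XML (.eaf) documents pairing time slots with labelled tiers. As a minimal sketch, the following Python code reads the aligned annotations of one tier using only the standard library; the embedded sample file, the tier name "phonation", and the label values are hypothetical placeholders, not taken from the database itself.

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal .eaf content, for illustration only: real files from
# the database will have their own tier IDs, time slots, and labels.
EAF_SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<ANNOTATION_DOCUMENT AUTHOR="" DATE="2016-01-01T00:00:00+00:00"
                     FORMAT="3.0" VERSION="3.0">
  <TIME_ORDER>
    <TIME_SLOT TIME_SLOT_ID="ts1" TIME_VALUE="0"/>
    <TIME_SLOT TIME_SLOT_ID="ts2" TIME_VALUE="1500"/>
  </TIME_ORDER>
  <TIER TIER_ID="phonation">
    <ANNOTATION>
      <ALIGNABLE_ANNOTATION ANNOTATION_ID="a1"
          TIME_SLOT_REF1="ts1" TIME_SLOT_REF2="ts2">
        <ANNOTATION_VALUE>modal voice</ANNOTATION_VALUE>
      </ALIGNABLE_ANNOTATION>
    </ANNOTATION>
  </TIER>
</ANNOTATION_DOCUMENT>"""

def read_tier(eaf_xml, tier_id):
    """Return (start_ms, end_ms, label) tuples for one tier of an .eaf file."""
    root = ET.fromstring(eaf_xml)
    # Map each time-slot ID to its value in milliseconds.
    times = {ts.get("TIME_SLOT_ID"): int(ts.get("TIME_VALUE"))
             for ts in root.find("TIME_ORDER")}
    out = []
    for tier in root.iter("TIER"):
        if tier.get("TIER_ID") != tier_id:
            continue
        for ann in tier.iter("ALIGNABLE_ANNOTATION"):
            start = times[ann.get("TIME_SLOT_REF1")]
            end = times[ann.get("TIME_SLOT_REF2")]
            label = ann.findtext("ANNOTATION_VALUE")
            out.append((start, end, label))
    return out

print(read_tier(EAF_SAMPLE, "phonation"))  # [(0, 1500, 'modal voice')]
```

For real files, replace the embedded string with `ET.parse(path).getroot()`; ELAN itself can also export tiers to tab-delimited text if XML parsing is not needed.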
Each database comes with an explanatory text detailing its structure and the different files.
Persons who want to download the recordings must first:
register with an institutional e-mail address
accept the following clauses concerning the recordings:
use the recordings only within the framework of their research or in an educational context,
do not use for any artistic or commercial purpose,
make no copies other than for the research work,
always cite the source of the database in each publication or presentation referring to this database.
Reference: "Database of vocal and gestural imitations elicited by sounds", Copyright KTH & Ircam, created with the financial support of the Future and Emerging Technologies (FET) program within the Seventh Framework Program for Research of the European Commission, under FET-Open grant number 618067.