LIUM publishes new papers on named entity extraction and speech recognition


news.bridge partner LIUM has released two new academic papers on computation and language: End-to-end named entity extraction from speech and TED-LIUM 3: twice as much data and corpus repartition for experiments on speaker adaptation. The papers were collaboratively written by Antoine Caubrière, Yannick Estève, Sahar Ghannay, Antoine Laurent, Natalia Tomashenko (University of Le Mans), François Hernandez and Vincent Nguyen (Ubiqus), and Emmanuel Morin (University of Nantes).

End-to-end named entity extraction from speech shows that it is possible to recognize named entities in speech with a deep neural network that directly analyzes the audio signal. This new approach is an alternative to the classical pipeline, which first applies an automatic speech recognition (ASR) system and then analyzes the transcriptions. LIUM’s new method not only handles speech recognition and entity recognition simultaneously, it also makes it possible to extract only the named entities and ignore all other words (a rough sketch of the idea follows the list below). The approach is interesting for at least two reasons:

  1. The system is easier to deploy (because you only need to set up a neural net).
  2. Performance will most likely be better (because the neural net is optimized for named entity extraction, whereas in the classical pipeline the different tools are not jointly optimized for the same task).
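To make the idea more concrete, here is a minimal, hypothetical sketch of such an end-to-end tagger: audio features go straight into a recurrent network whose output alphabet contains characters plus entity tags, trained with a CTC loss. The vocabulary, feature dimensions and model sizes are illustrative assumptions, not the architecture described in the paper.

```python
# Minimal sketch (not the paper's exact model): a network that maps audio
# features straight to a character sequence in which named entities are
# wrapped in tag symbols, trained with CTC. All sizes are illustrative.
import torch
import torch.nn as nn

# Output alphabet: characters plus begin/end markers for entity categories,
# e.g. "<pers> j o h n </pers>" - the network emits the tags itself.
VOCAB = ["<blank>", " ", "<pers>", "</pers>", "<loc>", "</loc>"] + list("abcdefghijklmnopqrstuvwxyz")

class SpeechTagger(nn.Module):
    def __init__(self, n_feats=40, hidden=256, n_out=len(VOCAB)):
        super().__init__()
        self.rnn = nn.LSTM(n_feats, hidden, num_layers=3,
                           batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, n_out)

    def forward(self, feats):                   # feats: (batch, time, n_feats)
        out, _ = self.rnn(feats)
        return self.proj(out).log_softmax(-1)   # (batch, time, vocab)

# CTC aligns the audio frames with the tagged character sequence,
# so no frame-level labels and no separate ASR step are needed.
model = SpeechTagger()
ctc = nn.CTCLoss(blank=0, zero_infinity=True)

feats = torch.randn(2, 300, 40)                  # two dummy utterances
targets = torch.randint(1, len(VOCAB), (2, 50))  # dummy tagged transcripts
log_probs = model(feats).transpose(0, 1)         # CTC expects (time, batch, vocab)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((2,), 300, dtype=torch.long),
           target_lengths=torch.full((2,), 50, dtype=torch.long))
loss.backward()
```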

End-to-end named entity extraction from speech is available on arXiv.

TED-LIUM 3: twice as much data and corpus repartition for experiments on speaker adaptation describes (and provides) a new LIUM TED talk corpus and documents experiments carried out with it.

TED-LIUM pursues two aims: training and improving acoustic models, and fixing flawed TED talk subtitles along the way. Using its own ASR system, tailor-made for processing TED talks, LIUM creates new transcriptions of the original audio. These transcriptions are then compared to the old, often inferior subtitles. LIUM keeps the reliable segments, discards the unreliable material, applies some heuristics, and finally provides the subtitles in the file formats used by the international speech recognition community.
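As an illustration of the filtering step, here is a hedged sketch; the threshold and function names are assumptions for the example, not LIUM’s actual heuristics. The idea: compute the word error rate between the ASR hypothesis and the existing subtitle for each segment, and keep only segments where the two mostly agree.

```python
# Illustrative sketch of the filtering idea (threshold and names are
# assumptions): keep a subtitle segment only if it agrees closely enough
# with the ASR hypothesis for the same audio span.

def word_error_rate(reference, hypothesis):
    """Levenshtein distance on words, normalized by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / max(len(ref), 1)

def keep_segment(subtitle_text, asr_text, max_wer=0.3):
    """Treat a segment as reliable when subtitle and ASR output mostly agree."""
    return word_error_rate(subtitle_text, asr_text) <= max_wer

print(keep_segment("we need to rethink energy", "we need to rethink energy"))       # True
print(keep_segment("we need to rethink energy", "completely different words here")) # False
```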

One of the (rather surprising) insights of the paper is that when transcribing oral presentations like TED talks, increasing the training data from 207 hours to 452 hours (roughly +118%) does not significantly improve a state-of-the-art ASR system (i.e. hidden Markov models coupled with deep neural networks, using a pipeline of different processes: speaker adaptation, acoustic decoding, language model rescoring). The word error rate (WER) dropped by a mere 0.2 percentage points, from (an already low) 6.8% to 6.6%. The system seems to have reached a plateau.
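For readers who want to double-check the figures, this short snippet recomputes the size increase and the absolute versus relative WER change from the numbers quoted above.

```python
# Recompute the corpus growth and the WER change reported for the
# HMM/DNN pipeline system (numbers taken from the paper as quoted above).
hours_before, hours_after = 207, 452
growth = 100 * (hours_after / hours_before - 1)
print(f"training data increase: {growth:.1f}%")        # ~118.4%

wer_before, wer_after = 6.8, 6.6
absolute = wer_before - wer_after
relative = 100 * absolute / wer_before
print(f"WER drop: {absolute:.1f} points ({relative:.1f}% relative)")  # 0.2 points, ~2.9%
```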

However, training data augmentation clearly benefits emergent ASR systems (fully neural end-to-end architecture, only one process, no speaker adaptation, no heavy language model rescoring). In this case, the same increase in training data led to a 4.8 percentage point drop in the WER. At 13.7%, it is still rather high, but significantly lower than the 18.5% achieved by a Markov-based system at a comparable stage of development in 2012.

Conclusion: Emergent fully neural ASR systems aren’t bad at all, are very sensitive to training data augmentation, and can probably be pushed further. The big question in this context: how much data does it take to reach competitive results?

TED-LIUM 3: twice as much data and corpus repartition for experiments on speaker adaptation is available on arXiv and was also submitted to and accepted by SPECOM 2018.