LIUM publishes new papers on named entity extraction and speech recognition


news.bridge partner LIUM has released two new academic papers on computation and language: End-to-end named entity extraction from speech and TED-LIUM 3: twice as much data and corpus repartition for experiments on speaker adaptation. The papers were collaboratively written by Antoine Caubrière, Yannick Estève, Sahar Ghannay, Antoine Laurent, Natalia Tomashenko (University of Le Mans), François Hernandez and Vincent Nguyen (Ubiqus), and Emmanuel Morin (University of Nantes).

End-to-end named entity extraction from speech shows that it’s possible to recognize named entities in speech with a deep neural network that directly analyzes the audio signal. This new approach is an alternative to the classical pipeline, which applies an automatic speech recognition (ASR) system first and then analyzes the transcriptions. LIUM’s new method not only handles speech recognition and entity recognition simultaneously, it also makes it possible to obtain only the named entities and ignore all other words. The approach is interesting for at least two reasons (a rough sketch of the idea follows the list below):

  1. The system is easier to deploy (because you only need to set up a neural net).
  2. Performance will most likely be better (because the neural net is optimized for named entity extraction, whereas in the classical pipeline the different tools are not jointly optimized for the same task).
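The paper’s actual model isn’t reproduced here, but a minimal sketch can illustrate the end-to-end idea: a single network maps audio features straight to a sequence of characters plus entity-tag symbols and is trained with a CTC loss, so no separate ASR and NER tools are chained. The feature size, tag inventory and hyperparameters below are placeholder assumptions, not values from the paper.

```python
# Minimal sketch of the end-to-end idea (not LIUM's actual architecture):
# one network maps audio features to a symbol sequence that mixes
# characters with entity-tag symbols, trained with CTC.
import torch
import torch.nn as nn

class EndToEndNERSketch(nn.Module):
    def __init__(self, n_mel=40, hidden=256, n_symbols=60):
        super().__init__()
        # n_symbols covers characters *plus* assumed tag symbols
        # such as "<pers>", "</pers>", "<loc>", "</loc>".
        self.encoder = nn.LSTM(n_mel, hidden, num_layers=3,
                               bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_symbols + 1)  # +1 for the CTC blank

    def forward(self, mel_frames):            # (batch, time, n_mel)
        hidden_states, _ = self.encoder(mel_frames)
        return self.head(hidden_states)       # per-frame symbol logits

model = EndToEndNERSketch()
ctc_loss = nn.CTCLoss(blank=model.head.out_features - 1)

# Dummy batch: 2 utterances of 100 frames, target sequences of length 20.
feats = torch.randn(2, 100, 40)
log_probs = model(feats).log_softmax(-1).transpose(0, 1)  # (time, batch, symbols)
targets = torch.randint(0, 60, (2, 20))
loss = ctc_loss(log_probs, targets,
                torch.tensor([100, 100]),    # input lengths
                torch.tensor([20, 20]))      # target lengths
loss.backward()
```

Because the symbol inventory contains the tag markers themselves, the decoded output can be reduced to the tagged spans alone, which is the “named entities only” property described above.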

End-to-end named entity extraction from speech is available on arXiv via this link.

TED-LIUM 3: twice as much data and corpus repartition for experiments on speaker adaptation describes (and provides) a new LIUM corpus of TED talks and documents experiments conducted with it.

TED-LIUM pursues two aims: training and improving acoustic models, and fixing flawed TED talk subtitles along the way. Using its own ASR system, tailor-made for processing TED talks, LIUM creates new transcriptions of the original audio. These transcriptions are then compared to the old, often inferior subtitles. LIUM keeps the reliable segments, discards unreliable material, applies some heuristics and finally provides the subtitles in file formats used by the international speech recognition community.
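The exact heuristics are described in the paper; as a rough sketch of the filtering step, under assumed inputs (pre-aligned subtitle and ASR segments) and an arbitrary 25% WER threshold, the idea could look like this:

```python
# Rough sketch of the segment filtering described above (not LIUM's actual
# tooling): compare each old subtitle segment with the new ASR hypothesis
# and keep only the segments whose word error rate stays below a threshold.

def word_error_rate(reference, hypothesis):
    """Word-level Levenshtein distance, normalised by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / max(len(ref), 1)

def keep_reliable_segments(subtitle_segments, asr_segments, max_wer=0.25):
    """Pair old subtitles with ASR hypotheses and drop unreliable segments."""
    return [subtitle
            for subtitle, hypothesis in zip(subtitle_segments, asr_segments)
            if word_error_rate(subtitle, hypothesis) <= max_wer]

# Toy example: the second segment disagrees too much and is dropped.
old_subtitles = ["machine learning is changing broadcasting",
                 "and now something completely different"]
asr_hypotheses = ["machine learning is changing broadcasting",
                  "the speaker thanks the audience and sits down"]
print(keep_reliable_segments(old_subtitles, asr_hypotheses))
```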

One of the (rather surprising) insights of the paper is that when transcribing oral presentations like TED talks, augmenting the training data from 207 hours to 452 hours (an increase of roughly 118%) doesn’t significantly affect a state-of-the-art ASR system (i.e. Hidden Markov Models coupled with deep neural networks, using a pipeline of different processes: speaker adaptation, acoustic decoding, language model rescoring). The word error rate (WER) dropped by merely 0.2 percentage points, from (an already low) 6.8% to 6.6%. The system seems to have reached a plateau.
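As a quick sanity check of these numbers (the figures come from the paragraph above; the snippet is just illustrative arithmetic):

```python
# Corpus growth and WER change for the hybrid HMM/DNN system.
hours_old, hours_new = 207, 452
wer_old, wer_new = 6.8, 6.6     # word error rates in percent

print(f"data increase:     {(hours_new - hours_old) / hours_old:.1%}")  # ~118.4%
print(f"absolute WER drop: {wer_old - wer_new:.1f} points")             # 0.2
print(f"relative WER drop: {(wer_old - wer_new) / wer_old:.1%}")        # ~2.9%
```

In other words, more than doubling the data buys the hybrid pipeline less than a 3% relative improvement.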

However, training data augmentation clearly benefits emergent ASR systems (a fully neural end-to-end architecture: only one process, no speaker adaptation, no heavy language model rescoring). In this case, the same augmentation of training data led to a 4.8 percentage point drop in the WER. At 13.7%, it is still rather high, but significantly lower than the 18.5% achieved by a Markov-based system at a comparable development stage in 2012.

Conclusion: Emergent fully neural ASR systems aren’t bad at all, are very sensitive to training data augmentation, and can probably be pushed further. The big question in this context: How much data does it take to reach competitive results?

TED-LIUM 3: twice as much data and corpus repartition for experiments on speaker adaptation is available on arXiv via this link and was also submitted to and accepted by SPECOM 2018.

Insights from our first user testing sessions



Getting early input from the people you are designing for is absolutely essential – which is why we invited about a dozen colleagues to give the latest beta version of news.bridge a test run at the DW headquarters in Bonn last month. We had two really inspiring sessions with journalists, project managers and other media people working for DW and associated companies — and we have gained a number of useful insights. While some of them are too project-specific to share (news.bridge is not in public beta yet), there are also more general learnings that should make for an interesting journo tech blog post. Here we go:

Infrastructure and preparation

When inviting people to simultaneously stream and play around with news videos, make sure you have enough bandwidth. This may sound trivial, but it’s important, especially in Germany (which doesn’t even make the Top 20 when it comes to internet connection speed).

To document what your beta testers have to say as quickly and conveniently as possible, we recommend preparing digital questionnaires (e.g. Google Forms) and sending out a link well before the end of the session. That way, you get solid feedback from everyone. It’s also a good idea to add a screenshot/comment feature (e.g. html2canvas) to the platform being tested. In addition, open discussions and interview-style conversations provide very useful feedback.

Testing automatic speech recognition (ASR) tools

Thanks to artificial neural networks, ASR services have become incredibly sophisticated in the last couple of years and deliver very decent results. Basically all of our test users said the technology will significantly speed up the tiresome transcription process when producing multilingual news videos.

However, ASR still has trouble when:

  • people speak with a heavy dialect and/or in incomplete sentences (like some European football coaches who shall not be named)
  • people speak simultaneously (which frequently happens at press conferences, for example)
  • complicated proper names occur (Aung San Suu Kyi, Hery Rajaonarimampianina)
  • homophones occur (merry, marry, Mary)
  • there is a lot of background noise (which is often interpreted as language and transcribed to gibberish)

As a result, journalists will almost certainly have to do thorough post-editing for a while and also correct (or add) punctuation, which is crucial for the subsequent translation.

Testing machine translation (MT) tools

What has been said about ASR also applies to MT: the technology has made huge leaps, but the results are not perfect yet, especially if you are a professional editor and thus have high standards. Something really important to remember:

The better and more structured your transcript (or uploaded original script),
the better the translation you end up with.

As for the limits of machine translation during our test run, we found that “exotic” languages like Pashto (which is really important for international broadcasters like DW) are not supported particularly well. Few services cover them, and the translation results are subpar. This is no big surprise, of course, as the corpora used to train the algorithms are so much smaller than those of a major Western language like French or German. This also means that it is up to projects like news.bridge to improve MT services by feeding the algorithms high-quality content, e.g. articles from DW’s Pashto news site.

While MT tools are in general very useful when producing web videos — you need a lot of subtitling in the era of mobile social videos on muted phones — there are some workflows that are hard to improve or speed up. For example: How do you tap into digital information carriers that are an individually branded, hard-coded part of a video created in software like Adobe Premiere? Well, for now we can’t, but we’re working on solutions. In the meantime, running news.bridge in a fixed tab and copy-pasting your translated script bits is an acceptable workaround.

Testing speech synthesis

Sometimes, computer voices are indispensable. For example, when you’re really curious about this blogpost, but can’t read it because you’re on a bike or in a (traditional) car.

In news production, however, artificial readers/presenters are merely a gimmick, at least for the time being. That’s because once your scripts are finished, reading and recording them isn’t that time-consuming and produces much nicer results. Besides, synthetic voices aren’t yet available in all languages (once again, Pashto is a case in point).

Nevertheless, news.bridge beta testers told us that the voices work fairly well, and even sound pretty natural in some cases. They can be trained, by the way, which is an interesting exercise we will try out at some point.

HLT services and news production in a nutshell

If we had to sum up the assessment of our beta testers in just a few sentences, they would read something like this:

HLT services and tools are useful (or very useful) in news productions these days: They get you decent results and save you a lot of time.

news.bridge is a promising, easy-to-use mash-up platform, especially when it comes to transcribing and translating and creating subtitles (another relevant use case is gisting).

news.bridge is not about complete automation. It’s about supporting journalists and editors. It’s about making things easier.

Hello, world!

We’re live: Our website is up, our Twitter is up, and the platform itself (please register via email) is ready for upgrades, rewiring and in-depth testing.

DNI project news.bridge officially started in January 2018, and we’ll be working on it until the end of June 2019. Stay tuned for updates on new features, new partners, upcoming events, and dispatches from the world of human language technology (HLT).