5 questions with… Yannick Estève


Four partners, four areas of expertise, four teams, each with a distinctive set of skills. For the second part of this series of posts on the people behind news.bridge, we talked to Yannick Estève, Professor of Computer Science at the University of Le Mans and project lead for LIUM.

Yannick, when and how did you first get in touch with human language technology?

Well, first of all, I’ve always been a fan of science fiction. So most certainly, books and movies like “2001” had a big influence on me. When I became a student in the 1990s, I was fascinated by computer science, but also by the humanities. Working on human language technology seemed like an excellent way to satisfy both interests.

What is the most fascinating aspect about news.bridge?

In my opinion, the most fascinating aspect is that you can handle complex and powerful technologies like speech recognition, machine translation, speech generation, and summarization through a very simple user interface. The platform offers easy access to global information in a vast number of languages, and that’s really fantastic!

What is the project’s biggest challenge?

The biggest challenge is probably related to integration. We need to manage heterogeneous technologies and services from several companies – and come up with one smart, unified application.

Who’s in your team and what are they currently working on?

We have four core members in this project: Sahar Ghannay is a post-doc researcher and an expert on deep learning for speech and natural language processing. Antoine Laurent is an assistant professor whose focus is speech recognition. Natalia Tomashenko is a research engineer working on deep learning and acoustic model adaptation for speech recognition. And I'm the professor and project lead; my expertise lies in speech and language technologies and deep learning.

We’re all members of LIUM, which can safely be called an HLT stronghold. For the last five years, our main research interest has been deep learning applied to media technology. Currently, we mainly work on neural end-to-end approaches for different tasks related to speech and language. “End-to-end” means that a single neural model processes the input (for example, an audio signal containing speech) and generates the output (text), whereas in the “classical” pipeline, several sequential systems and models sit between the input and the final output.
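
To make that distinction concrete, here is a minimal, illustrative sketch in PyTorch. It is not LIUM's actual system; the layer sizes, feature dimension, and character-set size are arbitrary choices for the example. The point is simply that one trainable model maps audio feature frames directly to character probabilities, with no separate acoustic, pronunciation, or language-model stages in between.

```python
# Illustrative sketch of an end-to-end ASR model (hypothetical, not LIUM's system):
# one neural network maps audio feature frames directly to character scores.
import torch
import torch.nn as nn

class EndToEndASR(nn.Module):
    """Toy encoder: audio feature frames in, per-frame character log-probabilities out."""
    def __init__(self, n_features=40, hidden=256, n_chars=30):
        super().__init__()
        # A recurrent encoder reads the whole feature sequence in both directions.
        self.encoder = nn.LSTM(n_features, hidden, num_layers=2,
                               batch_first=True, bidirectional=True)
        # A linear layer projects each encoded frame to character scores;
        # the whole stack is trainable end-to-end (e.g. with a CTC loss).
        self.classifier = nn.Linear(2 * hidden, n_chars)

    def forward(self, features):
        encoded, _ = self.encoder(features)
        return self.classifier(encoded).log_softmax(dim=-1)

# One forward pass: a batch of one utterance, 100 frames, 40 features per frame.
model = EndToEndASR()
frames = torch.randn(1, 100, 40)
char_log_probs = model(frames)   # shape: (1, 100, 30)
```

In the classical pipeline, by contrast, the equivalent mapping would be split across separately built components, each trained and tuned on its own.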

Where do you see news.bridge in five years?

In five years, news.bridge will have even better-integrated services, cover even more languages, and offer new features, like the smooth extraction of semantic information. Progress in HLT is very fast, and we still haven’t realized the full potential of the deep learning paradigm; increasing computing power and training data is just a first step.