The REVERB challenge data is currently available only through the LDC. For more details, please visit the Download tab.

If citing the challenge, please use the following reference:
K. Kinoshita, M. Delcroix, S. Gannot, E. Habets, R. Haeb-Umbach, W. Kellermann, V. Leutnant, R. Maas, T. Nakatani, B. Raj, A. Sehr, and T. Yoshioka, "A summary of the REVERB challenge: state-of-the-art and remaining challenges in reverberant speech processing research," EURASIP Journal on Advances in Signal Processing, doi:10.1186/s13634-016-0306-6, 2016.
[Online PDF]

Welcome to the REVERB challenge

Recently, substantial progress has been made in the field of reverberant speech signal processing, including both single- and multi-channel dereverberation techniques and automatic speech recognition (ASR) techniques robust to reverberation. To evaluate state-of-the-art algorithms and draw new insights regarding potential future research directions, we are launching and calling for participation* in the REVERB (REverberant Voice Enhancement and Recognition Benchmark) challenge, which provides an opportunity for researchers in the field to carry out a comprehensive evaluation of their methods on a common database with common evaluation metrics. This is a multidisciplinary challenge: we encourage participants from both the speech enhancement and speech recognition communities. All entrants will be invited to submit papers describing their work to a dedicated workshop held in conjunction with ICASSP 2014 and HSCMA 2014.

                                     *A PDF version of the call for participation is available here.

The challenge assumes a scenario in which utterances spoken by a single stationary distant-talking speaker are captured with 1-channel (1ch), 2-channel (2ch), or 8-channel (8ch) microphone arrays in reverberant meeting rooms. It features both real recordings and simulated data, part of which simulates the conditions of the real recordings.
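Simulated reverberant data of this kind is conventionally generated by convolving clean utterances with room impulse responses and mixing in noise at a chosen signal-to-noise ratio. The sketch below illustrates that general recipe only; the synthetic impulse response, noise, and 20 dB SNR are illustrative assumptions, not the challenge's actual data-generation pipeline.

```python
import numpy as np

def simulate_reverberant(clean, rir, noise, snr_db):
    """Convolve clean speech with a room impulse response and add
    noise scaled to the requested SNR (simplified illustration)."""
    reverberant = np.convolve(clean, rir)[: len(clean)]
    noise = noise[: len(reverberant)]
    # Choose a gain so that 10*log10(P_speech / P_noise) == snr_db.
    p_speech = np.mean(reverberant ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return reverberant + gain * noise

# Toy inputs: 1 s of "speech" at 16 kHz and a decaying synthetic RIR.
rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)
rir = np.exp(-np.arange(4000) / 800.0) * rng.standard_normal(4000)
noise = rng.standard_normal(16000)
noisy_reverberant = simulate_reverberant(clean, rir, noise, snr_db=20.0)
```

The same convolve-then-mix structure underlies most simulated reverberant corpora; what varies is whether the impulse responses are measured in real rooms (as for this kind of data) or generated by a room simulator.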

The challenge consists of two tasks: speech enhancement (SE) and ASR in reverberant environments. Participants are invited to take part in either or both. The SE task consists of enhancing noisy reverberant speech with single- or multi-channel speech enhancement techniques and evaluating the enhanced data with objective and subjective metrics. The ASR task consists of improving the recognition accuracy of the same reverberant speech. The background noise is mostly stationary, and the signal-to-noise ratio is modest.
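To give a flavour of what a multi-channel SE entry involves, a delay-and-sum beamformer is about the simplest technique in this family: align the microphone signals on the source and average, so the source adds coherently while uncorrelated noise averages down. This NumPy toy is purely illustrative (it is not BeamformIt or any baseline provided by the challenge, and it assumes known integer sample delays):

```python
import numpy as np

def delay_and_sum(channels, delays):
    """Align each microphone signal by its integer sample delay and
    average them: the simplest multi-channel enhancement front-end."""
    aligned = [np.roll(x, -d) for x, d in zip(channels, delays)]
    return np.mean(aligned, axis=0)

# Toy 2ch example: the same source arrives 3 samples later at mic 2,
# and each microphone adds its own uncorrelated noise.
rng = np.random.default_rng(1)
source = rng.standard_normal(1024)
mic1 = source + 0.1 * rng.standard_normal(1024)
mic2 = np.roll(source, 3) + 0.1 * rng.standard_normal(1024)
enhanced = delay_and_sum([mic1, mic2], delays=[0, 3])
```

Because the two noise realizations are uncorrelated, averaging the aligned channels leaves the source intact while reducing the noise power by roughly 3 dB per doubling of microphones; real entries in the SE task go well beyond this, but the principle is the same.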

On this web site you will find everything you need to get started.

If you have any questions regarding the challenge, please do not hesitate to contact us.


  • The Kaldi recipe for the REVERB challenge has been significantly updated to incorporate state-of-the-art techniques. The recipe provides front-end processing based on weighted prediction error (WPE) dereverberation (nara_wpe) and beamforming (BeamformIt), together with state-of-the-art ASR acoustic modeling based on lattice-free MMI. It also helps you run a complete set of REVERB challenge experiments by providing both speech enhancement and ASR metrics. You can check it out here. (Nov. 28, 2018)
  • A summary journal paper has been published and is available online. (Jan. 18, 2016) [Online PDF]
  • Proceedings of the challenge workshop are available on the Proceedings tab. (May 16, 2014)
  • The challenge results are publicly available on the Result (ASR) and Result (SE) tabs. (May 9, 2014)
  • A Kaldi-based ASR baseline system, contributed by Felix Weninger, is available through the Download page. (May 9, 2014)
  • Paper submission is now closed! Paper acceptance notification is planned for March 7, 2014. (Jan 30 2014)
  • The result submission is now closed.
  • Due to multiple requests, the result submission deadline is EXTENDED to Dec. 15, 2013. Accordingly, the challenge workshop paper submission deadline is also extended to Jan. 26, 2014. (Dec. 5, 2013)
  • There are some known issues with the SE evaluation tool. If needed, you can find a fix on the Download page. (Dec. 5, 2013)
  • We have just sent e-mails to participants detailing the result submission process. The result submission tables can be found on the Download page. We have also added a script to measure the computational time (wall-clock time) of SE methods, available on the Download page. (Nov. 25, 2013)
  • The evaluation data set has been released. Registered REVERB challenge participants should have received an e-mail notification from LDC. We have also released the task files for the evaluation data sets; they are available on the Download page. (Nov. 14, 2013)
  • We have updated the regulations for clarity. Please check the instructions page. (Nov. 8, 2013)
  • Due to the extension of the ICASSP submission deadline, we have also extended the REVERB challenge deadlines by one week (see Important dates). (Nov. 1, 2013)
  • The REVERB workshop webpage contains details about the workshop. It will be updated to include information about paper submission... (Oct. 22, 2013)
  • It is possible to use the challenge data for submissions to ICASSP2014 and HSCMA2014. In such cases, please use the following reference to cite the challenge. (Oct. 17, 2013)
    Keisuke Kinoshita, Marc Delcroix, Takuya Yoshioka, Tomohiro Nakatani, Emanuël Habets, Reinhold Haeb-Umbach, Volker Leutnant, Armin Sehr, Walter Kellermann, Roland Maas, Sharon Gannot, Bhiksha Raj; "The REVERB Challenge: A Common Evaluation Framework for Dereverberation and Recognition of Reverberant Speech" in Proc. of IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2013


This challenge is part of the IEEE SPS AASP challenge series. We thank the members of the AASP challenge subcommittee and the IEEE TC chairs for their support and encouragement. We also gratefully acknowledge the LDC (Linguistic Data Consortium) for providing the challenge data to participants at no cost under the LDC challenge license agreement. Finally, we thank Dr. Erich Zwyssig, Dr. Mike Lincoln, and Prof. Steve Renals (University of Edinburgh) for allowing us to use the MC-WSJ-AV data and for their efforts to make it available through the LDC.


We are proud to acknowledge workshop sponsorship from:







Important dates

Jul 1, 2013: Release of development dataset and scripts for evaluation

Nov 12, 2013 (extended from Nov 5, 2013): Release of evaluation dataset

Dec 15, 2013 (extended from Dec 1, 2013): Deadline for submission of results

Jan 26, 2014: Deadline for submission of papers

March 7, 2014 (extended from Feb 28, 2014): Notification of acceptance

March 7, 2014: Opening of workshop registration

March 21, 2014: Camera-ready paper submission deadline

April 7, 2014: Author registration deadline

May 10, 2014: Workshop in conjunction with ICASSP 2014 and HSCMA 2014


Keisuke Kinoshita, Marc Delcroix, Takuya Yoshioka, Tomohiro Nakatani

Emanuel Habets (Int. Audio Labs Erlangen)

Reinhold Haeb-Umbach, Volker Leutnant (Paderborn Univ.)

Armin Sehr (Beuth Univ. of Applied Sciences Berlin)

Walter Kellermann, Roland Maas (Univ. of Erlangen-Nuremberg)

Sharon Gannot (Bar-Ilan Univ.)

Bhiksha Raj (Carnegie Mellon Univ.)