Instructions to presenters
Please read the following instructions carefully.
For any further questions regarding the presentations, please contact us at REVERB-challenge@lab.ntt.co.jp.
Lecture presentations
Each presentation is allocated a 20-minute time slot. The talk itself should take about 15 minutes, leaving 5 minutes for questions from the audience and a short handover to the next presenter.
The presentation should clearly explain the motivation and characteristics of the proposed system and its results on the REVERB challenge task, so that the advantages of the system are easy to understand. There is no need to include details about the REVERB challenge task itself, as it will be covered in the challenge summary given by the organizers.
We encourage presenters to upload their slides to the workshop computer: during the morning coffee break for talks in the morning session, and during the lunch break for talks in the afternoon session.
Poster presentations
The poster boards are 100 cm wide x 250 cm high. Posters should not exceed these dimensions.
The title, authors, and affiliations should appear at the top of the poster. The poster should clearly explain the motivation and characteristics of the proposed system and its results on the REVERB challenge task, so that the advantages of the system are easy to understand. There is no need to include details about the REVERB challenge task itself, as it will be covered in the challenge summary given by the organizers.
Each poster board is identified by a number shown at the top of the board. Please hang your poster on the board corresponding to your poster ID during the break before the poster session. The poster ID is the number shown before the title on the program webpage, e.g. p1.XX for the first poster session and p2.XX for the second poster session.
Paper submission
Paper submission is now closed!
Please read the following instructions carefully before preparing your manuscript.
Paper submission is only open to REVERB challenge participants.
- REVERB challenge participants are invited to submit papers of up to 8 pages (including references).
- The papers should describe the system used for the REVERB challenge and provide detailed experimental results to enable comparison with other participants' systems. Please check the challenge instructions and the information about the SE task and ASR task for details on what should be included in the papers.
- The papers should also include a clear discussion of the motivation for and advantages of the proposed system in the context of the REVERB challenge, so that readers can easily grasp its key ideas.
- The papers should be formatted according to the ICASSP paper style. The corresponding paper-kit may be obtained here.