Results for the ASR task

This page lists the results submitted to the REVERB challenge for the ASR task.
For the SE task results, please see the Results (SE) tab.

Note that the following figure does not display properly in some browsers, e.g. Internet Explorer. Please use Firefox, Google Chrome, or any other HTML5-capable browser.

Details of an individual system appear when you hover the mouse over that system's name in the list.

Word error rate (WER)
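
For reference, WER is the word-level edit (Levenshtein) distance between the reference transcript and the recognizer output, i.e. the total number of word substitutions, insertions, and deletions, divided by the number of reference words. Below is a minimal Python sketch of this computation, assuming whitespace-tokenized transcripts; the function name wer is illustrative and not part of the challenge tools.

    def wer(reference, hypothesis):
        ref = reference.split()
        hyp = hypothesis.split()
        # Levenshtein distance over words: d[i][j] is the minimum number of
        # substitutions, insertions, and deletions turning ref[:i] into hyp[:j].
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,          # deletion
                              d[i][j - 1] + 1,          # insertion
                              d[i - 1][j - 1] + cost)   # substitution
        return d[len(ref)][len(hyp)] / len(ref)

    # Example: two deletions against a six-word reference give a WER of 33.3%.
    print("{:.1f}%".format(wer("the cat sat on the mat", "the cat sat mat") * 100))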

Please use the following checkboxes to select the systems to be displayed.

Number of channels:
  • 8 ch
  • 2 ch
  • 1 ch
Processing scheme:
  • Full batch
  • Utterance-based
  • Real-time
Acoustic model:
  • Own dataset
  • Multi-condition (provided by challenge)
  • Clean (provided by challenge)
Recognizer:
  • Own recognizer
  • Challenge baseline recognizer (with CMLLR)
  • Challenge baseline recognizer (without CMLLR)

Note that results that slightly diverge from the challenge regulations are marked with an asterisk (*) next to the author's name in the legend. Please refer to the corresponding papers for details; updated results can also be found in the papers.

