
WMT 2016 : FIRST CONFERENCE ON MACHINE TRANSLATION


Link: http://www.statmt.org/wmt16/
 
When: Jul 11, 2016 - Jul 12, 2016
Where: Berlin
Submission Deadline: May 8, 2016
Notification Due: Jun 5, 2016
Final Version Due: Jun 22, 2016
Categories: NLP, machine translation
 

Call For Papers

This conference builds on ten previous workshops on statistical machine translation:

the NAACL-2006 Workshop on Statistical Machine Translation,
the ACL-2007 Workshop on Statistical Machine Translation,
the ACL-2008 Workshop on Statistical Machine Translation,
the EACL-2009 Workshop on Statistical Machine Translation,
the ACL-2010 Workshop on Statistical Machine Translation,
the EMNLP-2011 Workshop on Statistical Machine Translation,
the NAACL-2012 Workshop on Statistical Machine Translation,
the ACL-2013 Workshop on Statistical Machine Translation,
the ACL-2014 Workshop on Statistical Machine Translation, and
the EMNLP-2015 Workshop on Statistical Machine Translation.

IMPORTANT DATES
Release of training data for shared tasks: January 2016
Evaluation periods for shared tasks: April 2016
Paper submission deadline: May 8, 2016
Notification of acceptance: June 5, 2016
Camera-ready deadline: June 22, 2016

OVERVIEW

This year's conference will feature ten shared tasks:

a news translation task,
an IT domain translation task (NEW),
a biomedical translation task (NEW),
an automatic post-editing task,
a metrics task (assess MT quality given a reference translation),
a quality estimation task (assess MT quality without access to any reference),
a tuning task (optimize a given MT system),
a pronoun translation task,
a bilingual document alignment task (NEW),
a multimodal translation task (NEW).

In addition to the shared tasks, the conference will also feature scientific papers on topics related to MT. Topics of interest include, but are not limited to:

word-based, phrase-based, syntax-based, and semantics-based SMT
neural machine translation
using comparable corpora for SMT
incorporating linguistic information into SMT
decoding
system combination
error analysis
manual and automatic methods for evaluating MT
scaling MT to very large data sets

We encourage authors to evaluate their approaches to the above topics using the common data sets created for the shared tasks.

NEWS TRANSLATION TASK

The first shared task will examine translation between the following language pairs:

English-German and German-English
English-Finnish and Finnish-English
English-Czech and Czech-English
English-Romanian and Romanian-English NEW
English-Russian and Russian-English
English-Turkish and Turkish-English NEW

The text for all the test sets will be drawn from news articles. Participants may submit translations for any or all of the language directions. In addition to the common test sets, the conference organizers will provide optional training resources.

All participants who submit entries will have their translations evaluated. We will evaluate translation performance by human judgment. To facilitate the human evaluation, we will require participants in the shared tasks to manually judge some of the submitted translations. For each team, this will amount to ranking 300 sets of 5 translations per language pair submitted.

We also provide baseline machine translation systems, with performance comparable to the best systems from last year's shared task.
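
The ranking procedure above is conventionally decomposed into pairwise comparisons: a strict ranking of 5 outputs implies 10 pairwise preferences. The following sketch is illustrative only (it is not the official evaluation code, and the system labels are invented):

    from itertools import combinations

    def ranking_to_pairwise(ranked_systems):
        """Expand one human ranking of 5 outputs (best first) into
        the 10 implied (winner, loser) pairs. A strict order is
        assumed here; the real protocol also has to handle ties."""
        return list(combinations(ranked_systems, 2))

    # One annotated set: the annotator judged system C best, then A, E, B, D.
    print(ranking_to_pairwise(["C", "A", "E", "B", "D"]))
    # 300 sets per language pair -> 300 * 10 = 3000 pairwise judgments.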
IT TRANSLATION TASK

This task focuses on domain adaptation of MT to the IT domain for the following language pairs:

English-to-Bulgarian (EN-BG)
English-to-Czech (EN-CS)
English-to-German (EN-DE)
English-to-Spanish (EN-ES)
English-to-Basque (EN-EU)
English-to-Dutch (EN-NL)
English-to-Portuguese (EN-PT)

Parallel corpora (including in-domain training data) are available. Evaluation will be carried out both automatically and manually. See the conference website for detailed information about the task.
BIOMEDICAL TRANSLATION TASK

In the first edition of this task, we will evaluate systems for the translation of scientific abstracts in the biological and health sciences for the following language pairs:

English-French and French-English
English-Spanish and Spanish-English
English-Portuguese and Portuguese-English

Parallel corpora will be available for the above language pairs, as well as monolingual corpora for each of the four languages. Evaluation will be carried out both automatically and manually.
AUTOMATIC POST-EDITING TASK

This shared task will examine automatic methods for correcting errors produced by machine translation (MT) systems. Automatic Post-Editing (APE) aims at improving MT output in black-box scenarios, in which the MT system is used "as is" and cannot be modified. From the application point of view, APE components would make it possible to:

Cope with systematic errors of an MT system whose decoding process is not accessible
Provide professional translators with improved MT output quality to reduce (human) post-editing effort

In this second edition of the task, the evaluation will focus on one language pair (English-German), measuring systems' capability to reduce the distance (HTER) that separates an automatic translation from its human-revised version approved for publication. This edition will focus on IT domain data, and will provide post-editions (of MT output) collected from professional translators.
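
HTER is, in essence, a translation edit rate computed against the human post-edit. The sketch below gives the flavour using a plain word-level edit distance; the real TER metric additionally allows block shifts, so treat this as an approximation only:

    def word_edit_distance(hyp, ref):
        """Word-level Levenshtein distance (insertions, deletions,
        substitutions). Real TER additionally allows block shifts."""
        h, r = hyp.split(), ref.split()
        d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
        for i in range(len(h) + 1):
            d[i][0] = i
        for j in range(len(r) + 1):
            d[0][j] = j
        for i in range(1, len(h) + 1):
            for j in range(1, len(r) + 1):
                cost = 0 if h[i - 1] == r[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,        # deletion
                              d[i][j - 1] + 1,        # insertion
                              d[i - 1][j - 1] + cost) # substitution
        return d[len(h)][len(r)]

    def hter(mt_output, post_edit):
        """Approximate HTER: edits needed to turn the MT output into
        its human post-edit, normalized by post-edit length."""
        return word_edit_distance(mt_output, post_edit) / len(post_edit.split())

    print(hter("the the house is blue", "the house is blue"))  # 1 edit / 4 words = 0.25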
METRICS TASK
The metrics task (also called evaluation task) will assess automatic evaluation metrics' ability to:

Rank systems on their overall performance on the test set
Rank systems on a sentence-by-sentence basis

Participants in the metrics task will use their automatic evaluation metrics to score the output from the translation task and the tuning task. In addition to the MT outputs from these two tasks, participants will be provided with reference translations. We will measure the correlation of the automatic evaluation metrics with the human judgments.
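
For the system-level variant, correlation with human judgments is commonly measured with Pearson's r (rank correlations such as Kendall's tau have also been used at the segment level). A self-contained sketch, with invented scores:

    from math import sqrt

    def pearson(xs, ys):
        """Pearson correlation between metric scores and human scores."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Hypothetical system-level scores: one metric score and one human
    # score per submitted system (values are made up for illustration).
    metric = [0.31, 0.28, 0.35, 0.22]
    human  = [0.10, 0.05, 0.25, -0.15]
    print(pearson(metric, human))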
QUALITY ESTIMATION TASK

Quality estimation systems aim to estimate the quality of a given translation at system run-time, without access to a reference translation. This topic is particularly relevant from a user perspective. Among other applications, it can (i) help decide whether a given translation is good enough for publishing as is; (ii) filter out sentences that are not good enough for post-editing; (iii) select the best translation among options from multiple MT and/or translation memory systems; (iv) inform readers of the target language of whether or not they can rely on a translation; and (v) spot parts (words or phrases) of a translation that are potentially incorrect.

Research on this topic has shown promising results over the last couple of years. Building on the last three years' experience, the Quality Estimation track of WMT16 will focus on English, Spanish, and German, and will provide new training and test sets, along with evaluation metrics and baseline systems, for variants of the task at three different levels of prediction: word, sentence, and document.
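
As a rough illustration of the sentence-level variant, the sketch below pairs a few hand-crafted features with a support vector regressor, loosely in the spirit of the official baselines; the feature set, data, and labels here are invented placeholders, not the actual QuEst baseline:

    from sklearn.svm import SVR  # a regressor in the spirit of the official SVR baselines

    def features(source, translation):
        """Toy sentence-level features; real baselines use many more."""
        src, tgt = source.split(), translation.split()
        return [
            len(src),                          # source length
            len(tgt),                          # target length
            len(tgt) / max(len(src), 1),       # length ratio
            len(set(tgt)) / max(len(tgt), 1),  # target type/token ratio
        ]

    # Invented training triples: (source, MT output, quality label such as HTER).
    train = [
        ("das Haus ist blau", "the house is blue", 0.0),
        ("das ist ein sehr langer Satz", "this is very long sentence", 0.3),
    ]
    X = [features(s, t) for s, t, _ in train]
    y = [label for _, _, label in train]

    model = SVR().fit(X, y)
    print(model.predict([features("das Auto ist rot", "the car is red")]))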
TUNING TASK
This task will assess your team's ability to optimize the parameters of a given hierarchical MT system (Moses).

Participants in the tuning task will be given complete Moses models for English-to-Czech and Czech-to-English translation, together with the standard development sets from the translation task. Participants are expected to submit the moses.ini file for one or both translation directions. We will use this configuration and a fixed revision of Moses to translate the official WMT16 test set. The outputs of the various configurations of the system will be scored using the standard manual evaluation procedure.
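
What is actually submitted is a moses.ini whose feature weights have been re-optimized. As a toy illustration of what is being tuned, the sketch below jitters the [weight] section of a Moses configuration to produce one random-restart candidate; it is not a real tuner such as MERT or batch MIRA, and the exact weight entries depend on the models provided:

    import random

    def perturb_weights(ini_path, out_path, scale=0.1, seed=0):
        """Read a moses.ini, jitter every number in its [weight] section,
        and write a candidate configuration. A toy random-restart step,
        not a real tuner such as MERT or batch MIRA."""
        rng = random.Random(seed)
        out, in_weights = [], False
        with open(ini_path) as f:
            for line in f:
                stripped = line.strip()
                if stripped.startswith("["):
                    in_weights = (stripped == "[weight]")
                elif in_weights and "=" in stripped:
                    name, values = stripped.split("=", 1)
                    jittered = [float(v) + rng.gauss(0, scale) for v in values.split()]
                    line = name + "= " + " ".join("%.4f" % w for w in jittered) + "\n"
                out.append(line)
        with open(out_path, "w") as f:
            f.writelines(out)

    # Each candidate configuration would then be decoded on the dev set
    # and scored (e.g. with BLEU), keeping the best-performing weights.
    perturb_weights("moses.ini", "moses.candidate.ini", scale=0.05)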
PRONOUN TRANSLATION TASK
Details TBC
BILINGUAL DOCUMENT ALIGNMENT TASK
Details TBC
MULTIMODAL TRANSLATION TASK
This is a new task where participants are requested to generate a description for an image in a target language, given the image itself and one or more descriptions in a different (source) language.
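
For concreteness, one hypothetical training instance might look as follows; the file name and captions are invented, so consult the task page for the actual data format:

    # One hypothetical multimodal training example.
    example = {
        "image": "images/0001.jpg",                         # the image itself
        "source": "A brown dog runs across a field.",       # source-language description
        "target": "Ein brauner Hund läuft über ein Feld.",  # target-language description
    }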
PAPER SUBMISSION INFORMATION

Submissions will consist of regular full papers of 6-10 pages, plus additional pages for references, formatted following the ACL 2016 guidelines. In addition, shared task participants will be invited to submit short papers (4-6 pages) describing their systems or their evaluation metrics. Both submission and review processes will be handled electronically. Note that regular papers must be anonymized, while system descriptions do not need to be.

We encourage individuals who are submitting research papers to evaluate their approaches using the training resources provided by this conference and past workshops, so that their experiments can be repeated by others using these publicly available corpora.
POSTER FORMAT
For details on posters, please check with the local organisers.
