About the baseline
The baseline is SVM regression with an RBF kernel, with hyperparameters set by grid search. It uses 17 features (listed below; a minimal training sketch follows the list):
<http://www.quest.dcs.shef.ac.uk/quest_files/features_blackbox_baseline_17>
- number of tokens in the source sentence
- number of tokens in the target sentence
- average source token length
- LM probability of source sentence
- LM probability of target sentence
- number of occurrences of the target word within the target hypothesis (averaged for all words in the hypothesis - type/token ratio)
- average number of translations per source word in the sentence (as given by IBM 1 table thresholded such that prob(t|s) > 0.2)
- average number of translations per source word in the sentence (as given by IBM 1 table thresholded such that prob(t|s) > 0.01) weighted by the inverse frequency of each word in the source corpus
- percentage of unigrams in quartile 1 of frequency (lower frequency words) in a corpus of the source language (SMT training corpus)
- percentage of unigrams in quartile 4 of frequency (higher frequency words) in a corpus of the source language
- percentage of bigrams in quartile 1 of frequency of source words in a corpus of the source language
- percentage of bigrams in quartile 4 of frequency of source words in a corpus of the source language
- percentage of trigrams in quartile 1 of frequency of source words in a corpus of the source language
- percentage of trigrams in quartile 4 of frequency of source words in a corpus of the source language
- percentage of unigrams in the source sentence seen in a corpus (SMT training corpus)
- number of punctuation marks in the source sentence
- number of punctuation marks in the target sentence
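As a minimal sketch of the setup described above (not the official QuEst baseline code): scikit-learn is assumed, `X` is a hypothetical (n_sentences, 17) matrix of the black-box features listed above, `y` holds the gold HTER scores, and the grid values are placeholders.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR


def train_baseline(X, y):
    """RBF-kernel SVM regression with hyperparameters chosen by grid search."""
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
    param_grid = {
        "svr__C": [1, 10, 100],              # placeholder grid values
        "svr__gamma": ["scale", 0.01, 0.1],
        "svr__epsilon": [0.05, 0.1, 0.2],
    }
    search = GridSearchCV(model, param_grid,
                          scoring="neg_mean_absolute_error", cv=5)
    search.fit(X, y)
    return search.best_estimator_


# Toy usage with random stand-in data: 200 sentences x 17 features.
rng = np.random.default_rng(0)
X = rng.random((200, 17))
y = rng.random(200)  # stand-in for HTER scores
baseline = train_baseline(X, y)
```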
About the tasks
There are three quality estimation tasks: Task 1 is sentence-level, Task 2 is word-level, and Task 3 is document-level.
The table below lists all participating teams and the tasks they entered; here we only look at the sentence-level task (Task 1).
ID | Tasks | Participating team | Paper |
---|---|---|---|
DCU-SHEFF | 2 | Dublin City University, Ireland and University of Sheffield, UK | Logacheva et al., 2015 |
HDCL | 2 | Heidelberg University, Germany | Kreutzer et al., 2015 |
LORIA | 1 | Lorraine Laboratory of Research in Computer Science and its Applications, France | Langlois, 2015 |
RTM-DCU | 1,2,3 | Dublin City University, Ireland | Bicici et al., 2015 |
SAU-KERC | 2 | Shenyang Aerospace University, China | Shang et al., 2015 |
SHEFF-NN | 1,2 | University of Sheffield Team 1, UK | Shah et al., 2015 |
UAlacant | 2 | University of Alicante, Spain | Esplà-Gomis et al., 2015a |
UGENT | 1,2 | Ghent University, Belgium | Tezcan et al., 2015 |
USAAR-USHEF | 3 | University of Sheffield, UK and Saarland University, Germany | Scarton et al., 2015a |
USHEF | 3 | University of Sheffield, UK | Scarton et al., 2015a |
HIDDEN | 3 | Undisclosed | |
The sentence-level task is evaluated in two variants: HTER scoring and ranking. HTER (Human-targeted Translation Error Rate) is better when lower, and the evaluation metrics for it are MAE and RMSE. (The ranking variant orders translated sentences from best to worst and is not considered here.)
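For reference, a small sketch of the two metrics used in the table below, computed over predicted vs. gold HTER values (the function names are my own):

```python
import numpy as np


def mae(pred, gold):
    """Mean Absolute Error: mean of |prediction - gold|; lower is better."""
    pred, gold = np.asarray(pred, float), np.asarray(gold, float)
    return np.mean(np.abs(pred - gold))


def rmse(pred, gold):
    """Root Mean Squared Error: like MAE but penalizes large errors more."""
    pred, gold = np.asarray(pred, float), np.asarray(gold, float)
    return np.sqrt(np.mean((pred - gold) ** 2))


# Toy example with made-up HTER values on a 0-100 scale.
print(mae([12.0, 20.5, 35.0], [10.0, 25.0, 30.0]))   # ~3.83
print(rmse([12.0, 20.5, 35.0], [10.0, 25.0, 30.0]))  # ~4.05
```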
ID | System | MAE↓ | RMSE↓ |
---|---|---|---|
RTM-DCU | RTM-FS+PLS-SVR | 13.25 | 17.48 |
LORIA | 17+LSI+MT+FILTRE | 13.34 | 17.35 |
RTM-DCU | RTM-FS-SVR | 13.35 | 17.68 |
LORIA | 17+LSI+MT | 13.42 | 17.45 |
UGENT-LT3 | SCATE-SVM | 13.71 | 17.45 |
UGENT-LT3 | SCATE-SVM-single | 13.76 | 17.79 |
SHEF | SVM | 13.83 | 18.01 |
Baseline | SVM | 14.82 | 19.13 |
SHEF | GP | 15.16 | 18.97 |
As the table shows, the RTM-DCU and LORIA teams achieve the best results; the following analysis focuses on the work of these two teams.
All the papers are available here: http://www.statmt.org/wmt15/W…
RTM-DCU
In essence, it is a combination of transductive learning and active learning that optimizes feature selection.
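As a very rough illustration only (this is not the actual RTM-DCU pipeline), the sketch below shows one generic way to combine a transductive step with an active-learning-style selection of training instances on top of the same SVR setting as the baseline; the distance-to-test-centroid heuristic and all names are assumptions made for the example.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR


def transductive_active_svr(X_train, y_train, X_test, n_select=150):
    # Transductive step: compute feature statistics on train + test together,
    # so the representation is adapted to the specific test set.
    scaler = StandardScaler().fit(np.vstack([X_train, X_test]))
    Xtr, Xte = scaler.transform(X_train), scaler.transform(X_test)

    # Active-learning-style step: keep only the training instances closest to
    # the test distribution (distance to the test centroid as a cheap proxy).
    dist = np.linalg.norm(Xtr - Xte.mean(axis=0), axis=1)
    keep = np.argsort(dist)[:n_select]

    model = SVR(kernel="rbf").fit(Xtr[keep], np.asarray(y_train)[keep])
    return model.predict(Xte)
```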