1. Task Specification
Given a question and its corresponding document, your system should select one or more sentences from the document as the answer.

2. Data Format
1) An example:
question \t document sentence 1 \t 0
question \t document sentence 2 \t 1
question \t document sentence 3 \t 0
question \t document sentence 4 \t 0
question \t document sentence 5 \t 0
question \t document sentence 6 \t 0
question \t document sentence 7 \t 0
question \t document sentence 8 \t 0

2) Explanation
A question (the 1st column), the question's corresponding document sentences (the 2nd column), and their answer annotations (the 3rd column) are provided.
If a document sentence is a correct answer to the question, its annotation is 1; otherwise its annotation is 0.
The three columns are separated by the tab character \t.
All dataset files are encoded in UTF-8.
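The format above can be parsed with a few lines of Python. This is a minimal sketch; the function name `parse_line` is illustrative and is not part of any official dataset tooling.

```python
def parse_line(line):
    """Split one tab-separated dataset line into (question, sentence, label).

    `label` is 1 if the sentence is a correct answer, 0 otherwise.
    """
    question, sentence, label = line.rstrip("\n").split("\t")
    return question, sentence, int(label)

# Schematic usage (placeholder content, not a real dataset line):
q, s, y = parse_line("question\tdocument sentence\t1")
```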

3. Data Statistics
dataset             # of unique questions
train set           7895
development set     878
test set            5997

4. Evaluation Metrics
MRR, MAP, and ACC@1.
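For reference, the three metrics can be sketched as follows for a single question. The helper names are assumptions, and the official evaluation tool may differ in implementation details; scores and labels are aligned lists over one question's candidate sentences, with higher scores meaning more relevant.

```python
def rank_labels(scores, labels):
    # Order the gold labels by descending predicted score.
    return [l for _, l in sorted(zip(scores, labels), key=lambda p: -p[0])]

def reciprocal_rank(ranked):
    # 1 / (rank of the first correct answer); 0 if none is correct.
    for i, label in enumerate(ranked, 1):
        if label == 1:
            return 1.0 / i
    return 0.0

def average_precision(ranked):
    # Mean of precision@k over the positions k holding correct answers.
    hits, total = 0, 0.0
    for i, label in enumerate(ranked, 1):
        if label == 1:
            hits += 1
            total += hits / i
    return total / hits if hits else 0.0

def acc_at_1(ranked):
    # 1 if the top-ranked sentence is a correct answer, else 0.
    return 1.0 if ranked and ranked[0] == 1 else 0.0
```

MRR, MAP, and ACC@1 are then the averages of `reciprocal_rank`, `average_precision`, and `acc_at_1` over all questions.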


5. Submission File Format

Each line of the submission file contains a single real-valued score indicating the predicted relevance between a question and one candidate answer sentence, in the same order as the lines of the test set.


For example:

0.1534
2.7762
0.0097
15.2345
...

The evaluation tool ranks each question's candidate sentences by these scores and then computes the metrics above.
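Producing such a file can be sketched in a few lines. The function name, output path, and four-decimal formatting are assumptions for illustration; only the one-score-per-line layout comes from the specification above.

```python
def write_submission(scores, path):
    """Write one real-valued score per line, in test-set order."""
    with open(path, "w", encoding="utf-8") as f:
        for score in scores:
            f.write("%.4f\n" % score)

# Illustrative usage with the scores shown above:
# write_submission([0.1534, 2.7762, 0.0097, 15.2345], "submission.txt")
```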
