
 

Evaluation

 

Update on 2018/11/02

 

We have set an upper limit on the number of predictions. Please submit results containing at most 400,000 head counts.

 

 

Update on 2018/10/18

 

Due to the massive computational time required by the previous evaluation metric, we have changed the evaluation algorithm to the precision-recall (PR) curve. The score is the mAP value.

 

Please refer to https://github.com/rbgirshick/py-faster-rcnn/blob/master/lib/datasets/voc_eval.py and our sample submission file. The class label should be set within the range of 0 to 1.

 

Precision-Recall

 


 

The X-axis (recall) and Y-axis (precision) are defined as:

 

Recall = TP / (TP + FN)

Precision = TP / (TP + FP)
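The two formulas above can be sketched directly in Python (the function name and the example counts are illustrative, not part of the official evaluation code):

```python
def precision_recall(tp, fp, fn):
    """Compute precision and recall from raw detection counts.

    tp: true positives, fp: false positives, fn: false negatives.
    """
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return precision, recall

# e.g. 80 correct detections, 20 spurious ones, 10 missed heads
p, r = precision_recall(80, 20, 10)  # p = 0.8, r = 80/90 ≈ 0.889
```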

 

AP is defined as the area under the precision-recall curve:
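As a sketch of how that area is computed, the following mirrors the all-points interpolation used in the referenced voc_eval.py (envelope the precision curve, then sum rectangle areas where recall changes); it is a simplified reimplementation, not the official scoring script:

```python
import numpy as np

def average_precision(recall, precision):
    """Area under the PR curve, voc_eval-style (use_07_metric=False).

    recall, precision: sequences sorted by descending detection confidence.
    """
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    # Make precision monotonically decreasing (upper envelope).
    for i in range(mpre.size - 1, 0, -1):
        mpre[i - 1] = max(mpre[i - 1], mpre[i])
    # Sum rectangle areas at the points where recall changes.
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))
```

A detector that keeps precision 1.0 all the way to recall 1.0 scores AP = 1.0; any missed heads or false positives shrink the area.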

 

 

The curve is computed from the submission files and the ground-truth answers. Each head's location is defined by a rectangle, as shown in the image below. Overlap between a detection and a ground-truth rectangle is measured by IoU, as shown below (please also refer to FDDB):
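The IoU of two rectangles can be sketched as follows (a minimal version, assuming boxes are given as `(x1, y1, x2, y2)` corner coordinates; the FDDB evaluation also supports elliptical regions):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned rectangles.

    Each box is (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    """
    # Corners of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For example, two 2×2 boxes offset by one pixel in each direction intersect in a 1×1 square, giving IoU = 1 / (4 + 4 − 1) = 1/7.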

 

 

 

The competition uses the Face Detection Data Set and Benchmark (FDDB) as the evaluation method. For more information, please refer to http://vis-www.cs.umass.edu/fddb/results.html#eval. The submission file should be in the format described by the data webpage and the sample submission file.

 

Official website of FDDB

http://vis-www.cs.umass.edu/fddb/index.html

 

Related papers on FDDB:

FDDB: A Benchmark for Face Detection in Unconstrained Settings.

http://vis-www.cs.umass.edu/fddb/fddb.pdf

 


 

 

2018 Cloudwalk Headcount · Prize: ¥100,000 · 320 teams
Start: 2018-09-15 · Final submissions: 2019-01-18