
 

Important Update 2019/05/30

 

We will update the evaluation code for CrowdHuman at 3:00 AM on 2019/05/30 (UTC). All new submissions after that time will be evaluated with the new code automatically. In addition, the old submissions will be re-evaluated and their former scores will be updated within the next week.

 

To better reflect the performance of each model, some errors in the Jaccard Index computation related to the handling of false positives have been fixed. The evaluation code is now available on the 'Data' page, where you can download it to evaluate your own results.

 

Updates on 2019/05/17

 

We have updated the evaluation code. New submissions will be evaluated with the new method automatically, while the scores of old submissions will be updated over the next few days.

 

Background

 

Object detection is of significant value to the Computer Vision and Pattern Recognition communities, as it is one of the fundamental vision problems. Therefore, MEGVII and the Beijing Academy of Artificial Intelligence (BAAI) jointly prepared two new benchmark datasets for the object detection task: Objects365 and CrowdHuman, both designed and collected from natural scenes. The Objects365 benchmark addresses large-scale detection with 365 object categories, while CrowdHuman targets the problem of human detection in crowds. We hope these two datasets provide diverse and practical benchmarks that advance object detection research, and that the two competitions based on them, together with the workshop hosted at CVPR 2019, serve as a platform to push the upper bound of object detection research.

 

Task

 

The CrowdHuman Challenge is designed to advance pedestrian detection technology and to address the inadequacy of existing pedestrian detection datasets.

 

CrowdHuman is a large, richly annotated image dataset with high diversity. The train and validation sets together contain 470K human instances, and each image contains 22.6 people on average, with various types of occlusion. Each human instance is annotated with a head bounding box, a human visible-region bounding box, and a human full-body bounding box.
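As a rough illustration of how these annotations can be read, the sketch below parses CrowdHuman-style `.odgt` files (one JSON object per line, each with an image `ID` and a `gtboxes` list). The key names used here (`fbox` for full body, `vbox` for visible region, `hbox` for head, with boxes as `(x, y, w, h)`) follow the dataset's documented schema; consult the files on the 'Data' page for the authoritative format.

```python
import json

def load_annotations(path):
    """Read an .odgt annotation file into {image ID: list of instance dicts}."""
    records = {}
    with open(path) as f:
        for line in f:
            obj = json.loads(line)
            # Each ground-truth entry carries the three box types per instance.
            boxes = [
                {"tag": gt.get("tag"),
                 "full": gt.get("fbox"),
                 "visible": gt.get("vbox"),
                 "head": gt.get("hbox")}
                for gt in obj.get("gtboxes", [])
            ]
            records[obj["ID"]] = boxes
    return records
```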

 

Contestants can train human detection models on the training set to predict the full-body box of each person in an image. The Jaccard Index (JI) will serve as the main ranking metric for algorithm performance.
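To give an intuition for the metric, here is a minimal sketch of a Jaccard Index for detection: predictions and ground-truth boxes are matched one-to-one above an IoU threshold, and JI = matches / (|predictions| + |ground truth| - matches). This toy version uses greedy matching and an assumed IoU threshold of 0.5; the official evaluation code (downloadable from the 'Data' page) is the definitive reference and may use optimal matching and different handling of ignored regions.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def jaccard_index(preds, gts, thr=0.5):
    """Greedy one-to-one matching; JI = matches / (|preds| + |gts| - matches)."""
    pairs = sorted(
        ((iou(p, g), i, j) for i, p in enumerate(preds)
                           for j, g in enumerate(gts)),
        reverse=True,
    )
    used_p, used_g, matches = set(), set(), 0
    for score, i, j in pairs:
        if score < thr:
            break  # remaining pairs overlap too little to match
        if i in used_p or j in used_g:
            continue  # each box may be matched at most once
        used_p.add(i); used_g.add(j); matches += 1
    denom = len(preds) + len(gts) - matches
    return matches / denom if denom else 1.0
```

Note that unlike average precision, this formulation penalizes both missed people and spurious detections in the denominator, which is why fixing the false-positive handling mentioned above changes scores.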


Discussion Board

 

All participants can discuss related topics on the discussion board, or send email to support@biendata.com. We also have a WeChat group for this competition; please add the WeChat ID shujujingsai to apply to join the group, providing your real name and organization.

CrowdHuman Detection Challenge

Prize: $10,000

Teams: 135

Start: 2019-04-29

Final Submissions: 2019-06-13