Recognition of traffic signs is a challenging real-world problem of high industrial relevance. Although commercial systems have reached the market and several studies on this topic have been published, systematic unbiased comparisons of different approaches are missing and comprehensive benchmark datasets are not freely available.
Traffic sign recognition is a multi-class classification problem with unbalanced class frequencies. Traffic signs show a wide range of variation between classes in terms of color, shape, and the presence of pictograms or text. However, there exist subsets of classes (e. g., speed limit signs) that are very similar to each other.
The classifier has to cope with large variations in visual appearance due to illumination changes, partial occlusions, rotations, weather conditions, etc.
Humans are capable of recognizing the large variety of existing road signs with close to 100% correctness. This applies not only to real-world driving, which provides both context and multiple views of a single traffic sign, but also to recognition from single images.
The competition task is a multi-class classification problem. Participating algorithms need to classify single images of traffic signs. Their performance will be determined based on the 0/1 loss function.
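The 0/1 loss simply counts each misclassified image as 1 and each correct prediction as 0; averaged over the test set, this yields the error rate used for ranking. A minimal sketch (class labels are illustrative, not the benchmark's actual label set):

```python
def zero_one_error(predictions, ground_truth):
    """Fraction of images whose predicted class differs from the true class."""
    assert len(predictions) == len(ground_truth)
    wrong = sum(1 for p, t in zip(predictions, ground_truth) if p != t)
    return wrong / len(predictions)

# Example: 3 of 4 predictions correct -> error rate 0.25
print(zero_one_error([0, 1, 2, 2], [0, 1, 2, 3]))  # 0.25
```

Because every error weighs equally, confusing two visually similar speed limit signs costs exactly as much as confusing two entirely dissimilar classes.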
Although the problem domain of advanced driver assistance systems implies constraints on the runtime of employed algorithms, this competition will not take processing times into account, as this would put too much emphasis on technical aspects like implementation issues and the choice of programming language. We want to keep technical barriers as low as possible in order to encourage broad participation in the proposed competition.
The benchmark is designed in a way that allows scientists from different fields to contribute their results. As we exclude detection, tracking and temporal integration, prior knowledge in the domain is not required.
We will provide example code concerning reading of images and writing results. In addition, there are many publicly available resources that can be used in order to access state-of-the-art methods useful for the competition, e. g., the OpenCV library for computer vision and the Shark library for machine learning.
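As an illustration of the kind of I/O scaffolding involved, the sketch below lists the image files of a directory and writes predictions to a results file. The directory layout, the `.ppm` extension, and the `filename;class_id` line format are assumptions for this example; the authoritative submission format is defined in the competition's technical documentation, and image decoding itself could be delegated to, e. g., OpenCV's `cv2.imread`.

```python
import csv
import os

def list_images(directory, extension=".ppm"):
    """Collect image filenames, sorted for a deterministic submission order."""
    return sorted(f for f in os.listdir(directory) if f.endswith(extension))

def write_results(path, predictions):
    """Write (filename, class_id) pairs as semicolon-separated lines."""
    with open(path, "w", newline="") as out:
        writer = csv.writer(out, delimiter=";")
        for filename, class_id in predictions:
            writer.writerow([filename, class_id])
```

A participant's pipeline would then only need to plug a classifier between these two steps: load each listed image, predict its class, and pass the accumulated pairs to `write_results`.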
The competition begins with the public availability of the training data set, corresponding ground truth data, and additional technical documentation on the competition website (see Dataset and Schedule). Potential participants are invited to develop, train and test their solutions based on these data sets.
We will provide results of established baseline algorithms. In addition, we will provide experimental results on human traffic sign recognition performance.
Shortly before the competition ends, the test set, without ground truth, will be published on the competition website. Participants can compute results for this dataset and submit them online using the convenient web interface. The performance is evaluated directly and displayed in a leaderboard, so participants can compare their performance. The best four participants are invited to IJCNN to present their solutions.
Ground-truth for the online test set will be made available after the competition is closed to allow participants to access a larger data set for training during the final competition preparations.
During the workshop session, a final evaluation set is provided; it is processed during the session and is used to identify the winner.
The training data will be made publicly available on December 1, 2010. The test set will be made available on January 19, 2011.
The submission website will be open for 2 days (until January 21, 2011).
The final competition winner will be identified during the conference session using a final evaluation dataset that has not been made publicly available beforehand.
The best four teams in the online competition are invited to present their approaches during the special session. Each team is awarded one free conference and workshop registration. Note that the conference registration is awarded to the best-performing teams under the premise of participation in the final competition during the conference (but independent of its outcome).