To remove the potential bias of automated face detection, which tends to select only easy faces, more than 70,000 face regions were hand-cropped. From these, the identities of the subjects were labeled; each labeled sequence contains around 10 images. We split the UCCS database into a training, a validation, and a test set.
In the training and validation set, which is made accessible to the participants at the beginning of the competition, each image is annotated with a list of bounding boxes.
We provide two scripts to run the evaluation on the validation set, for part 2 and part 3 respectively, so that participants can optimize the meta-parameters of their algorithms on the validation data. We provide open-source baseline algorithms for both parts, based on Bob, that participants can compare against. Face detection score files need to contain one detected bounding box per line.
The confidence score can have any range, but higher scores must indicate higher confidence. Note that there is generally more than one bounding box per image, and hence several lines per image file. The face recognition score file is an extension of the face detection score file. We accept up to 10 identity/score pairs per bounding box. Unknown faces, i.e., faces whose identities are not in the gallery, should be labeled -1; any mis-detection, i.e., a background region reported as a face, should be labeled -2. If you plan to participate in both challenges, the face recognition score file can be used for evaluating both the detection and the recognition experiment.
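As an illustration, a combined score file along these lines could be written as follows. The exact column layout (image name, bounding-box coordinates, detection confidence, then up to ten identity/score pairs) is an assumption here; the Baseline package remains the authoritative reference for the format.

```python
# Hypothetical sketch of writing a combined detection/recognition score file.
# The column layout below is an assumption, not the official specification.

def write_score_line(f, image_name, bbox, confidence, identity_scores):
    """Write one detected bounding box, optionally followed by up to 10
    (identity, similarity) pairs for the recognition challenge."""
    left, top, width, height = bbox
    fields = [image_name, left, top, width, height, confidence]
    # sort identities by descending similarity and keep at most 10 pairs
    for identity, score in sorted(identity_scores, key=lambda p: -p[1])[:10]:
        fields += [identity, score]
    f.write(" ".join(str(v) for v in fields) + "\n")

with open("scores.txt", "w") as f:
    # two faces detected in the same image -> two lines for that image
    write_score_line(f, "img_0001.jpg", (10, 20, 64, 64), 0.98,
                     [(17, 0.83), (-1, 0.41)])
    write_score_line(f, "img_0001.jpg", (300, 40, 60, 60), 0.75, [])
```

A detection-only score file is simply the same format with the identity/score pairs omitted, which is why one recognition score file can serve both evaluations.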
Hence, only one score file needs to be submitted in this case. For face recognition, we use the VGG v2 face recognition pipeline. For each person, the features of the training set are averaged to build a template of that person.
Open-set recognition is handled by averaging all training features of unknown identities into a separate template, and by building another template from features extracted from background detections of the MTCNN detector. First, the faces in the training images are re-detected to ensure that the bounding boxes of training and test images have similar content.
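The template-building step described above can be sketched with NumPy. The feature vectors and labels below are toy values standing in for network features, with -1 and -2 used for unknown identities and background detections as in the baseline:

```python
import numpy as np

def build_templates(features, labels):
    """Average all feature vectors sharing a label into one template
    per label (including -1 for unknown, -2 for background)."""
    templates = {}
    for label in set(labels):
        rows = [i for i, l in enumerate(labels) if l == label]
        templates[label] = features[rows].mean(axis=0)
    return templates

# toy 2-D features; real features would come from the face network
features = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [3.0, 3.0]])
labels = [7, 7, -1, -2]
templates = build_templates(features, labels)
# templates[7] is the mean of the first two vectors: [0.5, 0.5]
```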
Then, the faces are rescaled and cropped to the fixed input resolution of the network. Afterwards, features are extracted using the VGG v2 network. For each identity, including the unknown identity (-1) and the background detections (-2), the average of the features is stored as a template. During testing, all faces in each image are detected and cropped, and features are extracted. Those probe features are compared to all templates using cosine similarity. For each detected face, the 10 most similar identities are obtained -- if identity -1 or -2 is included, all less similar identities are discarded.
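The probe-scoring step can be sketched similarly. `rank_identities` below is a hypothetical helper, assuming the cutoff works by discarding every identity ranked below the unknown (-1) or background (-2) template:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_identities(probe, templates, max_ranks=10):
    """Rank templates by similarity to the probe, keeping at most
    max_ranks entries and cutting off below the -1/-2 templates."""
    scored = sorted(((cosine(probe, t), label) for label, t in templates.items()),
                    reverse=True)
    ranked = []
    for score, label in scored[:max_ranks]:
        ranked.append((label, score))
        if label in (-1, -2):  # everything less similar than unknown/background is dropped
            break
    return ranked

# toy templates: two known identities plus the unknown template (-1)
templates = {1: np.array([1.0, 0.0]),
             2: np.array([0.0, 1.0]),
             -1: np.array([1.0, 1.0])}
probe = np.array([2.0, 0.2])
ranking = rank_identities(probe, templates)
# identity 1 ranks first; identity 2, below the unknown template, is dropped
```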
If you do not wish to run the baseline face recognition system yourself, you can download the resulting baseline face recognition score file. Learning from our first challenge, in both evaluations we plot the total number of False Alarms or False Identifications, respectively, in logarithmic scale on the x-axis, and the Detection Rate or Detection and Identification Rate, respectively, on the y-axis.
The dotted gray line represents an equal number of false and correct detections or identifications, respectively. An implementation of the two evaluation scripts for the validation set is provided in the Baseline package.
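For illustration, the kind of curve described above can be computed with a simple threshold sweep. This sketch covers only the detection case and assumes each score is pre-labeled as a true detection or a false alarm; the actual scripts in the Baseline package also handle identity labels for the identification rate:

```python
import numpy as np

def detection_curve(scores, is_true_detection, thresholds):
    """Return (false_alarms, detection_rate) pairs, one per threshold."""
    scores = np.asarray(scores)
    truth = np.asarray(is_true_detection, dtype=bool)
    # total annotated faces (assuming each face yields one candidate detection)
    n_faces = truth.sum()
    curve = []
    for t in thresholds:
        accepted = scores >= t
        false_alarms = int(np.sum(accepted & ~truth))       # x-axis (log scale)
        detection_rate = float(np.sum(accepted & truth)) / n_faces  # y-axis
        curve.append((false_alarms, detection_rate))
    return curve

scores = [0.9, 0.8, 0.7, 0.4, 0.3]
truth = [True, True, False, True, False]
curve = detection_curve(scores, truth, thresholds=[0.5, 0.2])
# at threshold 0.5: one false alarm accepted, 2 of 3 faces detected
```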
Please refer to this package for more details about the evaluation.

Addressing concerns from the non-research community: In the news there have been several discussions and published articles about this dataset. Factual Error 2: Government is using this data.
Government-funded research does not translate to a government effort to collect data for facial recognition software. Some reports suggest the government is using the data; it is worth noting that the data was never provided to any government agency.

Face detection and recognition benchmarks have shifted toward more difficult environments.
The challenge presented in this paper addresses the next step in the direction of automatic detection and identification of people from outdoor surveillance cameras. While face detection has shown remarkable success on images collected from the web, surveillance cameras capture more diverse occlusions, poses, weather conditions, and image blur. Although face verification and closed-set face recognition have surpassed human capabilities on some datasets, open-set identification is much more complex, as it needs to reject both unknown identities and false alarms from the face detector.
We show that unconstrained face detection can approach high detection rates, albeit with moderate false detection rates. However, open-set face recognition is currently weak and requires much more attention.