OULU-NPU - a mobile face presentation attack database with real-world variations

In recent years, software-based face presentation attack detection (PAD) methods have made great progress. However, most existing schemes do not generalize well under more realistic conditions. The objective of the OULU-NPU database is to assess the generalization performance of face PAD techniques in mobile scenarios under real-world variations, including previously unseen input sensors, attack types and acquisition conditions. The database was created at the University of Oulu in Finland and the Northwestern Polytechnical University in China.

Database description

The OULU-NPU face presentation attack detection database consists of 4950 real access and attack videos. These videos were recorded using the front cameras of six mobile devices (Samsung Galaxy S6 edge, HTC Desire EYE, MEIZU X5, ASUS Zenfone Selfie, Sony XPERIA C5 Ultra Dual and OPPO N3) in three sessions with different illumination conditions and background scenes (Session 1, Session 2 and Session 3). The presentation attack types considered in the OULU-NPU database are print and video-replay. The attacks were created using two printers (Printer 1 and Printer 2) and two display devices (Display 1 and Display 2). Figure 1 shows some sample images of real accesses and attacks captured with the Samsung Galaxy S6 edge phone. The videos of the 55 subjects were divided into three subject-disjoint subsets for training, development and testing. The following table gives a detailed overview of the partitioning of the database.

Figure 1. Sample images of real and attack videos captured with the Samsung Galaxy S6 edge phone (panels: Real, Print 1, Print 2, Video-replay 1, Video-replay 2).

Figure 2. Sample images demonstrating the image quality of the different smartphone cameras (panels: Samsung, HTC, MEIZU, ASUS, Sony, OPPO).

Evaluation protocols

Four protocols are used to evaluate the generalization capability of face PAD methods.

Protocol I:

The first protocol is designed to evaluate the generalization of face PAD methods under previously unseen environmental conditions, namely illumination and background scene. Since the database is recorded in three sessions with different illumination conditions and locations, the train, development and evaluation sets are constructed from video recordings taken in different sessions.
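A session-disjoint split like this can be sketched as a lookup from session number to subset. The concrete session-to-subset mapping below is an assumption made purely for illustration; the official assignment is defined by the protocol files shipped with the database.

```python
# Illustrative only: the real session-to-subset assignment comes from the
# database's protocol files, not from this assumed mapping.
SESSION_TO_SUBSET = {1: "train", 2: "dev", 3: "test"}  # assumed mapping

def subset_for_session(session: int) -> str:
    """Return the subset a recording belongs to under a session-disjoint split."""
    return SESSION_TO_SUBSET[session]
```

The point of the lookup is that no session contributes recordings to more than one subset, so the test conditions are genuinely unseen during training and tuning.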

Protocol II:

The second protocol is designed to evaluate the effect of attacks created with different printers or displays on the performance of face PAD methods, as such attacks may exhibit new kinds of artifacts. The effect of attack variation is assessed by introducing previously unseen print and video-replay attacks in the test set.

Protocol III:

One of the critical issues in face PAD and image classification in general is sensor interoperability. To study the effect of the input camera variation, a Leave One Camera Out (LOCO) protocol is used. In each iteration, the real and the attack videos recorded with five smartphones are used to train and tune the algorithms, and the generalization of the models is assessed using the videos recorded with the remaining one.
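The LOCO rotation can be sketched as a simple fold generator over the six phones. The device list comes from the database description above; the function name and structure are just illustrative.

```python
# The six capture devices listed in the database description.
PHONES = ["Samsung Galaxy S6 edge", "HTC Desire EYE", "MEIZU X5",
          "ASUS Zenfone Selfie", "Sony XPERIA C5 Ultra Dual", "OPPO N3"]

def loco_folds(phones=PHONES):
    """Yield (train_phones, test_phone) pairs, leaving one camera out per fold."""
    for held_out in phones:
        yield [p for p in phones if p != held_out], held_out
```

Each of the six folds trains and tunes on five devices and tests on the held-out one; results are then aggregated over the folds.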

Protocol IV:

In the last and most challenging protocol, all three factors above are considered simultaneously, and the generalization of face PAD methods is evaluated across previously unseen environmental conditions, attacks and input sensors.

The following table gives detailed information about the video recordings used in the train, development and test sets of each protocol.

Baseline method

The Matlab source code of a face PAD method based on color texture analysis [1] is provided as a baseline. Ten random frames are chosen from each video, and LBP features are extracted from the 64x64 face images in the YCbCr and HSV color spaces. The resulting histograms computed over the two color spaces are concatenated and fed into a Softmax classifier. The ten face images are classified separately, and the average of the resulting scores is used as the final score for the whole video sequence.
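The per-frame feature extraction can be sketched as follows. This is a minimal NumPy reimplementation of a basic 8-neighbour, 256-bin LBP histogram step, not the authors' Matlab code: the descriptor in [1] uses refined LBP variants, and the color-space conversion and Softmax classifier are omitted here.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour, radius-1 LBP code for each interior pixel."""
    g = gray.astype(np.int16)           # avoid unsigned wrap-around in comparisons
    c = g[1:-1, 1:-1]                   # centre pixels
    neighbours = [g[:-2, :-2], g[:-2, 1:-1], g[:-2, 2:], g[1:-1, 2:],
                  g[2:, 2:],   g[2:, 1:-1], g[2:, :-2],  g[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        # set one bit per neighbour that is >= the centre pixel
        codes |= (n >= c).astype(np.uint8) * np.uint8(1 << bit)
    return codes

def color_lbp_descriptor(channels):
    """Concatenate a 256-bin normalised LBP histogram per colour channel.

    `channels` is an iterable of 2-D arrays, e.g. the Y, Cb, Cr, H, S and V
    planes of a normalised 64x64 face crop."""
    hists = [np.histogram(lbp_image(ch), bins=256, range=(0, 256),
                          density=True)[0] for ch in channels]
    return np.concatenate(hists)
```

With three channels from each of the two color spaces, this yields a 6 x 256 = 1536-dimensional descriptor per frame; the ten per-frame classifier scores are then averaged into the video-level score.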

[1] Z. Boulkenafet, J. Komulainen, and A. Hadid, "Face anti-spoofing based on color texture analysis", IEEE International Conference on Image Processing (ICIP), Quebec City, 2015, pp. 2636-2640.


Download procedure:

  1. Register to get updated with the latest information about the database.

  2. Download, sign, and send the End User License Agreement (EULA) to us. (Note: We do not accept emails from public domains such as gmail, yahoo, hotmail, etc.).

  3. The EULA must be signed and sent by a person with a permanent position at the institute.

  4. Download the database using the link provided to you when your request is processed.

Acknowledgements

If you use this database, please cite the following paper:

Z. Boulkenafet, J. Komulainen, L. Li, X. Feng and A. Hadid, "OULU-NPU: A Mobile Face Presentation Attack Database with Real-World Variations", 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), 2017, pp. 612-618, doi: 10.1109/FG.2017.77.


@INPROCEEDINGS{OULU_NPU_2017,
  author={Boulkenafet, Zinelabinde and Komulainen, Jukka and Li, Lei and Feng, Xiaoyi and Hadid, Abdenour},
  booktitle={12th IEEE International Conference on Automatic Face Gesture Recognition (FG 2017)},
  title={{OULU-NPU}: A Mobile Face Presentation Attack Database with Real-World Variations},
  year={2017},
  pages={612-618},
  doi={10.1109/FG.2017.77}}