Authors: Youcef Ouadjer 1; Chiara Galdi 2; Sid-Ahmed Berrani 1,3; Mourad Adnane 1 and Jean-Luc Dugelay 2
Affiliations:
1 École Nationale Polytechnique, 10 Rue des Frères Oudek, 16200 El Harrach, Algiers, Algeria
2 Department of Digital Security, EURECOM, 450 Route des Chappes, 06410 Biot, France
3 National School of Artificial Intelligence, Route de Mahelma, 16201 Sidi Abdellah, Algiers, Algeria
Keyword(s):
Active Biometric Verification, Multimodal Fusion, Self-Supervised Learning.
Abstract:
This paper focuses on the fusion of multimodal data for effective active biometric verification on mobile devices. Our proposed Multimodal Fusion (MMFusion) framework combines hand movement data and touch screen interactions. Unlike conventional approaches that rely on annotated unimodal data for deep neural network training, our method makes use of contrastive self-supervised learning to extract powerful feature representations and to cope with the lack of labeled training data. The fusion is performed at the feature level, by combining information from hand movement data (collected using background sensors such as the accelerometer, gyroscope, and magnetometer) and touch screen logs. Following the self-supervised learning protocol, MMFusion is pre-trained to capture similarities between hand movement sensor data and touch screen logs, effectively attracting similar pairs and repelling dissimilar ones. Extensive evaluations demonstrate its high performance on user verification across diverse tasks compared to unimodal alternatives trained using the SimCLR framework. Moreover, experiments in semi-supervised scenarios reveal the superiority of MMFusion, achieving the best trade-off between sensitivity and specificity.
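To illustrate the idea of attracting cross-modal pairs and repelling dissimilar ones at the feature level, the sketch below shows a generic SimCLR-style (NT-Xent) contrastive objective applied across the two modalities. It is a minimal illustration only, not the authors' implementation: the encoder outputs, batch size, embedding dimension, and temperature value are hypothetical placeholders.

```python
# Minimal sketch of a cross-modal contrastive (NT-Xent style) objective,
# assuming hypothetical hand-movement and touch-screen encoder outputs.
import torch
import torch.nn.functional as F

def cross_modal_nt_xent(z_imu, z_touch, temperature=0.1):
    """Attract embeddings of the same session across modalities, repel all others."""
    z_imu = F.normalize(z_imu, dim=1)      # (N, d) hand-movement embeddings
    z_touch = F.normalize(z_touch, dim=1)  # (N, d) touch-screen embeddings
    logits = z_imu @ z_touch.t() / temperature       # pairwise cosine similarities
    targets = torch.arange(z_imu.size(0), device=z_imu.device)  # matching pairs on the diagonal
    # Symmetric loss: hand-movement -> touch and touch -> hand-movement
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example usage with random tensors standing in for encoder outputs
z_a = torch.randn(32, 128)
z_b = torch.randn(32, 128)
loss = cross_modal_nt_xent(z_a, z_b)
```

In this sketch, each row of the similarity matrix is treated as a classification problem whose correct class is the paired sample from the other modality, which is one common way to realize the "attract similar pairs, repel dissimilar ones" behavior described in the abstract.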