Using VR To Study Facial Recognition At Passport Control

Updated: Apr 22

As humans, we are good at recognising the faces of people we already know, but in many situations the faces of unfamiliar people, whom we have never met before, must also be identified. Some important tasks depend on unfamiliar face identification, but how good are we at such tasks? A team of researchers from the Universities of Kent and York, led by Professor Markus Bindemann, has turned to Virtual Reality (VR) to study this question in the passport control environment of an airport.


THE ISSUE

One way in which people with criminal intentions can cross borders undetected is to use the photo-identity document of someone with a similar appearance. How these identity impostors can be detected is currently studied intensively in psychology, but most of these studies rely on laboratory tests that strip away the context in which these person identifications are actually made. Professor Bindemann and his colleagues have become interested in the influence of the social context and physical environment on these identification decisions.

THE SOLUTION

For their ESRC-funded project, the research team had already built a virtual airport using Vizard software, populated with basic person avatars. The team then consulted Target3D, which supplied an Artec 3D scanner and provided training, to develop photo-realistic avatars of real people. Before the Covid pandemic brought this activity to a halt, 120 people had been scanned for the project, ranging from a four-year-old child to an 87-year-old woman, and converted into VR avatars. The team focused on creating avatars with a range of physical appearances, skin tones and hairstyles, to mirror real airport life.
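The article does not describe the team's implementation, but as a rough illustration, a scene in Vizard (a Python-based VR toolkit) is typically assembled along these lines. This is a minimal sketch only; the model and avatar file names, positions and animation state below are placeholders, not the project's actual assets.

```python
import viz

viz.setMultiSample(4)   # basic anti-aliasing
viz.go()                # open the Vizard graphics window

# Load an environment model (placeholder file name, not the project's asset).
environment = viz.addChild('airport_terminal.osgb')

# Load an avatar; 'vcc_male2.cfg' is a sample character bundled with Vizard.
# A scanned, photo-realistic avatar would be exported to a similar .cfg file.
traveller = viz.addAvatar('vcc_male2.cfg')
traveller.setPosition([0, 0, 4])     # a few metres in front of the viewer
traveller.setEuler([180, 0, 0])      # turn the avatar to face the viewer
traveller.state(1)                   # play an idle animation loop

# Place the observer's viewpoint roughly at eye height behind the desk.
viz.MainView.setPosition([0, 1.8, 0])
```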


Whilst progress has been slower than expected during the pandemic, the team came up with novel solutions for testing people with VR online and now has data to compare the airport findings with their laboratory results, something which Professor Bindemann views as a really important first step: