IIIT-D team builds algorithm that will help identify synthetic images.
Neha Alawadhi reports.
As facial recognition technology becomes increasingly intelligent, ways to fool such systems are not far behind.
It has become relatively easy to generate misleading or confusing images through artificial intelligence (AI) algorithms designed to fool such facial recognition systems.
To address possible misuse, a team of researchers at the Indraprastha Institute of Information Technology, Delhi (IIIT-D), has built an algorithm to help machine learning-based facial recognition systems identify synthetic images, which could otherwise confuse even the most robust systems.
The research, led by Mayank Vatsa and Richa Singh at the Image Analysis and Biometrics (IAB) lab at IIIT-D, is expected to have a far-reaching impact on authentication systems, ranging from smartphones to access-controlled doors and even public places.
"To fool the AI, attackers use something called adversarial perturbations. Facial recognition systems are becoming easy prey to such attacks," said Vatsa.
"The research was to protect the integrity of these algorithms. We argued that when a model sees a facial image, can we predict whether there has been an attack or whether it's a real image or synthetic," added Vatsa.
A possible scenario the research considered: an attacker extracts the facial features that a given facial recognition model relies on.
They could paint these features in some fashion, or take a 3D printout of them and have it embedded in a pair of spectacles.
The research helps the algorithm detect this as a synthetic image, something recognition systems otherwise run the risk of missing.
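To give a sense of what such a defence could look like, here is a minimal, hypothetical sketch of one generic approach: a small binary classifier that screens face images for synthetic content before they reach the recognition model. The article does not describe the IIIT-D team's actual architecture, so everything below is an assumption for illustration.

```python
# Hypothetical sketch of a generic synthetic-image detector: a tiny CNN that
# outputs the probability that an input face image is perturbed or synthetic.
# This illustrates the general idea only, not the IIIT-D method.
import torch
import torch.nn as nn

class PerturbationDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(h))

# Usage: gate the recognition pipeline on the detector's verdict.
# detector = PerturbationDetector()  # trained on real vs. synthetic faces
# if detector(image).item() > 0.5:
#     ... reject the image as likely synthetic before recognition runs ...
```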
"This has many use cases, particularly when we are uploading photographs on the social media or Instagram. With this, the people will have a hard time trying to misuse other people's images if we have such a defence mechanism," Vatsa added.
Facial recognition technology has had a long and turbulent history.
While governments, law enforcement agencies and even private organisations make the case for using the tech for better management and crime detection, the possibility of misuse, or of such systems being fooled, is a very real threat.
For example, consider an autonomous car driving on a road at a speed of 60 km per hour.
To fool the car system that reads road signs, a miscreant can change the appearance of a common symbol.
On a stop sign, for instance, an attacker could add stickers so that when the autonomous car sees the sign, it reads it as 'slow speed' and not as a 'stop' sign.
A human would understand the sign and stop, but the autonomous car would only slow down, which could cause an accident.
Similarly, for an automatic facial recognition system, an attacker can make minor changes to a facial image so that a human can still recognise the actual person, but the algorithm may identify it as a different person altogether.
San Francisco has banned the use of facial recognition technology, taking a stand against its potential misuse, drawing appreciation as well as criticism in equal measure from different quarters.
According to digital security company Gemalto, basic facial biometrics require a two- or three-dimensional sensor that 'captures' a face.
It then transforms it into digital data by applying an algorithm.
This can then be compared to an existing image of the person, kept in a database.
These automated systems can be used to identify or check the identity of individuals in just a few seconds based on their facial features: spacing of the eyes, bridge of the nose, contour of the lips, ears and chin, among others.
They can even do this in the middle of a crowd and within dynamic and unstable environments.
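In code, the comparison step Gemalto describes often reduces to matching feature vectors. Below is a minimal sketch, assuming a hypothetical embedding vector already extracted from the captured face and a database of enrolled embeddings:

```python
# Minimal sketch of the matching step: compare a freshly captured face
# embedding against enrolled embeddings. The embeddings themselves would
# come from a face-embedding model, which is assumed here, not shown.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe_embedding, database, threshold=0.6):
    """Return the best-matching enrolled identity, or None if no
    candidate clears the similarity threshold."""
    best_name, best_score = None, threshold
    for name, enrolled in database.items():
        score = cosine_similarity(probe_embedding, enrolled)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

The 0.6 threshold is an arbitrary placeholder; real systems tune it to trade off false accepts against false rejects.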
Nearly all large US-based technology companies -- Facebook, Google, Amazon, Microsoft and Apple -- have invested in building their facial recognition systems.
It is, however, the Chinese who are using the technology for surveillance and law enforcement on perhaps the largest scale.
Yitu Technologies is a Chinese company whose facial recognition software is used to screen visitors at ports and airports.
Another firm called SenseTime has worked with everyone from government agencies to retailers and online entertainment to healthcare providers.
Apple, which introduced Face ID to identify a user and unlock the phone, has had several complaints from users who say the iPhone does not recognise their face in the morning.
Chinese users accused the Apple AI of being racist after people were able to log onto each other's phones in the country.
The Amazon facial recognition system called Rekognition has also been under fire from not just civil liberties activists and lawmakers, but also its shareholders.