With more authorities demonstrating they cannot be trusted to act responsibly or transparently, the European Commission is reportedly on the verge of putting the reins on facial recognition.
According to reports in the Financial Times, the European Commission is considering new rules that would extend consumer rights to cover facial recognition technologies. The move is part of a broader effort to address the ethical and responsible use of artificial intelligence in today's digital society.
Across the world, police forces and intelligence agencies are deploying technologies that pose a significant risk of abuse, without public consultation or processes to ensure accountability or justification. There are, of course, nations that care little for the privacy rights of their citizens, but when the technology is implemented for surveillance purposes in the likes of the US, UK and Sweden, states where such rights are supposedly sacred, the lines start to blur.
The reasoning behind the implementation of facial recognition in surveillance networks is irrelevant; without public consultation and transparency, these police forces, agencies, public sector authorities and private companies are completely disregarding citizens' right to privacy.
These citizens might well support such initiatives, opting for greater security or consumer benefits over the right to privacy, but they have the right to be asked.
It is worth noting that this technology can be a driver for positive change when implemented and managed correctly. Facial scanners are speeding up the immigration process at airports, while Telia is trialling a facial recognition payment system in Finland. When deployed with consideration and the right processes, there are many benefits to be realised.
The European Commission has neither confirmed nor denied the reports to Telecoms.com, though it did reaffirm its ongoing position on artificial intelligence during a press conference yesterday.
“In June, the high-level expert group on artificial intelligence, which was appointed by the Commission, presented the first policy recommendations and ethics guidelines on AI,” spokesperson Natasha Bertaud said during the afternoon briefing. “These are currently being tested, and going forward the Commission will decide on any future steps in light of this process, which remains ongoing.”
The Commission does not comment on leaked documents and memos, though reading between the lines, the issue is clearly on the agenda. One of the points the 52-person expert group will address over the coming months is building trust in artificial intelligence, while one of the seven principles presented for consultation concerns privacy.
On the privacy side, parties implementing these technologies must ensure data ‘will not be used to unlawfully or unfairly discriminate’, as well as putting systems in place to control who can access the data. We suspect that in the rush to trial and deploy technology such as facial recognition, few systems and processes to ensure accountability and justification have been put in place.
Although these points do not necessarily cover citizens' right to decide, tracking and profiling are areas where the group has recommended the European Commission consider additional regulation to protect against abuse and the irresponsible deployment or management of the technology.
Once again, the grey areas are being exploited.
As there are only so many people working in the European Commission or for national regulators, and technology is advancing so quickly, there is often a void in the rules governing newly emerging segments. Artificial intelligence, surveillance and facial recognition certainly fall into this chasm, creating a digital wild-west where those who do not understand the ‘law of unintended consequences’ play around with new toys.
In the UK, it was revealed that several private property owners and museums were using the technology for surveillance without telling the public. Even more worryingly, some of this data had been shared with police forces. Information Commissioner Elizabeth Denham has already stated her office will look into the deployments and attempt to rectify the situation.
Prior to this revelation, a report from the Human Rights, Big Data & Technology Project criticised a trial by the London Metropolitan Police, suggesting it could be found illegal should it be challenged in court. South Wales Police has also found itself in hot water after its own trials were found to have only an 8% success rate.
Over in Sweden, the data protection regulator used powers granted by the GDPR to fine a school that had been using facial recognition to monitor pupils' attendance. The school claimed it had received consent from the students, but as they are in a position of dependence, this was not deemed satisfactory. The school was also found to have substandard processes for handling the data.
Finally, in the US, Facebook will find itself in court once again, this time over its implementation of facial recognition software in 2010. A class-action lawsuit has been brought against the social media giant, alleging the use of the technology was non-compliant with the Illinois Biometric Information Privacy Act.
This is one example of lawmakers being very effective in getting ahead of trends. The law in question was enacted in 2008 and demands that companies gain consent before introducing any facial recognition technologies. It is an Act that should be applauded for its foresight.
The speed at which facial recognition is progressing in the surveillance world is incredibly worrying. Private and public parties have an obligation to consider the impact on the human right to privacy, though much distaste for these principles has been shown in recent months. Perhaps it is ignorance, short-sightedness or a lack of competence, but without rules to govern this segment, the unintended consequences could compound years down the line.
Another point worth noting is the gathering momentum against the wrongful implementation of facial recognition. Aside from Big Brother Watch raising concerns in the UK, the City of San Francisco is attempting to implement an approval function for police forces, while Google is facing an internal rebellion. Last week, it emerged several hundred employees had signed a petition refusing to work on any projects that would aid the government in tracking citizens through facial recognition surveillance.
Although the European Commission has neither confirmed nor denied the report, we suspect (or at the very least hope) work is underway to address this area. Facial recognition needs rules, or we will find ourselves in a very difficult position, similar to the one we are in today.
A lack of action on fake news, online bullying, cybersecurity, supply chain diversity and resilience, and the consolidation of power in the hands of a few has created some difficult situations around the world, and the Commission and national governments are now finding it difficult to claw back control. This is one area where the European Commission desperately needs to get ahead of the technology industry; the risk and consequences of abuse are far too great.