September 22, 2023

AI Lie Detectors At Borders: Who Does The EU’s AI Act Actually Protect?

The Court of Justice of the European Union (CJEU) has upheld the decision to restrict public access to information about the use of artificial intelligence (AI) lie detectors at EU borders.

This ruling has sparked concern among civil rights groups and highlighted the limitations of the EU’s AI Act in safeguarding the rights of migrants and refugees. The Act is set to become the world’s first comprehensive AI law.

AI-Powered Lie Detectors at EU Borders

At the center of this controversy lies iBorderCtrl, an emotion recognition pilot project aimed at streamlining border control procedures. 

During the trial phase, a computer-animated border guard asked prospective travelers questions via webcam while AI-powered software analyzed their subtle facial movements and gestures for signs of deception.

Low-risk travelers proceeded without delay, while those deemed higher-risk faced additional checks.

The pilot project ran from 2016 to 2019 in Greece, Latvia, and Hungary and received €4.5 million ($5.4 million) in funding from the EU.

Lack of Transparency

In 2018, European lawmaker Patrick Breyer requested access to documents about the development of this technology.

The EU Research Executive Agency (REA) granted partial access to one document while withholding others, citing commercial interests. 

Breyer challenged this decision. However, the General Court supported the REA’s stance, prioritizing the protection of commercial interests over the public’s right to information about the project’s early stages.

Breyer appealed, stressing the importance of transparency and public discourse, particularly for projects with implications for human rights. He also questioned the scientific validity of the technologies.

Nevertheless, on September 7, 2023, the CJEU upheld the General Court’s decision.

Ethical Concerns and Effectiveness of AI

Ella Jakubowska, representing the digital rights group EDRi, expressed doubts about AI’s effectiveness in making critical decisions.

“Human expressions are varied, diverse (especially for people with certain disabilities) and often culturally-contingent,” she stated in an email to Reuters. She added, “[iBorderCtrl] is by no means the only dystopian technological experiment being funded by the EU.”

The iBorderCtrl project’s website acknowledged that the technology “may imply risks for fundamental human rights.” However, it asserted that since participation in the pilot was voluntary, issues related to discrimination and human dignity should not have arisen. 

In 2019, The Intercept also reported on the technology’s issue with false positives, casting doubt on its reliability.

The deployment of AI lie detectors at borders carries far-reaching consequences, essentially shaping immigration policies through technology.

Petra Molnar, an immigration lawyer and associate director of the nonprofit Refugee Law Lab, told Wired that these systems cast everyone as suspicious, shifting the burden of proof onto travelers.

Molnar also pointed out that individuals may avoid eye contact with border or migration officials for numerous innocent reasons, such as cultural differences, religious practices, or trauma-related behaviors. However, these actions can sometimes be misinterpreted as signs of deception. 

Calls for Bans

The latest draft of the EU’s AI Act classifies emotion recognition and AI lie detectors used for border control or law enforcement as high-risk technologies. This means they would need to comply with a range of requirements, including technical robustness, training data and data governance, transparency, and human oversight.

However, advocates like Molnar argue that the Act needs to go further, calling for a complete ban on such technology.

Similarly, human rights organization ARTICLE 19 cites concerns about mass surveillance, discrimination, privacy violations, and the potential for abuse.

Expressing disappointment with the CJEU’s decision, the organization calls for greater transparency about the use of publicly funded technologies and a complete ban on emotion recognition technologies.

“‘Emotion recognition’ technologies – tested in the iBorderCtrl research project – raise numerous challenges for fundamental rights, especially at borders, where identity-based profiling is already common practice,” argues Barbora Bukovská, ARTICLE 19’s Senior Director for Law and Policy.

“Moreover, emotion recognition’s pseudoscientific foundations render this technology untenable. It should be completely banned.”

Samara Linton

Community Manager at POCIT | Co-editor of The Colour of Madness: Mental Health and Race in Technicolour (2022), and co-author of Diane Abbott: The Authorised Biography (2020)