Risky Algorithms, Real Rights: Unpacking Human Rights Impact Assessments for Facial Recognition and Predictive Policing in the EU
- Human Rights Research Center
Author: Antonia Vasileiadou
July 22, 2025
![[Image source: True Anthem. (n.d.). The importance of human and AI collaboration]](https://static.wixstatic.com/media/7972a5_1ec3825f750d404bbc35716daee87951~mv2.jpg/v1/fill/w_512,h_341,al_c,q_80,enc_avif,quality_auto/7972a5_1ec3825f750d404bbc35716daee87951~mv2.jpg)
What happens when innovation moves faster than regulation?
In a world where artificial intelligence, biometric surveillance, and predictive algorithms are rapidly reshaping society, this question is no longer hypothetical – it’s urgent. While technological progress promises efficiency and growth, it also brings serious risks to fundamental rights like privacy, non-discrimination, and freedom of expression.
When viewed through the lens of high-risk technologies, it becomes clear that tools like facial recognition and predictive policing raise serious concerns. Facial recognition systems are now deployed in public spaces such as train stations, often without individuals’ informed consent, while predictive policing relies on historical crime data that can reinforce systemic biases and patterns of over-policing. As these technologies become increasingly integrated into daily life, it is crucial to critically assess how they function, whom they affect, and why they may pose significant risks to fundamental human rights.
I. Defining Human Rights Impact Assessments (HRIAs): Purpose and Scope
To understand why it's important to consider the ethical and legal aspects of developing technology, we first need to look at what a Human Rights Impact Assessment (HRIA) is. An HRIA is a structured process used to identify and address how laws, policies, projects, or technologies might affect people's human rights. Its main goal is to prevent harm by listening to those who could be impacted – especially vulnerable groups – and helping decision-makers take steps that respect human rights. HRIAs are based on key values like transparency, participation, accountability, and fairness. They are becoming vital tools for responsible governance and business practices, especially when it comes to high-risk technologies like AI, surveillance systems, and biometric tools.
But how does it actually work?
The Human Rights Impact Assessment (HRIA) process consists of five interconnected phases.
Phase 1 – Planning and Scoping: defining the scope, assembling the HRIA team, and establishing terms of reference.
Phase 2 – Data Collection and Baseline Development: gathering relevant data, developing a baseline, and identifying human rights indicators.
Phase 3 – Analyzing Impacts: assessing the types and severity of human rights impacts.
Phase 4 – Impact Mitigation and Management: defining actions for addressing impacts, ongoing monitoring, and ensuring access to remedy.
Phase 5 – Reporting and Evaluation: preparing reports and evaluating the process.
At every stage, meaningful stakeholder engagement is crucial. It refers to the process of actively involving individuals, groups, or organizations that are affected by, can affect, or have an interest in a decision, project, or policy. This includes rights-holders (such as individuals or communities whose rights may be impacted), duty-bearers (like governments or companies responsible for respecting those rights), and other key actors. Their participation helps ensure that the process is inclusive and the outcomes are meaningful. Each phase of the process is supported by practical tools and guidance, including resources for mapping stakeholders, selecting indicators, developing assessment frameworks, and designing strategies to prevent or reduce negative impacts.
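To make the sequence above concrete, the sketch below models the five phases as a simple checklist, with stakeholder engagement required before any phase can be signed off. This is a minimal, hypothetical illustration written for this article – it is not part of any official HRIA toolbox, and the task names simply paraphrase the phases listed above.

```python
# Minimal illustrative sketch (not part of any official HRIA toolbox):
# the five phases expressed as a checklist, with stakeholder engagement
# tracked at every phase.
from dataclasses import dataclass


@dataclass
class Phase:
    name: str
    tasks: list[str]
    stakeholders_engaged: bool = False  # must become True before sign-off


HRIA_PHASES = [
    Phase("Planning and Scoping",
          ["define scope", "assemble HRIA team", "establish terms of reference"]),
    Phase("Data Collection and Baseline Development",
          ["gather data", "develop baseline", "select human rights indicators"]),
    Phase("Analyzing Impacts",
          ["assess types of impact", "rate severity"]),
    Phase("Impact Mitigation and Management",
          ["define mitigation actions", "set up monitoring", "ensure access to remedy"]),
    Phase("Reporting and Evaluation",
          ["prepare report", "evaluate the process"]),
]


def ready_for_next_phase(phase: Phase, completed_tasks: set[str]) -> bool:
    """A phase is complete only when all tasks are done AND stakeholders were heard."""
    return phase.stakeholders_engaged and set(phase.tasks) <= completed_tasks
```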
HRIAs & High-Risk Technologies – How are these two connected?
Human Rights Impact Assessments (HRIAs) and High-Risk Technologies are closely connected by a common goal: to protect fundamental rights amid rapid technological change. Innovations such as artificial intelligence, facial recognition, and predictive policing are increasingly used in sensitive domains like law enforcement, healthcare, and employment. Although these systems are often introduced to improve efficiency or objectivity, they frequently operate in opaque and unaccountable ways – raising concerns over bias, surveillance, and the erosion of individual rights. When deployed without adequate safeguards, these technologies can cause serious harm, particularly to already vulnerable or marginalized groups.
This is where HRIAs become a critical tool. They provide a structured method for identifying, assessing, and addressing the potential human rights impacts of high-risk technologies. By analyzing who is affected and in what ways, HRIAs help ensure that rights such as privacy, equality, and freedom of expression are upheld at every stage – from design and testing to deployment and oversight. They serve not only as a safeguard but also as a proactive mechanism to embed ethical and legal standards into the development of new technologies, helping to prevent harm before it occurs.
II. High-Risk Technologies in Focus: Facial Recognition and Predictive Policing
Behind the Term: High-Risk Technologies Explained
High-Risk Technologies refer to tools, systems, or applications – often driven by artificial intelligence, automation, or biotechnology – that carry a heightened potential to negatively affect human rights, public safety, or societal structures. These technologies warrant closer scrutiny and regulation due to their capacity to amplify discrimination, enable invasive surveillance, or influence critical decisions without transparency or accountability. Examples include AI systems used in predictive policing, biometric surveillance tools like facial recognition, and algorithmic decision-making in hiring or welfare assessments. Because of their potential to disrupt democratic values and infringe on fundamental rights, these technologies demand responsible development and strong human rights safeguards.
A. Facial Recognition
![[Image source: ChatGPT]](https://static.wixstatic.com/media/7972a5_da1e5a25390340a2b27df994b1045654~mv2.png/v1/fill/w_512,h_512,al_c,q_85,enc_avif,quality_auto/7972a5_da1e5a25390340a2b27df994b1045654~mv2.png)
When discussing facial recognition technology, it is common to first think of its use in unlocking smartphones – a powerful way to ensure that personal data remain inaccessible. However, its applications extend far beyond personal device security, encompassing areas such as public surveillance, law enforcement, border control, and commercial services. For example, facial recognition technology is increasingly used in law enforcement to identify individuals by comparing facial features captured through cameras or mobile devices with images stored in police databases.
The technology works in several steps: detecting and analyzing a face; converting the image into numerical data (some systems also register expressions, eye movement, or attempts to fool the system); and searching for a match. It operates in two main modes: one-to-one (like Face ID) and one-to-many (used in surveillance to find someone in a crowd). This versatility makes it valuable in places like airports, stores, and everyday devices. However, it also raises important concerns about privacy, ethics, and accuracy – especially when used without people's knowledge.
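The two matching modes can be made concrete with a minimal sketch. It assumes the face image has already been reduced to a numerical "embedding" vector by an upstream detection and encoding model; the function names and the similarity threshold below are hypothetical and do not describe any specific product's API.

```python
# Illustrative sketch of the matching step only, assuming face images have
# already been converted into embedding vectors by some upstream model.
# Names and the threshold are hypothetical, not a specific vendor's API.
from __future__ import annotations

import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.8) -> bool:
    """One-to-one (1:1) verification, as in unlocking a phone."""
    return cosine_similarity(probe, enrolled) >= threshold


def identify(probe: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float = 0.8) -> str | None:
    """One-to-many (1:N) identification, as in scanning a watchlist database."""
    best_id, best_score = None, threshold
    for person_id, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id  # None means "no match above threshold"
```

Note that in the one-to-many setting the threshold directly trades missed matches against false matches, which is one reason the demographic differences in error rates discussed below matter so much in practice.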
While facial recognition offers many practical benefits, it also brings significant risks. A major concern is that facial data can be collected and analyzed without consent, raising serious ethical and legal issues. The technology’s ability to operate remotely and at scale makes it prone to being used in mass surveillance, potentially leading to the erosion of civil liberties. Additionally, facial recognition systems can suffer from bias and inaccuracies, especially when applied across diverse populations – resulting in higher error rates for certain demographic groups. These risks highlight the need for strong governance, transparency, and safeguards to ensure responsible and fair use.
B. Predictive Policing
![[Image source: ChatGPT]](https://static.wixstatic.com/media/7972a5_42526e3d8bc94e8fa957a94014504d52~mv2.png/v1/fill/w_512,h_341,al_c,q_85,enc_avif,quality_auto/7972a5_42526e3d8bc94e8fa957a94014504d52~mv2.png)
As facial recognition becomes a standard feature in modern surveillance, another controversial technology is quietly shaping the future of law enforcement: predictive policing. Predictive policing systems use data-driven algorithms to forecast potential criminal activity and guide law enforcement interventions before crimes occur. They typically analyze historical crime data – such as times, locations, and types of offenses – to identify patterns. These systems extrapolate past trends to predict future “hotspots” or individuals likely to offend, allowing police to allocate resources strategically.
The process works in three main stages:
Data Collection: Crime reports and situational data (e.g., weather, demographics) are compiled into large datasets.
Prediction Stage: Algorithms – ranging from hotspot mapping to machine learning – detect patterns and generate forecasts about when and where crimes may occur.
Action Stage: Police respond by increasing patrols or surveillance in flagged areas, aiming to deter crime or apprehend potential offenders.
![[Image source: Cogent Infotech. (n.d.). From Predictive Policing using Machine Learning (With Examples)]](https://static.wixstatic.com/media/7972a5_f81576b2f84645c1be1d361d7df02b7e~mv2.jpg/v1/fill/w_512,h_401,al_c,q_80,enc_avif,quality_auto/7972a5_f81576b2f84645c1be1d361d7df02b7e~mv2.jpg)
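The three stages above can be made concrete with a deliberately simplified sketch: the "prediction stage" is reduced to basic hotspot mapping, counting historical incidents per grid cell and flagging the busiest cells. The data and names are hypothetical; real systems use far richer data and models, but the underlying logic of extrapolating past records into future patrol decisions is the same.

```python
# Minimal sketch of the simplest "prediction stage": hotspot mapping by
# counting historical incidents per grid cell and flagging the top cells.
# Purely hypothetical data; real systems use far richer models.
from collections import Counter

# Each historical record is (grid_cell, offence_type) -- hypothetical data.
history = [("A1", "burglary"), ("A1", "theft"), ("B3", "theft"),
           ("A1", "assault"), ("C2", "burglary"), ("B3", "burglary")]


def forecast_hotspots(records, top_k=2):
    """Data collection -> prediction: rank grid cells by past incident counts."""
    counts = Counter(cell for cell, _ in records)
    return [cell for cell, _ in counts.most_common(top_k)]


# Action stage: patrols would be directed to the flagged cells.
print(forecast_hotspots(history))  # e.g. ['A1', 'B3']
```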
Although promoted as tools to enhance efficiency and support proactive policing, predictive systems raise serious concerns about fairness, transparency, and the risk of discrimination. Like facial recognition, predictive policing operates at the intersection of innovation and risk – embedding complex technologies into public safety with profound implications for human rights. It shifts law enforcement from reacting to crimes to trying to prevent them before they happen – often on the basis of data and risk scores. One major problem is bias: these systems use past crime data, which can reflect over-policing in certain areas, especially poorer or minority communities. This creates a feedback loop, where more policing leads to more recorded incidents, which in turn justifies even more policing (a dynamic illustrated in the sketch below). Another issue is the lack of transparency – people often don’t know how decisions are made or how to challenge them. Without strong rules and oversight, predictive policing can unfairly target certain groups, damage trust in authorities, and increase surveillance in vulnerable communities.
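The feedback loop can be shown with a small, entirely hypothetical simulation: two districts have the same true level of crime, but one starts out slightly over-policed, and patrols are always sent to wherever the most incidents were recorded. Because heavier patrolling also means more incidents get recorded, the recorded gap widens year after year even though nothing about the underlying behaviour differs.

```python
# Hypothetical simulation of the feedback loop described above: patrols follow
# recorded incidents, and heavier patrolling inflates what gets recorded, so an
# initial disparity keeps widening even with identical true crime rates.
recorded = {"district_A": 120, "district_B": 100}  # A starts slightly over-policed
DETECTION_BOOST = 1.2  # extra recording in the more heavily patrolled district

for year in range(5):
    patrolled = max(recorded, key=recorded.get)  # send patrols to the "hotspot"
    for district in recorded:
        new_incidents = 100  # same true rate everywhere
        if district == patrolled:
            new_incidents = int(new_incidents * DETECTION_BOOST)
        recorded[district] += new_incidents
    print(year, recorded)
# district_A's recorded total pulls further ahead each year,
# "justifying" ever more patrols there.
```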
To address these challenges, it is essential to incorporate human rights frameworks such as HRIAs into the development and use of predictive technologies. HRIAs provide a structured way to evaluate who might be affected, how rights could be compromised, and what safeguards are needed. By embedding these assessments early in the design and deployment process, developers, policymakers, and law enforcement can better anticipate risks and ensure that innovation aligns with democratic values. In this way, technology can serve the public interest without undermining the very rights it should protect.
III. The European Regulatory Landscape: AI Act, GDPR, and Complementary Frameworks
The goals of Human Rights Impact Assessments (HRIAs) and the AI Act closely align, as both aim to ensure that the development and use of high-risk technologies respect fundamental rights. While HRIAs offer a flexible, rights-based method to identify and address risks, the AI Act provides a legal framework that enforces such safeguards, especially for systems classified as high-risk. Together, they create a complementary approach, one grounded in human rights principles, the other in regulatory oversight, to guide responsible and ethical AI use in society.
A. The AI Act: Ensuring Accountability and Safety in Artificial Intelligence
![[Image source: Legaltech Talk. (2024, March 14). EU AI Act takes latest step through European Parliament.]](https://static.wixstatic.com/media/7972a5_7691081f30a74fd083e8465ca18d69f7~mv2.jpg/v1/fill/w_512,h_260,al_c,q_80,enc_avif,quality_auto/7972a5_7691081f30a74fd083e8465ca18d69f7~mv2.jpg)
The AI Act is a new law enacted by the European Union (EU) to make sure that artificial intelligence (AI) is used safely and fairly. It is the first law of its kind in the world, and it focuses on protecting people’s rights when powerful AI technologies are used. The law organizes AI systems into different levels of risk, from low to high, and sets special rules for the most dangerous ones, such as facial recognition or AI used in hiring or policing. For these “high-risk” systems, the AI Act requires developers to follow strict rules, like checking for errors, being transparent, and making sure people can still understand and challenge decisions made by AI. This helps ensure that AI supports society without causing harm.
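The risk-based logic can be pictured as a simple mapping from risk tier to obligations. The sketch below is an informal illustration only: the tier names reflect the Act's widely described risk categories, and the obligations listed paraphrase the requirements mentioned above; it is not an authoritative or exhaustive reading of Regulation (EU) 2024/1689.

```python
# Informal illustration of the AI Act's risk-based logic -- a paraphrase of the
# description above, NOT an authoritative reading of Regulation (EU) 2024/1689.
OBLIGATIONS_BY_RISK_TIER = {
    "unacceptable": ["prohibited"],
    "high": [  # e.g. AI used in policing, hiring, or biometric identification
        "risk management and error checking",
        "transparency and documentation",
        "human oversight (decisions can be understood and challenged)",
    ],
    "limited": ["transparency notices (e.g. disclose that users interact with AI)"],
    "minimal": ["no specific obligations"],
}


def obligations(risk_tier: str) -> list[str]:
    """Look up the (illustrative) obligations attached to a given risk tier."""
    return OBLIGATIONS_BY_RISK_TIER.get(risk_tier, ["risk tier must be assessed first"])
```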
Building on its risk-based approach, the AI Act also emphasizes the protection of fundamental human rights by requiring thorough assessments of how AI systems might impact individuals and communities. This is closely linked to Human Rights Impact Assessments (HRIAs). By integrating HRIAs into its framework, the AI Act ensures that developers and users of AI take responsibility for respecting rights such as privacy, non-discrimination, and freedom of expression. This proactive stance helps create a more ethical AI ecosystem where technological innovation goes hand in hand with safeguarding human dignity and democratic values.
B. Understanding the GDPR: Safeguarding Personal Data and Rights
![[Image source: Barraud Consulting. (n.d.). GDPR diagram: Understanding GDPR principles. Retrieved June 30, 2025.]](https://static.wixstatic.com/media/7972a5_1bc05823ea6648b49a458ad8ee9d4d85~mv2.jpg/v1/fill/w_512,h_281,al_c,q_80,enc_avif,quality_auto/7972a5_1bc05823ea6648b49a458ad8ee9d4d85~mv2.jpg)
Another key EU law is the General Data Protection Regulation (GDPR), which protects people’s personal data and privacy. It applies to any organization that collects or processes data about individuals in the EU. The GDPR gives people rights over their data – such as the right to know what is collected, to have it corrected, or to have it deleted – and holds companies accountable for using data responsibly. In the context of high-risk technologies like facial recognition or predictive policing, the GDPR helps ensure that personal information is handled fairly, transparently, and with respect for individual rights.
The GDPR and Human Rights Impact Assessments (HRIAs) are closely linked because both aim to protect individuals’ rights—especially privacy and data protection. While the GDPR sets legal obligations for how personal data must be collected, stored, and used, HRIAs go a step further by assessing the broader human rights impacts of technologies, including privacy, equality, and freedom of expression. HRIAs can help identify risks that may not be fully addressed by the GDPR alone, especially in complex or high-risk technologies. Together, they offer a more complete framework for ensuring that new systems respect and uphold fundamental rights.
C. Beyond the AI Act and GDPR: Other European Legal Instruments
In addition to the General Data Protection Regulation (GDPR), the European Union has developed several key legal frameworks to address privacy, security, and digital innovation:
The ePrivacy Directive (soon to be replaced by the ePrivacy Regulation) regulates electronic communications and online tracking, complementing GDPR protections.
The Digital Services Act and Digital Markets Act introduce rules to ensure fair competition and accountability for large online platforms.
Meanwhile, the Data Governance Act promotes secure and transparent data sharing across the EU, and the AI Act, discussed above, sets the first comprehensive framework to regulate artificial intelligence systems, emphasizing risk management and fundamental rights protection.
Overall, European regulations reflect a forward-looking and rights-based approach to digital governance. By setting high standards for data protection, cybersecurity, platform accountability, and emerging technologies like AI, the EU positions itself as a global leader in shaping ethical and secure digital environments. These regulations not only protect fundamental rights but also foster innovation and trust in the digital economy.
IV. The Critical Role of HRIAs in Protecting Fundamental Rights and Directions for Future Research
Human Rights Impact Assessments (HRIAs) help prevent harm by identifying and addressing potential human rights risks early in the development of laws, policies, or technologies. Used before decisions are made or systems are launched, they allow governments and companies to avoid negative impacts on people and communities, making their actions more accountable and more respectful of human rights. HRIAs also help organizations follow international standards, such as the UN Guiding Principles on Business and Human Rights, build public trust, and ensure that their work respects human rights from the start.
Step 1: Identifying Key Challenges in Conducting HRIAs
![[Image source: ChatGPT]](https://static.wixstatic.com/media/7972a5_fa9787416e71471db61681c26965461d~mv2.png/v1/fill/w_512,h_512,al_c,q_85,enc_avif,quality_auto/7972a5_fa9787416e71471db61681c26965461d~mv2.png)
Despite their benefits, Human Rights Impact Assessments (HRIAs) face several limitations and practical challenges. One major issue is that they are often voluntary, which means many organizations choose not to conduct them, especially when there is no legal obligation. Even when HRIAs are carried out, they may lack transparency, meaningful stakeholder participation, or follow-up actions. In fast-moving fields like artificial intelligence, it can also be difficult to predict all possible human rights impacts in advance. As a result, some risks may go unnoticed or unaddressed. Additionally, organizations may lack the expertise, resources, or political will to carry out effective assessments.
Step 2: Establishing Clear Guidelines for Effective HRIAs
To improve the effectiveness of HRIAs, governments should consider making them a legal requirement, particularly for high-risk technologies. Clear guidelines and independent oversight should be established to ensure HRIAs are conducted thoroughly and transparently. Capacity-building initiatives can help organizations understand and apply human rights standards, while strong stakeholder engagement, especially with affected communities, should be mandatory. Finally, HRIAs should not be one-time exercises but ongoing processes that include regular reviews and updates as technologies and their impacts evolve.
The future of human rights protection in AI governance depends on our ability to move from abstract principles to concrete, enforceable practices that center on human dignity. As AI systems become more embedded in public and private decision-making, ensuring meaningful safeguards against harm is no longer optional – it is essential. Tools like Human Rights Impact Assessments, when made mandatory, transparent, and participatory, offer a vital means of identifying risks early and embedding accountability into technological development. However, for these tools to be effective, they must be supported by strong legal frameworks, continuous oversight, and genuine engagement with affected communities. Human rights must not be an afterthought in the age of AI, but a foundation for innovation that serves the common good. The challenge ahead lies in building governance structures that are both adaptable and principled – capable of keeping pace with technological change while upholding the core values of justice, equality, and freedom.
Glossary
Adequate: Enough or satisfactory for a particular purpose.
Bias: Unfair preference or prejudice that affects judgment or results.
Biometric: Related to physical or behavioral characteristics used for identification (e.g., fingerprints, facial recognition).
Extrapolate: To make a prediction or inference based on existing data or trends.
Facial Recognition: A technology that identifies or verifies a person’s identity by analyzing and comparing their facial features from a photo or video.
Feedback loop: A process where the outputs of a system are fed back into it as inputs, influencing future behavior or performance—often used to improve or adjust the system.
Governance: The way rules, policies, or systems are managed and enforced.
High-Risk Technologies: Systems or tools – often based on advanced digital, AI, or automated processes – that have the potential to significantly impact people’s rights, safety, or well-being, and therefore require strict oversight due to their capacity to cause serious harm.
Hotspot: A specific area with high levels of activity or incidents, such as crime.
Invasive: Intruding into someone's privacy or personal space.
Opaque: Not transparent or hard to understand; lacking clarity.
Predictive Policing: The use of data analysis and algorithms to forecast where crimes are likely to happen or who might be involved, aiming to prevent crime before it occurs.
Scrutiny: Careful and detailed examination or inspection.
Welfare: Basic support for people's well-being, often through health, housing, or financial aid.
Sources
Wikipedia contributors. (2025). Human Rights Impact Assessment. Wikipedia. https://en.wikipedia.org/wiki/Human_Rights_Impact_Assessment
Human Rights Centre, University of Copenhagen. (2025). Human Rights Impact Assessment Guidance Toolbox: Introduction to Human Rights Impact Assessment. https://www.humanrights.dk/tools/human-rights-impact-assessment-guidance-toolbox/introduction-human-rights-impact-assessment
Regulation (EU) 2024/1689. Artificial Intelligence Act – European Parliament and Council, 13 June 2024. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
Roy, K. (2021, December 9). What is facial recognition: Its use and future scope? Medium (DataToBiz). https://medium.com/datatobiz/what-is-facial-recognition-its-use-and-future-scope-985dac5f25c8
Simmler, M., & Canova, G. (2025). Facial recognition technology in law enforcement: Regulating data analysis of another kind. Computer Law & Security Review, 56. https://www.sciencedirect.com/science/article/pii/S0267364924001572
Kaspersky. (n.d.). What is facial recognition? Kaspersky. https://www.kaspersky.com/resource-center/definitions/what-is-facial-recognition
Imaoka, H., Hashimoto, H., Takahashi, K., Ebihara, A.F., Liu, J., Hayasaka, A., & Morishita, Y. (2021). The future of biometrics technology: from face recognition to related applications. APSIPA Transactions on Signal and Information Processing. https://www.cambridge.org/core/journals/apsipa-transactions-on-signal-and-information-processing/article/future-of-biometrics-technology-from-face-recognition-to-related-applications/98B13157669DFC22D36F284228A0CE42#
Pearsall, B. (2010). Predictive policing: The future of law enforcement? NIJ Journal, 266, 16–19. U.S. Department of Justice, National Institute of Justice (NCJ 230414).
Meijer, A., & Wessels, M. (2019). Predictive Policing: Review of Benefits and Drawbacks. International Journal of Public Administration, 42(12), 1031–1039. https://doi.org/10.1080/01900692.2019.1575664
Strikwerda, L. (2020). Predictive policing: The risks associated with risk assessment. The Police Journal, 94(3), 422-436. https://doi.org/10.1177/0032258X20947749
European Parliament. (2023, June 1). EU AI Act: First regulation on artificial intelligence. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
European Union. (n.d.). General Data Protection Regulation (GDPR) – Data protection rules. EUR-Lex. Retrieved June 30, 2025, from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=LEGISSUM%3A310401_2
European Parliament & Council. (2002). Directive 2002/58/EC concerning the processing of personal data and the protection of privacy in the electronic communications sector (ePrivacy Directive). Official Journal of the European Union. Retrieved June 30, 2025, from https://eur-lex.europa.eu/eli/dir/2002/58/oj/eng
European Commission. (n.d.). Digital Services Act package. Digital Strategy. Retrieved June 30, 2025, from https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package
European Commission. (n.d.). Data Governance Act. Digital Strategy. Retrieved June 30, 2025, from https://digital-strategy.ec.europa.eu/en/library/data-governance-act
Danish Institute for Human Rights (2016). Human Rights Impact Assessment Guidance and Toolbox. https://www.humanrights.dk/tools/human-rights-impact-assessment-guidance-toolbox