Protecting Candidates’ Rights: The Role and Limits of DPIAs in AI Recruitment Tools
- Human Rights Research Center
Author: Antonia Vasileiadou
February 5, 2026
[Image source: Wallpapersden, 'Cybersecurity Core Wallpaper' (Wallpapersden, 16 April 2024) https://wallpapersden.com/cybersecurity-core-wallpaper/ accessed 9 January 2026.]
I. Beyond Compliance: Protecting Human Rights in High-Risk AI with DPIAs
Data Protection Impact Assessments (DPIAs) are a key tool under the General Data Protection Regulation (GDPR) for identifying and mitigating risks to human rights, including privacy, fairness, and dignity. Their relevance is particularly high in AI-driven systems, where automated decisions can affect thousands of people and often operate opaquely. While DPIAs provide a structured approach to assess risks, plan safeguards, and demonstrate accountability, they are not always sufficient to prevent discrimination, bias, or long-term societal impacts. This article examines how DPIAs intersect with human rights in high-risk AI systems, highlighting their strengths, limitations, and the need for complementary approaches such as Human Rights Impact Assessments, multi-disciplinary review, and rights-centered design. Through a case study of an AI recruitment system, it demonstrates how organizations can move beyond compliance to ensure AI governance that is both legally robust and ethically responsible.
Analysing DPIAs through a human rights lens is crucial because these assessments sit at the intersection of privacy, autonomy, equality, and human dignity, and the choices made within them directly shape how AI systems affect people’s lives. A DPIA that is treated only as a data-protection checklist can easily miss discriminatory outcomes, chilling effects on behaviour, or structural forms of exclusion that emerge when automated decisions are scaled across sectors such as work, welfare, and policing. By explicitly examining how identified data-protection risks map onto broader human rights (such as non-discrimination, the right to work, access to justice, and freedom of expression), organisations can move from a narrow compliance exercise to a more substantive evaluation of power, vulnerability, and social impact. This interaction matters not just for legal robustness under European fundamental rights frameworks, but also for building trustworthy AI governance that is responsive to those most affected by algorithmic decisions.
II. What a Data Protection Impact Assessment (DPIA) Is and Why It Exists
A Data Protection Impact Assessment (DPIA) is a formal, documented process required under the GDPR whenever processing personal data is likely to pose a high risk to individuals’ rights and freedoms, especially when new technologies are involved. Its main objective is to outline the planned processing, evaluate its necessity and proportionality, identify potential privacy risks, and define measures to mitigate those risks before implementation – ensuring data protection by design and by default.[1]
Under Article 35 GDPR, a DPIA is mandatory in cases such as large-scale profiling, automated decision-making with legal or similarly significant effects, systematic monitoring of public spaces, or extensive processing of sensitive data.[2] The DPIA is closely linked to fundamental rights; by safeguarding privacy and data protection, it also supports broader human rights such as dignity, autonomy, equality, and freedom from discrimination, particularly where personal data processing intersects with these rights.
The connection between DPIAs and human rights is rooted in the GDPR’s emphasis on protecting individuals’ “rights and freedoms,” which is interpreted in light of EU fundamental rights and broader human rights standards. DPIAs put the right to privacy and data protection into practice by limiting unjustified intrusions, enforcing data minimisation, and ensuring transparency about how personal data is used. At the same time, they relate to human dignity and autonomy by preventing situations where individuals are reduced to opaque profiles or subjected to automated decisions that compromise their ability to make independent choices. By scrutinising profiling and high-risk processing, DPIAs also help protect equality and non-discrimination by highlighting potential disproportionate impacts on specific groups.[3]
III. Why DPIAs Are Essential for High-Risk AI Systems
High-risk AI systems, such as recruitment tools or automated decision-making engines, can have a significant impact on people’s access to jobs, credit, welfare, or other services. Conducting a DPIA is therefore essential to understand and manage how these systems affect individuals’ rights and lives. A DPIA provides a structured framework before deployment to map the data the AI uses, how decisions are made, and which groups may be disproportionately impacted, enabling organisations to identify risks such as bias, discrimination, lack of transparency, or over-reliance on fully automated decisions.[4]
In the case of a recruitment AI system, a DPIA is particularly important because it requires the controller to assess whether large-scale profiling and automated scoring are necessary, whether there is a lawful basis for processing, and whether candidates’ rights – including the right not to be subject to solely automated decisions – are adequately protected. It also guides the design of concrete mitigation measures, such as limiting the types of data used, running bias and accuracy tests, ensuring meaningful human oversight, and providing clear avenues for candidates to contest AI-driven outcomes. These steps are central to deploying AI in a way that is both legally compliant and ethically responsible.
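To make the idea of "meaningful human oversight" more concrete, the short Python sketch below shows one possible routing rule in which the model never issues a final rejection on its own: adverse or borderline outcomes are queued for a human reviewer, together with a channel for the candidate to contest the result. The class, field names, threshold, and contact address are illustrative assumptions, not features of any particular recruitment product.

```python
from dataclasses import dataclass

# Hypothetical output of an AI screening model; the fields are assumptions
# made for this sketch, not the schema of any real recruitment tool.
@dataclass
class ScreeningResult:
    candidate_id: str
    score: float        # model score in [0, 1]
    recommended: bool   # model's provisional shortlist recommendation

def route_decision(result: ScreeningResult, shortlist_threshold: float = 0.7) -> dict:
    """Route every adverse or borderline outcome to a human reviewer.

    The model never issues a final rejection on its own: candidates it would
    screen out become review tasks, in line with the GDPR right not to be
    subject to solely automated decisions with significant effects.
    """
    if result.recommended and result.score >= shortlist_threshold:
        return {"candidate_id": result.candidate_id, "action": "advance"}
    # Adverse outcomes are queued for a reviewer, with the information needed
    # to override the model and a channel for the candidate to contest it.
    return {
        "candidate_id": result.candidate_id,
        "action": "human_review",
        "model_score": result.score,
        "contest_channel": "recruitment-appeals@example.org",  # placeholder contact
    }

# Example: a low-scoring candidate is routed to a reviewer, not auto-rejected.
print(route_decision(ScreeningResult("c-102", score=0.41, recommended=False)))
```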
[Image source: Aisera, 'AI Recruitment: The 2026 Guide to Agentic AI and Hiring' (n.d.) https://aisera.com/blog/ai-recruiting/]
IV. Real Case Scenario
One of the clearest real-world examples for this discussion is Amazon’s experimental AI recruitment tool, which shows how high-risk AI systems can reproduce and amplify structural discrimination if not carefully assessed. The system was designed to score and rank CVs for technical roles using around a decade of historical hiring data, in which successful candidates were predominantly male. From a DPIA and human-rights perspective, this meant the model’s training data reflected existing gender imbalances rather than neutral, job-related criteria.[5]
In practice, the AI learned to downgrade applications that suggested a female identity, such as CVs mentioning “women’s” clubs or certain women-only colleges, while favouring patterns common in men’s CVs. Even after engineers removed explicit gender indicators, the system could still infer gender from correlated features, making the bias difficult to detect and explain. This example illustrates how automated profiling and scoring – designed to improve efficiency and objectivity – can in fact entrench indirect discrimination, threaten equality in access to work, and undermine the dignity and autonomy of candidates who are silently filtered out.
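The claim that gender can still be inferred from correlated features can be tested with a generic "proxy test": if a simple classifier can predict the removed attribute from the remaining features, those features still leak it. The sketch below uses scikit-learn and NumPy on synthetic data purely for illustration; the feature names and correlations are invented and say nothing about how Amazon's system actually worked.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic CV features for illustration only. "womens_club_mention" and
# "college_code" stand in for signals correlated with gender; they are not
# fields from any real system.
n = 2000
gender = rng.integers(0, 2, size=n)                        # hidden attribute: 0 or 1
womens_club_mention = (gender == 1) & (rng.random(n) < 0.4)
college_code = gender * 0.8 + rng.normal(0, 0.5, size=n)   # proxy correlated with gender
years_experience = rng.normal(6, 2, size=n)                # roughly gender-neutral feature

X = np.column_stack([womens_club_mention, college_code, years_experience])

# Proxy test: try to predict the removed attribute from the remaining features.
# Accuracy well above chance (~0.5 here) indicates the features still encode gender.
clf = LogisticRegression(max_iter=1000)
leakage = cross_val_score(clf, X, gender, cv=5, scoring="accuracy").mean()
print(f"Gender predictable from 'gender-free' features with accuracy {leakage:.2f}")
```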
V. Integrating Fairness, Privacy, and Transparency
A strong DPIA for a high-risk AI system should treat discrimination, privacy, fairness, and transparency as interconnected issues, not just separate boxes to tick. Research on “fairness and DPIAs” shows that when AI is used for profiling or automated decision-making, the assessment should check whether:
The training and operational data include hidden indicators of protected characteristics (like race or gender).
AI outcomes are unequal across different groups.
There are methods to measure and detect bias.[6]
This requires mapping how data flows, testing for bias before and after deployment, and examining how decisions are made by the system. This way, potential unfairness or indirect discrimination is identified before the AI is used, not only discovered afterward.
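One simple way to operationalise the check for unequal outcomes across groups is to compare selection rates per group and flag large gaps, in the spirit of the "four-fifths" benchmark often cited in employment contexts. The dependency-free Python sketch below is a minimal illustration; the group labels, numbers, and 0.8 benchmark are assumptions for the example.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute per-group selection rates from (group, selected) pairs."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest to the highest group selection rate.

    Values far below 1.0 (for example, under the commonly cited 0.8
    benchmark) signal that outcomes should be investigated in the DPIA.
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

# Illustrative screening outcomes: (self-declared group, shortlisted?)
outcomes = [("A", True)] * 62 + [("A", False)] * 38 + \
           [("B", True)] * 35 + [("B", False)] * 65
ratio, rates = disparate_impact_ratio(outcomes)
print(rates)                                    # {'A': 0.62, 'B': 0.35}
print(f"disparate impact ratio: {ratio:.2f}")   # 0.56 -> flag for review
```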
At the same time, DPIAs still cover traditional data protection concerns: identifying personal and sensitive data, ensuring data minimisation and proper use, and evaluating security and inference risks. With Big Data AI systems, there is a higher risk of re-identifying individuals or causing privacy harms, which DPIAs must also address.[7]
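Re-identification risk can be screened for with an equally simple check: counting how many records share each combination of quasi-identifiers, a basic k-anonymity measure. The sketch below is illustrative only, and the applicant fields are invented for the example.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest number of records sharing any combination of quasi-identifiers.

    A value of 1 means at least one person is unique on those attributes and
    could plausibly be re-identified by linking with other datasets.
    """
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(combos.values())

# Illustrative applicant records; fields chosen for the example only.
applicants = [
    {"postcode": "1050", "birth_year": 1991, "degree": "CS"},
    {"postcode": "1050", "birth_year": 1991, "degree": "CS"},
    {"postcode": "1080", "birth_year": 1987, "degree": "Law"},
]
print(k_anonymity(applicants, ["postcode", "birth_year", "degree"]))  # 1 -> unique record exists
```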
Linking detection to action is essential. To reduce discrimination and promote fairness, possible measures include:
Adjusting or cleaning training data.
Removing features that could unfairly influence outcomes.
Choosing models and thresholds based on fairness metrics.
Ensuring human review for high-impact decisions.
In particular, cleaning data to remove gendered or race-indicating tokens should follow ethical criteria that balance fairness, equity, privacy, and transparency, rather than blindly deleting all such signals. Tokens should not be removed if they are needed to detect, measure, or correct discrimination, as doing so can hide structural bias and undermine equity goals. Removal or transformation is appropriate when tokens are unnecessary for the task, pose privacy risks, or encode prejudicial stereotypes that drive discriminatory predictions. Fairness should be assessed not only numerically but also in terms of equity, checking whether cleaning improves or worsens outcomes for different groups in real-world contexts such as hiring, credit, or healthcare. Transparency and accountability call for documenting which tokens are removed or retained, why, and how these choices affect model behavior, ensuring that decisions can be reviewed by auditors, regulators, or affected communities.[8]
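A lightweight way to make such cleaning choices reviewable is to encode them as an explicit, documented policy: for each token pattern, record whether it was removed, transformed, or deliberately retained for bias measurement, and why. The sketch below is a hypothetical illustration of that documentation pattern, not a prescribed cleaning method; the patterns and rationales are assumptions.

```python
import re

# Hypothetical token policy: each entry records the decision and its rationale
# so auditors can see why a signal was removed, kept, or transformed.
TOKEN_POLICY = {
    r"\bwomen'?s\b":    {"action": "retain_for_audit",
                         "reason": "needed to measure and correct gender bias"},
    r"\bmaiden name\b": {"action": "remove",
                         "reason": "unnecessary for the task; privacy risk"},
    r"\b(he|she)\b":    {"action": "transform", "replacement": "they",
                         "reason": "gendered signal not needed for scoring"},
}

def clean_text(text):
    """Apply the token policy and return the cleaned text plus an audit log."""
    audit_log = []
    for pattern, rule in TOKEN_POLICY.items():
        if re.search(pattern, text, flags=re.IGNORECASE):
            audit_log.append({"pattern": pattern, **rule})
            if rule["action"] == "remove":
                text = re.sub(pattern, "", text, flags=re.IGNORECASE)
            elif rule["action"] == "transform":
                text = re.sub(pattern, rule["replacement"], text, flags=re.IGNORECASE)
            # "retain_for_audit": leave the token in place but record the decision.
    return text, audit_log

cleaned, log = clean_text("She chaired the women's coding club.")
print(cleaned)
for entry in log:
    print(entry)
```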
For privacy, DPIAs should enforce data minimisation, strong security, and careful handling of sensitive attributes used for bias checks. For transparency, recommendations include:
Providing multi-layered explanations for different audiences.
Documenting model choices and trade-offs.
Maintaining logs and audit trails so regulators and affected people can understand or challenge AI decisions.
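As one hedged illustration of the logging recommendation, each AI-assisted decision could be written out as a structured, append-only record that ties the outcome to a model version, an explanation, and any human review, so that a candidate or regulator can later reconstruct how it was reached. The field names and storage choice below are assumptions for the sketch.

```python
import json
from datetime import datetime, timezone

def log_decision(candidate_id, model_version, score, top_factors,
                 human_reviewer=None, outcome="pending_review"):
    """Build an append-only audit record for one AI-assisted screening decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "model_version": model_version,   # ties the decision to a documented model
        "score": score,
        "top_factors": top_factors,       # candidate-facing explanation layer
        "human_reviewer": human_reviewer, # None means no human involvement yet
        "outcome": outcome,
    }
    # In a real deployment this would go to tamper-evident storage; here we
    # simply append one JSON line per decision.
    with open("decision_audit.log", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

print(log_decision("c-102", "ranker-v3.1", 0.41,
                   ["years_experience", "skills_match"],
                   human_reviewer="hr-007", outcome="advanced_after_review"))
```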
Many scholars argue that DPIAs for high-risk AI should either expand into algorithmic impact assessments or be combined with human-rights or AI-specific impact assessments, so that systemic fairness and accountability are considered alongside traditional privacy risks.[9]
In conclusion, DPIAs for high-risk AI systems must go beyond traditional privacy checks and address fairness, transparency, and human rights risks as integral parts of the assessment. By combining bias detection, data protection measures, and actionable mitigation strategies, such as human oversight, secure handling of sensitive data, and explainable AI, organizations can reduce harm before deployment. Integrating DPIAs with algorithmic or AI-specific impact assessments ensures that high-risk AI systems are not only legally compliant but also ethically responsible, promoting accountability, equity, and trust in AI-driven decisions.
VI. Ensuring Ethical and Accountable AI through Comprehensive DPIAs
While DPIAs are essential for identifying and mitigating privacy risks in high-risk AI systems, they are inherently limited by their focus on individual rights and data protection compliance. In the context of Big Data, AI-driven recruitment, and automated decision-making, many harms (such as systemic discrimination, structural inequalities, and chilling effects on behaviour) are diffuse, societal, and difficult to attribute to individual data subjects. DPIAs alone cannot fully capture these structural consequences, nor can they resolve broader questions about fairness, autonomy, or the legitimacy of complex AI ecosystems.
This highlights the need to complement DPIAs with broader human-rights or fundamental-rights assessments, participatory governance, and independent oversight mechanisms. By combining traditional data-protection analysis with systemic review, organizations can better address the collective and long-term impacts of AI, balancing privacy, fairness, and transparency while maintaining flexibility for evolving ethical and legal norms. In other words, DPIAs remain a critical tool, but effective AI governance requires a multi-layered, rights-oriented, and adaptable approach that extends beyond compliance checklists.[10]
Glossary
Automated decision-making: Decisions about individuals made solely or primarily by a computer or AI system without human intervention, often affecting their rights, opportunities, or access to services.
Big Data AI system: A Big Data AI system is an artificial intelligence system that relies on very large, complex, and diverse datasets to detect patterns, make predictions, or automate decisions. These systems use advanced analytics and machine learning techniques to process high-volume, high-velocity, and high-variety data, often creating heightened risks for privacy, bias, and re-identification.
Chilling effects: Chilling effects refer to the phenomenon where individuals alter, limit, or refrain from lawful behaviour (such as expressing opinions, seeking information, or exercising their rights) because they fear surveillance, data collection, profiling, or potential negative consequences, even if no direct sanction is imposed.
DPIA: A Data Protection Impact Assessment (DPIA) is a structured, documented process used to identify and mitigate the privacy and data protection risks of personal data processing, especially when it is likely to result in high risk to individuals’ rights and freedoms.
High-risk AI systems: A high-risk AI system is an artificial intelligence application that, due to its purpose or context, can significantly affect individuals’ rights, safety, or fundamental freedoms.
Large-scale profiling: Large-scale profiling is the systematic analysis or evaluation of personal data on a wide scope – covering many individuals, extensive datasets, or significant volumes of data – to predict, categorize, or influence their behavior, preferences, or decisions.
Sensitive data: Sensitive data (also called special category data) is personal information that reveals racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic or biometric data, health information, or data concerning a person’s sex life or sexual orientation, which requires higher protection under the GDPR.
Systematic monitoring: Continuous or regular observation, tracking, or collection of personal data about individuals, often in public or online spaces, using structured methods or technology.
Token: A token is an individual element or unit of data, such as a word, phrase, or feature, that may indicate a person’s gender, race, or other protected characteristic. It is a piece of the data the model uses that could reveal sensitive attributes and therefore needs careful ethical consideration when cleaning or transforming the dataset.
Training data: The datasets used to teach an AI system to recognize patterns, make predictions, or perform tasks; the quality and representativeness of training data directly affect the AI’s accuracy and fairness.
Welfare: The health, well-being, and social or economic support of individuals or groups, often considered when assessing the impact of policies, services, or technologies.
Footnotes/References
[1] GDPR.eu, ‘Data Protection Impact Assessment (DPIA) – template’ (GDPR.eu, April 2019) https://gdpr.eu/data-protection-impact-assessment-template/ accessed 9 January 2026.
[2] European Commission, ‘When is a Data Protection Impact Assessment (DPIA) required?’ (European Commission, updated 2025) https://commission.europa.eu/law/law-topic/data-protection/rules-business-and-organisations/obligations/when-data-protection-impact-assessment-dpia-required_en accessed 9 January 2026.
[3] Danish Institute for Human Rights, Guidance on Human Rights Impact Assessment of Digital Activities: Introduction (2020) https://www.humanrights.dk/files/media/document/A%20HRIA%20of%20Digital%20Activities%20-%20Introduction_ENG_accessible.pdf accessed 9 January 2026.
[4] Article 29 Data Protection Working Party, Guidelines on Data Protection Impact Assessment (DPIA) and determining whether processing is “likely to result in a high risk” for the purposes of Regulation 2016/679 (4 April 2017) https://www.pdpjournals.com/docs/887932.pdf accessed 9 January 2026.
[5] Jeffrey Dastin, ‘Insight – Amazon scraps secret AI recruiting tool that showed bias against women’ (Reuters, 11 October 2018) https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/ accessed 15 January 2026.
[6] Information Commissioner’s Office, ‘What about fairness, bias and discrimination?’ (Guidance on AI and data protection, ICO UK GDPR Guidance and Resources) https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/how-do-we-ensure-fairness-in-ai/what-about-fairness-bias-and-discrimination/ accessed 15 January 2026.
[7] Petar Radanliev, ‘AI Ethics: Integrating Transparency, Fairness, and Privacy in AI Development’ (2025) 39 Applied Artificial Intelligence 2463722 https://www.tandfonline.com/doi/full/10.1080/08839514.2025.2463722 accessed 15 January 2026.
[8] Ibid.
[9] Petar Radanliev, ‘Privacy, Ethics, Transparency, and Accountability in AI Systems for Wearable Devices’ (2025) 7 Frontiers in Digital Health 1431246 https://doi.org/10.3389/fdgth.2025.1431246 accessed 15 January 2026.
Margot E Kaminski and Gianclaudio Malgieri, ‘Algorithmic impact assessments under the GDPR: producing multi-layered explanations’ (2021) 11 International Data Privacy Law 125 https://doi.org/10.1093/idpl/ipaa020 accessed 15 January 2026.
[10] Serge Gutwirth, Ronald Leenes and Paul Hert (eds), Data Protection on the Move: Current Developments in ICT and Privacy/Data Protection (Springer Dordrecht, Law, Governance and Technology Series 24, 2016) https://doi.org/10.1007/978-94-017-7376-8 accessed 15 January 2026.
