Automated decision making

From Justice Definitions Project

What is Automated Decision Making?

There are various definitions. The UK Information Commissioner's Office explains: “Automated decision-making is the process of making a decision by automated means without any human involvement. These decisions can be based on factual data, as well as on digitally created profiles or inferred data.”[1] An AI system, as explained by the OECD’s AI Experts Group (AIGO), is “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. It uses machine and/or human-based inputs to perceive real and/or virtual environments; abstract such perceptions into models (in an automated manner e.g. with ML or manually); and use model inference to formulate options for information or action. AI systems are designed to operate with varying levels of autonomy.”[2]

Official Definitions of "Automated Decision Making"

‘Automated decision-making’ as defined in legislation

Digital Personal Data Protection Act, 2023

The DPDP Act, 2023 defines “automated” but does not contain a definition of “automated decision‑making.” Section 2(b) provides: “(b) ‘automated’ means any digital process capable of operating automatically in response to instructions given or otherwise for the purpose of processing data;”[3] Section 2(x) defines “processing” as “a wholly or partly automated operation or set of operations performed on digital personal data.”

‘Automated decision-making’ as defined in international instruments

OECD Recommendation of the Council on Artificial Intelligence (22 May 2019)

The OECD does not define “automated decision‑making” as such, but the Recommendation addresses AI systems that “make or support decisions affecting individuals or groups,” and requires that such systems respect human rights, the rule of law, and democratic values, and be transparent, explainable, robust, secure, and accountable.

Council of Europe Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (CETS 225)

The Convention regulates AI systems that make or influence decisions affecting individuals, particularly in high‑risk contexts, and requires that activities in the lifecycle of such systems be consistent with human rights, democracy, and the rule of law, including effective remedies for adverse effects.

‘Automated decision-making’ as defined in case law

European Union

SCHUFA Holding (Scoring) – Case C‑634/21

The Court of Justice held that the automated creation of a probability value concerning an individual’s likelihood of meeting future payment commitments, which is used by third parties to inform credit decisions, may constitute “automated individual decision‑making” within the meaning of Article 22 GDPR, even if a human formally approves the final decision, provided the algorithm plays a decisive role in the outcome.

The judgment stresses that Article 22 aims to prevent individuals from being subject to decisions taken essentially by algorithmic means, and that human review must be meaningful rather than a mere formality.

Ligue des droits humains v Conseil des ministres – Case C‑817/19

The Court examined the use of automated processing of passenger data for risk assessment purposes and held that such systems must be subject to strict safeguards, including necessity, proportionality, and effective judicial review.

The judgment underlines that automated systems used in sensitive security‑related contexts must be transparent, proportionate, and compatible with fundamental rights, and that human oversight must be genuine rather than symbolic.

England and Wales (UK)

R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058

The Court of Appeal held that the use of automated facial‑recognition technology in public‑space policing must comply with data‑protection and human‑rights standards, including safeguards against disproportionate interference with privacy and the need for human oversight of the final decisions.

The court emphasised that fully automated or algorithm‑driven deployment of such tools without adequate human review and control may infringe fundamental rights and may be unlawful where adequate safeguards, transparency, and oversight are not in place.

International Experience  

Implementation of Automated Decision Making across countries

While the previous section explains what automated decision-making means in law, this section concerns how these rules are applied in practice.

European Union

In the European Union, automated decision-making is operationalised through a dual approach of individual rights protection and system-level governance. The General Data Protection Regulation (GDPR) ensures that individuals affected by automated decisions have the right to seek human intervention, to challenge the decision, and to receive meaningful information about the logic involved. This operates at the individual level, ensuring that automated decisions do not escape accountability[4].

In addition, the European Union has introduced a broader regulatory framework through the Artificial Intelligence Act. This framework adopts a risk-based approach: AI systems that pose significant risks, such as those used in employment, credit scoring, or law enforcement, are subject to stricter obligations, including mandatory registration of high-risk systems, transparency requirements, and the creation of public databases to track such systems[5].

United Kingdom

The United Kingdom has a similar framework but places greater emphasis on procedural safeguards. Under the Data Protection Act 2018, individuals must be informed when significant decisions are made using automated processing. More importantly, they have the right to request reconsideration by a human decision-maker. This ensures that automated decisions are not final and can be reviewed where necessary[6].

In practice, regulatory guidance clarifies that human involvement must be meaningful. A purely formal or superficial review does not meet the legal standard; the human decision-maker must have the authority and ability to influence the outcome.

United States

In contrast to the European approach, the United States, particularly under the California Privacy Protection Agency (CPPA) framework, adopts a more flexible and user-centric model. Rather than restricting the use of automated decision-making, the focus is largely on empowering individuals through transparency and choice[7].

This means that individuals must be informed when automated decision-making systems are used, provided access to relevant information, and given the option to opt out of certain forms of automated processing.

Challenges

Automated decision-making, although efficient and widely adopted, raises critical concerns across jurisdictions. These challenges primarily relate to transparency, fairness, and accountability, and directly affect individuals' ability to understand and contest decisions.

Lack of Transparency

One of the most pressing issues in automated decision-making is the significant lack of transparency. Many systems operate as “black boxes,” meaning that their internal logic, decision-making criteria, and data inputs are not accessible or understandable to the individuals affected by them[8].

Recent academic and policy discussions highlight that such opacity creates a serious barrier to legal accountability, as individuals cannot effectively challenge decisions they do not understand. Regulatory authorities such as the ICO and EDPB have similarly warned that lack of transparency undermines the right to explanation and contestation under data protection law.

Bias and Discrimination

Another challenge is that ADM systems can produce biased and discriminatory outcomes. This is because such systems rely on historical data, which often reflects existing social inequalities. Research shows that AI systems tend to replicate and amplify bias, leading to unfair outcomes in areas such as hiring, lending, and criminal justice[9]. Further, human rights authorities have warned that AI can reinforce racism and sexism, especially when trained on biased datasets or used without proper oversight[10].

Automation Bias and Limits of Human Oversight

It is often assumed that human involvement can correct automated decisions. However, research shows that this is frequently not the case: studies have found that humans tend to rely on and follow algorithmic recommendations, even when those recommendations are biased[11].

Empirical research in hiring contexts has shown that individuals working with biased AI systems often mirror the AI’s discriminatory choices rather than correct them[12].

Accountability Gaps

Yet another challenge is the difficulty of assigning responsibility. ADM systems are complex and involve multiple actors, including developers, data providers, and deploying organisations. This creates a “diffusion of responsibility,” where it is unclear who is legally accountable for a decision. In many jurisdictions, AI systems are already influencing decisions in areas like hiring, healthcare, and housing, without adequate oversight frameworks in place[13].

Indian Context

These challenges are particularly significant in India, where ADM systems are widely used in governance but lack a unified legal framework. Research shows that ADM is used in welfare, policing, taxation, and public administration[14]. However, these systems face limited transparency, unclear accountability, and minimal regulation[15]. Further studies highlight that the distinction between decision-support and decision-making is often blurred[16].

Appearance of ‘Automated Decision Making’ in Database

Indian Context- Absence of a Centralised ADM Registry

India does not maintain a centralised or publicly accessible registry that lists automated decision-making systems used by public authorities or courts. Official policy documents themselves acknowledge that India’s AI governance framework is still developing and is currently focused more on datasets and computational infrastructure rather than on cataloguing ADM systems[17].

Data Platforms vs ADM Registries

India has developed AIKosh[18], the Open Government Data Platform[19], and the National Data and Analytics Platform (NDAP)[20]. These platforms aim to make datasets “accessible, interoperable, and user-friendly”, and to support AI development across sectors. However, these platforms host datasets and AI models; they do not identify whether those assets are used in decision-making systems, and do not disclose how such systems affect individuals or produce outcomes. In other words, they support data availability, but not decision-making transparency.

No Systematic Disclosure of ADM Use

Government strategies and policy documents (including those by NITI Aayog and MeitY) discuss AI applications in governance but do not establish any official database listing ADM systems in operation. Even proposed mechanisms, such as an AI incident reporting system, are designed to record harms or risks after they occur, rather than providing a forward-looking register of systems currently in use[21].

Supporting Data Infrastructure Without ADM Visibility

Platforms like NDAP[20] aggregate large volumes of administrative data, including datasets on welfare, crime, and economic indicators. These datasets are used in research and feed into algorithmic systems such as predictive policing or welfare targeting. However, the platforms themselves provide only datasets and visualisations; they do not disclose decision logic, nor do they indicate whether automated systems are being used in governance decisions.

Comparative Perspective- Algorithm Registers in Other Jurisdictions

Unlike India, several jurisdictions have started to institutionalise transparency in automated decision-making through formal registers.

European Union

Under the EU Artificial Intelligence Act, the European Commission is required to establish and maintain a public database of high-risk AI systems. The database is designed to include information such as system identification, intended purpose, and compliance with regulatory requirements[22]. This ensures that high-risk systems are visible, traceable, and subject to oversight.

Municipal Algorithm Registers: Amsterdam and Helsinki

Amsterdam and Helsinki have developed public algorithm registers[23] that provide detailed information about AI systems used in governance. These registers include information such as the purpose of the system, the datasets used, and the level of human oversight[24], and are intended to create “transparency, explainability, and public trust”.

Standardisation of Algorithm Registers

European initiatives have also developed standardised formats for algorithm registers, including fields such as system purpose, input data, risk classification, and human oversight mechanisms[25].


Research that engages with ‘Automated Decision Making’

This section surveys key academic and policy research engaging with automated decision-making, focusing on how scholars conceptualise its legal, ethical, and practical implications.

Wachter, Mittelstadt & Floridi (2017)- "Why a Right to Explanation of Automated Decision-Making Does Not Exist in the GDPR"[26]

The paper challenges the widely accepted idea that the GDPR provides a “right to explanation” for automated decisions. It argues that, in reality, the law guarantees only a limited “right to be informed” about the logic and consequences of such decisions, rather than a full explanation. Legal protections around ADM are thus weaker and more ambiguous than commonly assumed.

The authors conclude that existing legal frameworks lack clear and enforceable safeguards, and stronger mechanisms are needed to ensure transparency and accountability in automated decision-making.

Lukács & Váradi (2023) - GDPR-Compliant AI-Based Automated Decision-Making in the World of Work[27]

The paper examines how automated decision-making is increasingly used in employment and highlights the legal challenges in applying existing data protection laws to these systems. The study concludes that while the GDPR provides a foundation, specific sectoral challenges require more tailored regulation, especially to protect employee rights.

Wiedemann (2022)- Profiling and Automated Decision-Making under the GDPR[28]

The paper explains that automated decision-making and profiling are two interconnected processes, with profiling acting as a preliminary step and decision-making following from it. ADM therefore cannot be understood in isolation; it operates through data-driven profiling systems. The author concludes that distinguishing between profiling and decision-making improves legal clarity and the allocation of responsibility in ADM systems.

Cobbe, Lee & Singh (2021)- Reviewable Automated Decision-Making: A Framework for Accountable Systems[29]

The authors argue that focusing only on explanations is insufficient for ensuring accountability in ADM. Instead, they propose the concept of “reviewability,” which looks at the entire system within which decisions are made. Because ADM systems are socio-technical, accountability must extend beyond algorithms to organisational processes, and legal systems should focus on whether decisions can be meaningfully reviewed and challenged. The paper concludes that effective regulation of ADM requires adopting reviewability frameworks, drawing on administrative law principles, to ensure decisions are transparent, contestable, and accountable.


Comparative Note

ADM in Practice - Use in Governance Systems

Research shows that automated decision-making is being used in key areas of governance in India, such as welfare distribution, employment planning, and skill development. These systems use data and algorithms to help decide who gets benefits, jobs, or services. One study notes that such systems create “structural challenges for transparency, bias mitigation and procedural fairness”: people may not understand how decisions are made, systems may unintentionally favour or disadvantage certain groups, and decision-making processes may not be fair or easy to challenge[30].

Constitutional Perspective - The Need for New Rights

Scholars have analysed automated decision-making (ADM) from a constitutional perspective and argue that existing legal protections are not sufficient to deal with algorithm-based governance. They have suggested that the Constitution should evolve to include protections such as the right to know how automated decisions are made, the right to fair and unbiased treatment by algorithms, and the right to challenge such decisions effectively. This idea is referred to as “algorithmic due process”, meaning that automated decisions must follow the same principles of fairness, transparency, and accountability as traditional government decisions[31].

Common View Across Studies

One argument recurs clearly across these studies: general laws such as IT law or data protection law are not enough to regulate ADM. Instead, scholars argue that automated systems need clear rules on transparency (how decisions are made), reason-giving (why decisions are taken), proper human oversight, and systems for audit and accountability[14].

Gap in Research

While there is strong discussion about what should be done, many studies point out that we still lack enough information about how ADM systems actually work in practice. There is limited evidence on who designs and controls these systems, how much officials rely on them, whether human review is meaningful, and how people experience and challenge automated decisions[32].


Way Ahead

This section draws on recommendations from official Indian policy documents, regulators, and leading academic work. It focuses on practical steps to improve transparency, accountability, and fairness in automated decision-making (ADM) systems.

Standardising Data, Models, and System Information

A key recommendation from India’s AI Governance Guidelines is to improve the quality and standardisation of data and models. The Guidelines emphasise the need for reliable and representative datasets, and for standardised evaluation datasets to test bias and fairness. They also highlight the importance of understanding the “AI value chain”: how data flows between developers, deployers, and users. This requires standardised disclosures about system purpose, datasets used, and the roles of different actors[17].

Learning from Global Practices: Algorithm Registers

European cities such as Amsterdam and Helsinki use public algorithm registers that disclose the purpose of each system, the datasets used, risks and safeguards, and the level of human oversight[23]. These registers make ADM systems visible and understandable to the public, and allow feedback from citizens. Similar standardised registers could be adopted in India to document public-sector ADM systems[24].

Improving Data Collection and Systemic Analysis

The AI Governance Guidelines recommend improving data systems by expanding access to high-quality datasets, ensuring representation of India’s linguistic and cultural diversity, and integrating AI with Digital Public Infrastructure (such as Aadhaar and UPI)[17]. To ensure better governance, scholars and policy bodies recommend algorithmic impact assessments, especially in high-risk sectors like welfare, policing, and public services. These assessments help evaluate data quality, bias and discrimination risks, and potential impacts on rights[32].

Due Process, Contestability, and Remedies

Indian scholarship argues for extending constitutional principles into ADM through notice of automated decision-making, access to explanations, and effective review and appeal mechanisms[15]. The Guidelines also recommend grievance redress mechanisms, and voluntary compliance frameworks that may later become mandatory[17]. International frameworks similarly emphasise transparency, human-rights impact assessments, and independent oversight to protect individuals’ rights[33].

Explainability and Human Oversight

The Guidelines adopt principles such as “People First” and “Understandable by Design”. This means humans should retain final control over decisions, and systems should provide explanations that are understandable to users[17]. In the financial sector, the RBI’s FREE-AI Committee recommends clear governance structures, risk categorisation of AI systems, and internal audit mechanisms before deployment[34]. Legal scholarship further emphasises that ADM systems affecting rights must allow meaningful human review and provide explanations sufficient to satisfy natural justice principles.

Reference List

  1. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/individual-rights/automated-decision-making-and-profiling/what-is-automated-individual-decision-making-and-profiling
  2. https://www.oecd.org/content/dam/oecd/en/publications/reports/2019/06/artificial-intelligence-in-society_c0054fa1/eedfee77-en.pdf
  3. https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf
  4. https://gdpr-info.eu/art-22-gdpr/
  5. https://gdpr-info.eu/recitals/no-71/
  6. https://www.legislation.gov.uk/ukpga/2018/12/section/14/enacted
  7. https://cppa.ca.gov/meetings/materials/20231208_item2_draft.pdf
  8. https://officialblogofunio.com/2026/03/27/navigating-the-black-box-ai-bias-and-the-future-of-the-burden-of-proof-in-the-eu/
  9. https://www.mdpi.com/2413-4155/6/1/3
  10. https://www.theguardian.com/technology/2025/aug/13/ai-artificial-intelligence-racism-sexism-australia-human-rights-commissioner
  11. https://www.edps.europa.eu/data-protection/our-work/publications/techdispatch/2025-09-23-techdispatch-22025-human-oversight-automated-making_en
  12. https://www.washingtonpost.com/business/2025/11/25/biased-ai-hiring-research-university-of-washington-study/
  13. https://apnews.com/article/artificial-intelligence-ai-explained-policy-technology-regulations-discrimination-d3226c9139d3d06af263e7ff467d0666
  14. https://publications.clpr.org.in/the-philosophy-and-law-of-information-regulation-in-india/chapter/automated-administration-administrative-law-and-algorithmic-decision-making-in-india/
  15. https://tijer.org/tijer/papers/TIJER2506078.pdf
  16. file:///Users/ananyagupta/Downloads/7._Dr._Vaishali_SCOPUS_18_revised_1_1_new.pdf
  17. https://static.pib.gov.in/WriteReadData/specificdocs/documents/2025/nov/doc2025115685601.pdf
  18. https://indiaai.gov.in/hub/aikosh-platform
  19. https://www.data.gov.in/
  20. https://indiaai.gov.in/news/niti-aayog-launches-the-national-data-analytics-platform
  21. https://internetfreedom.in/green-light-for-ai-orange-for-rights/
  22. https://artificialintelligenceact.eu/article/71/
  23. https://ai.hel.fi/en/ai-register/
  24. https://ai.hel.fi/wp-content/uploads/White-Paper.pdf
  25. https://eurocities.eu/latest/nine-cities-set-standards-for-the-transparent-use-of-artificial-intelligence/
  26. https://www.researchgate.net/publication/312597416_Why_a_Right_to_Explanation_of_Automated_Decision-Making_Does_Not_Exist_in_the_General_Data_Protection_Regulation
  27. https://www.sciencedirect.com/science/article/pii/S0267364923000584
  28. https://www.sciencedirect.com/science/article/abs/pii/S0267364922000103
  29. https://arxiv.org/abs/2102.04201
  30. file:///Users/ananyagupta/Downloads/7._Dr._Vaishali_SCOPUS_18_revised_1_1_new%20(2).pdf
  31. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5650770
  32. https://cis-india.org/internet-governance/ai-and-governance-case-study-pdf
  33. https://www.opengovpartnership.org/open-gov-guide/digital-governance-automated-decision-making/
  34. https://dvararesearch.com/summary-of-the-rbi-free-ai-committee-report/