AI System
This page was created by Team 7 (AI System and High-Risk System) team members:
- Himank Thakur
- Aryan
- Shaila Mahajan
- Harshita
- Vikhyat Malhotra
- Vaisini S
WHAT IS AN 'ARTIFICIAL INTELLIGENCE SYSTEM'?
An AI system (artificial intelligence system) is a human-engineered, technology-based system designed to simulate human intelligence in performing complex tasks, including but not limited to learning, reasoning, and decision-making. It analyses large datasets, identifies patterns in them, and generates predictions on that basis. Its degree of autonomy varies by system, ranging from semi-independent operation to operation under direct human supervision.
In technology law, the AI system is closely linked with debates around data governance and liability. Autonomy features prominently in these debates: an AI system evolves its operational parameters from its training and the data fed to it, so human oversight of its behaviour is limited compared with deterministic software. This is the main reason global networks and governance systems are seeking a precise legal definition, one that establishes the boundaries within which an AI system can be identified and thereby helps assign regulatory obligations to developers and deployers when they create or improve AI systems.
AI systems span a broad spectrum of technologies, from simple predictive algorithms used for content recommendation to autonomous decision-making systems used in credit scoring (banking) and medical diagnostics (healthcare). AI systems are also used in complex natural language processing models (LLMs) to produce solutions to difficult problems.
Unlike conventional software, which follows a fixed set of explicitly programmed rules, an AI system learns patterns from data and improves continuously, without needing to be reprogrammed after every prompt or use. AI systems operate in two main phases:
- Training Phase: The AI system is given large amounts of data: structured data (spreadsheets, databases, and CSV files); unstructured data (free-form content such as that available on the internet at large); and semi-structured data (JSON, XML, and tagged emails). Through this process the algorithms identify patterns and adjust internal parameters to optimise their goals. This data is used under several training techniques:
- Supervised learning, in which labelled data shows the system the correct classification for each example.
- Unsupervised learning, the process of finding patterns in unlabelled data.
- Reinforcement learning, the process of trial and error guided by reward signals that score the system's actions.
- Deployment Phase: The trained model is given new inputs and generates outputs from them. Modern AI systems can also adapt post-deployment through techniques such as fine-tuning or online learning.
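The two phases above can be illustrated with a deliberately tiny supervised-learning sketch: fitting the rule y = 2x + 1 from labelled examples, then applying the trained parameters to new input. This is a toy illustration of the training/deployment split, not any real AI system.

```python
def train(examples, lr=0.05, epochs=2000):
    """Training phase: adjust internal parameters (w, b) to fit the data."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:
            err = (w * x + b) - y   # prediction error on this example
            w -= lr * err * x       # gradient step on the weight
            b -= lr * err           # gradient step on the bias
    return w, b

def deploy(params, x):
    """Deployment phase: apply the trained parameters to new, unseen input."""
    w, b = params
    return w * x + b

# Labelled training data following y = 2x + 1 (structured, supervised).
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
params = train(data)
print(deploy(params, 10.0))   # close to 2*10 + 1 = 21
```

Note that the "learning" here is nothing more than repeated small parameter adjustments driven by errors on the data, which is exactly why data quality and representativeness matter so much in later sections.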
At the very core of the AI system lies machine learning, supported by soft computing techniques (neural networks and fuzzy logic). Through these, the AI system tolerates uncertainty and approximation instead of demanding exact logical arguments, making solution-finding faster and easier. Hardware acceleration (GPUs and TPUs) enables the processing of big datasets as well as sophisticated models, such as transformers.
Key Components of the AI System
Just like any software, AI systems are incomplete without their key components:
- Data Layer: Sources, both structured and unstructured; data ingestion procedures; cleaning; and storage. Diverse, high-quality data is the fuel of the system.
- Data Processing: Transformation, feature extraction, and vector embedding pipelines are examples of data engineering and processing. These are used for semantic search in generative systems.
- Core Algorithm: Machine learning and deep learning models, such as LLMs, CNNs for vision, and RNNs or transformers for language, are developed using basic algorithms. This includes knowledge bases and reasoning engines.
- Orchestration and Inference Layer: Frameworks for multi-step reasoning and workflow management (such as LangChain) and large-scale output delivery.
- Hardware and Infrastructure: Cloud or edge servers, GPUs or TPUs, storage, networking, and MLOps tools for deployment, monitoring, versioning, and scaling.
- Perception and Actuation in Embodied Systems: Sensor technology such as cameras, microphones, and LiDAR gathers input, while actuators or API providers output actions.
- Governance and Safety Layer: Mechanisms for monitoring bias, drift, security, explainability, and compliance. This is especially important in regulated settings.
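As a concrete illustration of the data-processing layer's "vector embedding pipelines ... used for semantic search", here is a toy sketch. Real systems use learned embeddings; this one uses simple word-count vectors and cosine similarity purely to show the mechanism, and all documents and queries are invented.

```python
import math
from collections import Counter

def embed(text, vocab):
    """Map text to a fixed-length vector of word counts (a crude embedding)."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(a, b):
    """Similarity between two vectors, the core of semantic search."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

docs = ["court case summary", "medical diagnostics report", "case backlog in court"]
vocab = sorted({w for d in docs for w in d.split()})
index = [(d, embed(d, vocab)) for d in docs]    # the "vector index"

query = embed("court backlog", vocab)
best = max(index, key=lambda item: cosine(query, item[1]))
print(best[0])   # the document most similar in meaning to the query
```

The same retrieve-by-similarity pattern, with far richer embeddings, underpins the retrieval stage of generative systems mentioned above.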
OFFICIAL DEFINITION OF AI SYSTEMS
Indian legal jurisprudence has not yet defined 'artificial intelligence' or the 'AI system' in any act, legislation, rule, notification, or gazette document. India follows a techno-legal, principle-based approach in which the foundations of such definitions are laid through government schemes like the IndiaAI Mission and existing laws such as the IT Act, 2000 and the DPDP Act, 2023. Authoritative functional descriptions appear in high-level policy guidelines and official white papers issued by MeitY, the Office of the Principal Scientific Adviser (PSA), and the Supreme Court of India. These descriptions are deliberately broad and functional to avoid freezing rapidly changing technologies in place.
AI systems in Legislation
Although "AI" and "AI systems" are not defined in the Digital Personal Data Protection (DPDP) Act, 2023, AI systems are treated as data fiduciaries when they decide how and why to process personal data (such as court records or litigant information). Any AI that processes digital personal data is subject to obligations regarding consent, purpose limitation, data minimization, and accountability.[1] The term "synthetically generated information" (i.e., AI-generated or altered audio/visual content that appears real) is defined in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (amended 2026), but the underlying AI system is not.[2]
Legal provisions relating to AI systems
As mentioned earlier, there is no settled legal definition of an AI system, but provisions of various acts and laws do contain the idea of it:
- The DPDP Act of 2023 (Sections 8, 9, 10, and 11) classifies large-scale AI processors as Significant Data Fiduciaries, requiring Data Protection Impact Assessments (DPIAs), independent audits, and the appointment of a Data Protection Officer.[3] Directly applicable to judicial AI tools (SUPACE and SUVAS) that handle case data.
- According to the IT Act of 2000 (Section 43A, Section 79, and the amended Intermediary Rules), AI systems that generate or modify content lose intermediary safe-harbour protection if they "initiate transmission, select recipients, or modify data."[4] Recent amendments in 2026 require labelling of AI-generated/synthetic content.
- Bharatiya Nyaya Sanhita (BNS) and Bharatiya Nagarik Suraksha Sanhita (BNSS): Deepfakes and AI-generated misinformation are punishable under existing laws against forgery, cheating, and defamation.[5] These provisions regard AI as a tool subject to human oversight rather than a legal person.
- Tamil Nadu's Safe & Ethical Artificial Intelligence Policy 2020 establishes a framework for responsible AI adoption in governance, emphasising safety, transparency, and fairness. It focuses on ethical, inclusive AI for public service, utilising the "DEEP-MAX" scorecard (Diversity, Equity, Ethics, Privacy, Misuse Protection) to evaluate systems, while fostering a local AI startup ecosystem.[6]
AI system in International Instruments
No official definition of artificial intelligence has been internationally accepted, but several instruments offer their own:
- Under Article 3(1) of the European Union AI Act, an "AI system" is "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."[7][8]
- According to the OECD Recommendation on Artificial Intelligence (updated 2023/2025), "an AI system is a machine-based system that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." India refers to this as the global baseline.
- The UNESCO Recommendation on the Ethics of Artificial Intelligence (2021/2022) states, "AI systems are information-processing technologies that integrate models and algorithms... capable of performing tasks that normally require human intelligence."[9]
- The Council of Europe Framework Convention on Artificial Intelligence (2025) places a strong emphasis on definitions that are human rights-centric. By Indian standards, they are "inspired by but not bound by" these instruments.
- The Monetary Authority of Singapore defines artificial intelligence (AI) broadly as technologies that assist or replace human decision-making, covering both complex algorithms and straightforward data analytics. Singapore focuses on trusted AI, using the AI Verify toolkit to test for fairness, explainability, and safety.[10]
'AI systems' in Official Documents
The primary Indian definition comes from the India AI Governance Guidelines (MeitY, 5 November 2025).
- Artificial intelligence is a general-purpose technology. An AI system is able to combine information, reason, plan, and act with little human involvement in a range of media and circumstances, and to learn and improve constantly.[11] Published at indiaai.gov.in and cited in all subsequent government communication.
- Other official documents:
- The IndiaAI Mission papers and PIB releases (2024-2026) have the same functional description.
- The Supreme Court e-Committee and NIC project documentation refer solely to AI systems as assistance tools (SUPACE, SUVAS, and LegRAA) requiring human supervision.
'AI system' as defined in official government reports
Report A: India AI Governance Guidelines (MeitY Drafting Committee Report, November 2025)
- India's foundational document on AI, 68 pages long and published in November 2025, developed after 2,500 public consultations.
- It rejects a rigid statutory definition in favor of a functional, lifecycle-based description that includes development, deployment, and post-deployment adaptations.
- It splits concerns into India-specific categories (malicious use, discrimination against vulnerable groups, deepfakes, and autonomous loss of control) rather than the EU's four-tier risk pyramid.
- Proposed policies:
- The "Safe & Trusted AI" principles it promotes are fairness, accountability, transparency, safety, and privacy, with liability extending throughout the value chain.
- It proposes establishing an AI Governance Group, an AI Safety Institute, and a national incident database.
- It favours techno-legal safeguards (watermarking, bias audits, explainability tools) over bans, and gives concrete specifics on computational infrastructure, datasets, and judicial pilot schemes for 2026-2028.
Report B: White Paper on Artificial Intelligence and Judiciary, released by the Supreme Court Centre for Research and Planning in November 2025.
- Describes AI as "machine systems that are able to do tasks usually needing human intellect, such as reasoning, pattern recognition, understanding language, and making decisions in a structured way."[12]
- It keeps a record of active AI tools such as SUPACE (summarisation) and SUVAS (translation), and recommends ensuring human involvement in AI decisions, court ownership of training data, and ethics training for judges.
Report C: PSA White Paper on AI Governance and a Techno-Legal Framework (January 2026)
- It reinforces the MeitY definition and suggests institutional mechanisms (an AI incident database and an inter-ministerial mechanism). India will not legally define AI, in order to maintain policy flexibility.[13]
Types of AI Systems
As of March 2026, no law in India requires the categorization of AI systems. The India AI Governance Guidelines (MeitY, November 2025) are based on a light-touch, principles-based, and innovation-friendly approach that advocates for voluntary compliance, contextual risk assessment, and the avoidance of prescriptive classification that could stifle growth. The suggestions categorize AI systems as probabilistic, generative, adaptive, or agentic but do not create a definitive taxonomy like the EU AI Act’s four-tier risk pyramid (prohibited, high-risk, limited-risk and minimal-risk). Instead, classifications are premised on global standards (OECD, UNESCO), local policy notes, and sectoral applications (especially judicial), from an India lens targeting harms to backward groups (castes, genders, linguistic minorities, deepfakes on women/children).
Main Classifications of the AI systems
By capability level (the classification most often used in Indian policy, including the NITI Aayog legacy framework)
- This is the key global and Indian classification (set out in NITI Aayog's 2018 National Strategy for AI and still cited in 2025-2026).[14]
- Narrow AI (ANI), or Weak AI, refers to systems built for a single specific task, at which they may outperform humans.
- This describes virtually all AI systems deployed in India today.
- Examples include image recognition, voice assistants, recommender engines, and judicial aids like SUPACE (case summarisation) and SUVAS (translation).
- AGI (Artificial General Intelligence), or strong AI, has at least human-level intelligence, meaning it can perform a wide range of intellectual tasks without being retrained on task-specific data. It has not been realised yet and may remain theoretical.
- Labs in India (e.g., those funded by the IndiaAI Mission) are working toward adaptive multilingual models.
- Super AI (ASI) surpasses human intelligence in every aspect.
- The guidelines identify this as a potential future "loss of control" risk, requiring constant vigilance and monitoring.
By Processing style
According to MeitY guidelines and global references, classification can also be based on functionality:
- Reactive machines possess no memory and respond only to current inputs.
- Limited-memory systems make decisions informed by past experiences; most modern systems, such as self-driving prototypes and predictive policing tools, fall in this category.
- Theory-of-mind AI would understand the emotions, beliefs, and intentions of itself and others.
- Self-aware AI is a hypothetical form possessing consciousness and self-awareness.
Through core technology and modern applications (dominant in 2025-2026 Indian discourse)
- Discriminative AI focuses on categorisation and pattern recognition (for example, fraud detection and court document analysis).
- Generative AI creates new content, producing text, graphics, code, and summaries; Indian users most commonly encounter it through chatbots and legal drafting tools.
- Agentic AI builds autonomous agents that plan, reason stepwise, and use tools to execute tasks; it brings both great potential and significant danger, since it can erode operational control.
- Multimodal AI works with multiple data formats, including text, image, and audio; Indian-language models of this kind continue to develop.
- Predictive AI forecasts outcomes (e.g., case-pendency prediction pilots).
- Conversational AI refers to natural-language interfaces (such as judicial chatbots in SUPACE).
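A minimal sketch can make the discriminative/generative contrast concrete. The keyword scorer and bigram chain below are toy stand-ins (the terms, threshold, and sample text are invented), not real fraud-detection or drafting tools.

```python
import random

# Discriminative: assign an input to a category (here, crude keyword scoring
# with an invented term list and threshold).
def classify(doc):
    fraud_terms = {"transfer", "urgent", "unverified"}
    score = sum(1 for w in doc.lower().split() if w in fraud_terms)
    return "flagged" if score >= 2 else "clear"

# Generative: produce new content from learned patterns (a bigram chain).
def build_bigrams(text):
    words = text.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, n=5):
    out = [start]
    for _ in range(n):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(classify("urgent unverified transfer request"))   # flagged
model = build_bigrams("the court heard the case and the court ruled")
print(generate(model, "the"))   # e.g. a phrase continuing from "the"
```

The classifier only ever labels existing input, while the bigram model produces new sequences it was never shown verbatim; that asymmetry is the heart of the distinction.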
Regional and contextual variations (Legal Experts and Practising Lawyers' Perspectives)
Legal critiques such as Prashant Mali, Ronin Legal Consulting, Bar Council committees, and the Supreme Court e-Committee emphasize India's particular adaptations for the AI system in our ever-growing economy and technologies:
- AI systems need to support all 22 official languages and important vernacular languages and dialects (India's language diversity is unique in the world); SUVAS uses custom language classification to address access-to-justice problems. Advocates argue that the lack of standardised legal terminology in regional languages poses specific prejudice issues.
- Central guidelines from MeitY and IndiaAI establish principles that High Courts and states implement in their own ways, as shown by Kerala and Odisha using internal tools. Practising lawyers emphasise uniform developer/deployer liability, but enforcement differs because digital infrastructure varies across jurisdictions.
- The Supreme Court White Paper from 2025, together with the e-Committee mandate, establishes that AI systems function as assistive technologies with restricted capabilities that cannot perform judicial functions. AI supports research and summarisation through SUPACE, translation through SUVAS, transcription through TERES, and metadata extraction through LegRAA. Any predictive or risk-scoring instrument (for example, recidivism scoring) is classed as high-risk/sensitive and requires a pilot study and an ethics assessment.
Researchers applying a vulnerable-group lens recommend treating caste, language, and gender discrimination as the primary focus, with EU-style tier systems as a secondary priority. Private member proposals from 2025 would establish statutory ethics committees to oversee high-impact systems.
India uses a practical system of AI classification that focuses on an AI system's entire operational lifecycle and its potential to cause harm. It applies OECD-based capability/functionality methods to evaluate risks in a way that matches India's social and cultural diversity: rather than adopting the European Union's outright prohibitions, it emphasises the voluntary "Safe & Trusted AI" sutras: Trust, People First, Fairness & Equity, Accountability, Understandable by Design, Safety/Resilience/Sustainability, and Innovation over Restraint.
The framework enables fast implementation of judicial efficiency tools while protecting against India-specific security threats, clearly distinguishing it from existing international systems.
APPEARANCE IN THE OFFICIAL DATABASE
The term "AI system" appears in official and non-government collections focused on AI regulation and technology governance in the Indian justice system.
As of March 2026, India lacks a centralized, mandatory pre-market registry for AI systems. By contrast, the EU AI Act requires developers of high-risk systems to register them in a public database, including system purpose, training data, and mitigation measures, before launch.
The concept instead appears in voluntary governance frameworks, incident-tracking databases, data-protection compliance mechanisms, and project documentation supporting judicial work. In the Indian court system, the term "AI system" refers to assistive/narrow tools, including SUPACE, SUVAS, LegRAA, and other systems operating under e-Courts Phase III[15] and IndiaAI Mission papers. Traceability is achieved not through a public registry but through the IndiaAI Mission, DPDP Act duties, and the national AI incidents database. The table below summarises its main appearances (focusing on judicial relevance):
| Database Type | Title / Source | Managing Entity | Nature of Appearance | Access Link |
| Official | India AI Governance Guidelines (November 2025) | MeitY/National AI Mission | Defines “AI system” with India-specific risk classification and lifecycle stages; judicial examples (SUPACE, SUVAS) explicitly referenced; fields include purpose, training data parameters, risk mitigations, human oversight requirements, and Safe & Trusted AI sutras. | https://indiaai.s3.ap-south-1.amazonaws.com/docs/guidelines-governance.pdf |
| Official | National AI Incident Database (schema via TEC 57090:2025) | MeitY / Telecom Engineering Centre (TEC) + Office of the Principal Scientific Adviser | Tracks “AI system” incidents/harms across sectors (including judiciary); fields: AI Application Name(s), version, purpose, incident type, severity, bias/hallucination logs, mitigation measures. Judicial tools flagged for explainability. | https://www.tec.gov.in/pdf/SDs/Standard_TEC_57090_2025_for_AI_Incident.pdf |
| Official | Significant Data Fiduciary Notifications under Digital Personal Data Protection (DPDP) Act 2023 (Rules phased 2025–2027) | Data Protection Board of India (MeitY) | AI systems processing large-scale personal data (e.g., court records) designated as Significant Data Fiduciaries; mandatory DPIA, audit logs, risk assessment fields. Applies directly to judicial AI handling litigant data. | https://meity.gov.in/ (DPDP section) & Rules notifications |
| Official | IndiaAI Ecosystem Portals (AIKosh + Case-Study Repository) + e-Courts Phase III Monitoring | MeitY / IndiaAI Mission + Supreme Court e-Committee / NIC | Catalogs deployed AI systems in judiciary (SUPACE, SUVAS, TERES, LegRAA); fields: system architecture, training data (court-owned), performance metrics, deployment status. AIKosh lists models/datasets used in justice tools. | https://aikosh.indiaai.gov.in/ and https://indiaai.gov.in/ (case studies) |
| Mixed-Entity | PSA Techno-Legal Framework White Paper + IndiaAI Mission Reporting | Office of the Principal Scientific Adviser + IndiaAI Mission | References “AI system” in judicial context with proposed national database linkage; fields: sector (judiciary), risk category, governance compliance, incident logs. | https://psa.gov.in/CMS/web/sites/default/files/publication/AI-WP_TechnoLegal.pdf |
Each of these databases reflects a techno-legal, data-category definition of an "AI system" (project proposals, DPIAs, and incident logs) in which model provenance and traceability are integral to India's voluntary, principle-based approach. For the judicial sector, the focus is on government/NIC-developed assistive technology per the e-Courts and IndiaAI repositories, with no EU-like public pre-market registration; compliance instead revolves around the 2025 India AI Governance Guidelines.
RESEARCH IN AI SYSTEMS IN THE INDIAN JUDICIARY SYSTEM CONTEXT
Research on AI systems in India's courts has grown substantially since 2021-2022, especially around 2025-2026. This increase matches the rollout of tools like SUPACE, SUVAS, LegRAA, and TERES during the e-Courts Phase III initiative. The published work includes academic papers, policy reports, think-tank reviews, and official documents. These mainly focus on how AI could help clear the huge backlog of cases (more than 50 million across the country), improve access to justice, especially for people facing language barriers, and address ethical and technical risks. Compared to formal documents, this research explores both practical impacts and challenges, sometimes highlighting issues or gaps that official sources might not fully cover.[16]
Overview of Key Research Clusters (2024–2026)
Evaluations of current tools, which include SUPACE (legal research and summarising), SUVAS (which has translated between 36,000 and 50,000 decisions into over 16 languages), LegRAA (document analysis), and transcription aids like TERES/Adalat AI.
- Key references:
- The Supreme Court's Centre for Research and Planning released a White Paper on Artificial Intelligence and the Judiciary in November 2025, which maps Indian tools, compares them globally, and discusses risks such as hallucinations and bias, along with possible solutions such as human-in-the-loop processes and audits.
- Case studies from IndiaAI.gov.in and PIB releases (2025-2026) highlight the time saved in legal research and improved multilingual access.
- Academic papers from Atlantis Press (2026), IJLSI (2024), and KUEY (2024) explore the potential to reduce case backlogs by automating routine clerical tasks.
Rights-Based and Ethical Critique
These focus on constitutional issues, especially Articles 14 and 21, which cover equality, timely justice, and fairness.[17]
- The main report by UNDP-DAKSH-Digital Futures Lab titled AI for Justice: Ethical, Fair, and Robust Adoption in India's Courts (February 2026) looks at unofficial, unregulated uses, problems with governance, violations involving bias and privacy, and suggests practical tools for courts to assess AI use.
- Other sources like The Hindu Centre Policy Watch (2024-2025 updates) and Taylor & Francis publications (2026) investigate access to justice, accountability of algorithms, and transparency.
Risk and Bias-Focused Analysis
Concerns include bias based on caste, gender, and language in data sets, hallucinations where AI produces false citations causing incorrect rulings, lack of transparency in algorithms, and difficulties explaining AI decisions.
Research from IJRASET (2025), Virtuosity Legal (2025), and several 2026 papers warns that these issues can deepen existing inequalities. The Supreme Court White Paper records real examples, such as trial courts relying on fake precedents.
Comparative & Future-Oriented Perspectives
Compared to China, the US, and the EU, a gradual, rights-first approach to adoption is suggested for India. Various studies from 2024 to 2026 show that India presently uses AI primarily as a support tool, while other countries are experimenting with different and more autonomous systems.
How Research Goes Beyond Official Documents
Official reports such as the India AI Guidelines 2025, e-Courts Vision, and Supreme Court PIB releases mostly present a positive, technical view that AI boosts efficiency and requires human control, focusing on implementation benefits.
- Research, however, digs deeper:
- It connects AI use to constitutional rights concerning equality and speed of justice, warning about the risks of bias (as seen in DAKSH-UNDP and Hindu Centre reports).
- It points out problems like hallucinations, opaque systems, and excessive reliance on AI, issues that official materials have largely downplayed.
- On governance, DAKSH-UNDP proposes ethical evaluation tools, while academics call for mandatory safeguards like explainable AI and bias audits that current loose standards don’t require.
- The critique considers India's social realities, like caste, language diversity, and digital access gaps, promoting localised AI solutions and indigenous knowledge integration, which contrasts with the global or EU-focused official references.
- On accountability, recent Supreme Court rulings reveal risks such as false citations by AI leading to misbehaviour charges and call for institutional reforms like AI committees and audits beyond routine administration.
In short, official documents treat AI as a safe efficiency tool under strict human control, whereas research views AI as a high-stakes intervention needing strong constitutional protections, real-world testing, and governance focused on rights to preserve trust in the judiciary. The literature from 2025–2026 is moving toward advocating for proactive regulations before AI scales further.
DATA CHALLENGES
The evaluation of AI tools in India’s judiciary (such as SUPACE, SUVAS, LegRAA, and TERES/Adalat AI) faces significant challenges due to entrenched data, institutional, ethical, and infrastructure barriers. These difficulties are highlighted in the Supreme Court’s White Paper (November 2025), DAKSH-UNDP-Digital Futures Lab report (February 2026)[18], academic research (2025–2026), and policy critiques[19]. They limit the ability to properly assess impact, detect bias, explain decisions, ensure compliance with rights, and scale up tools.
Main Data Challenges
- Poor Data Quality: Incomplete and Inconsistent Judicial Records: Digitised records from e-Courts Phases I-III often have scanning and OCR errors (particularly for regional languages), missing metadata such as litigant caste or gender and case type, and inconsistent formats with incomplete fields. This reduces the accuracy of AI models, leading to errors in translation or precedent retrieval. Reports note that the lack of representative data keeps systemic inequalities in place.
- Fragmented and Non-Interoperable Systems: Judicial data is spread across different platforms like NJDG, ICMIS, High Court portals, and state registries without a uniform system, making analysis across courts or regions difficult. Rural and lower courts lag in digitisation, which worsens the digital divide.
- Privacy, Confidentiality, and Security Limits (under DPDP Act 2023): Court records hold sensitive personal information. Strict anonymisation is required, and sharing data outside the court risks privacy breaches.[20] Courts keep data internal, limiting independent academic research. DAKSH-UNDP warns about risks from using insecure tools like public clouds.
- Opaque Tools, Lack of Transparency and Auditability: Information about training data, model settings, performance, or error logs for tools like SUPACE or SUVAS is limited. Hallucinations and decision-making priorities are not clear, making independent checks hard. No central database tracks AI issues in courts.
- Algorithmic Bias and Representation Gaps: Training data reflects historic inequalities relating to caste, gender, class, and region. Underrepresented groups and dialects risk being misrepresented or ignored, leading to biased outputs. The Supreme Court White Paper points to global bias examples and warns this could worsen caste-based discrimination digitally.
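One simple, concrete form a bias audit of such tools could take is comparing error rates across language or demographic groups. This sketch assumes per-group spot-check logs exist; the group names, numbers, and threshold below are invented for illustration.

```python
def error_rate(outcomes):
    """Fraction of spot-checked outputs marked erroneous for one group."""
    return sum(1 for o in outcomes if o == "error") / len(outcomes)

# Hypothetical per-group logs of a translation tool's spot-checked outputs.
logs = {
    "hindi":   ["ok"] * 95 + ["error"] * 5,
    "tamil":   ["ok"] * 88 + ["error"] * 12,
    "santali": ["ok"] * 70 + ["error"] * 30,
}

rates = {group: error_rate(out) for group, out in logs.items()}
print(rates)   # per-group error rates, e.g. hindi 0.05 vs santali 0.30

# Flag when the worst-served group's error rate is far above the best.
if max(rates.values()) / min(rates.values()) > 2:
    print("disparity alert: review training-data coverage")
```

A recurring metric like this, logged over time, is exactly the kind of evidence an incident database or independent audit could act on.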
Infrastructure and Resource Shortfalls
Advanced AI models require strong computing resources, which are limited. Hardware, network access, and digital skills are weaker in rural courts. Despite large budgets in e-Courts Phase III, transitioning from older systems remains challenging. Evaluations are mostly anecdotal, mentioning shorter research times but lacking hard data on case disposal rates, error rates, or rights effects after AI use. No standard system tracks wider impacts. Together, these obstacles hinder evidence-based policymaking, constitutional compliance with Articles 14 and 21, and fair AI deployment.
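A modest first step toward replacing anecdotal evaluation with measurable data quality is an automated completeness check on judicial records. The sketch below audits records for missing metadata fields; the field names are hypothetical, not the actual e-Courts schema.

```python
# Required metadata fields (hypothetical, for illustration only).
REQUIRED = ["case_id", "case_type", "language", "filing_date", "court"]

def missing_fields(record):
    """Return the required metadata fields absent or empty in a record."""
    return [f for f in REQUIRED if not record.get(f)]

records = [
    {"case_id": "A-101", "case_type": "civil", "language": "hi",
     "filing_date": "2024-05-01", "court": "district"},
    {"case_id": "A-102", "case_type": "", "language": "ta",
     "filing_date": None, "court": "high"},   # incomplete record
]

incomplete = {r["case_id"]: missing_fields(r) for r in records if missing_fields(r)}
print(incomplete)   # {'A-102': ['case_type', 'filing_date']}
```

Running such checks at ingestion time would give courts hard counts of incomplete records per court and per field, rather than impressions.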
Way Ahead
To tackle these data problems, senior judges, stakeholders, academics, and researchers have suggested ways to:
- Standardise and harmonise existing data and data patterns
- Drastically improve future data collection methods
- Provide adequate support for systemic analysis.
Standardizing and Harmonizing Data
Recommendations include applying uniform metadata standards across e-Courts Phase III platforms, with required anonymised fields and common formats, managed largely through joint collaboration between NIC and the Supreme Court. Using the IndiaAI Dataset Platform (AIKosh) for proper, secure, and anonymised subsets is also suggested, alongside cooperation between central and state governments.[21] Legacy datasets should undergo bias and fairness audits before use, applying privacy techniques like differential privacy. Court-owned repositories that comply with data protection laws are preferred.
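The differential-privacy technique mentioned above can be sketched minimally: release an aggregate count with Laplace noise calibrated to sensitivity/epsilon, so the presence of any single record cannot be confidently inferred. The epsilon value and count below are illustrative assumptions, not recommendations.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count, epsilon=1.0, sensitivity=1.0):
    """Release a count perturbed enough to mask any single record."""
    return true_count + laplace_noise(sensitivity / epsilon)

# e.g. number of cases of a hypothetical type in an anonymised release
noisy = private_count(1200, epsilon=0.5)
print(noisy)   # close to 1200, perturbed by noise with scale 2
```

Smaller epsilon means stronger privacy but noisier statistics, which is the trade-off any court data release would have to tune.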
- Improving Future Data Collection: Proposals include building privacy-by-design features such as automatic redaction and consent frameworks for non-sensitive information. AI-assisted tools could help spot errors and enrich metadata during filing, as demonstrated by IIT Madras prototypes. Expanding multilingual and inclusive data gathering, with dialect-aware speech recognition and sampling from marginalized groups, is encouraged. Formal data governance policies should govern acquisition, quality checks, and version control. Judicial data labs, coordinated by IndiaAI, could provide secure environments for researchers under strict agreements.
- Enabling Systemic Analysis: Recommendations include adopting the DAKSH-UNDP framework, which covers institutional readiness, rights impact by use case, technical assessments, and ongoing monitoring with audit logs and bias tracking.[22] Creating a central database to record AI incidents and performance, following national technical standards, is also recommended. Explainable AI features and human oversight should be made mandatory, and independent, transparent audits with public reporting and vendor accountability should be enforced. Pilots and long-term studies, conducted with academic partners, on impacts such as case-backlog reduction are important. Judicial training on AI ethics, AI committees in high courts, and funding for infrastructure improvements in rural courts are also essential.
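To illustrate the differential-privacy technique named above, a court-statistics publisher can add calibrated Laplace noise to any released count. This is a minimal sketch, not part of any official e-Courts tooling; the epsilon value is an assumed illustration:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(true_count: int, epsilon: float = 1.0, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy.

    Adding or removing one case record changes the count by at most
    `sensitivity`, so Laplace noise with scale sensitivity/epsilon
    masks whether any individual record is in the dataset.
    """
    return true_count + laplace_noise(sensitivity / epsilon)

# Hypothetical example: publish an approximate count of pending cases
noisy = dp_count(12345, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy; the released figure stays useful for aggregate trends while no single record can be inferred from it.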
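The automatic-redaction feature proposed above could, at its simplest, be pattern-based. The sketch below is purely illustrative: the patterns (an assumed 12-digit Aadhaar-like ID, an Indian mobile number, an email) and the sample text are hypothetical, not the rules of any actual filing system:

```python
import re

# Illustrative patterns only; a production system would need far more
# robust detection (names, addresses, case-specific identifiers).
PATTERNS = {
    "AADHAAR": re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"),  # 12-digit ID
    "PHONE": re.compile(r"\b[6-9]\d{9}\b"),               # Indian mobile number
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace matched personal identifiers with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

sample = "Petitioner (ph 9876543210, a.b@example.com) holds ID 1234 5678 9012."
redacted = redact(sample)
```

Labelled placeholders (rather than blank deletion) preserve the document's readability while keeping an audit trail of what category of data was removed.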
These steps, tied to e-Courts Phase III funding and coordinated under the IndiaAI Mission, aim for responsible scaling between 2027 and 2030, protecting judicial integrity while tackling delays and access issues. The broad consensus is that AI must remain a support tool, with data control and rights protection as core principles.
REFERENCES
1. Digital Personal Data Protection Act 2023, available at https://www.indiacode.nic.in/ (accessed 20 March 2026).
2. Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021, available at https://www.meity.gov.in/ (accessed 20 March 2026).
3. Digital Personal Data Protection Act 2023, ss 8-11, available at https://www.indiacode.nic.in/ (accessed 20 March 2026).
4. Information Technology Act 2000, ss 43A, 79, available at https://www.indiacode.nic.in/ (accessed 20 March 2026).
5. Bharatiya Nyaya Sanhita 2023, available at https://www.indiacode.nic.in/ (accessed 20 March 2026).
6. Information Technology Department, Government of Tamil Nadu, 'Safe and Ethical Artificial Intelligence Policy 2020' (2020), available at https://it.tn.gov.in/sites/default/files/2021-06/TN_Safe_Ethical_AI_policy_2020.pdf (accessed 28 March 2026).
7. Artificial Intelligence Act 2024, art 3(1), available at https://eur-lex.europa.eu/ (accessed 20 March 2026).
8. OECD, 'Recommendation of the Council on Artificial Intelligence' (2023), available at https://legalinstruments.oecd.org/ (accessed 20 March 2026).
9. UNESCO, 'Recommendation on the Ethics of Artificial Intelligence' (2021), available at https://unesdoc.unesco.org/ (accessed 20 March 2026).
10. Infocomm Media Development Authority, 'Artificial Intelligence – AI Verify' (2025), available at https://www.imda.gov.sg/about-imda/emerging-technologies-and-research/artificial-intelligence (accessed 28 March 2026).
11. Ministry of Electronics and Information Technology (MeitY), 'India AI Governance Guidelines' (November 2025), available at https://www.meity.gov.in/ (accessed 20 March 2026).
12. Supreme Court Center for Research and Planning, 'White Paper on Artificial Intelligence and Judiciary' (November 2025), available at https://main.sci.gov.in/ (accessed 20 March 2026).
13. Office of the Principal Scientific Adviser, 'AI Governance Some Techno-Legal Framework' (January 2026), available at https://psa.gov.in/ (accessed 20 March 2026).
14. NITI Aayog, 'National Strategy for Artificial Intelligence' (2018), available at https://niti.gov.in/ (accessed 20 March 2026).
15. Department of Justice, 'Policy and Action Plan (Phase III) of the e-Courts Project' (2023), available at https://doj.gov.in/ (accessed 20 March 2026).
16. UNDP, DAKSH and Digital Futures Lab, 'AI for Justice: Ethical, Fair, and Robust Adoption in India's Courts' (February 2026), available at https://www.in.undp.org/ (accessed 20 March 2026).
17. Constitution of India, arts 14, 21.
18. UNDP, DAKSH and Digital Futures Lab, 'AI for Justice: Ethical, Fair, and Robust Adoption in India's Courts' (February 2026), available at https://www.in.undp.org/ (accessed 20 March 2026).
19. Supreme Court Center for Research and Planning, 'White Paper on Artificial Intelligence and Judiciary' (November 2025), available at https://main.sci.gov.in/ (accessed 20 March 2026).
20. Digital Personal Data Protection Act 2023.
21. Ministry of Electronics and Information Technology, 'IndiaAI Mission: National Data Management Office and AIKosh Guidelines' (2024).
22. UNDP, DAKSH and Digital Futures Lab (n 2).
