Algorithmic Transparency
What is Algorithmic Transparency?
Algorithmic transparency is the ability of internal or external actors to obtain information about, monitor, test, critique, or evaluate the logic, procedures, and performance of an algorithmic system. It is based on the principle[1] that the factors which influence the decisions made by algorithms should be visible, or transparent, to the people who use, regulate, and are affected by systems that employ those algorithms. Algorithmic transparency should enable monitoring, checking, criticism, or intervention by interested parties. Note that there is no single definition[1] of algorithmic transparency: the term is intentionally broad and can carry multiple meanings.
Put simply, it is the idea that when a computer system makes a decision that affects a real person's life, the reasons behind that decision should not be permanently hidden.
It is also important to note what algorithmic transparency is not. It is not simply the act of publishing source code: giving access to the source code does not necessarily lead to meaningful transparency. Conversely, intellectual property or trade secrets can be protected while meaningful transparency is still achieved. Furthermore, this definition of algorithmic transparency emphasises the interpretability of information[2] rather than mere disclosure. Meaningful transparency requires not just openness, but comprehensibility.
As stated by Marc Rotenberg, “At the intersection of law and technology - knowledge of the algorithm is a fundamental right.”
Importance of Algorithmic Transparency
Algorithmic transparency has become essential in response to the expanding role of automated decision-making in everyday life. As digital systems increasingly shape access to services, information, and opportunities, individuals must be able to understand how and why these systems function. Transparency enables[3] users to recognise the business models behind ostensibly 'free' technologies, particularly how personal data is collected and used, and encourages more informed choices about privacy. Algorithms are not neutral: they can reinforce inequality, produce harmful errors, and even influence democratic processes. This makes it critical to hold those deploying such systems accountable for their broader societal impact.
Beyond awareness, transparency directly affects how users engage with algorithmic systems[4]. By revealing otherwise hidden processes, it helps individuals understand that decisions are being mediated by algorithms rather than neutral forces. This allows users to move beyond blind trust and instead critically evaluate outputs. When users are given explanations of how inputs produce results, they are better equipped to assess correctness, identify errors, and question decisions where necessary. Transparency also makes systems more interpretable and less arbitrary, fostering informed reliance while enabling users to challenge potential biases and demand accountability.
Transparency is closely tied to accountability in governance[5]. It is widely seen as a key mechanism for ensuring that both governments and corporations can be held responsible for algorithmic outcomes. However, full disclosure, such as revealing source code, is often unnecessary and can even be counterproductive. More effective accountability can be achieved through targeted disclosures like performance benchmarks and aggregate results, which balance oversight with concerns about trade secrets and system manipulation. Policy frameworks further reinforce this by outlining principles such as awareness, explanation, auditability, and access to remedies, all aimed at ensuring fair and contestable systems[6].
The absence of transparency can have serious consequences. Opacity[7] often leads to speculation about hidden biases or manipulation, eroding trust in both technology companies and public institutions. Real-world examples, such as concerns surrounding Spain’s VioGen system[8], show how lack of oversight in sensitive contexts can undermine fairness and accountability, highlighting the need for audits and user engagement.
Algorithmic transparency is increasingly recognised as a human rights issue[9]. The right to access information supports demands for openness, particularly when automated systems are used by the state. In the age of AI, transparency is also crucial to address the ‘black box’[10] nature of complex systems, where decision-making processes are difficult to interpret. Ensuring that the logic, inputs, and purpose of algorithms are understandable is therefore key to enabling audits, detecting bias, and ensuring fairness, making transparency an ethical and legal necessity for responsible AI deployment.
History and Evolution
Although the phrase 'algorithmic transparency' was coined[11] in 2016 by Nicholas Diakopoulos and Michael Koliska in the context of algorithms deciding the content of digital journalism services, the underlying principle dates back to the 1970s and the rise of automated systems for scoring consumer credit.
In the 1970s, consumer credit bureaus shifted to statistical scoring systems. Bureaus had begun computerising their massive consumer records in the 1960s and 1970s, but the limited memory of early computers forced them to keep only coarse variables, such as how many credit cards someone held, while more nuanced information was dropped from credit records. The need to make these early computerised decisions legible to the public was one of the earliest practical demands for algorithmic transparency.
The regulatory response arrived not long after. The Equal Credit Opportunity Act of 1974 made it illegal to deny credit based on factors like race, sex, marital status, or religion, and the Fair Credit Reporting Act of 1970[12] requires credit reporting agencies to ensure that their data is fair, private, and accurate. These laws did not directly address algorithmic transparency in the modern sense, but they were among the first to acknowledge that automated systems affecting citizens required some form of legal accountability.
The surge in machine learning applications following the success of deep neural networks in 2012, such as AlexNet's ImageNet victory, led to amplified concerns over model opacity, as these complex architectures prioritized predictive accuracy over human-understandable decision processes. This era marked a shift where algorithms increasingly influenced critical sectors like healthcare, finance, and criminal justice, prompting initial calls for transparency to enable scrutiny of potential errors or biases.
Legislative responses began to proliferate. The GDPR came into force in 2018, introducing a qualified right to explanation for automated decisions. The EU AI Act was adopted in 2024, representing the culmination of years of regulatory development. Simultaneously, companies began developing their own transparency tools. For example, Google's models like BERT and MobileNet used model cards to disclose training data sources, biases identified, and mitigation strategies. Meta released its Responsible AI Standard in 2021, mandating transparency reports for AI systems deployed at scale. IBM's AI Explainability 360 toolkit, launched in 2019, provides open-source tools for generating explanations of model predictions.
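To make the model-card idea concrete, the sketch below shows a minimal, hypothetical structure for the kinds of disclosures such documents carry (training data sources, known biases, mitigations, evaluation results). The `ModelCard` class and its field names are illustrative assumptions for exposition, not Google's published model-card schema.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """A minimal, hypothetical model card; the fields are illustrative,
    not any vendor's published schema."""
    model_name: str
    intended_use: str
    training_data_sources: list = field(default_factory=list)
    known_biases: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)
    evaluation_metrics: dict = field(default_factory=dict)

card = ModelCard(
    model_name="toy-sentiment-classifier",
    intended_use="Research demos only; not for employment or credit decisions.",
    training_data_sources=["Public movie reviews (2015-2020)"],
    known_biases=["Lower accuracy on non-English idioms"],
    mitigations=["Augmented training set with translated reviews"],
    evaluation_metrics={"accuracy": 0.91, "f1": 0.89},
)

# Publishing the card alongside the model is the transparency step.
print(json.dumps(asdict(card), indent=2))
```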
Official Definition of Algorithmic Transparency
Algorithmic Transparency as defined in Indian legislations
The Digital Personal Data Protection Act, 2023
As of now, India is yet to develop a standalone definition of Algorithmic Transparency and instead addresses it indirectly. An obligation towards Algorithmic Transparency is outlined in the responsibilities of Data Fiduciaries (those collecting the data) to ensure the "completeness, accuracy and consistency"[13] of personal data if it is used to make a decision that affects the Data Principal (those giving the data). Further, 'Significant Data Fiduciaries' must ensure algorithmic compliance: any algorithmic software used for data processing must respect the rights of Data Principals.
The IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 and Amendments
The IT Rules, 2021, and the Code of Ethics and Procedure and Safeguards in Relation to Digital/Online Media were notified[14] by the Central Government under Section 87 of the Information Technology Act, 2000. These Rules came about as a response to growing concerns around the lack of transparency, accountability, and rights of users in digital media. They supersede[15] the Information Technology (Intermediaries Guidelines) Rules, 2011.
Rule 3(1)(m)[16] of the Rules directs that "the intermediary shall take all reasonable measures to ensure accessibility of its services to users along with reasonable expectation of due diligence, privacy and transparency;"
Rule 3(1)(m) was not part of the original 2021 Rules; it was inserted by the 2022 Amendment, which requires intermediaries[17] to take responsibility for ensuring the accessibility of their services to users along with a reasonable expectation of transparency, privacy, and due diligence.
The rule is deliberately open-textured: it imposes these obligations without further elaboration. While existing legislation requires all private entities to follow the accessibility requirements prescribed for Indian Government websites, the specific ambit of the new obligations under the 2022 Amendment is unclear[18].
The intent of the amendment is to ensure that all users are able to access the services made available by intermediaries, and that intermediaries are mindful of users' expectations of diligence, privacy, and transparency.
The February 2026 amendment to the IT Rules represents India's most assertive regulatory intervention into AI-generated content to date. By compressing takedown timelines, mandating technical traceability, and redefining intermediary obligations, the government has shifted from reactive moderation toward proactive algorithmic governance.
For further information, see the Justice Definitions Project's Deepfakes page.
Algorithmic Transparency as defined in international instrument(s)
UNESCO Recommendation on the Ethics of Artificial Intelligence, 2021
Adopted unanimously in November 2021 by all 193 UNESCO Member States, the UNESCO Recommendation[19] is technically non-binding but politically significant as a truly global consensus instrument.
The UNESCO definition of algorithmic transparency comprises a two-part framework: 'transparency' proper, and 'explainability' as a linked but distinct concept:
Transparency aims at providing appropriate information to the respective addressees to enable their understanding and foster trust. Specific to the AI system, transparency can enable people to understand how each stage of an AI system is put in place, appropriate to the context and sensitivity of the AI system. It may also include insight into factors that affect a specific prediction or decision, and whether or not appropriate assurances, such as safety or fairness measures, are in place. In cases of serious threats of adverse human rights impacts, transparency may also require the sharing of code or datasets.
Algorithmic Transparency as defined in official government report(s)
NITI Aayog Approach Document for India Part 1 – Principles for Responsible AI
The Approach document[20] highlights how the Supreme Court, in its interpretation of the Constitution, has held that transparency in decision-making is critical even for private institutions. The Constitution guarantees accountability of all State action to individuals and groups. The paper outlines seven broad principles for responsible management of AI systems: 1) safety and reliability; 2) inclusivity and non-discrimination; 3) equality; 4) privacy and security; 5) transparency; 6) accountability; and 7) protection and reinforcement of positive human values.
The principle of transparency has been specifically defined in the context of AI. It states that "The design and functioning of the AI system should be recorded and made available for external scrutiny and audit to the extent possible to ensure the deployment is fair, honest, impartial and guarantees accountability".
Algorithmic Transparency as defined in case law(s)
SCHUFA Case
One of the most consequential developments in the GDPR's application to algorithms came through the SCHUFA case[21] before the Court of Justice of the European Union. In SCHUFA (CJEU Case C-634/21), Europe's highest court held that automated credit scores used to assess individuals' eligibility for services qualified as automated decision-making under Article 22 of the GDPR. Because the scores significantly shaped contractual outcomes, the court found that individuals were entitled to safeguards[22] such as explanation, human review, and the right to contest the decision.
Functional Aspects of Algorithmic Transparency
Two closely related concepts[23] frequently appear alongside transparency in technical literature:
- Interpretability refers to how easily an algorithm's internal decision-making processes can be communicated to a human observer.
- Explainability refers to how easily a specific outcome or decision reached by an algorithm can be described and justified.
These two concepts are related but not identical: a system could produce explainable outputs while remaining largely uninterpretable in its internal workings.
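The distinction can be made concrete in code. In the sketch below, a shallow decision tree is interpretable because its internal rules can be printed directly, while a random forest is treated as a black box whose behaviour is explained post hoc via permutation importance. This is one illustrative way to operationalise the two terms using scikit-learn, not the only one.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
X, y, feature_names = data.data, data.target, data.feature_names

# Interpretable: the model's internal decision logic is directly readable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(feature_names)))

# Explainable (post hoc): the forest's internals are opaque, but its
# behaviour can still be attributed to input features after the fact.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(forest, X, y, n_repeats=5, random_state=0)
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"{feature_names[i]}: {result.importances_mean[i]:.3f}")
```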
There is also considerable ambiguity in how transparency relates to other key concepts, such as accountability or fairness. Here, it is important to note that transparency is merely a means[2] to accountability and does not guarantee it. A perfectly transparent algorithm can still be deeply unfair.
International Experience
European Union
EU AI Act
The European Union's AI Act[24], which entered into force in August 2024, represents the world's most comprehensive attempt to regulate algorithmic systems through law. It requires organisations to disclose their involvement with AI and to provide clear explanations[25] of the decision-making processes of AI systems.
The Act adopts a risk-based approach[25], dividing AI systems into four categories, namely unacceptable risk (banned outright), high-risk, limited risk, and minimal risk, and calibrating transparency obligations accordingly.
For high-risk systems, such as those used in hiring, credit scoring, law enforcement, and education, the obligations are stringent. The EU AI Act mandates that high-risk AI systems provide explanations for their outputs and disclose training data summaries to users and authorities. This includes requirements for transparency in decision-making processes, such as notifying individuals when they interact with AI systems and ensuring traceability of data used in automated decisions.
Article 50 of the Act[26] further covers transparency for systems that interact directly with the public. Providers of AI systems that interact with people must inform them that they are interacting with an AI system and not a human, unless this is obvious. Providers of AI-generated or manipulated content must facilitate the identification of such content, marking it in a machine-readable manner and enabling related detection mechanisms[27].
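As a toy illustration of what a machine-readable mark might look like, the sketch below appends a base64-encoded provenance label to generated text and shows how a detector could recover it. This is a purely hypothetical scheme for exposition; real deployments rely on standardised provenance metadata or watermarking techniques rather than anything this simple.

```python
import json, base64

# Hypothetical marking scheme, illustrative only: not the format
# any regulator or provider actually prescribes.
def mark_ai_generated(text: str, system: str) -> str:
    label = base64.b64encode(
        json.dumps({"ai_generated": True, "system": system}).encode()
    ).decode()
    return f"{text}\n<!--ai-label:{label}-->"

def detect_ai_label(content: str):
    """Return the decoded label if the content carries one, else None."""
    marker = "<!--ai-label:"
    if marker not in content:
        return None
    encoded = content.split(marker, 1)[1].split("-->", 1)[0]
    return json.loads(base64.b64decode(encoded))

marked = mark_ai_generated("A generated news summary...", system="demo-llm")
print(detect_ai_label(marked))  # {'ai_generated': True, 'system': 'demo-llm'}
```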
The AI Act does not stand alone; it works alongside the GDPR and creates a layered compliance architecture for entities operating in the EU. This regulatory convergence increasingly demands robust transparency and explainability: system documentation and notification obligations from both frameworks may create an expectation that systems be not only technically sound but also intelligible to non-expert users.
GDPR
The General Data Protection Regulation[28], which came into force in May 2018, was one of the earliest major legal frameworks to address algorithmic transparency. The central provision is Article 22, which establishes a right not to be subject to purely automated decisions.
Article 22 of the GDPR gives individuals the right not to be subject to decisions based solely on automated processing, including profiling, which significantly affects them. This applies to applications such as credit scoring, job application filtering, and predictive policing. Unless exceptions such as explicit consent or contractual necessity apply, organisations must ensure human oversight and provide meaningful information about the logic involved.
Alongside this, Articles 13 and 14 require organisations to inform individuals about automated processing in their privacy notices.
The GDPR has also been invoked in enforcement actions against AI companies. European regulators, most notably Italy's Garante, have used GDPR's transparency provisions to scrutinise AI systems. Regulators have already taken steps[29] to enforce against organisations for lack of transparency in this space.
United States of America
CCPA
The CCPA[30] (California Consumer Privacy Act) grants consumers the right to know what personal information is being collected about them, to opt out of its sale, and, through subsequent rule-making, to understand and contest automated decisions made using their data.
Both the GDPR and CCPA frameworks allow consumers to request information about the logic behind automated decision-making technologies and the likely outcome of a decision. However, the GDPR Article 22(3) provides an unconditional right to a human review of automated decisions and a detailed explanation of the algorithm's rationale. Under the CCPA, companies are not required to offer a right to appeal unless they also deny the opt-out option.
The CCPA's treatment of automated decision-making has evolved through proposed rulemaking by the California Privacy Protection Agency (CPPA). Consumers can choose not to have their data used in automated decision-making technologies, but only under certain circumstances. For instance, there is no opt-out option when such technologies are used for fraud prevention, hiring, or educational profiling, areas where such tools are likely to have high impact.
Critics have noted that the CCPA's approach[22] to algorithmic transparency risks being too weak in exactly the domains where it matters most. This undermines trust in privacy protections and risks entrenching harmful practices, especially in high-stakes contexts like employment, credit, or healthcare. Harmonising standards around consumer rights, transparency, and risk mitigation could reduce legal friction and build public confidence in algorithmic systems.
Singapore
Model AI Governance Framework
Singapore has voluntary guidelines for individuals and businesses, such as the Model AI Governance Framework[31].
The Model AI Governance Framework (Model Framework), first released in 2019 and updated in 2020, provides detailed guidance on ethical and governance considerations for AI deployment. The Model Framework sets out ethics and governance principles[32] for the use of AI, alongside practical recommendations that organisations can adopt to fulfil these principles. Organisations using AI in decision-making must ensure that the decision-making process is explainable, that the reasons behind a decision can be explained in non-technical terms, and that they are transparent, informing people that AI is being used in respect of them and how it affects them.
In the financial sector specifically, the Monetary Authority of Singapore (MAS) has gone further. In 2018, MAS published the Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore's Financial Sector. Transparency, under FEAT, calls for the proactive disclosure of the use of AI-driven tools to data subjects as part of general communication, and data subjects should be provided with clear explanations of how their data was used in AI-driven decisions that affect them.
In 2024, the Personal Data Protection Commission issued[33] the 'Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems'. The guidelines clarify how the PDPC will interpret the PDPA. While not legally binding, organisations regard the guidelines as standards that should be followed.
Singapore has also developed the AI Verify toolkit[34], an open-source testing framework that enables users to conduct technical tests on their AI models and record process checks, generating testing reports that companies can share with their stakeholders to be more transparent about their AI. This positions Singapore as one of the leaders in practical, tool-based approaches to algorithmic transparency, even in the absence of binding legislation.
Peru
Law No. 31814
Enacted on May 5, 2023, Peru's Law No. 31814[35], formally titled the Ley que promueve el uso de la inteligencia artificial en favor del desarrollo económico y social del país (the law promoting the use of artificial intelligence in favour of the country's economic and social development), establishes a comprehensive structure for how artificial intelligence must be developed, deployed, governed, and monitored across the public and private sectors. It designates the Secretariat of Government and Digital Transformation (PCM-SGTD) as the national technical-regulatory authority responsible for directing, evaluating, and supervising AI development and use.
Within this framework, Article 4(i) on transparency and explainability is one of the law's most consequential provisions. It requires that AI systems incorporate algorithmic transparency, making the logic and criteria behind automated decisions accessible and documented, alongside traceability measures that allow decisions to be reconstructed and reviewed after the fact. Crucially, it goes beyond technical disclosure, demanding that information be communicated in terms genuinely comprehensible to ordinary people affected by AI-driven decisions, not just technical or legal experts. This is reinforced by the law's implementing regulation, which introduced concrete obligations including pre-use disclosures, visible labeling, and explanations when automated outputs affect individual rights. Together, these requirements reflect the law's broader ambition: ensuring Peru's embrace of AI for national development does not come at the cost of accountability or the fundamental right of individuals to understand decisions that shape their lives.
How other countries have understood Algorithmic Transparency through case law
The consequences of algorithmic opacity are best understood not through theory but through precedent. The cases examined below span criminal justice, consumer finance, content moderation, and data protection enforcement. Each illustrates how the absence of transparency enables harm to persist undetected, and how regulatory and judicial intervention, when it has come, has reshaped the terms of accountability. Collectively, they form the strongest empirical basis for why transparency obligations must move beyond principle and into enforceable practice.
United States of America
In November 2019, allegations surfaced on Twitter regarding discrimination by Goldman Sachs, the issuing bank, in extending credit for the Apple Card[36]. Consumers complained that the bank, in its underwriting of Apple Card credit card accounts, offered lower credit limits to women applicants and denied women accounts unfairly. These claims brought the issue of equal credit access to the broader public, sparking vigorous public conversation about the effects of sex-based bias on lending, the hazards of using algorithms and machine learning to set credit terms, and the reliance on credit scores to evaluate the creditworthiness of applicants. Tech entrepreneur David Heinemeier Hansson revealed that Apple's newly launched credit card had offered him a credit limit 20 times higher than his wife's, despite her having a better credit score. Apple co-founder Steve Wozniak shared that he had experienced a similar disparity with his wife. These allegations prompted an immediate investigation by New York's Department of Financial Services (DFS)[36], which stated that "any algorithm that intentionally or not results in discriminatory treatment of women or any other protected class violates New York law."
The case became a flashpoint for algorithmic transparency in financial services. The resulting investigation by the DFS ultimately did not produce evidence of deliberate or disparate impact discrimination but showed deficiencies in customer service and transparency. Nevertheless, the controversy highlighted a significant concern about algorithmic bias in financial services. The incident demonstrated that even when no discriminatory intent exists, opaque algorithmic systems can produce outcomes that disproportionately harm protected groups. Without meaningful transparency, such patterns may go undetected until they become public scandals.
Research that engages with Algorithmic Transparency
Algorithmic transparency is not a singular notion. Rather, it emerges as a blend of related practices that can operate at different levels of depth, address diverse audiences, and focus on varied aspects of an algorithmic system.
Detecting Bias: Does an Algorithm Have to Be Transparent in Order to Be Fair?
The most fundamental distinction[37] is between process transparency, which concerns the internal workings of an algorithm, and outcome transparency, which concerns the results it produces.
Process transparency and outcome transparency correspond to the distinction between process fairness (the fairness of the actual process of making a decision) and outcome fairness (the perceived fairness of the decision itself).
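Outcome transparency often takes the form of aggregate disclosures. As a minimal illustration, the sketch below computes per-group approval rates and the resulting disparate impact ratio from a batch of decisions; the toy decision log and the informal four-fifths threshold are assumptions for the example, not a mandated methodology.

```python
from collections import defaultdict

# Hypothetical decision log: (group label, approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approved = defaultdict(int), defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    approved[group] += ok  # True counts as 1

rates = {g: approved[g] / totals[g] for g in totals}
print("approval rates:", rates)

# Disparate impact ratio: lowest group rate over highest group rate.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}"
      + ("  (below the informal 0.8 threshold)" if ratio < 0.8 else ""))
```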
Transparency as Design Publicity: Explaining and Justifying Inscrutable Algorithms
Design transparency operates at the level of the system's original intentions and construction. It involves disclosing the objectives the algorithm was built to achieve, the fairness definitions it embeds, the data used to train it, and the choices made by its developers. Design transparency recommends explaining the definition of objectives and their motivations to the public; declaring which fairness definition has been adopted and, where possible, justifying that choice is also necessary. This form of transparency is particularly important in high-stakes contexts, exposing the normative choices baked into an algorithm before deployment.
Transparency You Can Trust: Transparency Requirements for Artificial Intelligence between Legal Norms and Contextual Concerns
Prospective transparency informs users about the data processing and the working of the system upfront, describing how the AI system reaches decisions in general. Retrospective transparency, on the other hand, refers to post-hoc explanations and rationales. It reveals for a specific case how and why a certain decision was reached, describing the data processing step by step. Retrospective transparency includes the notion of inspectability and explainability and is important for audit purposes.
State of the Evidence: Algorithmic Transparency
The term ‘meaningful transparency’ also arises in this discourse. It denotes a formulation that extends beyond disclosure to include mechanisms for evaluation. Rather than resolving the debate surrounding algorithmic accountability, this definition seeks to bring more clarity to the concept of algorithmic transparency by identifying some of the main mechanisms that have been proposed to promote it. Under this view, transparency should enable different audiences to approve or disapprove of the use of a system in a given context.
Comparing the Right to an Explanation of Judicial AI by function: studies on the EU, Brazil, and China
Courts around the world are increasingly adopting AI to manage caseloads and streamline judicial processes, but this creates a significant problem: when AI influences legal outcomes, litigants may struggle to understand or challenge what those systems actually did. This 2026 paper by Metikoš and colleagues asks whether people facing AI-assisted judicial decisions have a legally enforceable right to an explanation of those systems. Rather than treating judicial AI as a single phenomenon, the authors classify it into six functional levels, from routine document filing to fully automated verdict generation, and use this as a lens to compare legal frameworks across the EU, Brazil, and China, three jurisdictions covering roughly a quarter of the world's population with very different approaches to AI adoption and regulation.
Across all three jurisdictions, a right to explanation can technically be inferred from existing law, through due process standards, data protection rules, and AI-specific regulation, but in practice it is fragmented and often fails where it matters most. The EU's GDPR only covers fully automated decisions, excluding most assistive judicial AI currently in use, while the AI Act paradoxically imposes no explanation requirement for fully automated systems replacing human judges, without banning them either. Brazil's National Justice Council resolution is the most specialised judicial AI regulation found in the study, but it leaves widely deployed triage and case-clustering systems under weak standards despite their significant influence on outcomes. China, where AI adoption in courts has been most aggressive, offers litigants little beyond a right to be informed that AI was used at all, thanks to broad public authority exemptions in its data protection law.
A particularly important insight is the authors' argument that explainability is not merely a procedural right but a design imperative: if systems are built as opaque black boxes, legal obligations to explain them become practically hollow. This exposes a recurring tension across all three jurisdictions: explanation duties are placed on judges and courts, without ensuring developers build systems capable of being faithfully explained in the first place. The paper stops short of offering concrete legislative proposals, which some readers may find limiting, but its core argument is well made: legal protection against opaque judicial AI remains incomplete everywhere, and most critically so in the highest-stakes scenarios. Legislators, judiciaries, and AI developers all need to treat explainability as a foundational requirement of fair justice, before automation outpaces accountability.
Challenges
Algorithmic transparency sounds straightforward in principle, but delivering it in practice can prove difficult. One of the sharpest difficulties is the demand for meaningful explanations. Legal frameworks like the EU GDPR have responded to public concern by establishing a "right to explanation"[38], which obliges organisations to justify algorithmic decisions in terms that the people affected by them can actually understand. The problem is that what makes an explanation legally sufficient and what makes it genuinely comprehensible do not necessarily coincide. Translating the internal logic of a machine learning model into plain language, without distorting it, is technically demanding. Users are often already primed for scepticism: when decisions seem to emerge from nowhere, or when a system appears to "know" something unsettlingly personal, the reaction is less curiosity than unease[3]. That instinctive distrust of opaque systems is not irrational; it is a reasonable response to a real asymmetry of information.
The tension between public accountability and private interest sits at the heart of most transparency debates. Algorithms are not just technical tools; they are often among a firm's most commercially valuable assets. Treating them as trade secrets is not mere paranoia; it reflects genuine competitive stakes. But the argument against disclosure goes beyond intellectual property. Many platforms operate in what might fairly be called adversarial conditions, meaning environments where revealing how a system works is practically an invitation to exploit it. Even platforms with a genuine commitment to openness draw a line somewhere.
For example, Reddit has been relatively forthcoming about its moderation philosophy, yet still withholds enough about its actual algorithmic mechanisms to prevent bad-faith actors from reverse-engineering ways around them.
In March 2023, Italy's data protection authority, the Garante, temporarily banned OpenAI's ChatGPT from operating in Italy on the grounds that it violated GDPR transparency and data protection principles. The ban was lifted after OpenAI improved its disclosures, but the investigation continued. The Garante's subsequent notification that it is continuing to investigate OpenAI's alleged violations of the GDPR, including its transparency principles, signals that regulators are increasingly willing to use existing data protection law as a lever to force AI companies toward greater transparency.
This case is significant for several reasons. It demonstrated that GDPR transparency obligations apply to generative AI systems, not just to narrower automated decision-making tools. It also illustrated the extraterritorial reach of European data protection law, and its potential to shape the behaviour of global technology companies.
The technical character of modern AI compounds these difficulties considerably. An algorithm does not exist in isolation; it operates on data, and that data is constantly shifting. Examining the code of a system tells you something, but it does not tell you everything, because the same model can behave very differently depending on what it is fed. Discriminatory or harmful outputs are rarely baked neatly into an algorithm's structure. Often, they emerge from the interaction between a model and the particular dataset[39] it is working with at a given moment. This means that even experts with full access to a system's architecture may genuinely disagree about how it operates, or why it produced a particular outcome. Auditing becomes less like inspecting a machine and more like interpreting a relationship between a machine and an ever-changing environment.
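This interaction is easy to demonstrate. In the hedged sketch below, built entirely on synthetic data, the same trained model scores well on data resembling its training distribution and collapses on a shifted population whose labels depend on that population's own baseline, with no change to the code at all.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, center=0.0):
    """Synthetic two-class data: what counts as a 'high' score is
    relative to the population the data is drawn from."""
    X = rng.normal(loc=center, scale=1.0, size=(n, 2))
    y = (X.sum(axis=1) > 2 * center).astype(int)
    return X, y

# Train on one population...
model = LogisticRegression(max_iter=1000).fit(*make_data(2000, center=0.0))

# ...then evaluate on familiar data and on a shifted population.
X_same, y_same = make_data(1000, center=0.0)
X_shift, y_shift = make_data(1000, center=3.0)
print("accuracy on familiar data:", model.score(X_same, y_same))   # near 1.0
print("accuracy on shifted data: ", model.score(X_shift, y_shift)) # near 0.5
```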
All of this points toward a more uncomfortable conclusion: that transparency, on its own, may simply not be enough. Opening the black box does not automatically produce understanding, and understanding does not automatically produce accountability. A system can be fully disclosed and still be impenetrable to anyone without advanced technical expertise. This means that disclosure, without interpretation and enforcement, risks becoming a compliance exercise rather than a meaningful safeguard. What effective oversight may actually require is less focus on internal architecture and more focus on observable outcomes, through testing systems against real-world data, scrutinising the decisions they produce, and building in mechanisms for challenge and redress. Transparency remains a worthwhile and important goal. But it is neither easy to implement honestly nor, by itself, a complete answer to what algorithmic systems can do when left insufficiently governed.
Algorithmic transparency does not have one universally accepted definition. Different disciplines, such as law, computer science, philosophy, and public administration, have each approached it from their own vantage points, producing a rich but sometimes confusing set of meanings. The phrases 'algorithmic transparency' and 'algorithmic accountability' are sometimes used interchangeably, but they have subtly different meanings. Specifically, 'algorithmic transparency' requires that the inputs to the algorithm and the algorithm's use itself be known, but not that they be fair. 'Algorithmic accountability' implies that the organisations that use algorithms must be accountable for the decisions made by those algorithms, even though the decisions are being made by a machine and not by a human being.
This subtle distinction matters enormously in practice. A company can be transparent about how its hiring algorithm works while simultaneously producing discriminatory outcomes; without accountability mechanisms, that transparency serves little purpose.
Way Ahead
Despite significant progress in both law and technology, the current landscape of algorithmic transparency remains inadequate in several important respects. To address this, scholars and practitioners have devised various reforms:
Mandatory Algorithmic Impact Assessments
One of the most widely supported proposals is the requirement for Algorithmic Impact Assessments (AIAs), which are systematic evaluations of an algorithm's potential harms before deployment. The Algorithmic Accountability Act and the Mind Your Own Business Act specify that providers of high-risk automated decision systems must conduct an impact assessment to evaluate the system development process, including how the platform designed and trained the system; outline the costs and benefits of the system; and consider whether the system produces any adverse impacts related to accuracy, fairness, bias, discrimination, privacy, and security. Extending such requirements beyond the US context to all high-risk AI deployments globally would represent a significant step forward.
Independent External Audits
Internal audits, while a widely chosen option, are frequently insufficient. Internal audit reports are often diluted because the credibility of the auditors, the objectives of the audit, the methodology for enforcing policy changes, and the metrics for measuring successes and failures do not adhere to any predetermined or unanimously accepted standards. External audits will therefore be necessary to ensure algorithmic transparency, as they more reliably 'signal trustworthiness and compliance to external audiences.' Mandating periodic external audits, particularly for high-risk systems, would provide meaningful accountability.
Mandatory Disclosure for Public Sector Algorithms
Mandatory rules governing the disclosure of information about automated decision-making systems are essential. Algorithmic transparency will not materialise if it is left to the market to decide what should be transparent, nor are non-binding ethical artificial intelligence frameworks enough to hold the state accountable for using computational systems to guide or automate decision-making processes. Governments that use algorithms to determine who receives welfare benefits, immigration decisions, or parole conditions have a particular obligation to be transparent, since affected individuals cannot simply opt out of dealing with the state.
Secure Whistleblowing Mechanisms
Whistleblowing systems offer internal oversight by allowing employees to assess the truthfulness of disclosures and reveal the unlawfulness of corporate behaviours that have not been disclosed by firms. Such transparency rules can be useful in reducing algorithmic opacity to facilitate stakeholder oversight, risk assessments, and enforcement of law, ultimately safeguarding data privacy, human rights, and democratic norms.
Retrieval-Augmented Generation (RAG)
A promising direction in advancing algorithmic transparency lies in architectures like Retrieval-Augmented Generation[40]. Unlike conventional large language models that store knowledge opaquely within billions of parameters, RAG combines a neural retriever with a generative model. This allows the system to pull from an inspectable, external knowledge source (such as Wikipedia) when producing outputs. This means the sources behind a model's response can be surfaced and audited, directly addressing one of the core challenges of transparency: ‘the inability to trace why a model says what it says’. RAG also enables knowledge to be updated without retraining, making AI systems more accountable and correctable over time. As algorithmic transparency standards evolve, hybrid architectures like RAG offer a technical path toward AI that is not only more accurate, but more explainable and open to scrutiny.
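A minimal sketch of the retrieval half of this idea appears below: a small corpus is indexed with TF-IDF, a query retrieves its nearest sources, and those sources are returned alongside the answer so that provenance can be audited. The `generate` function is a deliberately hollow placeholder for whatever language model a real system would condition on the retrieved passages; nothing here reflects the architecture of any particular RAG implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Inspectable external knowledge source (a stand-in for e.g. a Wikipedia dump).
corpus = [
    "The GDPR came into force in May 2018.",
    "The EU AI Act adopts a risk-based approach to regulating AI systems.",
    "Model cards document training data sources and known biases.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

def retrieve(query: str, k: int = 2):
    """Return the k most similar documents, with scores, for auditing."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [(corpus[i], float(scores[i])) for i in top]

def generate(query: str, sources: list) -> str:
    # Placeholder: a real system would condition a language model on the
    # retrieved passages. Here we simply echo the top-ranked source.
    return sources[0][0]

query = "When did the GDPR take effect?"
sources = retrieve(query)
print("answer: ", generate(query, sources))
print("sources:", sources)  # the auditable provenance of the answer
```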
Related Terms
References
- ↑ 1.0 1.1 Bell A, Nov O and Stoyanovich J, ‘The Algorithmic Transparency Playbook: A Stakeholder-First Approach to Creating Transparency for Your Organization’s Algorithms’ [2023] Center for Responsible AI https://dataresponsibly.github.io/algorithmic-transparency-playbook/resources/transparency_playbook_camera_ready.pdf
- ↑ 2.0 2.1 Valderrama M, Hermosilla MP and Garrido R (State of the Evidence: Algorithmic Transparency) <https://www.opengovpartnership.org/wp-content/uploads/2023/05/State-of-the-Evidence-Algorithmic-Transparency.pdf> accessed 18 March 2026
- ↑ 3.0 3.1 https://aisel.aisnet.org/cgi/viewcontent.cgi?article=4176&context=cais
- ↑ https://dl.acm.org/doi/abs/10.1145/3173574.3173677
- ↑ https://dl.acm.org/doi/10.1145/2844110
- ↑ https://dl.acm.org/doi/epdf/10.1145/3125780
- ↑ https://www.orfonline.org/expert-speak/algorithmic-transparency-public-access-to-information-on-automated-decision-making
- ↑ https://oxfordinsights.com/insights/why-you-should-know-and-care-about-algorithmic-transparency/
- ↑ https://www.openglobalrights.org/why-does-algorithmic-transparency-matter-and-what-can-we-do-about-it/
- ↑ https://journals.muni.cz/mujlt/article/view/36881
- ↑ https://www.tandfonline.com/doi/full/10.1080/21670811.2016.1208053
- ↑ https://www.ftc.gov/legal-library/browse/statutes/fair-credit-reporting-act
- ↑ https://www.indiacode.nic.in/bitstream/123456789/22037/1/a2023-22.pdf
- ↑ https://www.pib.gov.in/PressReleseDetailm.aspx?PRID=1700749&reg=3&lang=2
- ↑ http://sflc.in/analysis-information-technology-intermediary-guidelines-and-digital-media-ethics-code-rules-2021/
- ↑ https://www.meity.gov.in/static/uploads/2026/02/550681ab908f8afb135b0ad42816a1c9.pdf
- ↑ https://www.azbpartners.com/bank/amendments-to-the-information-technology-intermediary-guidelines-and-digital-media-ethics-code-rules-2021/
- ↑ https://www.lexology.com/library/detail.aspx?g=27a883ce-10ed-4ab9-8642-b7e2dc118ae0
- ↑ https://unesdoc.unesco.org/ark:/48223/pf0000380455
- ↑ https://www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf
- ↑ https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:62021CJ0634
- ↑ 22.0 22.1 https://btlj.org/2025/04/ccpa-vs-gdpr-on-automated-decision-making/
- ↑ https://medium.com/data-science-collective/algorithmic-transparency-f05e290795e8
- ↑ https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
- ↑ 25.0 25.1 https://gdprlocal.com/ai-transparency-requirements/
- ↑ https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
- ↑ https://digital-strategy.ec.europa.eu/en/faqs/guidelines-and-code-practice-transparent-ai-systems
- ↑ https://gdpr-info.eu/
- ↑ https://www.fieldfisher.com/en/insights/transparency-requirements-under-the-eu-ai-act-and-the-gdpr-how-will-they-co-exist
- ↑ https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?division=3.&part=4.&lawCode=CIV&title=1.81.5
- ↑ https://www.pdpc.gov.sg/help-and-resources/2020/01/model-ai-governance-framework
- ↑ https://www.globallegalinsights.com/practice-areas/ai-machine-learning-and-big-data-laws-and-regulations/singapore/
- ↑ https://www.pdpc.gov.sg/guidelines-and-consultation/2024/02/advisory-guidelines-on-use-of-personal-data-in-ai-recommendation-and-decision-systems
- ↑ https://www.imda.gov.sg/about-imda/emerging-technologies-and-research/artificial-intelligence
- ↑ https://oecd.ai/en/dashboards/policy-initiatives/law-31814-law-that-promotes-the-use-of-artificial-intelligence-in-favor-of-the-economic-and-social-development-of-the-country-3047
- ↑ 36.0 36.1 https://www.dfs.ny.gov/system/files/documents/2021/03/rpt_202103_apple_card_investigation.pdf
- ↑ Seymour W, ‘Detecting Bias: Does an Algorithm Have to Be Transparent in Order to Be Fair?’ (BIAS 2018, 1 January 1970) <https://www.cs.ox.ac.uk/files/11108/process-outcome-transparency.pdf> accessed 17 March 2026
- ↑ https://dl.acm.org/doi/10.1145/3375627.3375821
- ↑ https://ai.equineteurope.org/system/files/2022-02/ICA2014-Sandvig.pdf
- ↑ https://publications.clpr.org.in/the-philosophy-and-law-of-information-regulation-in-india/chapter/automated-administration-administrative-law-and-algorithmic-decision-making-in-india/
