INTRODUCTION
Artificial intelligence (AI) and rapid technological development are defining features of the 21st century and have transformed human life in ways once thought unimaginable. One such transformation has taken place in administrative decision-making, which is among the most crucial aspects of ensuring the smooth administration of a country. To make this decision-making more efficient and effective, the use of AI has grown in recent years, giving rise to automated administrative decision-making: the use of artificial intelligence to carry out various administrative activities, including quasi-judicial functions such as decision-making by administrative tribunals.
However, while this technological development has proven very helpful and efficient, it has also brought with it new kinds of challenges, such as the black box problem, showing that the technology, though useful, is not free from difficulties.
AI AND DECISION MAKING
Rapid technological advancement has made AI a transformative force in public administration, fundamentally redesigning how decisions are made.[1] It has changed the way decision-making is done and continues to do so, significantly reducing both the effort and the time the process demands. With the advent of vast databases and powerful processors capable of performing complex calculations and running algorithms that approximate human intelligence, AI has become an essential component in the development and enhancement of administrative practice.[2]
To understand how AI functions in administration, it is important to understand AI systems, which can be categorised into two types:
- Expert systems and
- Machine learning-based AI (‘ML’).
Expert systems are rule-based: they rely on hard-coded ‘if X, then Y’ rules to arrive at conclusions from a given input. Their reasoning is therefore deductive (top-down) and comparatively simple to explain. In contrast, ML systems employ inductive, bottom-up reasoning: the AI develops its own rules by drawing correlations across the training datasets. This brings us to the black box problem. A “black box” AI model can make predictions or recommendations, but the underlying mechanism that produced those conclusions is not transparent, making it hard to ensure fairness and build trust.[3] The contrast is illustrated in the sketch below.
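To make the contrast concrete, here is a minimal, purely illustrative Python sketch. The eligibility rule, feature names, and training data are invented for this example and do not reflect any actual tribunal system; the machine-learning part uses a small scikit-learn model as a stand-in for far more complex systems.

```python
# Contrast between a rule-based "expert system" and an ML model.
# The rule, features, and training data below are invented for illustration.
from sklearn.linear_model import LogisticRegression

# 1. Expert system: the rule is written by a human and is fully transparent.
def expert_system_decision(income_lakhs: float, arrears: int) -> str:
    # Hard-coded "if X, then Y" rule: deductive, top-down, easy to explain.
    if income_lakhs < 5 and arrears == 0:
        return "grant relief"
    return "reject"

# 2. ML system: the "rule" is induced bottom-up from past decisions.
X_train = [[3.0, 0], [8.0, 0], [4.5, 2], [2.0, 0]]   # [income in lakhs, pending arrears]
y_train = [1, 0, 0, 1]                                # 1 = relief granted earlier, 0 = rejected
model = LogisticRegression().fit(X_train, y_train)

applicant = [[3.5, 0]]
print("Expert system says:", expert_system_decision(3.5, 0))
print("ML model says:", "grant relief" if model.predict(applicant)[0] == 1 else "reject")
# The learned "rule" is a set of numeric weights, not human-readable reasons.
print("Learned weights:", model.coef_)
```

Even in this toy example, the learned model expresses its rule as numeric weights rather than articulable reasons; with large neural networks, that gap between output and explanation widens into the black box problem described above.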
AI IN QUASI-JUDICIAL PROCEEDINGS
The term ‘quasi-judicial’ means ‘having a partly judicial character by possession of the right to hold hearings on and conduct investigations into disputed claims and alleged infractions of rules and regulations and to make decisions in the general manner of courts.’[4]
Some of the quasi-judicial bodies in India are:
- Tax Tribunals such as the Income Tax Appellate Tribunal, GST Tribunal, etc.
- Consumer Dispute Redressal Commissions,
- Securities and Exchange Board of India (SEBI) Adjudication,
- National Company Law Tribunal, etc.
All of these bodies have, in one way or another, adopted AI in order to make their functioning more efficient and effective. For example, SEBI has issued guidelines for the use of AI in financial services to ensure accountability and transparency (SEBI (Intermediaries) (Amendment) Regulations, 2025).
Further, these quasi-judicial bodies use AI in the following manner:
Websites: These bodies can build their own websites with the help of AI, making their services more accessible. For instance, the GST tribunal’s website makes filing a case or checking its status easy and accessible for all.
AI Chatbots: Most of these websites also provide AI chatbots, which help users resolve their queries instantly by posting questions directly in the chat window.
Drafting Purposes: AI is also used by these tribunals for drafting various orders, judgments, and decrees, as it helps save time and maintain consistency in routine drafting. For instance, the Telecom Regulatory Authority of India, which performs administrative and quasi-judicial functions, uses AI to draft several of its orders.
Document Review and Evidence Analysis: Quasi-judicial bodies often use AI assistance to review documents and analyse the evidence produced before them. Natural Language Processing (NLP) tools can flag inconsistencies or contradictions in that evidence, as the sketch below illustrates.
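As a purely illustrative sketch, the following Python snippet shows the basic idea of flagging a numerical inconsistency between two pieces of evidence so that a human officer can examine it. The document text, amounts, and dates are invented, and real evidence-review tools rely on far more capable NLP models than the regular expressions used here.

```python
# Toy illustration of automated inconsistency flagging across evidence.
# All document text is invented; real systems use NLP models rather than regexes.
import re

witness_statement = "The invoice dated 12/03/2023 was for Rs. 4,50,000 paid in cash."
bank_record = "Transfer of Rs. 4,05,000 received on 12/03/2023 via NEFT."

def extract_amounts(text: str) -> set[str]:
    # Pull rupee amounts like "Rs. 4,50,000" and strip the commas for comparison.
    return {m.replace(",", "") for m in re.findall(r"Rs\.\s*([\d,]+)", text)}

def extract_dates(text: str) -> set[str]:
    return set(re.findall(r"\d{2}/\d{2}/\d{4}", text))

if extract_amounts(witness_statement) != extract_amounts(bank_record):
    print("FLAG: amounts differ across documents:",
          extract_amounts(witness_statement), "vs", extract_amounts(bank_record))
if extract_dates(witness_statement) != extract_dates(bank_record):
    print("FLAG: dates differ across documents")
```

Such flags are only prompts for human scrutiny; whether an inconsistency is material remains a question for the adjudicator.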
LAWS GOVERNING USE OF AI IN QUASI-JUDICIAL BODIES
While there is no separate statute governing the use of AI by quasi-judicial bodies, the following mandates and guidelines ensure that fairness, justice, and non-arbitrariness are maintained when AI is used by such bodies:
Constitutional Provisions: Articles 14 and 21 of the Indian Constitution guarantee equality before the law and protect the right to life and personal liberty, which includes the right to privacy. Article 14’s prohibition of discrimination and arbitrariness speaks directly to bias in AI algorithms, while Article 21’s privacy guarantee constrains how personal data may be processed.
The Information Technology Act 2000: While the IT Act does not explicitly use the term “Artificial Intelligence,” it serves as the legal foundation for all digital and automated administrative actions in India.
Legal Recognition of AI Outputs: Section 4 of the IT Act grants legal recognition to electronic records. This is a crucial provision for quasi-judicial bodies, as it provides the legal basis for accepting AI-analysed evidence, digitally submitted documents, and AI-assisted drafts of orders and decrees as valid records.[5]
Cybersecurity and Data Protection: Section 43A, read with the IT (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, mandates that bodies handling sensitive personal data maintain robust security practices.[6]
Kerala High Court Guidelines, 2025:
The Kerala High Court in July 2025 laid down the following guidelines to prevent misuse of AI in Kerala courts:
- Firstly, the court held that principles of transparency, fairness, confidentiality, and accountability are non-negotiable and must always be kept in mind.
- Secondly, the court also noted that the use of cloud-based AI tools must be avoided unless such an AI tool is approved for use.
- Thirdly, the court noted that even while using approved AI tools, proper care and caution must be taken in order to make sure that the above-mentioned principles are not violated.
- Fourthly, when an AI tool is used for translation, the translation must be verified and approved by the Judge; such AI tools must also not be used for routine administrative tasks, so as to ensure the efficiency and effectiveness of judicial orders.
- Fifthly, AI tools shall only be used for the purposes for which they are allowed, and a proper record shall also be maintained showing instances where the AI tool was used.
UNESCO Guidelines: UNESCO’s Guidelines for the Use of AI in Courts and Tribunals present the first global ethical and operational framework to ensure that AI serves justice while upholding the rule of law and fundamental rights. Built around 15 universal principles, ranging from transparency, accountability, and human oversight to human rights protection and multistakeholder governance, the Guidelines provide practical orientation for judges, court administrators, and policymakers exploring AI adoption. They advocate for AI as an assistive, not substitutive, tool, used responsibly and always under meaningful human supervision.[7]
THE DPDP ACT AND ITS IMPLICATIONS FOR AI
The Digital Personal Data Protection (DPDP) Act, 2023, introduces a strict, consent-driven framework for how digital personal data is governed in India. For Artificial Intelligence (AI), a technology that fundamentally relies on ingesting, retaining, and analysing massive datasets, the Act creates a complex compliance landscape. Some of its provisions are as follows:
Web Scraping & Public Data (Sec. 3(c)(ii)):[8] AI developers can scrape personal data without consent only if the Data Principal voluntarily made it public. Scraping personal data published by unauthorised third parties violates the Act.
Purpose Limitation (Sec. 5, 6, & 8(1)):[9] AI thrives on secondary data use, but the Act mandates granular, informed consent for specific purposes. Repurposing general user data to train proprietary AI models without fresh, explicit consent is unlawful.
The “Machine Unlearning” Dilemma (Sec. 12(3)):[10] The statutory right to erasure legally requires AI developers to delete specific personal data upon request. However, extracting an individual’s data from the weights and parameters of a fully trained neural network remains a near-impossible technological hurdle, as the sketch after this list illustrates.
AI Hallucinations & Accuracy (Sec. 8(8)):[11] Data Fiduciaries must ensure data accuracy when it affects automated decisions. If an AI system “hallucinates” false background data leading to an adverse outcome, the Fiduciary is directly liable.
SDF Compliance Burdens (Sec. 10):[12] Major AI entities will likely be classified as Significant Data Fiduciaries. This triggers heightened obligations under the recently notified DPDP Rules, 2025, including mandatory Data Protection Impact Assessments (DPIAs), independent data audits, and the appointment of an India-based Data Protection Officer.
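To illustrate why erasure under Section 12(3) is technically hard, here is a minimal sketch; the toy dataset, the scikit-learn model, and the retrain-from-scratch remedy are assumptions chosen for illustration only. It shows that deleting a record from stored data does not remove that record’s influence from a model already trained on it.

```python
# Toy illustration of the "machine unlearning" problem.
# Real systems involve large neural networks where one person's data influences
# millions of weights; here a single linear model makes the point visible.
import numpy as np
from sklearn.linear_model import LinearRegression

# Training data: the last row is the record a Data Principal later asks to erase.
X = np.array([[1.0], [2.0], [3.0], [10.0]])
y = np.array([1.1, 1.9, 3.2, 30.0])

model = LinearRegression().fit(X, y)
print("Coefficient trained on all data:", model.coef_[0])

# Deleting the row satisfies erasure of the *stored* copy of the data...
X_erased, y_erased = X[:-1], y[:-1]

# ...but the already-trained model still carries that record's influence.
print("Coefficient after deleting the stored record (unchanged):", model.coef_[0])

# The straightforward remedy is retraining without the record, which is
# prohibitively expensive for large models; hence the unlearning dilemma.
retrained = LinearRegression().fit(X_erased, y_erased)
print("Coefficient after retraining without the record:", retrained.coef_[0])
```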
CHALLENGES
Job Insecurity: As discussed above, several quasi-judicial authorities use AI tools to generate orders, decrees, and similar documents, which may lead to the loss of jobs of the scribes appointed for such purposes.
Lack of Accountability: Numerous AI models operate as “black boxes,” making it challenging to comprehend how decisions are derived. This absence of transparency undermines accountability. To tackle the issue, it is important to maintain thorough documentation of every AI-assisted step and to ensure human supervision; a minimal sketch of such an audit record appears after the challenges listed here.
Data Privacy and Security: AI systems analyse extensive amounts of personal and sensitive information, heightening the risk of misuse. Enforcing stringent data protection and privacy laws and ensuring that AI systems are secure are crucial for maintaining public trust.
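The documentation-and-supervision safeguard noted under “Lack of Accountability” can be made concrete with a short sketch. The record fields and approval flow below are assumptions for illustration; they are not drawn from any tribunal’s actual practice or from any prescribed standard.

```python
# Minimal sketch of a human-in-the-loop audit record: every AI-assisted step is
# logged, and no output is used in a proceeding without a named human's approval.
# Field names and the review flow are illustrative assumptions, not a standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    case_id: str
    tool_name: str
    purpose: str                  # e.g. "draft summary", "translation"
    ai_output_reference: str      # pointer to where the raw AI output is stored
    reviewed_by: str | None = None
    approved: bool = False
    logged_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def human_approval(record: AIUsageRecord, reviewer: str, approve: bool) -> AIUsageRecord:
    # The AI output becomes usable only once a named human signs off on it.
    record.reviewed_by = reviewer
    record.approved = approve
    return record

record = AIUsageRecord("CASE-001", "drafting-assistant", "draft summary", "store://draft-17")
print(human_approval(record, reviewer="Member (Judicial)", approve=True))
```

Such a log serves two ends: it creates the record of AI use contemplated by the Kerala High Court’s fifth guideline, and it makes a named human the gatekeeper for every AI-assisted output.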
CASE LAWS
1) BUCKEYE TRUST vs PCIT:
Facts: In this case, the ITAT passed an order that cited non-existent cases.
“The order cited the following three judgments:
- Rukmani Ammal v K. Balakrishnan (1973) 91 ITR 631 (Madras High Court)
- Gurunarayana v. S. Narasinhulu (2004) 7 SCC 472 (Supreme Court of India)
- Sudhir Gopi v. Usha Gopi (2018) 14 SCC 452 (Supreme Court of India)
However, the first two case citations do not exist, while the third citation actually refers to a different case—K. Subba Rao v. State of Telangana.”[13]
As a result of these hallucinations produced by the AI tools used by the ITAT, the order was recalled and fresh hearings of the case began on 19th February 2025; the tribunal has moved to correct the error and uphold fairness, and both parties are expected to present new arguments in the re-examination of the case.
2) JASWINDER SINGH vs STATE OF PUNJAB[14]
This case illustrates the growing use of AI tools such as ChatGPT by courts to obtain broader context on the questions before them.
The Court in this case clarified that:
“11. Any reference to ChatGPT and any observation made hereinabove is neither an expression of opinion on the merits of the case nor shall the trial court advert to these comments. This reference is only intended to present a broader picture on bail jurisprudence, where cruelty is a factor.”
This judicial caveat is a critical precedent for quasi-judicial bodies. It directly addresses the “black box” and accountability challenges inherent in AI. The ruling establishes that while AI can serve as a supplementary research tool to gather context, it absolutely cannot substitute the application of a human judicial mind. Because AI lacks transparency in how it weighs data, its outputs cannot form the substantive or binding basis of an administrative order or legal judgment.
CONCLUSION
The integration of AI into Indian quasi-judicial proceedings, validated by the Information Technology Act, 2000, offers unprecedented efficiency but introduces profound legal perils. The opaque “black box” nature of machine learning, coupled with the strict data compliance mandates of the DPDP Act, 2023, inherently conflicts with administrative transparency. Ultimately, to uphold the constitutional safeguards of Articles 14 and 21, AI must strictly remain an assistive “Human-in-the-Loop” (HITL) tool. Only through unwavering human oversight and strict adherence to established guidelines can tribunals harness technological efficiency while ensuring the delivery of justice remains fair, accountable, and profoundly human.
Author’s Name: Gauri Khandelwal (Vivekananda Institute of Professional Studies, New Delhi)
References:
[1] Damar M et al, ‘Navigating the Digital Frontier: Transformative Technologies Reshaping Public Administration’ (2024) 69(9) EDPACS 41 https://doi.org/10.1080/07366981.2024.2376792 accessed 5 January 2026
[2] Sadam Mohammad Awaisheh et al, ‘Artificial Intelligence and Its Impact on Administrative Decision-Making’ (2024) 20(1) Journal of Human Security 99 https://doi.org/10.12924/johs2024.20114
[3] Chytanya S Agarwal, ‘A Framework for Reasoned Automated Decision-Making under Indian Administrative Law’ (National Law School of India Review Online, 2025) https://www.nlsir.com/post/a-framework-for-reasoned-automated-decision-making-under-indian-administrative-law accessed 5 January 2026.
[4] Merriam-Webster, ‘Quasi-Judicial’ (Merriam-Webster Dictionary) https://www.merriam-webster.com/dictionary/quasi-judicial accessed 5 January 2026.
[5] Information Technology Act 2000, s 4.
[6] Information Technology Act 2000, s 43A.
[7] UNESCO, ‘Guidelines for the Use of AI Systems in Courts and Tribunals’ https://www.unesco.org/en/articles/guidelines-use-ai-systems-courts-and-tribunals accessed 5 January 2026.
[8] Digital Personal Data Protection Act 2023, s 3(c)(ii).
[9] Digital Personal Data Protection Act 2023, ss 5, 6, 8(1).
[10] Digital Personal Data Protection Act 2023, s 12(3).
[11] Digital Personal Data Protection Act 2023, s 8(8).
[12] Digital Personal Data Protection Act 2023, s 10.
[13] Buckeye Trust v PCIT ITA No 1051/Bang/2024 (ITAT Bangalore) (30 December 2024, recalled 7 January 2025).
[14] Jaswinder Singh v State of Punjab CRM-M-8817-2024 (Punjab & Haryana High Court)

