A Framework for Reasoned Automated Decision-Making under Indian Administrative Law

Chytanya S. Agarwal

I. Introduction

Decision-making, in general, involves a constant trade-off between external costs and decision-making costs. Today, the use of artificial intelligence technology (‘AI’) in governance promises to cut the Gordian knot of this trade-off by ensuring accurate and efficient decisions. This application of AI in decision-making, also called automated decision-making (‘ADM’), has enormous scope in the field of administrative decision-making. Moreover, its influence has crept into judicial adjudication. While AI has been employed for ministerial tasks in the courtroom (e.g., transcription and translation), there is evidence of it being used by courts in decision-making itself. In fact, Indian tribunals have referred to definitions given by ChatGPT to support their decisions.

Illustratively, in Jaswinder Singh v. State of Punjab (¶9-11), Justice Chitkara relied on ChatGPT for summarising the bail jurisprudence in cruelty cases. In Nisha Rani v. State of Punjab, the Punjab and Haryana High Court directed the use of AI to sort court dockets for disposal of cases having settled positions of law. Earlier, in Christian Louboutin v. M/s The Shoe Boutique (¶28), the Delhi HC opined against the use of AI in adjudication because it cannot substitute “human intelligence or the humane element in the adjudicatory process.” A strand of thinking evident in Christian Louboutin is that AI cannot exhibit human reasoning processes.

In this light, this essay asserts that ADM can be made compliant with the imperative of rendering reasoned judgments, a pivotal facet of the principles of natural justice. I hypothesize that the use of AI in decision-making (judicial or administrative) does not per se render the resulting decisions unreasoned. While there exists literature (see here, here, and here) on ADM’s compliance with natural justice, this essay goes a step further to pinpoint the AI models that can (or cannot) meet the legal standards of Indian jurisprudence.

To make this argument, I first explain the challenges posed by inductive AI systems, particularly the ‘Black Box’ problem, which can lead to unreasoned decisions in ADM. Second, I undertake a brief overview of the case law concerning the right to a reasoned decision and cull out the standards it sets. Lastly, building on the preceding analysis, I argue that at least one of three requirements – namely, the doctrine of non-fettering, a subject-centric approach to explainable AI, and chain-of-thought prompting – is requisite for reasoned ADM. While judicial or quasi-judicial use of AI may seem far-fetched, AI has extensive applications in administrative decision-making. In fact, such ADM by the State and its instrumentalities is envisaged by Sections 7(b) and 17(2)(a) of the Digital Personal Data Protection Act, 2023. The recommendations outlined in this essay therefore have contemporary relevance for administrative law.

II. The ‘Black Box’ Problem in AI

To identify the challenges to reasoned decision-making posed by AI, we must first understand the anatomy of different kinds of AI. AI systems can be broadly classified into two categories: expert systems and machine learning-based AI (‘ML’). Expert systems are rule-based, i.e., they rely on hard-coded “if X, then Y” rules to arrive at conclusions from a given input. In this sense, their reasoning is ‘deductive’ (or top-down) and much simpler to explain. In contrast, ML systems employ inductive reasoning that is bottom-up. In such cases, the AI develops its own rules by drawing mass correlations among the training datasets.

For instance, if ML-based AI is used for credit-scoring participants in a government scheme, it is initially fed data on income, wealth and the like, together with the corresponding effect on creditworthiness. The AI then self-learns: by drawing correlations and identifying patterns between the different variables, it determines the weightage to be accorded to each of them. Unlike in expert systems, the weightage its rules give to each variable is not determined a priori and can change over time.

(Image 1: The differences in the functioning of expert systems and ML-based AI.)
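To make the contrast concrete, a minimal sketch follows. It is purely illustrative: the feature names, thresholds and figures are hypothetical, and it assumes scikit-learn is available. The first part hard-codes an expert-system style rule; the second fits a simple model whose weightages are learned from example data rather than fixed in advance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def expert_system_creditworthy(income_lakh: float, wealth_lakh: float) -> bool:
    # Hard-coded rule, fixed a priori by the designer:
    # "if income exceeds 5 lakh OR wealth exceeds 20 lakh, then creditworthy."
    return income_lakh > 5 or wealth_lakh > 20

# ML-based system: the weightages are induced from (hypothetical) training examples.
# Each row: [annual income (lakh), net wealth (lakh)]; label 1 = repaid, 0 = defaulted.
X_train = np.array([[2, 1], [8, 10], [3, 25], [12, 5], [1.5, 2], [9, 30]])
y_train = np.array([0, 1, 1, 1, 0, 1])
model = LogisticRegression().fit(X_train, y_train)

applicant = [[4, 22]]  # a new applicant: 4 lakh income, 22 lakh wealth
print("Hard-coded rule:", expert_system_creditworthy(4, 22))
# The learned coefficients play the role of the hand-written rule, but they are
# derived from correlations in the data and can shift whenever the model is retrained.
print("Learned weights:", model.coef_[0], "intercept:", model.intercept_[0])
print("ML prediction:", bool(model.predict(applicant)[0]))
```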

ML can be further classified as supervised or unsupervised. Please note that these are ideal types and not watertight compartments, as shown by the existence of semi-supervised ML. Supervised learning employs labelled data, feedback loops and human supervision to refine the accuracy of the model. Unsupervised learning lacks such supervision: the AI freely self-learns from unlabelled datasets, and its output is determined more randomly (or ‘stochastically’) on the basis of mass correlations within the data. This randomness lacks the explicit logical reasoning and causality that mark conventional human reasoning. It is here that the ‘Black Box’ problem arises.

(Image 2: Types of ML-based AI.)
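The distinction can likewise be sketched in code. In this illustrative example (synthetic data; scikit-learn assumed), the supervised model is trained against known outcomes that act as the ‘supervision’, while the unsupervised algorithm merely groups unlabelled data by statistical similarity.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))               # 100 hypothetical applicants, 2 features each

# Supervised learning: labels ('the right answers') are supplied, so the model's
# errors can be measured and fed back to refine its accuracy.
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)
print("Supervised accuracy on training data:", clf.score(X, y))

# Unsupervised learning: no labels are given; the algorithm groups the data purely
# on statistical similarity, and what the groups 'mean' is left undetermined.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Unsupervised cluster assignments (first 10):", clusters[:10])
```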

Simply put, the ‘Black Box’ problem denotes the inability to pin down the precise reasoning underlying an AI’s decisions. It arises because ML relies on processes that are often beyond human comprehension, such as deep neural networks or the ‘higher dimensionality’ in which mathematical correlations are drawn (Bathaee, pp. 901-905). Illustratively, if an unsupervised ML-based algorithm is used for credit scoring, it may be fed data on the applicant’s income, wealth, and so on. However, the AI’s precise reasoning and the weightage it gave to each variable remain unclear and cannot be rationalised, owing to the Black Box problem.
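A minimal sketch of the problem, again with synthetic data and hypothetical features (scikit-learn assumed), is set out below: the trained network returns a decision, but its ‘reasons’ exist only as thousands of numeric parameters that yield no case-specific explanation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))     # hypothetical features: income, wealth, age, liabilities
y = (X @ np.array([0.5, 0.3, -0.2, -0.4]) > 0).astype(int)

# A small 'deep' network: the decision emerges from layers of weighted sums and
# non-linearities rather than from any articulable rule.
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=1).fit(X, y)

applicant = X[:1]
print("Decision for this applicant:", net.predict(applicant)[0])

# The model's 'reasons' exist only as numeric parameters spread across its layers;
# inspecting them gives no case-specific account of the weightage accorded to each input.
n_params = sum(w.size for w in net.coefs_) + sum(b.size for b in net.intercepts_)
print("Number of learned parameters:", n_params)
```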

III. The Required Standard of Reasoning

As per Section 4(1)(d) of the Right to Information Act, 2005, every public authority is bound to provide reasons for its administrative or quasi-judicial decisions to affected persons. As observed in Manohar v. State of Maharashtra, the requirement of giving reasoned orders applies to both administrative and judicial/quasi-judicial bodies. Per Siemens Engg. & Mfg. Co. of India Ltd. v. Union of India, administrative bodies must adduce “clear and explicit reasons” in support of their decisions. Being a basic principle of natural justice, the rule requiring reasons “must be observed in its proper spirit and mere pretence of compliance with it would not satisfy the requirement of law” (Siemens Engg., ¶6).

As observed in S.N. Mukherjee v. Union of India, while administrative reasoning need not be as elaborate as that of a court of law, the authority is still bound to show that due consideration was given to the points in contention by furnishing “clear and explicit reasons”. Per State of W.B. v. Atul Krishna Shaw, the reasons furnished must be sufficient to enable the decision to be challenged in review or appeal. Atul Krishna Shaw (¶7-11) also noted that the adequacy of reasoning is contextual – it depends on the facts and circumstances of each case. The Supreme Court in S.N. Mukherjee further observed that Indian law is more consonant with American law, under which the duty to provide reasons is absolute, unlike English common law, where the obligation is limited (see here and here). Even so, a failure to furnish reasons justifies an adverse inference that the order was not based on good reasons. The bare minimum, therefore, is that a decision must satisfy the standard of “clear and explicit” reasoning and must not be reduced to a mere formality. The extent and nature of the reasons required varies from case to case, but a written speaking order, howsoever concise, is the essential minimum – which is why administrative decisions are expected to provide at least brief reasons, even if these need not be as elaborate as judicial orders.

In the recent decision in Madhyamam Broadcasting Ltd. v. Union of India (¶51), the Supreme Court ruled that the structured proportionality test must be satisfied for the restriction of any procedural right (including the right to reasoned decisions) to be constitutional. Thereby, the Supreme Court has done away with ‘prejudice-based’ tests in favour of a ‘proportionality-based’ test for assessing the validity of restrictions on natural justice. This test is four-pronged: first, a legitimate aim must underlie the rights-restrictive measure; second, the measure must be rationally connected to the legitimate aim; third, no equally efficacious and less rights-restrictive alternative should be available; and fourth, the measure’s social benefit must be proportionate to the impact on the right-holder.

Thus, for any restriction of the right to reasoned decisions to be constitutional, it must rationally further a legitimate aim, there must be no equally effective yet less rights-restrictive alternative, and it must not disproportionately impact the right-holder. I assume that the first two prongs, which set a lower threshold, are satisfied for two reasons: first, ADM pursues administrative efficiency in decision-making, which is a justifiable restriction on the right to natural justice and hence a legitimate aim (see Rajeev Suri v. DDA, ¶153, 295); second, because ADM promises accurate and efficient decision-making, it bears a rational – that is, direct and proximate – connection with that aim. Given the lack of concrete data on the social impact of ADM, a balancing assessment would be purely speculative and is therefore outside this essay’s scope; I assume arguendo that the fourth prong of balancing is fulfilled. My argument rests solely on the third prong of necessity.

Using the necessity prong of Madhyamam Broadcasting, I draw the following corollary, which is central to my subsequent argument: where more interpretable alternatives to Black Box AI exist, they are less rights-restrictive. They are also equally efficacious if they satisfy the state’s public-interest aims for introducing AI systems – namely, efficiency and accuracy in decision-making (see here, here, and here).

Put simply, more interpretable and equally efficacious ADM systems are in line with the necessity prong of Madhyamam Broadcasting’s proportionality test. Conversely, where such more interpretable alternatives exist, the standard in Madhyamam Broadcasting would require less interpretable AI models to be struck down.

IV. Policy Safeguards to Navigate the Black Box Problem

As is apparent, an ADM system afflicted by the Black Box problem produces no statement of reasons and thus fails the standard of “clear and explicit reasons” laid down in Siemens and S.N. Mukherjee. If the state’s aim in employing ADM is to ensure fairness and accuracy in decision-making, such a system would also fail Madhyamam Broadcasting’s proportionality test wherever less rights-restrictive alternatives exist that can remedy the Black Box problem. In this section, I explore three safeguards that can meet the right to reasoned ADM, and substantiate the argument with foreign law, which has persuasive value in the Indian context. The analysis carries an epistemic limitation in that it relies on the currently available and reported literature: the arguments below, which focus solely on identifying the least rights-restrictive forms of ADM, are not exhaustive and are liable to revision in light of further innovation.

A. Non-Determinative Use of ADM

Lord Reid, in British Oxygen Co. Ltd. v. Minister of Technology, observed that authorities exercising statutory discretion must not forgo the application of mind. This observation underlies the ‘doctrine of non-fettering’, which seeks to preserve the element of discretion and reasoning by the authority involved in decision-making. In sum, an authority must apply its mind in arriving at a decision; if it unthinkingly delegates or abdicates this reasoning faculty, the decision is vitiated on the ground that the authority has unlawfully ‘fettered’ its discretion. In the context of ADM, the doctrine implies that the decision-maker cannot bind itself to the output of the AI; to demonstrate an application of mind, additional justification must be furnished as to why the AI system’s recommendation should be implemented. Decisions unduly reliant on ADM are vitiated by over-delegation and are hit by this doctrine.
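By way of illustration only, the following sketch (the structure, field names and facts are hypothetical) shows one way a decision workflow could treat the AI’s output as merely advisory, requiring the officer’s independently recorded reasons before any decision is finalised.

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    ai_recommendation: str   # output of the ADM system, treated as advisory only
    officer_decision: str    # final decision taken by the human authority
    officer_reasons: str     # independent reasons recorded by the officer
    overrides_ai: bool       # whether the officer departed from the AI's output

def finalise(record: DecisionRecord) -> DecisionRecord:
    # The AI output cannot become the decision by default: unless the officer records
    # independent reasons, the decision is treated as fettered and cannot be issued.
    if not record.officer_reasons.strip():
        raise ValueError("Decision vitiated: no independent application of mind recorded.")
    return record

# Usage: the officer adopts the AI's recommendation, but only after recording why it
# is appropriate on the facts of this particular case.
decision = finalise(DecisionRecord(
    ai_recommendation="reject benefit claim",
    officer_decision="reject benefit claim",
    officer_reasons="Declared income exceeds the scheme ceiling; supporting documents verified.",
    overrides_ai=False,
))
print(decision.officer_decision)
```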

This application of the non-fettering doctrine is manifest in Article 22 of the EU’s General Data Protection Regulation (‘GDPR’), which forbids decision-making based solely on automated processing where such decisions have legal or similarly significant effects on the person concerned (see the Schufa case). Per the Ola case, to satisfy Article 22, human oversight of ADM must be meaningful and not merely symbolic. Under the EU’s recent AI Act (see Recitals 37-40 of the Preamble), law enforcement, the justice system and welfare services are ‘high-risk’ areas for the use of AI, and Article 14 mandates human oversight of AI in such high-risk sectors, including the ability to ‘override’ the AI’s decision. Similarly, in the American context, State of Wisconsin v. Eric Loomis dealt with the use of algorithms by judges in sentencing: the courts were using the COMPAS tool to assess offenders’ recidivism risk when determining their jail terms. Loomis (¶99) upheld such use of AI provided it is only one of many factors appraised by the court – in essence, the use of AI was valid only if courts reasoned independently, beyond the AI’s report, in arriving at their final decisions. Loomis thus disallowed blind adherence to ADM.

It must be noted that the non-fettering doctrine only seeks to preserve the element of discretion in decision-making. So, in functions involving little or no administrative discretion, ADM should not be hit by the doctrine. For such purely mechanical functions, the use of expert systems and supervised ML is preferable, provided the decision-making authorities exercise oversight over the ADM. This approach is evident in foreign law. In New Zealand, Northland Regional Council v. Rogan (¶27-30) upheld ADM for simple and transparent decisions involving no discretion, such as mathematically computing the amount of tax payable. Similarly, Section 35a of the Verwaltungsverfahrensgesetz (the German Administrative Procedure Act) permits fully automated administrative acts provided no element of discretion or margin of appreciation is involved.

Concerns of fettering arise with unsupervised ML-based AI because, even if a human decision-maker is kept in the loop, they may – without understanding the AI’s reasoning – tend to over-rely on it and not diverge from its outputs. In such cases, knowing the AI’s reasoning in precise terms is essential to non-fettering. This is where explainable AI (xAI) becomes relevant.

B. Explainable AI (‘xAI’)

Initially, a trade-off was believed to exist between accuracy and interpretability in AI: more complex AI systems, though less interpretable, were thought to offer greater accuracy. Subsequent research has debunked this notion, revealing that interpretable AI models offer a solution to the Black Box problem without compromising accuracy. Such models, collectively termed xAI, explain how ML-based AI arrives at its decisions. While xAI nomenclature is largely unsettled, per Professor Deeks, xAI admits of two broad approaches: the decompositional approach and the exogenous approach.

(Image 3: Types of explanation in xAI.)

1. Decompositional xAI

A decompositional approach to xAI reveals the internal workings of the AI system, whether by disclosing the AI’s source code or by recreating the correlations between inputs and outputs (a sketch of the latter appears at the end of this sub-section). However, this approach has several shortfalls. First, as recognised by Professor Deeks, merely explaining how the ML works – by revealing the source code, for instance – yields unsatisfactory results, because these technicalities, even when disclosed, are incomprehensible to non-technical laypersons, rendering the disclosure ineffective. This deficiency in explaining AI’s decisions has been pointed out by Wachter et al. in the following way:

There are two possible types of explanation in ADM – one that explains the system functionality (i.e., the system’s internal algorithm, decision-making tree, etc.) and one that explains specific decisions (i.e., the precise rationale and weight given to factors in arriving at every individual decision). This makes possible three kinds of explanations:

i. An ex-ante explanation of system functionality;

ii. An ex-post explanation of system functionality;

iii. An ex-post explanation of specific decisions.

A decompositional approach (whether ex-ante or ex-post) reveals only the system functionality or the source code of the system; it fails to give case-specific reasons. This may render the resulting decisions unreasoned for three reasons. First, the explanation reveals only the AI’s source code or internal functionality, which is incomprehensible to non-technical laypersons, and so fails the standard of “clear and explicit reasons” set by Siemens and S.N. Mukherjee. Second, because the reasons are not case-specific, it ignores the rule that the adequacy of reasons must be judged on the facts of each case. Lastly, applying Madhyamam Broadcasting’s proportionality test, the necessity prong may not be satisfied if other less rights-restrictive (and more comprehensible) methods of explanation exist.

A decompositional approach is also prone to ‘gaming’ or ‘reward hacking’. Put simply, if the system functionality is revealed beforehand, parties can align their conduct with the factors that the AI tends to reward. This can frustrate the intent behind ADM even as the AI system fulfils its stated objectives. A decompositional approach also risks chilling innovation, as it can reveal vital trade secrets of the AI’s design.
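For concreteness, one common decompositional technique is to recreate the black box’s input-output correlations with a simpler ‘surrogate’ model and disclose the surrogate’s rules. The sketch below is illustrative only (synthetic data; scikit-learn assumed); notably, what it yields is a description of the system’s overall functionality, not reasons for any individual decision.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))                       # hypothetical applicant features
y = (X @ np.array([1.0, -0.5, 0.25]) > 0).astype(int)

# The opaque model whose functioning is to be explained.
black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=2).fit(X, y)

# Decompositional move: recreate the black box's input-output behaviour with a simpler
# surrogate model, then disclose the surrogate's rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=2).fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=["income", "wealth", "liabilities"]))

# The printed rules describe the system's overall functionality, not the reasons for
# any one applicant's decision.
```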

2. Exogenous xAI

Per Recital 71 of the GDPR, which accompanies Article 22, a data subject who is subject to ADM with legal or similarly significant consequences has the right to “obtain an explanation of the decision reached after such assessment.” This seemingly supports an ex-post explanation of specific decisions – a reading furthered by the EU Data Protection Working Party’s Guidelines on ADM (p. 25), which mandate the disclosure of “sufficiently comprehensive” information underlying the ADM’s logic. While Professor Wachter argues against this interpretation owing to the recital’s non-binding nature, authors such as Kaminski contend that ‘explanation’ under the GDPR means an ex-post, ‘meaningful’ explanation of specific decisions that is ‘sufficiently comprehensible’ to the data subject. Kaminski’s approach is supported by Madhyamam Broadcasting: if there is a less rights-restrictive (and more comprehensible or interpretable) form of xAI, the necessity prong will favour it over decompositional approaches.

This is where the law seemingly favours an ‘exogenous approach’ to xAI. Instead of going into system functionality, an exogenous approach to xAI explains external factors that were decisive in ADM. The exogenous approach can further be model-centric or subject-centric. As explained by Professor Edwards, a model-centric approach analogizes AI with other well-known decision-making models and patterns to explain the motivations behind the model and the creator’s intentions. While it certainly makes the decision-making process more digestible without revealing the system functionality, a model-centric exogenous approach to xAI may fail to provide reasoned decisions under Indian administrative law for two reasons. First, explaining the model may not give case-specific reasons and giving a standard set of reasons may show non-application of mind to individual cases. Second, applying the necessity prong of Madhyamam Broadcasting’s proportionality test, there must be no alternative that is equally efficacious yet more interpretable and less rights-restrictive. A more interpretable alternative exists in the form of a subject-centric exogenous approach.

Subject-centric explanations function by referring to the characteristics of the data subject on the basis of which the decision was made against them (Deeks, p. 1838). One way of doing this is through counterfactuals, which can be readily incorporated within the AI’s design. Per Wachter et al., a counterfactual represents the closest possible world in which a change in a variable would alter the AI’s decision. As they explain, counterfactual explanations do not require the person affected to grasp the AI’s internal logic or system functionality; rather, they provide ‘human-understandable approximations’ of the factors driving individual decisions. Illustratively, a counterfactual pinpoints the characteristics of the data subject which, if altered by a certain degree, would lead to a different decision. This gives the affected person sufficient grounds to appeal the decision or to change their conduct to secure the desired outcome. Its biggest benefit is its end-to-end integrated nature, which avoids revealing system functionality and the attendant risk of gaming. Under Madhyamam Broadcasting’s necessity test, it would therefore qualify as a less rights-restrictive and equally efficacious solution.
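An illustrative sketch of a counterfactual explanation follows. It uses a brute-force search over hypothetical features and synthetic data (scikit-learn assumed); real systems use more sophisticated optimisation, but the underlying idea – the smallest change to the subject’s own circumstances that would flip the decision – is the same.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(loc=5, scale=2, size=(200, 2))   # hypothetical [income (lakh), wealth (lakh)]
y = (X[:, 0] + 0.5 * X[:, 1] > 7).astype(int)   # synthetic creditworthiness outcomes
model = LogisticRegression().fit(X, y)

applicant = np.array([4.0, 3.0])                # an applicant the model turns down
print("Current decision:", model.predict([applicant])[0])

# Counterfactual search: the smallest increase in the applicant's own features that
# would flip the decision ('the closest possible world' with a different outcome).
best = None
for d_income in np.arange(0, 5.01, 0.25):
    for d_wealth in np.arange(0, 5.01, 0.25):
        if model.predict([applicant + np.array([d_income, d_wealth])])[0] == 1:
            distance = d_income + d_wealth
            if best is None or distance < best[0]:
                best = (distance, d_income, d_wealth)

if best is not None:
    _, d_income, d_wealth = best
    print(f"The decision would change if income rose by {d_income:.2f} lakh "
          f"and wealth by {d_wealth:.2f} lakh.")
```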

C. Chain-of-Thought Prompting

Another, less-explored approach that best satisfies the necessity test of Madhyamam Broadcasting comes from prompt engineering, in the form of chain-of-thought prompting. Chain-of-thought prompting elicits multi-step reasoning: a larger problem is dissected into smaller, intermediate steps, each with its own distinct instructions, and the output of one step feeds into the next. Research has shown the accuracy benefits of chain-of-thought prompting in AI models. It also has an explainability benefit – by breaking the AI’s reasoning into chains, it makes the reasoning process visible as a series of steps. As a result, chain-of-thought prompting can help resolve the Black Box problem in AI models.
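A minimal sketch of what a chain-of-thought prompt could look like for an administrative eligibility decision is set out below. It is purely illustrative: the `ask_model` function is a hypothetical stand-in for whichever AI interface is actually deployed, and the scheme criteria are invented for the example.

```python
# `ask_model` is a hypothetical stand-in for whichever AI interface is actually deployed.
def ask_model(prompt: str) -> str:
    raise NotImplementedError("Substitute the deployed model's API call here.")

# A chain-of-thought prompt instructs the model to set out each intermediate step
# before its conclusion, so the reasoning chain becomes part of the visible output.
COT_PROMPT = """You are assisting with a welfare-scheme eligibility decision.
Reason step by step and number each step before giving the final decision.

Step 1: State the eligibility criteria (annual income below 3 lakh; age above 60).
Step 2: Record the applicant's relevant facts from the file below.
Step 3: Apply each criterion to those facts and state whether it is met.
Step 4: Give the final decision and the decisive reasons.

Applicant file:
- Annual income: 2.4 lakh
- Age: 64
"""

if __name__ == "__main__":
    try:
        # The reply is expected to expose steps 1-4, not merely a bare verdict.
        print(ask_model(COT_PROMPT))
    except NotImplementedError as exc:
        print(exc)
```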

Applying Madhyamam Broadcasting’s necessity test, chain-of-thought prompting appears to impose the least restriction on the right to a reasoned decision, because it is the most interpretable of the alternatives: while an exogenous approach to xAI only approximates the algorithm’s reasoning, chain-of-thought prompting can express the AI’s reasoning step by step in interpretable language. The necessity test further requires that the less rights-restrictive approach be equally efficacious. As shown by Wei et al., chain-of-thought prompting can be incorporated into AI models without compromising their accuracy; rather, it improves their reasoning abilities and can even surpass human baselines on difficult tasks. Moreover, encoding chain-of-thought prompting involves minimal cost, unless a relatively large number of reasoning steps (going beyond ‘few-shot’ exemplars) is required. Thus, in the context of administrative decision-making, chain-of-thought prompting qualifies as one of the least rights-restrictive yet equally efficacious remedies for Black Box AI.

V. Conclusion

This essay outlined three policy approaches – the non-fettering doctrine, subject-centric exogenous xAI, and chain-of-thought prompting – that can safeguard the right to reasoned decisions amidst the use of AI in decision-making. A limitation of my argument is epistemic: the essay considers only three approaches to the Black Box problem drawn from the existing literature, and may overlook less-reported or forthcoming innovations in ADM. In that sense, it does not claim to be exhaustive and is liable to revision in line with developments in AI governance. Nonetheless, the article is of contemporary relevance, particularly in light of the upcoming Digital India Act, which seeks to regulate AI governance. Since that Act is at its pre-draft stage, the recommendations and principles outlined in this essay are pertinent and should be considered in any future regulation of ADM.

 

Chytanya S. Agarwal is a third-year B.A., LL.B. (Hons.) student at the National Law School of India University (NLSIU), Bengaluru.
