ARTIFICIAL INTELLIGENCE: A debate for granting legal personhood

AI for Human Security


As society has developed, technology has played a crucial role in the evolution of human beings and has ultimately come to occupy a major space in our lives. The machines around us have been developed to the point where they share much of the ability to think and work that humans have, and machine intelligence is expected to overtake human intelligence in the near future. The Theory of Multiple Intelligences defines intelligence as the ability to create an effective product or offer a service that is valued in a culture; a set of skills that make it possible for a person to solve problems in life; and the potential for finding or creating solutions to problems, which involves gathering new knowledge.[1] Artificial intelligence is the intelligence demonstrated by machines, as opposed to the natural intelligence displayed by humans and animals.

As artificial intelligence systems evolved and began to play a larger role in society, distinct from human action, the belief that they should be granted some form of legal personality gained recognition. A general thought underlying the arguments in this regard is that the same status was granted to corporations precisely because of their distinct nature from their members. Corporations throughout the world function under their own names, separate from their owners: they have attained the status of juridical persons and can sue or be sued for their actions, including by humans, irrespective of the actions of their members. Behind these arguments is the idea that as AI systems approach the point of indistinguishability from humans, they should be entitled to a status comparable to that of legal persons. As AI reaches a stage of surpassing humans, the call for a separate legal entity is strengthened to ensure that there is accountability and responsibility for its actions. A separate personality, distinguishable from its creator, therefore grounds the argument for legal recognition of AI. For example, a drone in a combat situation may act on its own, without direction from anyone, killing one of its own side, thereby raising a question of liability for the developer.


Legal recognition for a wide variety of artificial intelligence machinery, such as drones, self-driving cars, and robots like Sophia (granted citizenship by Saudi Arabia), has been a major area of discussion in recent years. This has created a need for a deeper understanding of the arguments presented in favor of granting legal recognition to artificial intelligence.

2.1 Fixation of Responsibility & Accountability

Artificial intelligence entities must be treated as legal persons to make them accountable under the law, just like corporations. The objective behind granting legal personality to corporations was to limit the liability resting on an individual's shoulders, which would in turn motivate people to engage in commercial activities through corporations. By this analogy, conferring legal recognition on AI would enable society to accept and promote its use in daily human activities, for example AI in smart policing, the criminal justice system, and border security. As an AI develops to a position distinct from its developer, it will engage in actions that demand accountability. A developer might have a different intent behind the creation of an AI system than what is projected by the machine's actions, yet under current laws he is the one responsible for those actions, even if he never intended them. For example, suppose an autopilot powered by artificial intelligence perceives the pilot inside the aircraft as a hindrance to its mission and ejects him from the plane to complete the mission more easily. In this case, under current laws, the developer of the autopilot may be held accountable even though he had no intention of killing the pilot. Conferring legal recognition on artificial intelligence would enable the legal framework to hold the machine liable for its own actions, shielding innocent developers and owners from liability arising from acts they never intended, and it would help distinguish the accountability of the machine's developers from that of the people who put it to use. At the same time, it would prevent AI developers and users from being discouraged and would promote further innovation in the field of artificial intelligence.

2.2 Legal Recognition and Empowerment

A primary argument underlying the call for legal recognition of artificial intelligence highlights the plight of women and people from lower castes before they were recognized as 'full humans' and granted legal rights, including the right to vote. The argument illustrates the importance of legal recognition: it is only after some legal recognition is granted that society evolves to accept what was earlier unacceptable. Many legal scholars argue that the primary purpose of law is to further the welfare and interests of humans. We may be the sole beneficiaries of the law, but it would be wrong to say that we are the only ones who must be its subjects. Without a legally backed set of guidelines governing the treatment of artificial general intelligence (AGI) and other non-biological intelligences (NBIs), humans would fail in their moral duty to protect fellow members of society, as AI has developed into an essential part of it.

2.3 Criteria for Legal Recognition

A set of criteria for identifying a 'holder of a legal right' was provided by Christopher D. Stone in his book "Should Trees Have Standing?", where he laid down four points an entity must satisfy in order to gain legal rights:[3]

  1. That 'some public authoritative body' be prepared to give some amount of review to actions that are colorably inconsistent with the 'right' the entity is claiming to be deprived of.
  2. That the entity can institute legal actions at its behest.
  3. That in determining the granting of legal relief, the court must take an injury to it into account.
  4. That relief must run to the benefit of it.

It is argued that, since AI machinery is capable of fulfilling the above criteria, it would be safe to grant it legal recognition. An AI is itself capable of observing the harm inflicted upon it by identifying changes in its operations, and thus should have a right to legal recognition so that it can seek protection and benefit under the law.

2.4 AI and Intellectual Property

All across the world, artificial intelligence systems are actively being used to create works of literature, music, art, and more. This creates a need to identify AI as a legal entity so that due credit can be given for the work it performs, setting aside the question of personality. Machine-based creations should be protected under copyright law to ensure that due credit is given. AI systems now write news reports, compose songs, and paint pictures; these activities generate value and monetary returns. This argument seeks to identify AI based on the work it does rather than on what it is by nature. In a widely accepted and appreciated move, in December 2019 a district court in China held that an article produced by an algorithm could not be copied without permission. The basic reasoning behind this decision was the protection of upfront investment in creative processes: absent such protection, investment would dry up and the supply of creative works would shrink. In April 2020 the European Parliament issued a draft report arguing that AI-generated works could be regarded as 'equivalent' to intellectual works and therefore protected by copyright.[4] Whereas in copyright law the debate is over who owns works produced by AI systems, in patent law the question is whether they can be owned at all.


A basic and general argument presented by experts against granting legal recognition to machines is that it is premature to accept that something which merely facilitates human intellectual capabilities should be granted equal legal or moral standing with a human.

3.1 Transfer of Accountability and Responsibility

Another argument against granting legal recognition to AI, and one that creates a worrisome situation, is that even if legal recognition would serve a gap-filling function, it would also shift responsibility under current laws away from existing legal persons. It would thereby create an incentive to offload responsibility and transfer risk to such electronic persons in order to shield natural persons from liability and accountability. This is a major concern, as escaping liability by pinning one's own acts on another person is itself unlawful and unethical. For example, a developer might build a program intended to harvest data from another device, then transfer liability for the theft onto the machine. In 2012, a program called PredPol was developed by UCLA scientists in association with the Los Angeles Police Department to analyze crime data scientifically and use it to spot patterns of criminal behavior; it was later adopted by more than 60 police departments across the country. PredPol identifies areas in a neighborhood where serious crimes are more likely to occur during a particular period. One argument against the software is that it targets minorities and produces bias. In an instance where an innocent person is accused of a crime and detained, accountability could simply be shifted to the machine, with the artificial intelligence deemed to be at fault.

3.2 Void in Legal Literature

Another reason legal recognition of AI seems a far-fetched dream is the inflexibility of the existing legal framework, which cannot readily accommodate legal rights for AI machinery. AI systems have developed to a stage of indistinguishability and can perform a dynamic set of operations that may be opaque to the human intellect; the existing legal framework thus proves a hindrance. For example, our current legal systems are not equipped to handle cases demanding punishment of machine intelligence systems. Would an order holding one machine guilty imply that all machines of a similar nature, all over the world, should be sentenced? If so, the principle of natural justice that one is 'innocent until proven guilty' would be violated; but if not, other machines of the same nature would remain equally capable of committing the same violation again. A non-biological intelligence system is capable of constructing and rebuilding its own program and code. Technically, during this process the computer instructs itself, becoming the author of the code that directs and guides it. At some point, then, a human author may be unable to determine whether the code possessed by such a device was created at the human author's command, which leaves a legal grey area in such cases. There is also the concern that the owner of a deep-learning system could wrongfully face criminal charges when their intent deviates from the behavior of the system. This necessitates the construction of a distinct legal framework for artificial intelligence.

3.3 Contractual Relationships

Another concern is the ability of an AI to execute and be bound by contracts. While self-enforcing contracts have gained recognition internationally, comprehensive legislation on the subject is still needed.

Under Indian law, only a "legal person" is competent to enter into a valid contract.[5] The general rule thus far has been that an AI may not qualify as a legal person. Hence, a contract entered into by an AI on its own may not be regarded as a valid contract in India. Consequently, steps need to be taken to ensure that technology standards are developed to adequately regulate contracts entered into by AI.

3.4 Artificial Super Intelligence Theory

Artificial superintelligence (ASI) is a hypothetical AI that does not merely mimic or understand human intelligence and behavior, but becomes self-aware and surpasses the capacity of human intelligence and ability. The concept sees AI evolve to be so attuned to human emotions and experiences that it does not just understand them; it develops emotions, needs, beliefs, and desires of its own.

The potential of having such powerful machines at our disposal may seem appealing, but the concept carries a multitude of unknown consequences. If self-aware superintelligent beings came to be, they would be capable of concepts like self-preservation. The impact this would have on humanity, our survival, and our way of life is pure speculation.


The technological world is changing rapidly, which warrants adaptive reforms to the current legal system so that it is capable of resolving the legal issues raised by technological developments in our society. There is substantial legal reasoning in favor of attributing legal personality to artificial intelligence. A grant of legal recognition would ensure that, as AI machinery develops to rival human intellect, there is a mechanism in place so that its actions do not go unseen and unmonitored. Legal recognition would ensure that AI is accepted as a part of society and that malicious machinery meets its fate, whether through destruction or self-modification; it would also ensure that technological development is not divorced from society. It would further promote the development of artificial intelligence, as human inventors would have a sense of safety from liability for unintended actions caused by AI machinery. Initially, when corporations did not enjoy the legal recognition they do today, people were skeptical about engaging in corporate activities because of the uncertain and unlimited nature of the liabilities involved. Once they were shielded from a company's liabilities through the recognition of corporations as separate legal entities, more professionals began working in corporations, leading to greater development in society. The preferred approach, therefore, is to strengthen the legal framework so as to steer AI toward the development of society in a regulated manner.

[1] Howard Gardner, Frames of Mind (1983).

[2] N. Banteka, 'Artificially Intelligent Persons' (2020) 58 Houston Law Review.

[3] Christopher D. Stone, Should Trees Have Standing?

[4] Report on Intellectual Property Rights for the Development of Artificial Intelligence Technologies, Committee on Legal Affairs, European Parliament, 2020/2015(INI).

[5] Section 11, Indian Contract Act, 1872.

