Technological advancement is taking place all over the world, replacing human work with science and technology, and artificial intelligence and robotics are no exception. Countries like Japan and China have adopted robotic technology: robots now work in hotels and serve food to people, replacing human labour. But what happens when a robot commits a breach or harms a human by its act? Who will be held responsible? This is the major question to be addressed.
When we discuss the criminal liability of robots, the following parties come into question:
- AI machines themselves
- Persons who manufacture these AI machines
- Software programmers of AI machines
- Users of these AI machines
Tesla cars, the new phenomenon of self-driving vehicles, present one such difficulty: if a software malfunction harms a person or causes an accident, will the manufacturer or the software developer be held liable? And if a chatbot such as Siri or Alexa leaks someone's private conversation or makes a racist remark, who will be liable?
LEGAL STATUS OF ROBOTS
In a very general sense, we hold a person who is capable of rights and duties liable for his own actions. But are robots considered capable of carrying rights and obligations? The answer is no, because they are merely machines that work as per the instructions fed into them. So the very first consideration is: does the law consider robots legal persons? Many countries currently have no law addressing either AI technology or robotic machinery, so this is really hard to determine.
Let us consider a hypothetical scenario in which it is decided that robots will be treated as legal persons. What will be really hard to decide is how to define them. Hundreds of different technologies exist in the world right now: which would be covered under a definition of robotic liability and which would not, and would those left out require a separate law? Covering it all would be a very time-consuming and lengthy process and, honestly, would create unnecessary terminology that undermines the very objective of fixing robots' liability.
HOW WILL CRIMINAL LIABILITY BE DETERMINED?
A guilty act (actus reus) and a guilty mind (mens rea) are the two essential elements in determining criminal liability. As far as actus reus is concerned, the wrongful act committed by the robot can be treated as the guilty act; but how will the guilty mind be proven, when robots, unlike humans, possess no mind capable of differentiating right from wrong? To prove the robot's guilty mind, either the software programmer or the manufacturer would have to be questioned, which is not really possible practically.
Constant updates in technology (cloud servers, advanced mechanics, speech recognition and the like) update the system automatically, so this can serve as a good defence to any wrongful act committed by a robot. Would we then need a set of principles of legal personhood that keeps pace with these changes, principles which may not suit the technology still to come?
The last point to consider is the harm and damage suffered by the victim at the hands of an AI or robotic entity. The damage may be mental, physical or financial. Would such harm be awarded the same compensation as any other harm under penal law, or would other factors be taken into consideration while awarding compensation for this type of loss?
While deciding criminal liability for any act committed by a human, there are sanctioned, written punishments provided under the penal law of the land. But when considering the criminal liability of robots, an important question is how the punishment to be given to the robots or their makers would be determined; this genuinely needs out-of-the-box suggestions and measures. There are two important questions under this head: firstly, is giving punishment a real necessity? If yes, what kind of punishment should be given?
Some of the unique punishments suggested for this dilemma are:
- Death sentence: physically dismantling and destroying the robot
- Hospital sanction: re-writing the moral instructions and algorithm
- Prison sentence: prohibiting use of robot
- Correctional service: retraining the robot in the correct way of using its algorithm to act
However, a key consideration here is whether the human involvement in the act outweighs the level of automation, or whether the robotic automation operated largely independently of human involvement.
COUNTER CLAIMS
Firstly, no mens rea or guilty intention can be formed behind a wrongful act committed by a robot, and without the presence and proof of a guilty mind, criminal liability is not justified in any way.
Secondly, machines and robots are not made in a way that lets them understand the moral alignment of an act; unlike human minds, they are not capable of differentiating right from wrong. Incorporating a moral element into these machines' algorithms is therefore very difficult, and blaming a machine that does not know whether its act is moral is not justified.
Thirdly, viruses, Trojan horses and other external interference by other machinery can prevent a robot from acting in the intended way, so that the machine could not exercise a machine mind of its own; such cases are difficult to determine.
The question here is not limited to self-driving cars or actual robots; anything that can carry out a particular task on its own can be termed a robotic machine. The situation also extends to unmanned drones or missiles, which may divert to some other target because of a malfunction.
Lastly, the question does not rest only on making software programmers or developers liable for an act committed by a robot. Various people are involved in making a robot work, from manufacturers to users of the robotic machine. It therefore becomes difficult to identify whether the instructions fed into the robotic machine, which caused it to act in a particular way, were given intentionally or unintentionally.
CONCLUSION
It is difficult to determine whether a robot or any kind of AI entity can be held criminally liable. The main consideration, however, is how law-making agencies would deal with any such conflict, which is likely to arise in the coming future. Such incidents have been reported in the past: in 1981 in Japan, a robotic machine employed by a motorcycle company pushed one of its workmen, perceiving him as a threat to its mission, and he died as a result. There was no legislation defining who was liable for the act committed, and hence the matter remained unresolved.
A law-making agency would first have to consider whether applying penal law in such an instance is practically possible. Secondly, given the wide range of technology available, the next task would be determining which individual technologies are covered. This may not be pressed as the need of the hour while fixing the criminal liability of robots, but forthcoming technology will definitely need an answer to it.
Author: Riya, a student of SRM University, Sonipat, Delhi-NCR