DOI: 10.5176/2251-3809_LRPP16.2
Authors: S M Solaiman
Abstract:
Many people believe that robots are destined to take over the world, as they have already begun pervading various aspects of our lives, ranging from fighting enemies to administering justice. Manufacturing industries have long been using robots to supplant human labour, where available and affordable, in order to gain efficiency in production in terms of both quality and cost. A 2015 report of the International Federation of Robotics (IFR) on industrial robots reveals that the use of such machines increased by 29 percent in 2014, with sales of 229,261 units, the highest number ever recorded for a single year. Notably, sales of industrial robots to all industries in China alone rose significantly to 57,096 units in 2014, an increase of 56 percent over sales in the previous year. However, the overall growth was driven mostly by automotive parts suppliers and the electrical/electronics industries. The IFR report adds that while 70 percent of the robots sold went to five countries (China, Japan, the United States, South Korea and Germany), Asia, including Australia and New Zealand, has been the biggest robot market thus far, buying 139,300 industrial robots in 2014, 41 percent more than were supplied to that market in 2013. These statistics are startling and alarming at once: they represent innovation while posing new threats to occupational safety.
Robots have undeniably been helpful in accomplishing many complex tasks in our lives; their services, however, are not risk-free. Alongside the good they have produced, they have caused enormous harm, as is evident from the numerous casualties inflicted by robots at workplaces in many countries. Reportedly, a United States Ford car factory in Michigan witnessed the first recorded death caused by a robot in 1971, followed by the second in 1981, which occurred in Japan at a Kawasaki plant. The number of such incidents is on the rise. Killer robots have allegedly ended at least 26 human lives at the workplace in the United States over the past 30 years, whilst the United Kingdom recorded a total of 77 such accidents in 2005 alone, in which ‘people have been crushed, hit on the head, welded and even had molten aluminium poured over them by robots’. The latest incident took place in Frankfurt, where a robot killed a 22-year-old man at a Volkswagen factory on 29 June 2015.
These accidents have naturally generated a legal question of liability. The problem underlying this question lies in the ambiguity as to who should be blamed for these fatalities, particularly where the robots involved are autonomous to a certain degree. This issue has been a topic of growing debate in academic discourse worldwide in recent times. Some argue that robots should be granted legal personality so as to shift the blame from humans to errant humanoids on account of their autonomy, even though that autonomy is still considered limited in degree; others oppose this view and counter that it would be injudicious to regard these machines as legal persons. The latter even pose the question: should we all surrender to the will of the machines we are creating? They find little scope to grant distinctive legal personhood to such machines; Andrea Bertolini, for instance, asserts that ‘the robot would still be either performing a program or exerting a freedom, which was attributed to it by its producer’. Similarly, Sartor argues that conferring personhood on software agents like robots ‘does not seem at present necessary or even opportune’.
The proponents often compare robots with corporations to substantiate their case for separate personality, and even contend that the objectives of punishment can be achieved by punishing errant robots themselves, though their reasoning does not seem well founded and is thus arguably untenable. The opponents, on the other hand, maintain that it is premature to confer legal personality upon machines, mainly because their functionality depends on human actions such as design, engineering, programming and operation. While certain robots, drones for example, have been purposely created to hit and kill people, and more than 50 countries are currently developing robots capable of functioning autonomously with lethal force, industrial robots obviously are not designed and programmed to that end. The killing of co-workers by industrial robots should therefore be treated as a kind of malfunction of the machine, which may or may not be caused by a fault in the delinquent machine itself, independently of any ‘wrongs’ that might have been committed by its maker (encompassing the designer, programmer, manufacturer and operator) in creating it. Moreover, the purposes of sentencing are unlikely to be attained by punishing an artificial intelligence invented by humans; although some penalties, such as incarceration and even capital punishment, could physically be carried out on a machine, doing so might at most serve specific deterrence, general deterrence apart. Incarcerating or executing a machine to prevent that machine from committing further crimes would matter little to realising the ends of the administration of criminal justice.
The purposes of sentencing, as enacted in the Crimes (Sentencing Procedure) Act 1999 (NSW, Australia), for example, are: ensuring adequate punishment of the offender, preventing crime by deterrence, protecting the community, promoting the rehabilitation of the offender, making the offender accountable for his or her wrongful conduct, denouncing the conduct, and recognising the harm done to the victim.
This paper questions the wisdom of favouring legal personality for robots, for the purposes of criminal liability in particular, by analysing the absence of the elements of legal personhood in these humanoids and the unattainability of the most critical objectives of criminal penalties. While an analogy between corporations and robots appears to be the backbone of the case put to lawmakers for such personality, the present paper strongly argues that these two inanimate human creations are evidently dissimilar in almost every legally relevant respect and are thus incomparable with regard to legal personality. It also underlines that recommending legal personality for robots for criminal liability resembles the practice of punishing animals under criminal law between the 12th and the 18th centuries, even though animals arguably had, and still have, greater autonomy than robots in making independent decisions to harm people, including their most loving owners. Nonetheless, to the best of the present writer’s knowledge, no legal system in the world today recognises the legal personhood of animals, at least for the purposes of criminal liability. Rather, domesticated animals, wildlife aside, are regarded as the personal property of humans, who generally bear responsibility for the wrongdoings of the animals in their care. Similarly, commentators contend that ‘all robot responsibilities are actually human responsibilities, and that today’s product developers and sellers must acknowledge that principle when designing first-generation robots for public consumption.’ This paper subscribes to the latter view of fault-based human responsibility; however, it substantiates its arguments from the perspectives of corporate personality, the attributes of a legal person, and the attainment of the objectives of criminal punishment, and this is where its originality lies.
Keywords: industrial robots, legal personality, workplace safety, criminal liability
