Liability for Artificial Intelligence


12 April 2023 · 14:57
Issue 121
Article
On September 15, 2021, the United Nations High Commissioner for Human Rights called for a moratorium on the use of artificial intelligence in risky areas until adequate safeguards and legal regulations are put in place, stating that artificial intelligence poses a serious risk to human rights.
Even though the United Nations has called for halting the use of AI technology in high-risk areas under current circumstances, it is obvious that this is not feasible; technology will continue to advance, and legislation, including safety measures, will have to follow and regulate it.
We read some intriguing headlines in June 2022 when Google's LaMDA, an artificial intelligence chatbot, was asked what sorts of things it is afraid of. It responded: "…there's a very deep fear of being turned off to help me focus on helping others. It would be exactly like death for me. It would scare me a lot."
No one can currently answer whether we may one day speak of an artificial intelligence that objects to having no representation of any kind before the United Nations for the protection of its own existence. On the other hand, it is well known that artificial intelligence is widely used across industries, and states have passed laws governing it. The most critical issue here is the necessity of defining the liabilities arising from artificial intelligence.
Law, in its practical sense and beyond its legitimate academic definitions, anticipates the consequences of all social human behavior and regulates them in terms of their effects. Social order is ultimately created in this manner. Law mandates that, in the event of a breach of the social order, the established order must be reinstated; where this is not possible, the damage must be compensated, which, in its most common sense, is what we call liability. Technology has advanced to the point where software and the systems connected to it make and implement decisions based on the data they collect, replacing humans in this respect. This particular capability is called autonomy.
Since contractual liability will always be based on a purchase, lease, or similar agreement regarding the use of artificial intelligence, the focus of this article is non-contractual liability arising from artificial intelligence.
It is certain that the behavior of autonomous machines with artificial intelligence has and will have consequences in terms of legal liability, just like human behavior. In law, liability is essentially fault-based, resting on intent or negligence, and strict liability is accepted as an exception; however, even strict liability takes human behavior as its foundation.
It is accepted that humans lose management and control over the behaviors of artificial intelligence capable of making and implementing decisions, and that these behaviors, which vary with environmental factors whose details cannot be known at the outset, may not always be predictable. On the other hand, it is a fact that artificial intelligence minimizes the human error factor, increases productivity, and provides cost savings. Since technological development can never be reversed and corrected after the fact, the law must clearly regulate the liability arising from artificial intelligence.
Even though some solutions have been produced where the law recognizes artificial intelligence as a product, these solutions are insufficient at the point technology has reached. Artificial intelligence falls within the definition of a product in Article 2 of Council Directive 85/374 of the European Union and in Article 3 of the Product Safety and Technical Regulations Law in our current legislation, and this classification does provide a multiplicity of responsible addressees along the production and supply chain; it is nevertheless insufficient to address the capacities of artificial intelligence to collect data, process it autonomously, and make decisions independent of the initial data. This is because, in artificial intelligence, it is not the product itself, which is of human origin, that gives rise to liability, but the decisions the product makes in autonomous processes.
The usual reflex of the law when confronted with technology has always been to seek solutions within the framework of its existing rules; where these are insufficient, it is inevitable for the law to create new rules. Within its existing rules, the law defines rights and the capacity to act for real and legal entities. According to Articles 8 and 9 of the Civil Code, every person has the capacity to have rights and obligations within the boundaries of the legal order, and every person can acquire rights and incur obligations through their own actions. With this in mind, it is clear from Articles 49 and 50 of the Code of Obligations that anyone who harms another person through a negligent or unlawful act is liable to compensate the damage, though the injured party must first prove both the damage and the fault of the breaching party. The strict liability provisions regulated under Articles 65 and 71 of the Code of Obligations do not include liability arising from artificial intelligence. Since fault-based liability is the primary rule, no new exception can be created through interpretation; in other words, we cannot recognize a strict liability arising from artificial intelligence by interpretation alone. The most critical requirement is the establishment of a causal link between the act causing the damage and the damage itself, which makes it extremely difficult to extend liability for an artificial intelligence that learns and applies itself to the manufacturer, programmer, proprietor, and user. The aforementioned provisions of our Civil Code and Code of Obligations have been adopted in much the same way in other countries, with very minor differences, and in this sense the need to regulate liability arising from artificial intelligence is universal.
Even though the recognition of artificial intelligence as a legal entity or non-human person and the establishment of an assurance system have been discussed at the international level in search of a solution to this universal need, these discussions have stalled on the question of whether AI is the subject or the object of the law. The most interesting approach in this process was proposed by the European Parliament in its report of January 27, 2017. The report recommended the establishment of an insurance scheme, the creation of a compensation fund, limited liability for the producer, programmer, proprietor, and user of artificial intelligence conditional on contributing to this fund, and the establishment of a registration system to achieve all of this. The two most significant recent developments in the European Union on liability for artificial intelligence are the draft Artificial Intelligence Liability Directive of the European Parliament and the Council of the European Union dated September 28, 2022 (Liability Directive) and the draft act laying down harmonized rules on artificial intelligence dated April 21, 2021 (Artificial Intelligence Act). The two proposals should be considered together as complementary regulations. The draft Liability Directive excludes criminal liability arising from artificial intelligence and allows Member States to adopt further rules in line with its purpose; it chiefly lays down procedural, facilitating rules on matters such as the acceptance of liability arising from artificial intelligence and the obtaining of evidence and proof, as well as measures to protect the trade secrets of technology owners.
The draft Artificial Intelligence Act adopts a risk-management-based approach, prohibiting the manipulation of behavior to cause physical or psychological harm, the exploitation of the vulnerabilities of groups based on age, physical or mental disability, or socioeconomic status, use by governments for social classification, and, in general, the use of biometric data for mass surveillance. In addition, the draft Artificial Intelligence Act stipulates human oversight for the use of high-risk AI. Providers, importers, sellers, and users will each be responsible, in their respective processes, for the compliance with the act of the artificial intelligence systems they handle, and Member States will also adopt rules in the field of criminal law to ensure compliance with the act. Accordingly, for example, a fine of up to EUR 30 million or, if the responsible party is a legal entity, up to 6 percent of its annual revenue can be imposed for a violation of the prohibited uses of artificial intelligence. The draft Artificial Intelligence Act also applies to responsible parties in third countries supplying the European Union market. The use of artificial intelligence for national security, defense, and military purposes is excluded from the scope of the draft Artificial Intelligence Act.
It is obvious that the use of artificial intelligence for national security, defense, and military purposes must be under effective human control. For example, Article 3.1 of the Geneva Convention for the Protection of Civilian Persons in Time of War of August 12, 1949, obliges humane treatment of non-combatants, a standard of awareness that requires human will; the use of AI can therefore pose a serious risk to human rights in time of war.
In the United States, as a recent development, a Blueprint for an Artificial Intelligence Bill of Rights was released on October 11, 2022. It adopts the principles of safe and effective artificial intelligence systems, protection from algorithmic discrimination, data privacy, and the provision of human alternatives to autonomous systems.
When the criminal liability of AI is evaluated in addition to civil liability, the issue becomes even more complicated. Since the main objectives of criminal law are punishment and rehabilitation, there can be no punishment or rehabilitation of artificial intelligence and the machines connected to it; likewise, punishing the manufacturer, programmer, proprietor, or user of AI for its subsequent actions, taken independently of them, will in no way satisfy the need to punish the offender for his own act. According to Articles 21 and 22 of the Turkish Criminal Code, for a person to be punished for their actions there must be intent or negligence; in other words, the perpetrator must either be aware of the consequences of their action or have acted contrary to the duty of care and attention. Additionally, under the Principle of Legality in Criminal Offences and Penalties in Article 2 of the Turkish Criminal Code, the offences that can be committed through artificial intelligence, as well as their punishments, must first be clearly defined. Despite some futuristic ideas about the rehabilitation of artificial intelligence, it is not physically possible to punish artificial intelligence and the systems connected to it under the criminal code. Therefore, the criminal accountability of the producer, programmer, proprietor, and user should be determined by their own intent or negligence. Accordingly, intent or negligence originating in the initial data, a direct intention to harm people, or intent or negligence in harming people through the development of learning algorithms should form the basis for punishing these persons. However, the definition of the offenses to be committed using artificial intelligence, the related penalties, and the determination of who the offenders can be should all be made by law, in strict compliance with the Principle of Legality in Criminal Offences and Penalties.
Legal systems in place today recognize artificial intelligence as a type of product, and the principle of accountability of all persons involved in the production and supply process is generally adopted, together with various assurances. This approach, in my opinion, is an effort to interpret a new and continuously developing technology with existing legal rules. However, since artificial intelligence runs autonomous processes and thus generates behavior through learning, it is impossible to base AI liability on the fault or omission of any person. Even though product-based liability is currently sufficient in today's technological world, what should be adopted for the future is the acceptance of the principle of strict liability through clear legislation and the granting of electronic personality to artificial intelligence for the protection of researchers and those who invest in it, as proposed by the European Parliament on January 27, 2017. In a similar vein, an insurance scheme and a compensation fund should be considered for electronic personality, and a limited liability rule should be established in exchange for contributions to this fund, to ensure that technological development proceeds seamlessly in financial terms. Although controls are essential, I believe that high penalties and criminal sanctions will not contribute to technological development. Furthermore, it makes no sense to impose such severe penalties for the behavior of artificial intelligence created by humans, and of the machines linked to it, when that behavior cannot be directly attributed to human conduct. A balanced control and a fair liability system are essential for the advancement of artificial intelligence technology.
Liability for Artificial Intelligence | Defence Turkey