Artificial Intelligence in the Modern Judicial System
Tetiana Drakokhrust1*, Nataliia Martsenko1
1Department of International Law and Migration Policy, West Ukrainian National University, Ternopil, Ukraine
*Correspondence to: Tetiana Drakokhrust, PhD, Professor, Department of International Law and Migration Policy, West Ukrainian National University, Lvivska St. 11a, Ternopil, 46000, Ukraine; Email: Tanya.drakohrust@gmail.com
Abstract
Artificial intelligence (AI) can mitigate the congestion of the judicial system, accelerate court proceedings, and reduce the costs of litigation. Attempts to introduce AI into the domestic justice system therefore require legal norms to regulate the relations arising from the participation of AI as a judicial tool and to adapt existing legislation to new realities and foreign legal experience. The purpose of the present article is to explore the features of the legal regulation of AI in Ukraine and in the world, highlight the pros and cons of the judicial application of AI, analyze foreign legal experience, and outline the issues of its implementation in Ukraine. The study substantiates the appropriateness of using AI in the domestic justice system for certain categories of cases. The use of AI as a tool in modern justice is acceptable in view of the informatization of all spheres of society and the state. At the same time, the judge's opinion in a case should remain the embodiment of professional knowledge of objective reality, supported by legal consciousness and deep theoretical knowledge combined with law-enforcement experience.
Keywords: artificial intelligence, justice, judicial system, electronic court, principles of justice
1 INTRODUCTION
The active development of technologies underlines the urgent need to reform the legal system, and the justice system in particular. The latest technologies, which are rapidly penetrating daily life, demonstrate great potential to improve the way legal issues are handled. Some national judicial systems are already implementing technologies and algorithms that handle large amounts of data efficiently. However, the use of artificial intelligence (AI) in the legal field carries certain risks and controversies.
Mankind has been actively engaged in the development of AI since the 1950s. One of the first definitions of AI was proposed by John McCarthy at the Dartmouth College conference in 1956: a body of knowledge (science) and methods capable of processing data to solve complex computational problems. Theoretically, AI mimics the work of the human brain's biological neural networks through artificial ones to solve a large array of problems. However, AI cannot fully replace humans because it does not automatically simulate emotional traits[1] (e.g., a judge may accept a defendant's substantiated excuse for the late payment of alimony, but a machine makes no concessions). In addition, AI has produced a significant number of confirmed errors and malfunctions. There is sufficient evidence that AI can deliver the results of judicial analysis but cannot ensure a complete judicial analysis process, which prevents judges, the public, and defendants from clearly understanding the reasoning behind a verdict.
In general, the use of AI in justice remains extremely controversial and insufficiently studied across the world. At the same time, AI has huge potential to accelerate data processing, reduce the workload of courts, and improve judicial efficiency[2]. When using AI, it is nevertheless important to adhere to the fundamental principles of the judiciary, such as the rule of law, non-discrimination, impartiality, justice, security, and the comprehensive protection of human rights and freedoms provided by the judiciary[2]. The use of AI will clearly affect the accessibility and efficiency of justice and may ensure a better realization of human rights. Research on the application of AI in the field of justice, on foreign experience, and on the readiness of the judicial system of Ukraine to introduce the latest technologies is therefore clearly relevant and requires theoretical work.
2 FEATURES OF LEGAL REGULATION OF AI IN UKRAINE AND THE WORLD
AI allows machines to learn from human experience and from their own accumulated experience, adapt to the new conditions in which they are applied, perform a variety of tasks, predict events, and optimize resources of various kinds.
A growing body of evidence suggests that such technologies will become a new subject of social relations in the future, which underlies the need for their legal regulation for the benefit of all humankind and the maintenance of world peace. Whereas until now human rights violations involved interaction between people and were regulated by national and international law, with the development of AI another plane of interaction, between human and machine, requires legal regulation.
Objectively, legal regulation in this area is closely related to the stage of development of AI and robotics. Relevant for the current level of development is the classic "developer-owner-user" scheme, in which AI and robotics are the object of social relations[3]. Nevertheless, at the present stage AI raises regulatory issues concerning human rights and discrimination, the protection of personal data, business activities related to the production of robots or software, civil and criminal liability, copyright in works created by AI, cybersecurity, and the use of these technologies in justice.
The normative regulation of AI has been actively discussed since the 1970s and 1980s, but to date only modest progress has been achieved at the official level. Some work on the development of legal standards in this area is underway in East Asia and the United States, while the most practical measures have been taken in the European Union (EU). An important advance was the proposal, in Resolution 2015/2103 of 2017, to consolidate the legal basis for the use of AI and to introduce a pan-European registration system for such machines. This document is one of the first real steps towards the legislative consolidation of standards for the development and use of AI. Although its provisions are entirely recommendatory, they provide a clear understanding of the specific principles that will underlie the rules governing these relationships in the near future. The document proposes to assign a separate registration number to certain categories of robots, to be entered in a special register containing detailed information about each robot, including its manufacturer, owner, and even the terms of compensation in case of damage[4].
Another issue addressed in the Resolution is civil liability for the negative consequences of the use of AI. At this stage of technological development, responsibility for actions that cause harm to third parties cannot rest with the AI itself but only with a person. The Resolution notes that liability may lie with one of the "agents", which include the manufacturer, operator, owner, or user. The most important criterion for establishing such liability is proof that the "agent" was able to foresee and prevent the harmful consequences. It is also proposed to introduce an AI insurance system, similar to that used for transport, under which the "agents" would be required to insure against potential damage from its use[4]. The Resolution emphasizes that these regulations should not affect or limit research, innovation, and development, as such restrictions would be disadvantageous to manufacturing companies[4].
In Ukraine, which is striving to integrate into the EU, a growing number of companies are engaged in technology development involving AI, which is a reason to analyze this document at the national level and to develop legal standards for the regulation of AI. The EU standards in the field of AI will evidently become the basis for the relevant norms of Ukrainian legislation, but at present there is no legal framework for AI in Ukraine.
An analysis of scientific research shows that the penetration of AI into various aspects of the legal system makes it possible to distinguish three categories of entities whose work can be facilitated by these technologies: 1) legal administrators, including judges, legislators, and police; 2) those who use AI in legal practice (lawyers); and 3) those whose activities are regulated by law and who use law to achieve their goals, namely people, businesses, and organizations. Given the evident impact of these technologies on the legal system, the legal regulation of AI will most likely remain at the national level in the near future.
An extremely important step towards the legal regulation of AI within EU law was the adoption in 2018, by the European Commission for the Efficiency of Justice of the Council of Europe, of the Ethical Charter on the use of AI in judicial systems and their environment. The main purpose of this document is to improve the efficiency and quality of justice through algorithms that process court decisions and data in compliance with fundamental human rights and freedoms[5]. In addition, Article 6 of the European Convention for the Protection of Human Rights and Fundamental Freedoms enshrines the right to an independent and impartial tribunal[6]. However, the Convention contains no direct prohibition on the use of AI, nor does it stipulate that only human judges can administer justice. Notably, there is as yet no case law of the European Court of Human Rights on violations of this Article arising from the use of AI in judicial decision-making.
Ukrainian law details this provision of the Convention in Article 127 of the Constitution of Ukraine, which provides that justice is administered by judges. The same legal position can also be found in the Constitution of Germany. Thus, AI cannot replace judges; however, nothing prohibits optimizing the work of judges and courts through the involvement of AI. The growing complexity of the technology nevertheless requires a transition to a new, more complex regulatory scheme. The regulation of the socialization of AI demonstrates the potential to move from perceiving it as an object of relations to endowing it with rights and responsibilities: AI may be granted the status of an "electronic person" as an independent participant in public relations[7]. As a result, two new types of justice system may emerge alongside the empowerment and responsibility of AI: "mixed justice" and "AI justice". Mixed justice would regulate the relationship between human and AI, while "AI justice" would arise only between machines. Such a decision may carry dangers, so a balanced and unhurried approach is required[8]. Moreover, the status of AI in civil law must be addressed at the subject-object level, as this will determine its understanding and perception in other areas of law.
3 AI IN THE JUSTICE SYSTEM OF FOREIGN COUNTRIES
Despite continued controversy and ambiguity regarding the use of AI in the judicial field, some countries are already using such technologies very actively in this area. The introduction of e-courts began with the digitization of documents and the transition of the judicial system from paper to electronic media. The leaders of this process were Asian countries and regions such as Korea, Hong Kong, and Singapore. Currently, there are no paper documents in the judicial systems of these jurisdictions. Even an order for enforcement or recovery is sent by the court directly to the enforcement service in electronic form, with copies sent to the debtor and the creditor. There are also law firms that are already developing programs to predict the outcome of a lawsuit or to predict police activity.
In the United States, available information about past crimes is actively analyzed by AI to predict possible new crimes, including their likelihood and potential location. Accordingly, the police can use the AI results to decide which areas to patrol, whom to search, and whom to detain[9].
As for forecasting the outcome of lawsuits, there is a great need in this area to facilitate the work of lawyers. Such systems are still being tested and validated.
For example, researchers at University College London and the University of Sheffield created a "computer judge" that analyzes the text of a case using a machine-learning algorithm. To develop this algorithm, the team allowed the "computer judge" to scan the published decisions of the European Court of Human Rights in 584 cases concerning torture, degrading treatment, and the right to a fair trial. As a result, the "e-judge" predicted the Court's verdicts with an accuracy of 79%[10].
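To make the underlying mechanism more tangible, the sketch below shows, in Python, the general shape of such a judgment-prediction pipeline: the texts of past decisions are turned into n-gram features and a linear classifier learns to predict whether a violation was found, with accuracy estimated on held-out cases. This is a minimal illustration only; the texts, labels, and modelling choices are hypothetical placeholders, not the UCL/Sheffield team's actual data or code.

```python
# Minimal sketch of a judgment-prediction pipeline (hypothetical data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# Placeholder corpus: short stand-ins for published judgment texts and a
# binary label indicating whether a violation was found.
judgment_texts = [
    "applicant alleged ill treatment during detention and lack of investigation",
    "complaint concerned length of civil proceedings before domestic courts",
    "applicant claimed degrading conditions of detention in remand prison",
    "dispute over property rights resolved by domestic courts without delay",
    "alleged torture by police officers during questioning of the applicant",
    "fair hearing complaint rejected as manifestly ill founded by the court",
    "inhuman treatment of the applicant in a psychiatric institution alleged",
    "domestic proceedings were concluded within a reasonable time overall",
]
violation_found = [1, 0, 1, 0, 1, 0, 1, 0]  # placeholder outcome labels

# Represent each judgment as word n-gram frequencies.
features = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(judgment_texts)

# Hold out part of the cases to estimate predictive accuracy.
X_train, X_test, y_train, y_test = train_test_split(
    features, violation_found, test_size=0.25,
    stratify=violation_found, random_state=0)

classifier = LinearSVC().fit(X_train, y_train)           # learn from past cases
accuracy = accuracy_score(y_test, classifier.predict(X_test))
print(f"Accuracy on held-out cases: {accuracy:.0%}")      # cf. the reported 79%
```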
Interestingly, a world flagship in the use of AI is Brazil, since the country generates about 20 million disputes a year, compared with about 4 million in Ukraine. This large volume of disputes led to the introduction of e-courts, followed by an AI system that is used in every court.
When an appellate complaint is received by a judge in Brazil, the system analyzes it, searches the relevant case law by keywords, and proposes a draft decision that takes the existing legal positions into account. According to statistics, one Brazilian appellate judge receives about 500 appeals per month, for which the aid of AI is crucial, whereas a Ukrainian judge receives about 590 appeals over a whole year[10].
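The retrieval step described above can be illustrated with a deliberately simplified sketch: prior decisions are ranked by keyword overlap with the incoming complaint, and the best matches are offered as the raw material for a draft decision that a human judge then reviews. All case identifiers, summaries, and the scoring rule below are hypothetical; the Brazilian system's actual implementation is certainly more sophisticated and is not reproduced here.

```python
# Hypothetical sketch of keyword-based retrieval of relevant case law.
from dataclasses import dataclass

@dataclass
class Precedent:
    case_id: str
    summary: str

def extract_keywords(text: str) -> set[str]:
    """Naive keyword extraction: lowercase words longer than three letters."""
    return {word for word in text.lower().split() if len(word) > 3}

def rank_precedents(complaint: str, precedents: list[Precedent], top_n: int = 3):
    """Rank prior decisions by keyword overlap with the appellate complaint."""
    query = extract_keywords(complaint)
    scored = [(len(query & extract_keywords(p.summary)), p) for p in precedents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for score, p in scored[:top_n] if score > 0]

# Hypothetical usage: the top-ranked precedents would feed a draft decision.
precedents = [
    Precedent("AP-101", "consumer contract dispute over defective goods refund"),
    Precedent("AP-202", "employment dismissal without notice and severance pay"),
    Precedent("AP-303", "refund claim for defective consumer goods under warranty"),
]
complaint = "Appellant seeks refund for defective goods under a consumer contract"
for precedent in rank_precedents(complaint, precedents):
    print(precedent.case_id, "-", precedent.summary)
```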
In Estonia, AI systems also replace judges at the first-instance level in making decisions. This technology is currently trusted to resolve simple disputes, such as accidents, divorces, utility bills, and the like. A judge is present only at the appellate stage.
With the aid of AI in Estonia, the number of cases in which a judge must be personally involved is 10%-15% of the total. Judges can thus allocate their time to important cases that raise genuine legal issues, contain legal uncertainty, or relate to new social relations. Estonian judges do not spend time resolving disputes that exist for the sake of disputes, in which the procedure is deliberately delayed or the outcome is obvious[11].
At present, the Estonian Ministry of Justice is testing another ambitious project. A team of Estonian experts is working on an algorithm that allows judgments to be made in disputes of up to 7,000 euros without the involvement of a judge. In this project, the parties upload documents and all the necessary information, and the AI renders the decision, which can be appealed. A full review of the decision by a judge on appeal is intended to ensure compliance with Article 6 of the Convention for the Protection of Human Rights and Fundamental Freedoms.
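The routing logic of such a pilot can be expressed as a simple rule: small money claims with complete documentation are decided automatically, while appeals and everything else go before a human judge. In the sketch below, only the 7,000-euro threshold restates a figure from the text; the field names and the completeness check are assumptions added purely for illustration.

```python
# Hypothetical sketch of routing disputes between automated and human adjudication.
from dataclasses import dataclass

SMALL_CLAIM_LIMIT_EUR = 7_000  # threshold mentioned for the Estonian pilot

@dataclass
class Dispute:
    claim_amount_eur: float
    documents_complete: bool
    is_appeal: bool = False

def route(dispute: Dispute) -> str:
    """Decide whether a dispute may be handled automatically."""
    if dispute.is_appeal:
        # Appeals are always reviewed in full by a human judge (cf. Article 6 ECHR).
        return "human judge (appellate review)"
    if dispute.claim_amount_eur <= SMALL_CLAIM_LIMIT_EUR and dispute.documents_complete:
        return "automated decision (appealable)"
    return "human judge (first instance)"

print(route(Dispute(claim_amount_eur=3_500, documents_complete=True)))    # automated
print(route(Dispute(claim_amount_eur=12_000, documents_complete=True)))   # human judge
print(route(Dispute(claim_amount_eur=3_500, documents_complete=True, is_appeal=True)))
```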
The United States is one of the leading users of AI in the legal field, applying these technologies most often in civil and criminal cases. An algorithm developed by researchers at Stanford University assists judges in selecting a measure of restraint for a defendant: bail or detention. After reviewing about 100,000 procedural documents related to the choice of precautionary measures, the developers of the algorithm found that some American judges grant bail in 90% of cases, while others do so in only about 50%. The program provides an impartial assessment of the risks of all defendants and reduces the number of detainees without endangering the public[12].
Another US product is the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) software, which assesses the risk of re-offending by a person whom a judge must sentence. COMPAS processes data provided by defendants through questionnaires; if a defendant refuses this procedure, the program relies on the available background information. The data may be dynamic, that is, variables that can change (e.g., drug addiction, professional status, affiliation with a criminal group), or static, remaining unchanged (gender, age, criminal history).
Questions in the questionnaires include "How often did you get into fights when you were at school?", "How many of your friends/acquaintances have ever been arrested?", "How old were you when your parents divorced, if it happened?", "Does a hungry person have the right to steal?", and so on. Based on their answers, defendants are assigned to risk groups on a scale from 1 to 10 (1-4: low risk; 5-7: medium; 8-10: high)[13], and the judge renders a verdict taking into account the risk assessment produced by the AI.
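The banding of scores just described can be expressed in a few lines of code. The mapping from a 1-10 score to the low/medium/high groups simply restates the ranges cited above; how COMPAS actually computes the underlying score from static and dynamic factors is proprietary, so the toy scoring function below is a purely hypothetical illustration.

```python
# Sketch of the 1-10 risk bands described in the text (hypothetical scoring).
def risk_band(score: int) -> str:
    """Map a 1-10 risk score to the risk group reported to the judge."""
    if not 1 <= score <= 10:
        raise ValueError("risk score must be between 1 and 10")
    if score <= 4:
        return "low"
    if score <= 7:
        return "medium"
    return "high"

def toy_score(age: int, prior_convictions: int, employed: bool) -> int:
    """Hypothetical combination of static and dynamic factors; not COMPAS."""
    score = 1
    score += min(prior_convictions, 5)   # static factor: criminal history
    score += 2 if age < 25 else 0        # static factor: age
    score += 2 if not employed else 0    # dynamic factor: professional status
    return min(score, 10)

score = toy_score(age=22, prior_convictions=3, employed=False)
print(score, risk_band(score))  # e.g. 8 high
```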
China is a direct competitor for the title of world leader in the use of AI. Since 2017, online courts held on WeChat (a Chinese social media platform) have gradually been put into use, in which videoconferencing and AI are used instead of a courtroom and a judge. The Hangzhou Internet Court was the first digital court in China, and this form of court has also been introduced in Beijing and Guangzhou.
To date, 119,000 cases have been heard and 88,000 processed in these three online courts. Such e-courts have different powers from the corresponding courts in Estonia, as a Chinese online court may hear copyright disputes, commercial disputes on the Internet, and infringements in the field of e-commerce.
In European countries, the use of AI algorithms remains largely a private-sector initiative and is rarely integrated into public policy. In some cases, criminal liability has even been introduced for its use: in France, there is a penalty for case-law analytics that make it possible to anticipate the decision-making of a particular judge.
These amendments to the law were adopted under pressure from the judiciary, which argued that using court decisions to analyze the behavior of a particular judge violates that judge's personal rights. Nevertheless, in 2017 two courts of appeal in France tested an AI program to calculate severance pay in cases of dismissal without sufficient grounds. The algorithm did not prove useful to French judges, however, because it excluded the amounts agreed in out-of-court settlements and other important details of the cases[14].
Nonetheless, AI algorithms are used in the administration of justice in some European countries. The Netherlands has operated a private e-court since 2010, and decisions in individual cases (including debt collection) have been made on the basis of AI since 2011. A judge intervenes only when difficult issues arise. Decisions of the electronic "judge" are enforced in the general order, with the enforcement officer personally entering the data into the system and transferring the amounts. To date, no errors in the e-judge's decisions have been found in the Netherlands, although errors have repeatedly been reported at the data re-entry stage[15].
The risk that AI and humans will reach different decisions in identical court cases is real and poses a threat to the stability of judicial practice. Such discrepancies are largely attributable to the uncertainty of court decisions based on artificial intelligence. Human decision-making processes are poorly understood and hard to predict, whereas an AI system operates independently by analyzing the data provided by humans, so its decisions may also lack impartiality and objectivity.
In addition, the differences between the algorithms manifest themselves in the internal decision-making process, even though the resulting decisions are uniform in structure and legal basis, given the legal requirements for court decisions as documents. Even when an AI produces the same outcome as a human judge, the decision-making process is distinctly different; therefore, AI is not a complete alternative in the judiciary.
Humans can think more farsightedly and assess all the factors and risks of the impact of AI on the legal and judicial systems. This algorithmic difference leads to an understanding of AI at the level of a judge's assistant: the judge may rely on AI analysis in making court decisions but remains fully responsible for the justice of the judgment.
4 ADVANTAGES AND DISADVANTAGES OF USING AI IN JUSTICE
“The main problem is that when we look at AI technology, we tend to think that they can solve all our problems, that they do not have the bias inherent in human judgment. But it's not really that simple” (Joshua Franco, Amnesty Tech's Deputy Director at Amnesty International). Indeed, AI has potential risks and issues to be addressed[16].
One of the main risks posed by AI is the risk of changing the hierarchy of norms: such software gives prominence to the rule of court precedent, which should never override the rules written by the legislator.
The next significant risk is the effect of peer influence on the judge. AI tools provide an in-depth analysis of previous case law, and a judge will be able to see that 90% of his or her colleagues have made the same decision in similar cases. The judge may feel pressured to do the same, or may feel relieved of responsibility by following the majority rather than making a personal decision. That is why the European Commission is concerned that "with this type of administration of justice, judges' decisions may be biased due to inertia"[3].
The independence and freedom of the judge should not be compromised by software: "judicial decision-making tools should be developed and perceived as an aid to decision-making that facilitates the work of the judge rather than as a restriction." "Respect for the principle of independence requires that everyone can, and therefore should, make a personal decision as a result of personal reasonable motivation, regardless of the computer tool"[17]. The judge must remain the main controlling entity at all judicial levels.
Another risk of introducing AI is that it may block the evolution and improvement of the law. Close attention should be paid to the generally accepted stereotypes reproduced by AI, since these algorithms work on the basis of previous situations, which is a real limitation on development. If judges allow themselves to be guided by the results of these programs, the adaptation of the law to society will be significantly slowed. In addition, in continental law systems (which include Ukraine), if the underlying legislative text changes, all case law based on that text loses its force.
Nevertheless, the introduction of information and telecommunication technologies into the judiciary is intended to be a powerful tool for effective justice and yields obvious advantages, including reduced congestion in the judicial system, faster proceedings, and lower costs.
Proponents of replacing judges with AI argue that AI requires no rest or salary, is impartial and incorruptible, and resolves cases exceedingly quickly. Such technology can handle complex tasks in litigation, including ensuring fast and easy communication among all participants in the process, organizing and explaining to users various databases and useful legal information about the exercise of their rights and obligations, providing access to justice for all groups of people, reducing the cost of litigation, and making the rules for participating in the trial of any case transparent and understandable[18]. For example, with digitalization, the parties will be able to obtain a recording of a hearing immediately instead of waiting for the written minutes. However, e-justice must be guided by ethical principles that guarantee full respect for human rights.
Theoretically, AI, being completely free of the defects of the human psyche, is able to provide an objective analysis of a complex set of facts and produce rational decisions[19]. The absence of emotions in AI could result in a fair and independent judge. For example, AI judges would never act under the physical or psychological pressure that human judges may face, since any information is merely symbols to them.
When assessing the innovative potential of the traditional tools and of the AI used in judicial practice, the reasonableness of the results obtained and the prospect of their impact on future judicial practice in general should be weighed. Undoubtedly, AI-generated rulings and court decisions made by human judges must follow the same principles of justice and conform to the requirements of the law. Applying AI technologies on the basis of a single law, the same for AI and for the judge, will serve as a basis for the stability of the legal system and will guarantee justice and legality as the foundation of justice. At present, AI in the judiciary is used as a tool for judges; in some countries, only minor cases can be handled by AI, which indicates caution in its use in the field of justice. In our opinion, such an approach is quite justified given the lack of transparency of AI algorithms and the importance of justice in the democratic world in general. Therefore, traditional approaches must be organically combined with modern technologies to achieve the goals of justice.
Thus, to have a positive effect on the judiciary, the AI-assisted justice process should be based on the following principles: 1) inadmissibility of the abuse of procedural rights; 2) operation of AI in a way that minimizes discrimination; 3) a high-quality and safe structure for AI machine learning; 4) transparent and impartial training of AI and organization of justice through it; and 5) an appropriate degree of autonomy in the use of AI, so that users can best implement their procedural functions.
5 PROBLEMS OF AI IMPLEMENTATION IN NATIONAL JUSTICE
The introduction of AI into national justice remains a problematic issue. A precondition for its implementation is the launch of the single judicial information and telecommunication system (SJITS), which provides for completely paperless records through the use of electronic digital signatures and electronic document management, creates personal accounts for judges in which any procedural action can be performed, improves the single state register of court decisions, and adds a system of hyperlinks to the legal positions of the Supreme Court. This would allow an algorithm to select decisions relevant to a specific case, in line with the positions of the Supreme Court, and to assemble them without human intervention[19].
In Ukraine, the e-court has already been introduced to some extent as a subsystem of the SJITS, through which it is possible to independently file claims from an exhaustive list, monitor the progress of a case, file procedural documents, pay court fees, and track claims filed against oneself. However, difficulties arise for both users and judges, who often see documents returned because of an improper format. These issues are not unique, as digitalization cannot be implemented at such a rapid pace: it requires other important steps. One of them is solving the problem of effective interaction with users. After all, the use of information and telecommunication technologies in the judicial system also implies an important role for people at every stage of such use, from the development of the technology's concept to the control of its application. It is therefore important to develop a user-friendly and understandable interface, which has repeatedly had to be changed because users did not understand it.
A very important aspect of the digitalization of Ukraine's judicial system is the state's ability to protect documents from forgery and from the use of copies for illegal purposes. Another caveat for the use of AI in Ukraine is the possibility of cyberattacks (for example, the inadvertent activation of malware may lead to the shutdown of the system and thus to the failure of a judicial decision)[19]. Despite the numerous risks, positive experience with the introduction of AI in justice shows that its judicial application is highly efficient. Another obstacle to the introduction of AI in Ukraine is that precedent is not central to the hierarchy of sources of law. Perhaps this is why such programs are usually more developed in the Anglo-Saxon countries (the United States, the United Kingdom) than in countries of continental law (France, Germany, Ukraine).
The solution to these issues requires the involvement of the state. Implementing an AI system requires many resources but would ultimately save a great deal of money, if only through the abandonment of paper and postal services. In addition, decision-making would be faster and of higher quality, thereby ensuring the unity of judicial practice. The problem of communication between judges, courts, the legal community, the judiciary, and society as a whole would also be addressed. At present, employees of Ukrainian courts spend a great deal of time on mechanical work of little significance. Experience has shown that incorporating AI into the justice system would greatly improve its efficiency.
6 CONCLUSION
Undoubtedly, changes are inevitable, and the role of judges in the administration of justice involving artificial intelligence remains extremely important. The judge in a case should be the embodiment of a professional level of knowledge of objective reality, supported by a proper professional level of consciousness and deep theoretical and legal knowledge combined with experience in law enforcement. Nevertheless, AI is extremely necessary as a tool in modern justice given the informatization of all spheres of society and the state.
Furthermore, justice involving AI must be implemented in compliance with certain rules. First and foremost, AI algorithms should be transparent and non-deceptive. "If the justice of artificial intelligence does not want to be ultimately seen as divination, mystery and intimidation, or as a new instrument of bribery, it must reveal its algorithms, not hide behind trade secrets," warns Antoine Garapon. It is therefore important to ensure procedural fairness and adversarial proceedings.
In addition, AI systems must operate transparently and fairly and be certified by experts independent of the operator. The processing of judicial data by AI systems can improve the transparency of the administration of justice, in particular by improving the predictability of the application of the law and the consistency of judicial practice. Such processing should be carried out in compliance with the fundamental rights guaranteed by international instruments, in particular the European Convention on Human Rights and the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, as well as national legislation protecting the rights and freedoms of litigants.
The use of AI in the administration of justice necessitates the definition of boundaries that must not be crossed. It is important that humans carefully manage the degree of AI autonomy, because these technical means are designed for human convenience rather than as a competitor or an equal subject of social relations, which would violate the principle that "the law is created by people for people".
Acknowledgements
Not applicable.
Conflicts of Interest
The authors state that there are no conflicts of interest regarding the writing and publication of the article.
Author Contribution
Both Martsenko N and Drakokhrust T contributed to the manuscript and approved the final version.
Abbreviation List
AI, Artificial intelligence
COMPAS, Correctional offender management profiling for alternative sanctions
EU, European Union
SJITS, Single judicial information and telecommunication system
References
[1] Karchevskii VP. Man and robot: Development of learning processes. Artif Intell, 2012; 4: 43-52.
[2] Bondarenko A. Artificial intelligence against humanity: Musk, Hawking and Wozniak warn that it's time to stop [In Russian]. Accessed February 9, 2022. Available at http://ain.ua/2015/07/27/593911
[3] Mindell D. Rise of the machines is cancelled! Myths about robotics [In Russian]. Accessed February 9, 2022. Available at http://www.litres.ru/pages/biblio_book/?art=23172387
[4] European Parliament. Resolution of the European Parliament [In Russian]. Accessed February 9, 2022. Available at https://robopravo.ru/riezoliutsiia_ies
[5] European Commission for the Efficiency of Justice (CEPEJ). European ethical charter on the use of artificial intelligence in judicial systems and their environment: The 31st Plenary Session of the CEPEJ, Strasbourg, France, 3-4 December 2018. Strasbourg: Judicial Systems and Their Environment; 2018.
[6] McBride D. Convention on the protection of human rights and fundamental freedoms [In Ukrainian]. Accessed February 9, 2022. Available at https://zakon.rada.gov.ua/laws/show/995_004
[7] Ponkіn ІV, Red'kіna AІ. Artificial intelligence from the point of view of law. RUDN J Law, 2018; 22: 91-109. DOI: 10.22363/2313-2337-2018-22-1-91-109
[8] Martsenko NS. Legal regime of artificial intelligence in civil law [In Ukrainian]. Accessed February 9, 2022. Available at http://appj.tneu.edu.ua/index.php/apl/article/viewFile/797/785
[9] Calo R. Robots in American law. Legal Stud Res Pap, 2016; 4.
[10] Asaro P. Robots and responsibility from a legal perspective: IEEE Conference on Robotics and Automation, Workshop on Roboethics, Rome, Italy, 10-14 April 2007.
[11] Consultative Council of European Judges. Justice and information technologies (IT). Accessed February 9, 2022. Available at https://rm.coe.int/168074816b
[12] Moor J. The Dartmouth college artificial intelligence conference: The next 50 years. AI Mag, 2006; 27: 87. DOI: 10.1609/aimag.v27i4.1911
[13] Hitis VB, Hudkova K. Methods of artificial intelligence: Kramatorsk. DDMA, 2018.
[14] Martsenko NS. Determining the place of artificial intelligence in civil law. Studia Prawnoustrojowe, 2020; 47: 157-174. DOI: 10.31648/sp.5279
[15] Tyshkivskyi SL. What is an electronic court and why is it needed? [In Ukrainian]. Accessed February 9, 2022. Available at https://vn.20minut.ua/Podii/scho-take-elektronniy-sud-i-dlya-chogo-vin-potriben-novini-kompaniy-10814948.html
[16] Verkhovna Rada of Ukraine. Convention for the protection of individuals with regard to automated processing of personal data [In Ukrainian]. Accessed February 9, 2022. Available at https://zakon.rada.gov.ua/laws/show/994_326#Text
[17] Law Enforcement Office. The unprecedented decision of the courts in Ukraine has become a national tradition [In Ukrainian]. Accessed February 9, 2022. Available at http://helsinki.org.ua/index.php?id=1418036630
[18] Pekarchuk VM, Pushenko NV. The need for an electronic court is a reminder to improve the efficiency of litigation in Ukraine. Bull LDUVS E.O. Didorenko, 2017; 1: 81-88.
[19] Laryna OS, Ovchynskyi VS. Artificial intelligence. Ethics Law, 2020; 192.
Copyright ©2022 The Author(s). This open-access article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, sharing, adaptation, distribution, and reproduction in any medium, provided the original work is properly cited.