
PUBLICATIONS

KICJ Research Reports

Strengthening Korean Criminal Justice System Applying Forensic Science (Ⅷ): Artificial Intelligence Technology
  • Language: Korean
  • Authors: Jeeyoung Yun, Hankyun Kim, Donggeun Gam, Seongdon Kim
  • ISBN: 979-11-87160-67-0
  • Date: December 01, 2017
  • Hit: 451

Abstract

With the monumental 2016 match between AlphaGo, an artificial intelligence (AI) program developed by Google DeepMind, and the world Go champion Lee Sedol, the public's interest in artificial intelligence has grown, and many people anticipate that, in various fields, machines equipped with AI will replace human labour in the era of the 4th Industrial Revolution. No clear definition of AI has been established, but it is widely understood as a computer program that realizes human cognitive functions, such as perception, reasoning, the ability to learn, and the ability to use and understand language. From the late 1980s, AI technology suffered setbacks in technical progress for an extended period of time. Recently, however, it has made a new leap forward thanks to machine learning based on advanced computing power and aggregated digital data.
The progress of AI technology includes delivery services using drones, the commercialization of automated vehicles, and AI programs that write articles or diagnose illnesses. As a result, there is a growing expectation of greater convenience in our lives. On the other hand, some people point out the potential risks that humans may lose their jobs to machines and that 'killer robots' may put humanity in danger. Some are even concerned about possible inequality as social biases are infused into AI algorithms. Last year, Korean society was inundated more than ever with a deluge of information on AI due to the so-called AlphaGo shock. Interestingly, unlike other future technologies of the 4th Industrial Revolution, bright prospects and worries about a dystopian future were raised simultaneously from each of the two extremes on AI technology. After winning the match against high-profile Go player Lee Sedol with a 4-1 score, AlphaGo won three matches in a row against the then world No. 1 ranked player Ke Jie in May 2017 and retired from competitive Go matches. In October 2017, AlphaGo Zero, even more powerful than the original version, came out. Witnessing this alarming progress of technology, it is time we put the earlier shock behind us and start seriously looking for a strategy that maximizes the benefits of the technology while minimizing the risks of potential harm to people.
AI technology is being discussed at the international level, as it can either be leveraged for the greater good of mankind or threaten it. First, the UN initiated a discussion on how AI technology could play a catalytic role for its Sustainable Development Goals, adopted in September 2015. At the same time, the UN made clear and stressed its position against developing killer robots, which it regards as lethal autonomous weapons. In October 2017, the Centre for Artificial Intelligence and Robotics was launched under the United Nations Interregional Crime and Justice Research Institute (UNICRI); this was significant in that the Centre was the first permanent body established by the UN in preparation for the utilization of AI and robotics. Further, in February 2017, the European Parliament adopted a resolution with recommendations to the Commission on Civil Law Rules on Robotics. Although it is not yet confirmed whether the European Commission, which has been requested to submit a legislative bill to prepare related guidelines, will accept the resolution, and if so what the details will look like, the whole world is watching attentively the EU's attempt to prepare legislation regarding AI and robots. The recommendations include certain principles to support the industrial and commercial use of robotics within the internal market of the EU, provide a draft Code of Ethics which the researchers, developers and engineers concerned are expected to observe, summarize the licensing system required for both developers and users, and recommend setting up a Research Ethics Committee. From the legal perspective, the resolution presented the standpoint that, in addition to the protection of intellectual property rights and information and the establishment of both technology standards and an information security management system, the EU should grant legal status to AI robots as electronic persons and thus confer certain rights and obligations on them. These efforts of the EU have attracted worldwide attention as they made the legal status of robots a public issue.
The U.S. launched the Subcommittee on Machine Learning and Artificial Intelligence under the Committee on Technology of the National Science and Technology Council (NSTC) in May 2016 to monitor the progress and outcomes of AI technology. Based on this, the NSTC published two reports in October 2016, 'Preparing for the Future of Artificial Intelligence' and 'The National Artificial Intelligence Research and Development Strategic Plan', and in December another report entitled 'Artificial Intelligence, Automation, and the Economy', to provide a direction for AI R&D and policy-making. Although the U.S. government is seeking regulatory and monitoring measures for automated vehicles, drones, and cancer diagnosis and analysis systems, all of which are on the verge of commercialization, it considers that premature intervention in the AI field might damage the safe and responsible development of the technologies, and it therefore applies existing legal principles rather than creating new regulations for the changing circumstances. The government also makes efforts to secure fairness, accountability and transparency in the use of AI technology.
In Germany, anticipation of weak AI in consideration of the current state of technology is more common, but strong AI is sometimes considered in discussions of future policy. At a meeting of the Ausschuss Digitale Agenda (Committee on the Digital Agenda) held in Berlin on March 22, 2017, experts raised the necessity of regulation to deal with damages caused by AI systems and exchanged opinions about responsibilities in developing AI technology, insurance models, and the protection of information and privacy. Most of the experts agreed that AI should be regulated within the borders of the EU, while some suggested that ethical regulations are more important than legal ones, which tend to impede technical development. As for automated vehicles on the verge of commercialization, the Bundesministerium für Verkehr und digitale Infrastruktur already published the Bericht der Ethik-Kommission, an ethical code, on August 23, 2017. It is significant in that the government, for the first time, made public its ethical viewpoint concerning software development for automated vehicles.
In Japan, after careful consideration of three aspects, namely 'necessity of resolution as a social demand', 'economic impact', and 'contribution of AI technology', the government selected productivity, health/care and nursing, space movement, and information security as the four major fields for the development of AI technology and set up a road map for its industrialization. In relation to AI technology, responsibilities in case of an accident or malfunction of AI machines, legal rules for AI creative works, changes in the labour market and forms of work caused by AI, and violation of privacy are discussed as the major legal issues. On February 28, 2017, the Japanese Society for Artificial Intelligence (JSAI) announced an ethical guideline for the relevant researchers in order to prevent abuse of AI. This guideline is characteristic in that it postulates AI as a member of society and expects AI to comply with the ethical guideline just as its human members do.
In Korea, the new administration, which was launched on May 10, 2017, announced '100 National Tasks' on July 19 and presented a road map for its policies, which include commercializing 5G mobile communications for the first time in the world. Its future innovation strategy to properly respond to the 4th Industrial Revolution aims to establish a nationwide network exclusively for Internet of Things (IoT) services, to foster new industries which generate high-value-added products by developing core technologies such as AI and thereby create jobs, and ultimately to secure an engine for the nation's future growth. The new government has adopted a negative regulation principle, which allows all activities as long as they do not violate the rules or policies of the government and the courts, and is looking for an effective way to improve the existing legal system, including the Framework Act on National Informatization, to properly respond to the changes expected under the 4th Industrial Revolution. For example, the administration has announced that the Ministry of Science and ICT (formerly the Ministry of Science, ICT and Future Planning) will revise the current Framework Act into the Framework Act on Intelligence Information Society (provisional) to promote a nationwide intelligence information society, and will accordingly provide a direction to follow, set up a basic plan, and add provisions to the new Act to secure the basis of intelligence information technology, such as the protection of property rights in data and the distribution of their value. Furthermore, as for the issues growing in legal systems across the world with regard to the development of AI technologies, such as the safety of AI, the subject of legal liability in case of accidents, ethical rules in developing the technology, and data's value as intellectual property, the administration is planning to consider the various interests of industries and set up a strategy to improve the current legal system.
The most significant feature of recent AI systems is that they find the right pattern to solve a problem through machine learning. From this fact arises the concern that an AI system's patterns of behaviour may go beyond human understanding. The development of AI technology in this manner requires of the current legal system a new strategy, different from the previous one, to properly deal with the external outcomes of AI systems, the central question of which is whether we should grant an independent legal status to the AI system. This question is related to the question of legal liability, that is, whom or what the law should hold accountable for damages, for acts infringing any legal rights, and for the consequences of such acts caused by an AI system.
Given the current legal theories and the level of technology, AI robots' capacity to act and bear responsibility is, of course, not acknowledged, and therefore AI robots cannot be recognized as legal persons under the criminal law. Nevertheless, the reason for trying to set out in advance the conditions under which AI robots could bear criminal responsibility of their own is that society wants a set of standards to determine whether AI robots could satisfy the elements of a crime. In brief, at a stage where the current law cannot hold the AI system itself accountable for criminal responsibility, it is necessary to analyze the factors that hamper ascribing criminal responsibility to the natural person behind the AI system in question, and to provide the prerequisites for making the AI system accept accountability under the criminal law. Furthermore, criminal legal theory should be mindful of the possibility that a new agent, which is not a natural person, may be created under the criminal law system in order to meet the demands for punishment in the new era of AI.
Meanwhile, while expectations and concerns about the rapid progress of AI technology exist side by side, a new attempt to apply AI technology to the legal service area to improve its effectiveness is drawing public attention. The characteristics of traditional legal services are the complexity of the matters and the importance of face-to-face meetings, which have made the introduction of technology to the business relatively slower than in other service areas. However, thanks to the advent of big data and the development of AI technology, applying such technologies to legal services has recently been promoted. 'Legaltech', a term coined around 2010, is one example. Legaltech, a compound of 'legal' and 'technology', refers to legal services based on information and communications technology. The most frequently used services are legal research, searches for specific lawyers, electronic evidence production, legal consultation and strategy development, and online integrated services.
For the advancement of the criminal justice system, AI can be used in various ways. First, thanks to its high accuracy in recognizing people's faces and voices through deep learning, AI is expected to be of great use for criminal investigation. AI can also be used to analyze unstructured data such as files, e-mails, mobile data, telephone records, bank account details, and accounting data, and thus improve the effectiveness of investigations by law enforcement. In fact, in the U.K., AI solutions are already in use for large-scale bribery or financial fraud cases. Also, by connecting AI technology to geographic information system mapping, police forces can be mobilized more effectively, as the technology can forecast the sites where crimes might be committed. AI can also be used as a supplementary tool at court hearings for drunk-driving or unlicensed-driving cases, where electronic summary proceedings are already in use because the matters of such cases are relatively simple. Further, AI can assist criminal judges at the sentencing stage, and AI-based programs for preventing repeat offences can be used in decisions on granting parole.
As described above, AI technology could be implemented in various forms in the criminal justice system, and it is therefore necessary to provide a legal definition of AI and a foundation for granting it legal personhood. The definition can be provided through individual pieces of legislation wherever AI technology is used, as in the legislation on automated vehicles or drones. However, if the legislature enacts a civil law concerning the development and use of AI, it should identify the core concepts of AI and prepare their legal definitions as well. Meanwhile, granting a legal status or rights to AI is a different issue from acknowledging AI's liability under the criminal law. It can be understood in a similar context to the legal status of corporations: a corporation is recognized as a legal person under the law even though it has no capacity to commit a crime or undergo punishment; likewise, a discussion has developed on acknowledging the intellectual property rights of AI and imposing tax liability in exchange. In using AI in the criminal justice area, too, we need to consider whether it would be possible to delegate some authority of law enforcement to the AI system. AI used in determining the guilt of the accused or at the sentencing stage is no more than a supplementary tool, so granting it legal status is not at issue. However, for AI devices used in crime prevention or at the investigation or correction stages, a clear legal basis needs to be prepared to grant them certain powers (or functions).
An active use of data is a prerequisite for the development of AI technology. However, as the current legal system puts 'protection of information' ahead of 'use of information', there is a concern that Korea might fall behind in the global competition that drives the 4th Industrial Revolution. In particular, with regard to AI and big data technology, citizens are highly concerned about breaches of privacy and personal information, which has led to the government's strict approach to the matter. However, the legal risk of personal information leakage has always existed, while the current approach narrows the room to generate new value from data. It is thus imperative to manage the legal risks and to set up an independent agency which provides technical support and certification for the protection of personal information when integrating data from various organizations. Also, to properly respond to the era of the 4th Industrial Revolution, we should consider whether a negative form of regulation is also necessary in areas related to personal information. The current personal information system is criticized because it requires the prior consent of the data owners if data users want to use certain personal information for any purpose other than the one stated at the time of collection, or if the data users attempt to provide such information to a third person; this results in a relatively slow process in handling massive amounts of data. In sum, for big data analysis using AI, or for uses of personal information created without the persons concerned being aware of it, such as profiling, it would be desirable to change the current requirement of prior individual consent to an 'ex post facto' consent system.

Meanwhile, as a series of new issues arise with the rapid development of AI and information and communications technology, some point out that, due to the comprehensive prior consent system under the Personal Information Protection Act, personal information can be collected and used for certain purposes without the data owners' additional approval. To properly respond to this, legal measures should be established to reinforce data owners' rights, for instance by granting them a right to refuse automated processing of their personal information as well as a right to request the related information. A solution for controlling overseas transfers of data should also be prepared to protect personal information, as digital commercial transactions and cloud computing are rapidly expanding beyond national borders. In other words, although the exchange of personal information among countries allows more effective use of the relevant information, it increases the risk of personal information leakage, and therefore the responsibility of data processors should be more clearly defined and reinforced. Moreover, in order for the agency in charge of the protection of personal information to function with full force and effect, it should be equipped with the authority necessary to enforce the relevant laws and promote policy. Consequently, a readjustment of the powers and roles between the Personal Information Protection Commission and the Ministry of the Interior and Safety is also required.
File
  • pdf Attachment: 형사정책-윤지영_수정5.pdf (15.14 MB / Downloads: 101)