1- Department of Artificial Intelligence and Cybersecurity, Faculty of Technical Sciences, University of Klagenfurt, Austria, evaschur10@gmail.com
2- Department of Artificial Intelligence and Cybersecurity, Faculty of Technical Sciences, University of Klagenfurt, Austria
Abstract:
Introduction: The concept of the “responsibility gap” in artificial intelligence (AI) was first raised in philosophical discussions to capture the concern that learning-based and partially autonomous technologies may make it difficult, or even impossible, to attribute moral blame to individuals for adverse events. Because the environment and the users, in addition to the designers, take part in shaping how these systems develop, their output can appear to lie beyond the control of any human individual, so that no one can be held responsible for it; this situation is known as the “responsibility gap”. This article explains the problem of the responsibility gap in AI technologies and presents strategies for the responsible development of AI that prevent such a gap from arising as far as possible.
Material and Methods: The present article examined the responsibility gap in AI. To this end, related articles and books were reviewed.
Conclusion: Responses to the problem of the responsibility gap vary. Some argue that society can hold the technology itself responsible for its outcomes; others disagree, holding that only the human actors involved in developing these technologies can be held responsible, and that they should be expected to use their freedom and awareness to steer technological development in ways that prevent undesirable and unethical events. In summary, the three principles of routing, tracking, and engaging public opinion and attending to public emotions in policymaking can serve as effective strategies for the responsible development of AI technologies.
Type of Study: Original Article | Subject: Special
Received: 2024/10/9 | Accepted: 2024/11/25 | Published: 2025/01/24