What Artificial Intelligence Ethics Is Not, and What It Is

Article Type: Research Article

Author

Department of Philosophy, Payam Noor University, Tehran, Iran.

Abstract

In today's world, a new interdisciplinary field called "the ethics of artificial intelligence" has taken shape, and signs of its entry into Iran's intellectual space are gradually becoming apparent. Nevertheless, our history of encountering new topics, together with the complexities of this new domain, makes the emergence of certain misunderstandings about the nature of AI ethics predictable. The few scattered works published in this area so far bear some witness to this claim. This article attempts, mainly in a negative manner, to clarify the nature, scope, problems, and importance of AI ethics. We first show that numerous misunderstandings exist regarding the nature of this field: for example, misconstruing the meaning of "ethics" in the phrase "AI ethics," attributing intuitive and unscientific concepts to AI and, consequently, to AI ethics, neglecting its technical and engineering aspects, and attempting to seek it within one's own humanities traditions.
Despite the emphasis on the negative approach, a positive description of the field has not been entirely neglected; to this end, by offering a definition of AI ethics and a classification of its main problems and questions, we have tried to make the nature of this domain more transparent. The main aim of the article is to present a realistic picture of AI ethics and to help correct some common misunderstandings in this area. Given the novelty of this subject in Iran's intellectual space, and the ambiguities and misconceptions that have accompanied its introduction, the present article seeks to elucidate and explain the nature of AI ethics rather than to solve a specific problem.



Article Title [English]

What Artificial Intelligence Ethics Is Not, and What It Is

Author [English]

  • Jalal Peykani
Department of Philosophy, Payam Noor University, Tehran, Iran.
Abstract [English]

Introduction: It is not particularly difficult to verify the claim that the general public, humanities experts, politicians, policymakers, and others have all turned their attention to the phenomenon called artificial intelligence. Consequently, a sort of consensus has formed regarding the decisive importance of artificial intelligence. However, people's intuitive understanding of AI stems largely from interactions with chatbots, which has given rise to several significant misunderstandings; examples of these misunderstandings are elaborated later in the article. Beyond the purely technical aspects of the matter, there is an important dimension of AI commonly referred to as AI ethics. In contemporary Iranian humanities, the literature on AI ethics is gradually taking shape. Nonetheless, owing to the complexities of AI ethics and its interdisciplinary nature, signs of misunderstanding appear even at this early stage. Accordingly, this article first seeks to demonstrate what AI ethics is not, then endeavors to clarify what it is and its nature. Finally, the main issues pertinent to it are examined.
Findings: Understanding what AI actually is proves very challenging. Contrary to the impression that may arise at first glance, one cannot easily gain an intuitive and clear grasp of AI's essence. The greatest error is to imagine AI as something similar to human intelligence, or to assume, after some interaction with popular chatbots, that AI amounts to this. Equally mistaken is the notion that AI is merely a complex and fast computational machine akin to a computer. Understanding AI's nature requires technical knowledge that most humanities specialists typically lack. This becomes evident when we examine definitions of AI: asking how AI operates in machines does not produce an intuitive mental picture, because its mechanisms are entirely technical and technological. Hence, McCarthy defines AI as both a science and an engineering discipline. Thus, all components of the phrase "AI ethics" are complex: "ethics" is employed in a particular, nontrivial sense, and AI itself is a highly complex technology, far removed from intuitive comprehension. Humanities experts therefore face a serious initial obstacle. From a methodological standpoint, this necessitates interdisciplinary approaches and collaboration with technical specialists and engineers in the field. It is possible, and indeed valid, to discuss local issues regarding AI; however, attempting to root AI ethics within one's own intellectual tradition is a mistaken and flawed approach. Regardless of the correctness or incorrectness of this widespread tendency, it can be stated with certainty that no equivalent concept to AI ethics can be found in our own tradition; any such attempt leads only to confusion and error. Interestingly, some even try to localize AI itself, or to impose their own frameworks and assumptions onto it. A prevalent misunderstanding, contrary to the technical understanding, is to conceive of AI as an agent similar to a human but possessing a machine brain.
Consequently, just as a human agent has an ethical system, AI machines are expected to have an ethical system as well. This is among the most fundamental misconceptions surrounding AI ethics. AI ethics primarily aims to reduce the risks posed by AI, and the literature on AI ethics is saturated with warnings and concerns about AI-related dangers. These dangers mainly relate to human life, happiness, and well-being; sometimes, however, they are exaggerated. Discussions of AI ethics can be divided initially and fundamentally into two categories. The first involves purely theoretical issues, which are mostly philosophical and do not apply directly to industry or technology. The second concerns practical issues that arise in the application and use of AI in industry and technology. Compared with the first category, these carry less philosophical emphasis and embody the ethical challenges encountered in producing AI technologies or building AI-based machines, thereby falling under practical and applied ethical questions related to AI.
Discussion: AI ethics is an approach seeking to construct practical guidelines within the AI domain to prevent outcomes that our ethical intuitions generally deem improper. However, for various reasons—including AI’s reliance on machines and machine learning—these guidelines entail considerable technical complexities. Therefore, there exist significant differences between the conventional, even philosophical, understanding of ethics on one side, and the technically grounded comprehension of AI ethics on the other. Consequently, humanities specialists’ engagement with AI ethics must be accompanied by caution, extensive knowledge of AI itself, and, if necessary, collaboration with AI experts.

کلیدواژه‌ها [English]

  • AI ethics
  • research methodology
  • philosophy of technology
  • AI
  • misunderstanding
Volume 1, Issue 1 (Serial No. 1)
Farvardin 1404
Pages 261–274
  • Received: 28 Khordad 1404
  • Revised: 02 Mordad 1404
  • Accepted: 13 Mehr 1404
  • First published online: 30 Mehr 1404
  • Published: 01 Azar 1404