A Comparative Analysis of the Optimistic and Pessimistic Ethical Approaches to Artificial Intelligence Development, with a Focus on the Responsibility Gap Problem

Article Type: Research Article

Author

Institute for Science and Technology Studies, Shahid Beheshti University, Tehran, Iran.

Abstract

This article comparatively examines the optimistic and pessimistic approaches to the development of artificial intelligence, focusing on the problem of the "responsibility gap." First, the foundations of each view are set out: the optimistic approach emphasizes AI's substantial benefits for society, the absence of alarm among specialists, and unresolved philosophical uncertainties; the pessimistic approach, by contrast, appeals to algorithmic bias, the erosion of human decision-making capacity, and unpredictable socio-economic consequences to reject unrestricted AI development as morally impermissible. The responsibility gap is then examined as a problem that undermines the full attribution of a technology's negative outcomes to any specific individual or institution, with examples of how it has been confronted in healthcare and in autonomous weapons. Next, three central criteria (the basis of moral good, the scope of AI's metaphysical and practical capabilities, and human vulnerability amid the spread of AI) are proposed as a framework for assessing the reasonableness of the two approaches; the analysis shows, however, that none of the arguments advanced under either approach answers these criteria conclusively. An independent argument about the relative reasonableness of the two approaches, centered on the responsibility-gap problem, is therefore presented, according to which the optimistic approach is judged more reasonable than the pessimistic one. Finally, it is stressed that this argument, and the greater reasonableness of the optimistic approach, do not license carelessness in AI development or a disregard of the responsibility-gap problem.

Keywords


Article Title [English]

Comparative Analysis of Two Ethical Approaches—Optimistic and Pessimistic—toward AI Development, with a Focus on the Problem of the Responsibility Gap

Author [English]

  • Massoud Toossi Saeidi
Institute for Science and Technology Studies, Shahid Beheshti University, Tehran, Iran.
Abstract [English]

Introduction: This paper examines two contrasting ethical approaches to the development of artificial intelligence (AI): the optimistic and the pessimistic. Both approaches aim to analyze the ethical and human-centered dimensions of AI, yet they differ fundamentally in their assumptions and conclusions. The optimistic approach emphasizes AI’s potential to enhance human life and argues that ethical concerns are often based on speculative or non-specialist assumptions. In contrast, the pessimistic approach deems unrestricted AI development ethically unjustifiable due to unpredictable consequences, algorithmic bias, and the erosion of human decision-making capacity. The focal point of this paper is the “responsibility gap”—a dilemma that complicates the attribution of negative outcomes of AI systems to any specific individual or institution, raising profound questions about moral and legal accountability. The central question addressed is: which of the two approaches offers a more reasonable response to the responsibility gap?
Findings: The optimistic approach is grounded in three core arguments:
  • The benefits of AI development outweigh its harms, and depriving societies of these benefits is ethically unjustifiable.
  • Pessimistic concerns often stem from non-expert perceptions, whereas specialists tend to offer more balanced and optimistic assessments.
  • Philosophical assumptions underlying pessimistic views, such as the claim that robots lack human-like qualities, remain unresolved and cannot serve as a decisive basis for restricting AI development.
Conversely, the pessimistic approach draws on empirical evidence of AI’s problematic effects:
  • AI systems may possess unethical tendencies such as deception and malicious intent.
  • AI development leads to undesirable consequences such as institutionalized inequality and diminished human autonomy, which cannot be ethically offset by potential benefits.
  • Ethical considerations should extend beyond normative human life to include potential harm to nature and ecosystems, which threatens the very foundation of human existence.
Regarding the responsibility gap, pessimistic thinkers such as Matthias and Sparrow argue that autonomous systems make it impossible to assign moral responsibility, especially in sensitive domains like warfare. Optimists like Danaher, however, view the gap as an opportunity to reduce the psychological burden of tragic human decisions, presenting it as a potential ethical advantage.
Discussion: The paper offers an independent analysis that distinguishes moral accountability from moral worth, arguing that the responsibility gap in AI is no more complex than that found among humans. Epistemic uncertainty and lack of full control are inherent to all moral agents, and the emergence of intelligent entities is not fundamentally different from the birth of new human beings.
Thus, ethical pessimism that rejects AI development due to the responsibility gap suffers from internal contradiction—if blameworthiness is a condition for moral legitimacy, then human reproduction itself would be ethically suspect. Accordingly, a combined neutral-optimistic approach to the responsibility gap is logically superior to absolute pessimism. This conclusion demonstrates the overall implausibility of the pessimistic approach and supports the preference for optimistic and neutral perspectives.

Keywords [English]

  • The Responsibility Gap Challenge
  • AI Ethics
  • Optimistic Approach
  • Pessimistic Approach
  • Rationality
Ahmad, S. F., Han, H., Alam, M. M., Rehmat, M., Irshad, M., Arraño-Muñoz, M., & Ariza-Montes, A. (2023). Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanities and Social Sciences Communications, 10(1), 1–14. https://doi.org/10.1057/s41599-023-01787-8
Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, 10(1), 1–12. https://doi.org/10.1057/s41599-023-02079-x
Danaher, J. (2019). The Philosophical Case for Robot Friendship. Journal of Posthuman Studies, 3(1), 5–24. https://doi.org/10.5325/jpoststud.3.1.0005
Danaher, J. (2022). Tragic choices and the virtue of techno-responsibility gaps. Philosophy & Technology, 35(2), 26. https://doi.org/10.1007/s13347-022-00519-1
Ferlito, B., Segers, S., De Proost, M., & Mertes, H. (2024). Responsibility Gap(s) Due to the Introduction of AI in Healthcare: An Ubuntu-Inspired Approach. Science and Engineering Ethics, 30(4), 34. https://doi.org/10.1007/s11948-024-00501-4
Ferrara, E. (2024). Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. Sci, 6(1), 3. https://doi.org/10.3390/sci6010003
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183. https://doi.org/10.1007/s10676-004-3422-1
Narayanan, A., & Kapoor, S. (2025). AI as normal technology: An alternative to the vision of AI as a potential superintelligence. Knight First Amendment Institute, Columbia University. https://kfai-documents.s3.amazonaws.com/Documents/C3cac5a2a7/AI-as-Normal-Technology%E2%80%94Narayanan%E2%80%94Kapoor.Pdf
Prentice, R. (2025). Techno-Optimist or AI Doomer? Consequentialism and the Ethics of AI. Ethics Unwrapped. https://ethicsunwrapped.utexas.edu/techno-optimist-or-ai-doomer-consequentialism-and-the-ethics-of-ai
Schwaller, F. (2025). Will AI improve your life? Here’s what 4,000 researchers think. Nature, 640(8059), 577–578. https://doi.org/10.1038/d41586-025-01123-x
Sparrow, R. (2007). Killer Robots. Journal of Applied Philosophy, 24(1), 62–77. https://doi.org/10.1111/j.1468-5930.2007.00346.x
Strain, M. R. (2024, Summer). The Case for AI Optimism. National Affairs, 60. https://nationalaffairs.com/publications/detail/the-case-for-ai-optimism
Thaiduong, N. (2025). IT Professionals Versus the Public: Who’s More Optimistic About AI’s Future Impacts? SAGE Open, 15(2). https://doi.org/10.1177/21582440251348802
Vallor, S., & Vierkant, T. (2024). Find the Gap: AI, Responsible Agency and Vulnerability. Minds and Machines, 34(3), 20. https://doi.org/10.1007/s11023-024-09674-0
Wada, K., & Shibata, T. (2007). Living with seal robots—Its sociopsychological and physiological influences on the elderly at a care house. IEEE Transactions on Robotics, 23(5), 972–980. https://doi.org/10.1109/TRO.2007.906261
Wang, H., & Blok, V. (2025). Why putting artificial intelligence ethics into practice is not enough: Towards a multi-level framework. Big Data & Society, 12(2). https://doi.org/10.1177/20539517251340620
  • Received: 08 Mehr 1404 (SH)
  • Revised: 30 Aban 1404 (SH)
  • Accepted: 04 Bahman 1404 (SH)
  • First published online: 04 Bahman 1404 (SH)
  • Issue publication date: 01 Mehr 1404 (SH)