Comparative Analysis of Two Ethical Approaches—Optimistic and Pessimistic—toward AI Development, with a Focus on the Problem of the Responsibility Gap

Document Type: Original Article

Author

Institute for Science and Technology Studies, Shahid Beheshti University, Tehran, Iran.

Abstract

Introduction: This paper examines two contrasting ethical approaches to the development of artificial intelligence (AI): the optimistic and the pessimistic. Both approaches aim to analyze the ethical and human-centered dimensions of AI, yet they differ fundamentally in their assumptions and conclusions. The optimistic approach emphasizes AI’s potential to enhance human life and argues that ethical concerns are often based on speculative or non-specialist assumptions. In contrast, the pessimistic approach deems unrestricted AI development ethically unjustifiable due to unpredictable consequences, algorithmic bias, and the erosion of human decision-making capacity. The focal point of this paper is the “responsibility gap”—a dilemma that complicates the attribution of negative outcomes of AI systems to any specific individual or institution, raising profound questions about moral and legal accountability. The central question addressed is: which of the two approaches offers a more reasonable response to the responsibility gap?
Findings: The optimistic approach is grounded in three core arguments:
  • The benefits of AI development outweigh its harms, and depriving societies of these benefits is ethically unjustifiable.
  • Pessimistic concerns often stem from non-expert perceptions, whereas specialists tend to offer more balanced and optimistic assessments.
  • Philosophical assumptions underlying pessimistic views—such as the claim that robots lack human-like qualities—remain unresolved and cannot serve as a decisive basis for restricting AI development.
Conversely, the pessimistic approach draws on empirical evidence of AI’s problematic effects:
  • AI systems may possess unethical tendencies such as deception and malicious intent.
  • AI development leads to undesirable consequences like institutionalized inequality and diminished human autonomy, which cannot be ethically offset by potential benefits.
  • Ethical considerations should extend beyond normative human life to include potential harm to nature and ecosystems, threatening the very foundation of human existence.
Regarding the responsibility gap, pessimistic thinkers such as Matthias and Sparrow argue that autonomous systems make it impossible to assign moral responsibility, especially in sensitive domains like warfare. Optimists like Danaher, however, view the gap as an opportunity to reduce the psychological burden of tragic human decisions, presenting it as a potential ethical advantage.
Discussion: The paper offers an independent analysis that distinguishes moral accountability from moral worth, arguing that the responsibility gap in AI is no more complex than that found among humans. Epistemic uncertainty and lack of full control are inherent to all moral agents, and the emergence of intelligent entities is not fundamentally different from the birth of new human beings.
Thus, ethical pessimism that rejects AI development due to the responsibility gap suffers from internal contradiction—if blameworthiness is a condition for moral legitimacy, then human reproduction itself would be ethically suspect. Accordingly, a combined neutral-optimistic approach to the responsibility gap is logically superior to absolute pessimism. This conclusion demonstrates the overall implausibility of the pessimistic approach and supports the preference for optimistic and neutral perspectives.

Keywords

References
Ahmad, S. F., Han, H., Alam, M. M., Rehmat, M., Irshad, M., Arraño-Muñoz, M., & Ariza-Montes, A. (2023). Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanities and Social Sciences Communications, 10(1), 1–14. https://doi.org/10.1057/s41599-023-01787-8
Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, 10(1), 1–12. https://doi.org/10.1057/s41599-023-02079-x
Danaher, J. (2019). The Philosophical Case for Robot Friendship. Journal of Posthuman Studies, 3(1), 5–24. https://doi.org/10.5325/jpoststud.3.1.0005
Danaher, J. (2022). Tragic choices and the virtue of techno-responsibility gaps. Philosophy & Technology, 35(2), 26. https://doi.org/10.1007/s13347-022-00519-1
Ferlito, B., Segers, S., De Proost, M., & Mertes, H. (2024). Responsibility Gap(s) Due to the Introduction of AI in Healthcare: An Ubuntu-Inspired Approach. Science and Engineering Ethics, 30(4), 34. https://doi.org/10.1007/s11948-024-00501-4
Ferrara, E. (2024). Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. Sci, 6(1), 3. https://doi.org/10.3390/sci6010003
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183. https://doi.org/10.1007/s10676-004-3422-1
Narayanan, A., & Kapoor, S. (2025a). AI as normal technology. Knight First Amendment Institute. https://thedocs.worldbank.org/en/doc/d6e33a074ac9269e4511e5d44db2f9ac-0050022025/original/AI-as-Normal-Technology-Narayanan-Kapoor-Final.pdf
Narayanan, A., & Kapoor, S. (2025b). AI as normal technology: An alternative to the vision of AI as a potential superintelligence. Knight First Amendment Institute, Columbia University. https://Kfai-Documents.S3.Amazonaws.Com/Documents/C3cac5a2a7/AI-as-Normal-Technology—Narayanan—Kapoor.Pdf
Prentice, R. (2025). Techno-Optimist or AI Doomer? Consequentialism and the Ethics of AI. Ethics Unwrapped. https://ethicsunwrapped.utexas.edu/techno-optimist-or-ai-doomer-consequentialism-and-the-ethics-of-ai
Schwaller, F. (2025). Will AI improve your life? Here’s what 4,000 researchers think. Nature, 640(8059), 577–578. https://doi.org/10.1038/d41586-025-01123-x
Sparrow, R. (2007). Killer Robots. Journal of Applied Philosophy, 24(1), 62–77. https://doi.org/10.1111/j.1468-5930.2007.00346.x
Strain, M. R. (2024, Summer). The Case for AI Optimism. National Affairs, 60. https://nationalaffairs.com/publications/detail/the-case-for-ai-optimism
Thaiduong, N. (2025). IT Professionals Versus the Public: Who’s More Optimistic About AI’s Future Impacts? SAGE Open, 15(2). https://doi.org/10.1177/21582440251348802
Vallor, S., & Vierkant, T. (2024). Find the Gap: AI, Responsible Agency and Vulnerability. Minds and Machines, 34(3), 20. https://doi.org/10.1007/s11023-024-09674-0
Wada, K., & Shibata, T. (2007). Living with seal robots—Its sociopsychological and physiological influences on the elderly at a care house. IEEE Transactions on Robotics, 23(5), 972–980. https://doi.org/10.1109/TRO.2007.906261
Wang, H., & Blok, V. (2025). Why putting artificial intelligence ethics into practice is not enough: Towards a multi-level framework. Big Data & Society, 12(2). https://doi.org/10.1177/20539517251340620
  • Receive Date: 30 September 2025
  • Revise Date: 21 November 2025
  • Accept Date: 24 January 2026
  • First Publish Date: 24 January 2026
  • Publish Date: 23 September 2025