What Artificial Intelligence Ethics Is Not — and What It Is

Document Type: Original Article

Author

Department of Philosophy, Payame Noor University, Tehran, Iran.

Abstract

Introduction: It is not difficult to verify that the general public, humanities scholars, politicians, policymakers, and others have all turned their attention to the phenomenon called artificial intelligence (AI). A kind of consensus has consequently formed regarding its decisive importance. However, most people's intuitive understanding of AI stems largely from interactions with chatbots, and this has given rise to several significant misunderstandings, examples of which are discussed later in the article. Beyond the purely technical aspects of the matter, there is an important dimension of AI commonly referred to as AI ethics. In contemporary Iranian humanities, a literature on AI ethics is gradually taking shape. Nonetheless, because of the complexity of AI ethics and its interdisciplinary nature, signs of misunderstanding appear even at this early stage. Accordingly, this article first seeks to show what AI ethics is not, then endeavors to clarify what it is and what its nature consists in, and finally examines the main issues pertinent to it.
Findings: Understanding what AI actually is proves very challenging. Contrary to first impressions, one cannot easily gain an intuitive, clear grasp of its essence. The greatest error is to imagine AI as something similar to human intelligence, or to conclude, after some interaction with popular chatbots, that AI amounts to this. Equally mistaken is the notion that AI is merely a complex and fast computational machine akin to a computer. Understanding AI's nature requires technical knowledge that most humanities specialists typically lack. This becomes evident when we examine definitions of AI: asking how AI operates in machines does not produce an intuitive mental picture, because its mechanisms are entirely technical and technological. Hence McCarthy defines AI as both a science and an engineering discipline. All components of the phrase "AI ethics" are thus complex: "ethics" is employed in a particular, nontrivial sense, and AI itself is a highly complex technology, far removed from intuitive comprehension. Humanities experts therefore face a serious initial obstacle. Methodologically, this necessitates interdisciplinary approaches and collaboration with technical specialists and engineers in the field. It is possible, and indeed valid, to discuss local issues concerning AI; attempting to root AI ethics within one's own intellectual tradition, however, is a mistaken and flawed approach. Regardless of the correctness or incorrectness of this widespread tendency, it can be stated with certainty that no equivalent of the concept of AI ethics can be found in our own tradition, and any such attempt leads only to confusion and error. Interestingly, some even try to localize AI itself, or to impose their own frameworks and assumptions onto it. A prevalent misunderstanding, contrary to both the intuitive and the technical understanding, is that AI is conceived as an agent similar to a human but possessing a machine brain.
Consequently, just as a human agent has an ethical system, AI machines are expected to have one as well. This is among the most fundamental misconceptions surrounding AI ethics. AI ethics primarily aims to reduce the risks posed by AI, and its literature is saturated with warnings and concerns about AI-related dangers, which mainly concern human life, happiness, and well-being; sometimes, however, these risks are exaggerated. Discussions in AI ethics can be divided, initially and fundamentally, into two categories. The first comprises purely theoretical issues, which are mostly philosophical and do not directly apply to industry or technology. The second concerns practical issues that arise in the application and use of AI in industry and technology; compared to the first category, these carry less philosophical emphasis and embody the ethical challenges encountered in producing AI technologies or building AI-based machines, thereby falling under practical and applied ethical questions related to AI.
Discussion: AI ethics is an approach that seeks to construct practical guidelines in the AI domain to prevent outcomes that our ethical intuitions generally deem improper. For various reasons, however, including AI's reliance on machines and machine learning, these guidelines entail considerable technical complexity. There are therefore significant differences between the conventional, even philosophical, understanding of ethics on one side and the technically grounded comprehension of AI ethics on the other. Consequently, humanities specialists' engagement with AI ethics must be accompanied by caution, extensive knowledge of AI itself, and, where necessary, collaboration with AI experts.

Keywords


Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J. F., & Rahwan, I. (2018). The Moral Machine experiment. Nature, 563(7729), 59-64. https://doi.org/10.1038/s41586-018-0637-6
Beauchamp, T. L., & Childress, J. F. (2019). Principles of biomedical ethics (8th ed.). Oxford University Press.
Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency, 149-159. https://doi.org/10.1145/3287560.3287583
Cave, S., & Dihal, K. (2020). The whiteness of AI. Philosophy & Technology, 33(4), 685-703. https://doi.org/10.1007/s13347-020-00415-6
Crootof, R. (2015). The killer robots are here: Legal and policy implications. Cardozo Law Review, 36, 1837-1915.
Floridi, L., & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160360. https://doi.org/10.1098/rsta.2016.0360
Gunkel, D. J. (2018). Robot Rights. MIT Press.
Hurley, M., & Adebayo, J. (2017). Credit scoring in the era of big data. Yale Journal of Law and Technology, 18(1), 148-216. https://digitalcommons.law.yale.edu/yjolt/vol18/iss1/5/
Liao, S. M. (2020). A short introduction to the ethics of artificial intelligence. In S. M. Liao (Ed.), Ethics of artificial intelligence. Oxford University Press.
Miri Balajourshari, S. M., & Mahmoudi, A. R. (2024). Examining ethical issues in the context of artificial intelligence with a view to Islamic ethics. Applied Ethics Research Quarterly, 14(6), 97-123. [in Persian]
Molnar, P. (2019). Technological testing grounds: Migration management experiments and reflections from the ground up. European Journal of Migration and Law, 21(3), 329-352. https://doi.org/10.1163/15718166-12340054
Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18-21. https://doi.org/10.1109/MIS.2006.76
Rajkomar, A., Dean, J., & Kohane, I. (2019). Machine learning in medicine. New England Journal of Medicine, 380(14), 1347-1358. https://doi.org/10.1056/NEJMra1814259
Ramazani, M., & Feyzi Derakhshi, M. R. (2013). Machine ethics: Challenges and approaches to ethical issues in artificial intelligence and superintelligence. Ethics in Science and Technology Quarterly, 8(4), 1-9. [in Persian]
van den Hoven, J. (2010). The handbook of information and computer ethics. Wiley.
Zargar, Z. (2025). The relationship between emotions and moral capacity in artificial intelligence technologies. Philosophical Research, (50), 19-40. [in Persian]
Volume 1, Issue 1 - Serial Number 1
April 2025
Pages 261-274
  • Receive Date: 18 June 2025
  • Revise Date: 24 July 2025
  • Accept Date: 05 October 2025
  • First Publish Date: 22 October 2025
  • Publish Date: 22 November 2025