You’ve likely encountered the term “AI ethics” at least once in recent years. It’s a concept increasingly discussed as AI becomes a ubiquitous part of our lives, integrated into home electronics, smartphones, and shopping assistants. But what exactly is AI ethics? 

Broadly speaking, AI ethics refers to efforts to think critically about AI technology and how we, both as individuals and as societies, should respond to its potential consequences. The term has been gaining traction in international discussions. UNESCO, which has recently become the most active UN organization in developing a normative framework for the use of AI, employs the term “AI ethics.” Similarly, the Institute of Electrical and Electronics Engineers (IEEE), the world’s largest technical professional organization, emphasizes “Ethically Aligned Design” from the design phase of autonomous intelligent systems onward, and has even established an international standard for it.

Despite numerous rigorous and creative efforts in the field of AI ethics, it’s fair to say that this discourse remains a minority narrative in the grand scheme of things. AI ethics has not yet exerted sufficient influence on the AI industry’s approach to research and development. While news outlets often appear critical of AI development, their focus tends to center on what AI can or cannot do, reflecting our more existential fears rather than uncovering deeper ethical concerns. Meanwhile, the industry’s main players seem preoccupied, perhaps to an excessive degree, with the concept of AGI (Artificial General Intelligence). Consequently, the term “AI safety” has come to mean almost exclusively safety measures against superintelligence, overlooking more immediate socio-political concerns such as job displacement, environmental impact, and algorithmic biases, particularly those affecting minority groups.


Challenges of AI ethics

Because AI is an all-purpose tool, the ethical grey area surrounding it is extensive and far-reaching, not limited to the still-elusive threat of superintelligence. However, at major AI conferences, industry leaders often relegate the topic of AI ethics to the last few slides, treating it as an afterthought following lengthy discussions about AI’s capabilities. Why is it so challenging to bring this topic to the forefront?

One common argument against AI ethics stems from the misconception that criticizing technology is inherently anti-technology and, by extension, anti-progress. This misconception was addressed in our previous article on the Neo-Luddite movement — the key takeaway being that one can be critical of the development and distribution of technological benefits while still supporting technological advancement.

The biggest challenge surrounding AI ethics might center on the word ethics itself. John Tasioulas notes that ethics is “sometimes construed as narrowly individualistic in focus: that is, as being concerned with guiding individuals’ personal conduct.” (This individualistic focus varies across cultures — for instance, the Korean word for ethics, ‘Yoon-ri (윤리)’, carries a stronger individualistic connotation — but it appears to be a universal theme.)

If you subscribe to this individualistic view of ethics, applying it to AI becomes problematic. AI is often perceived as a value-neutral scientific and technological concept. Consequently, unless we’re discussing generally unethical acts like fraud or murder, the concept of AI ethics can seem awkward or misplaced.

However, this perspective overlooks the broader institutional and social contexts in which AI-related decisions are made and implemented. While making an “ethical” decision based on personal values may be straightforward, the process becomes exponentially more complex when multiple individuals, communities, or entire societies are involved.

Consider questions like “Should we prioritize improving AI system explainability at the cost of performance?” or “Should we correct AI biases even when they seem to merely reflect the societal status quo?” Reaching conclusions on such issues becomes an extremely challenging and time-consuming process when multiple stakeholders are involved. One can envision a vast battlefield of competing values, where constant negotiation, compromise, and trade-offs occur. In this rapidly evolving landscape, there is no one-size-fits-all solution; instead, we must cultivate dynamic, agentic individuals who are aware of and capable of responding to a constantly shifting environment, particularly in the face of rapid advances in AI technology.

This is where the AI ethics narrative proves valuable. It offers an alternative perspective, allowing us to step back from the overwhelmingly positive narrative of cutting-edge technology and reflect on its broader implications. It enables a more nuanced understanding of AI’s role in our society and strengthens our ability to collaborate on a framework for understanding its long-tail effects.


Embedded EthiCS

In this context, the FAIR AI conference our team attended a couple of weeks ago was a significant event; the Embedded EthiCS initiative in particular stood out.

In 2015, Harvard Computer Science professor Barbara Grosz had a pivotal experience that led to the creation of Embedded EthiCS. While teaching a course on intelligent systems and ethical challenges to a diverse group of 24 students interested in AI and ethics, Grosz assigned a task involving ethical considerations in a real-world scenario: students were asked to list profile features for a social media platform’s ad-targeting algorithm. Surprisingly, none of the students considered the ethical implications of their choices, despite their interest in technology ethics. This experience became the catalyst for Grosz and her colleagues to develop the Embedded EthiCS program, highlighting a significant gap between theoretical interest in ethics and the practical application of ethical thinking in technical problem-solving.


Introduction to Embedded Ethics by NC Soft

Embedded EthiCS attempts to seamlessly integrate ethics into computer science (CS) degree curricula at universities, rather than teaching it in separate courses. Designed by interdisciplinary teams of computer scientists, philosophers, anthropologists, ethicists, and others, the program engages students in repeated learning experiences that highlight the wide-ranging technical and social implications of designing and building technology. Through debates with peers and experts from various fields, students develop an ethical awareness, coming to understand that integrating ethics into technology involves navigating a complex array of value trade-offs.

The Embedded EthiCS program reiterates what is so valuable about AI ethics. Its most important goal is to expose students to as many different ethical perspectives as possible. The value of critical discourse such as AI ethics perhaps lies less in making people “do the right thing” than in increasing information diversity, so that those who encounter it can make up their own minds in ways aligned with their own value systems. This might mean, for example, using AI in an environmentally friendly manner, reducing prejudices towards minority groups, or rewarding the creativity of artists more effectively.


Ethics as a path to flourishing

The goal of ethics is often associated with the idea of flourishing. As the aforementioned John Tasioulas eloquently wrote, “Ethics is concerned with what it is to live a flourishing life.”

This is particularly significant in the context of what we at DAL set out to achieve: sketching alternative, diverse forms of human flourishing that aren’t necessarily rooted in a materialistic perspective. With the AI industry advancing at an unprecedented pace and information asymmetry growing rapidly, a flourishing life might well mean having a balanced perspective on technology, with the freedom to proactively design your life using technologies as tools, rather than passively allowing technologies like AI to dictate it. For us, the term “AI ethics” aptly encapsulates this form of human flourishing.

To this end, DAL will continue on this path, writing, organizing workshops, and hosting conferences to further this mission. We are committed to sustaining awareness of the important dialogues surrounding AI ethics and human flourishing, ensuring that diverse perspectives are represented and understood. Additionally, through our MESH studio initiative, which includes AI ethics as a key topic of interest, we aim to foster interdisciplinary collaboration and innovative approaches to these crucial issues in a rapidly evolving technological landscape.




Joseph Park is the Content Lead at DAL (joseph@dalab.xyz)

Illustration: Soryung Seo
Edits: Janine Liberty