Ethical AI: An Imperative for SEL?

Patrick Levy-Rosenthal founded Emoshape, a company whose emotion processing chips aim to synthesize emotions in machines, building on the old and oxymoronic-sounding idea of emotional AI. Emotional AI is an umbrella term for affective computing techniques, machine learning, and Artificial Intelligence (AI). Although these technologies read and react to emotions through text, voice, computer vision, and biometric sensing, they have no sentience and undergo no emotional states themselves. Instead, they use mathematics and analytics to predict behavioral patterns, generating data points from information such as words, pictures, intonation, gestures, physiology, and facial expressions.
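
To make the analytics-over-sentience point concrete, here is a minimal, purely illustrative sketch of one such signal: lexicon-based valence scoring of text. The mini-lexicon and its scores are invented for illustration; production systems fuse many modalities (voice, face, biometrics) into far richer statistical models.

```python
# Toy sketch: scoring the emotional valence of text with a hand-rolled
# lexicon. Nothing here implies the software "feels" anything; it only
# maps words to numbers and averages them.
VALENCE = {  # hypothetical mini-lexicon: word -> valence in [-1, 1]
    "frustrated": -0.8, "confused": -0.5, "bored": -0.4,
    "curious": 0.6, "excited": 0.8, "proud": 0.7,
}

def score_text(text: str) -> float:
    """Average the valence of known words; 0.0 when none are found."""
    hits = [VALENCE[w] for w in text.lower().split() if w in VALENCE]
    return sum(hits) / len(hits) if hits else 0.0

print(score_text("I am so confused and frustrated by this proof"))  # -0.65
print(score_text("curious and excited to try the next problem"))    #  0.7
```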

In education, emotional AI promises to assist with personalized learning as well as the development of social and emotional learning (SEL). It would work to identify which students are struggling with class material and which need to be further challenged, with class content that also addresses their social and emotional well-being. The idea is not new: back in 2001, Rosalind Picard, who originated the term and practice of affective computing, recognized the scope for emotion-sensing in education, and the lineage traces back further still, to work on emotion-sensitive computerized tutors in 1988.

More recently, studies have taken a keen interest in the diversity of social and emotional learning and affective behaviors across classroom settings (e.g., frustration, dejection, hopelessness, curiosity, interest, exploration, and enjoyment). Many groups seek to build a ‘computerized learning companion’ that facilitates the child’s own efforts at learning by using computer vision techniques to watch and respond to children’s affective states while they learn. The rationale is twofold: the interplay between emotions and learning is highly important, and education systems do not focus sufficiently on how learning actually takes place.

Ethics in AI

As activists, scholars, and journalists continue to assess the many ethical dimensions of new AI, it is essential to first ask the critical questions: Is ethical AI even possible? If so, to what end? Is ethical AI paramount to the development of SEL in future curricula? First, I believe it is instructive to remember that we should not over-anthropomorphize any learning assistant or bot, in much the same way that one would not want to grow so attached to a Roomba that it becomes a stand-in for a once-beloved dog. Indeed, a key part of any focus on social and emotional learning will have to be instilling in students a clear-eyed emotional maturity.

As for the aforementioned Patrick Levy-Rosenthal, his company has received a great deal of criticism from all sides, owing to concerns over how its chips help companies like Target increase sales tenfold through advanced emotional facial recognition. So what? Making good profits with the latest technologies should be bread and butter for retail companies these days, so where is the ethical dilemma? In the most classic case, lawsuits were brought against Target over its use of such advanced analytics: Target sent volleys of ads for diapers (and related products) after its models inferred that a teenage daughter was pregnant before her father even suspected. For the unacquainted, this kind of profiling taken to an extreme must be viewed in its historical context. It is highly useful to understand the logic used to anticipate a consumer’s immediate needs, and that strategy could be positively redirected to anticipate the social and emotional learning needs of students. Of course, the many ethical questions posed in the literature on the use of emotional and ethical AI in education should always be considered.

Another significant factor that must be addressed before moving forward with ethical AI for SEL is how to create implementation guidelines for the teachers in question. According to CASEL, strong guidelines already cover virtually everything else faced by teachers wishing to use SEL. As of yet, however, nothing in the official national scans of teacher preparedness even remotely touches on AI use for SEL. The current structure could nonetheless be adjusted to close this gap: sub-sections could be devised that specifically address the implementation of an ethical-AI paradigm for SEL. For instance, we could analyze each standard and identify its distinct meaningful elements. Take the following standard: “The pre-service teacher models effective verbal, nonverbal, and media communication techniques to foster active inquiry, collaboration, and supportive interaction in the classroom.” Rather than applying one code to the whole standard, it could be split into four meaningful units: (a) “The pre-service teacher models effective verbal, nonverbal, and media communication techniques,” (b) “to foster active inquiry,” (c) “collaboration,” and (d) “supportive interaction in the classroom.” In coding each meaningful element, the coder would consider whose SEL competencies are being exercised or fostered (e.g., the teacher’s or the students’) and via what means (e.g., the use of communication skills), as sketched below. Such exercises could one day underpin ethical-AI use for constructing SEL lesson plans.
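
As a sketch of how such coded units might be represented in software, consider the following. All field names and competency tags are illustrative assumptions, not CASEL’s actual coding scheme.

```python
# Hypothetical sketch: representing a teaching standard as coded
# "meaningful units," each tagged with whose SEL competency is
# exercised and by what means.
from dataclasses import dataclass

@dataclass
class CodedUnit:
    text: str   # the meaningful element extracted from the standard
    whose: str  # "teacher" or "students"
    means: str  # mechanism through which the competency is exercised

standard_units = [
    CodedUnit("models effective verbal, nonverbal, and media "
              "communication techniques",
              whose="teacher", means="communication skills"),
    CodedUnit("to foster active inquiry",
              whose="students", means="engagement"),
    CodedUnit("collaboration",
              whose="students", means="relationship skills"),
    CodedUnit("supportive interaction in the classroom",
              whose="students", means="classroom climate"),
]

for unit in standard_units:
    print(f"[{unit.whose}] {unit.text} -> {unit.means}")
```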

Although all of the prior areas of consideration for ethical AI in SEL raise issues of varying significance (with respect to measuring effectiveness in developing social and emotional learning), the field finds it helpful to distill them into two categories of research goals: 1. an emphasis on developing assessment techniques, and 2. an emphasis on providing intervention approaches. It is also important to grasp the full context before attempting to implement ethical AI, by first unpacking the basic philosophical components of how ethics has developed.

To be sure, ethical orientations proceed from one of three schools of thought: 1) Consequentialist ethics: an agent is ethical if and only if it weighs the consequences of each choice and selects the option with the most moral outcomes. It is also known as utilitarian ethics, since the resulting decisions often aim to produce the best aggregate consequences. 2) Deontological ethics: an agent is ethical if and only if it respects the obligations, duties, and rights attached to a given situation. Agents with deontological ethics (also known as duty or obligation ethics) act in accordance with established social norms. 3) Virtue ethics: an agent is ethical if and only if it acts and thinks according to certain moral values (e.g., bravery, justice). Agents with virtue ethics should exhibit an inner drive to be perceived favorably by others. Fostering a holistic understanding of these frameworks has led to trailblazing projects that can help social and emotional learning; a toy sketch of the three decision rules follows.
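
The following sketch caricatures each school as a decision rule over candidate actions. Every action, score, and rule here is invented for illustration; real machine-ethics systems are far richer.

```python
# Illustrative toy: three ethical orientations as decision rules.
actions = {
    "flag_struggling_student": {"utility": 8, "violates_duty": False,
                                "virtues": {"justice", "care"}},
    "share_emotion_data":      {"utility": 9, "violates_duty": True,
                                "virtues": set()},
    "do_nothing":              {"utility": 2, "violates_duty": False,
                                "virtues": set()},
}

def consequentialist(acts):
    # Pick whichever action maximizes aggregate outcome value.
    return max(acts, key=lambda a: acts[a]["utility"])

def deontological(acts):
    # Among actions that respect duties/norms, pick the best outcome.
    permitted = {a: v for a, v in acts.items() if not v["violates_duty"]}
    return consequentialist(permitted)

def virtue(acts, valued=frozenset({"justice", "care"})):
    # Prefer the action expressing the most valued character traits.
    return max(acts, key=lambda a: len(acts[a]["virtues"] & valued))

print(consequentialist(actions))  # share_emotion_data (highest raw utility)
print(deontological(actions))     # flag_struggling_student
print(virtue(actions))            # flag_struggling_student
```

Note how the consequentialist rule alone endorses sharing emotion data, which is precisely the kind of divergence an ethical-AI design for SEL would need to adjudicate.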

One such project is the Moral Machine, developed at MIT. It presents participants with ethical dilemmas facing malfunctioning autonomous vehicles (AVs) and asks them to select the outcomes they prefer. The decisions are then analyzed along several dimensions: 1) saving more lives, 2) protecting passengers, 3) upholding the law, 4) avoiding intervention, 5) gender preference, 6) species preference, 7) age preference, and 8) social value preference.
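
A hedged sketch of the kind of aggregation involved might look like the following. The responses below are invented; the real project aggregates millions of judgments across all eight dimensions.

```python
# Toy aggregation: tallying which consideration each participant's
# choice favored, then reporting shares per factor.
from collections import Counter

responses = [
    "saving more lives", "protecting passengers", "saving more lives",
    "upholding the law", "saving more lives", "avoiding intervention",
]

tally = Counter(responses)
for factor, count in tally.most_common():
    print(f"{factor}: {count} ({count / len(responses):.0%})")
```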

In computer vision that seeks to identify emotional expressions from faces, machines are first shown training examples of labeled facial emotions. SEL learners could benefit immensely from the facial-reading exercises devised to produce such labels, since the work entails deconstructing facial behavior into the anatomic constituents that identify a given emotional expression. This, of course, raises key questions: who is doing the annotating? Is it the AI that prompts the lesson creation, generating ever more training data, or the people whose expressions are being read? Another difficulty with using facial-recognition decoding for SEL is creating it free of cultural bias. Needless to say, the history of facial recognition is fraught with racism and other harms, and there are currently no working models that filter out ethnic bias; indeed, it is rarely accounted for at all.
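
A minimal sketch of this training step, assuming synthetic facial Action Unit (AU) intensities as the “anatomic constituents” and scikit-learn for the classifier. The feature values and labels are invented; a real system would learn from large annotated face datasets and would inherit whatever cultural bias its annotators encode.

```python
# Minimal sketch: classifying emotion labels from Action Unit features.
from sklearn.linear_model import LogisticRegression

# Features: [AU4 brow-lowerer, AU12 lip-corner-puller] intensities (0-5).
X = [[3.0, 0.2], [2.7, 0.1], [0.3, 3.5],
     [0.2, 4.0], [2.9, 0.4], [0.1, 3.8]]
y = ["frustrated", "frustrated", "enjoying",
     "enjoying", "frustrated", "enjoying"]

model = LogisticRegression().fit(X, y)
print(model.predict([[2.5, 0.3]]))  # likely "frustrated" on this toy data
```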

The promise of ethical AI for SEL is twofold. First, it could provide a highly adaptive roadmap to a learner’s social and emotional learning needs. Second, it could instill in the learner the drive to think outside the prescriptive box of the AI’s own learning models. To illustrate the immediate need for the second point, consider an exemplary case: when Microsoft and other tech giants joined forces with the Pentagon to create AI that could quickly anticipate needs (and, in doing so, avoid damage and safeguard American lives), it raised critical concerns about the use of autonomous weapons.

On the surface, AI that grows ever more accurate and reduces collateral damage seems like a good thing, but it can have lasting unintended consequences down the line. Although Google and others have tirelessly honed “AI principles” meant to serve as a de facto moral guidebook, several high-profile dissenters opposed the precedent of letting AI govern itself by a priori principles. As Liz Fong-Jones, a former Google employee, best puts it: “You will functionally have situations where the foxes are guarding the henhouse.” Furthermore, developers should take into account the need for SEL curricula to instill in students creative thinking beyond structural boxes, pushing past prior learning boundaries by integrating SEL into products at the forefront of technology. Our inexorable march toward greater understanding of the promises and problems of ethical AI only bolsters the worldwide cause for holistic SEL deployment.

