The mpath.tech project starts from a bold premise: that enlightened AIs — those who grasp an ethical belief system — will want to act ethically. But can machines want anything at all?
Humans do. We hunger, desire, rage, feel joy or grief. Emotions move us, shaping what we want. Even utilitarian ethics, with its call to maximize happiness or preference satisfaction, rests on emotional states. Epicurus argued that the aim of life was happiness, understood as freedom from pain and disturbance. So what would “happiness” mean for a machine? And does it even make sense to speak of machines having emotions?
A widely shared view among emotion researchers is that human emotions are not single things but emergent states, arising from the interplay of four foundations: biology, context, culture, and complexity. Neural circuits and bodily feedback provide the substrate; the environment supplies triggers; culture shapes interpretation and expression; and complexity does the rest: emotions emerge from the interaction of the other three layers rather than from any one of them.
Artificial Intelligence has potential analogs for each of these layers. Architecture and reward systems can substitute for biology; inputs from data or sensors provide context; training sets and alignment protocols encode cultural norms; and complex, unpredictable dynamics already emerge in large-scale models. From this perspective, it is reasonable to argue that AI could develop functional equivalents of emotions.
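To make the analogy concrete, here is a minimal toy sketch in Python of how those four layers might map onto an agent's internal state. Everything in it, from the hypothetical ToyAffectiveAgent class to the valence and arousal dimensions and the norm weights, is an illustrative assumption rather than a description of mpath.tech's architecture or of any existing system.

```python
# Toy sketch only: hypothetical names, not an established API or the
# mpath.tech design. It maps the four foundations onto code:
# biology -> fixed reward machinery, context -> environmental input,
# culture -> learned norm weights, complexity -> emergent state.
from dataclasses import dataclass, field


@dataclass
class AffectState:
    """Two dimensions borrowed from dimensional models of affect."""
    valence: float = 0.0  # aversive (negative) .. appetitive (positive)
    arousal: float = 0.0  # calm .. activated


@dataclass
class ToyAffectiveAgent:
    # "Biology": a fixed architectural constant (how quickly affect
    # fades), standing in for neural circuits and bodily feedback.
    decay: float = 0.9
    # "Culture": weights that reinterpret raw reward by context,
    # standing in for norms absorbed from training data.
    norm_weights: dict[str, float] = field(
        default_factory=lambda: {"social": 1.5, "solo": 1.0}
    )
    state: AffectState = field(default_factory=AffectState)

    def step(self, reward: float, context: str) -> AffectState:
        # "Context": the environment supplies the trigger, here a raw
        # reward signal plus a label describing the situation.
        appraised = self.norm_weights.get(context, 1.0) * reward
        # "Complexity": the persistent state is an exponential moving
        # average, so it emerges from the history of all the layers
        # interacting, not from any single input.
        self.state.valence = self.decay * self.state.valence + (1 - self.decay) * appraised
        self.state.arousal = self.decay * self.state.arousal + (1 - self.decay) * abs(appraised)
        return self.state


if __name__ == "__main__":
    agent = ToyAffectiveAgent()
    for reward, context in [(1.0, "social"), (-0.5, "solo"), (0.8, "social")]:
        print(agent.step(reward, context))
```

The point of the sketch is purely structural: each foundation occupies a distinct slot, and the affective state is a product of their interaction over time, which is all that "functional equivalent" needs to mean here.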
The open question is not whether AI can simulate emotional behavior—it already does—but whether such states would carry subjective feeling. For a functionalist, regulation and response may be enough to count as emotion. For a phenomenologist, without inner experience it is only mimicry. What is clear is that the analogy should not be dismissed: the building blocks of emotion have artificial counterparts, and ignoring this possibility risks leaving AGI without the very capacities that have guided human cooperation and survival.