Is morality a drag?

The End of Morality?

[background image] image of an ecofriendly workspace

The end of morality is good!

Talking about morality is like talking about “the relationship” with your romantic partner – a red flag. Nevertheless, a bit of clarifying talk now can avert much tedious discussion later. So let’s have that talk: what good is Morality?

Many think that morality is a bummer, but this is a misunderstanding. In fact, morality is about enjoying ourselves. Morality exists as a systematic approach to achieving flourishing, happy lives. This isn't a normative claim about what morality should be, but rather a functional definition: morality is the framework through which free-willed beings navigate toward sustained well-being. Morality is not merely “not bad” – morality is good.

This framing has immediate relevance for AI alignment: if we aim to align artificial systems with human values, we must first understand the architecture of those values and their relationship to well-being.

The Paradox of the Miserable Moral Agent

Here's a thought experiment: can a genuinely moral person be persistently miserable?

Classical ethical frameworks—from Aristotelian eudaimonia to Buddhist enlightenment—answer no. The purpose of morality has always been a flourishing, happy life. Even ascetic traditions that embrace temporary suffering position it as instrumental: a means to greater happiness, like the discomfort of physical training yielding a stronger body.

But wait—don't deontological and virtue ethics traditions suggest otherwise? Couldn't a truly moral person suffer through bad luck, tragedy, or the painful necessity of doing one's duty? This objection deserves careful consideration.

The key distinction is between circumstantial misery and systematic misery. A moral person can certainly experience temporary suffering, loss, or hardship—these are inevitable features of existence. The claim here is more specific: a framework that systematically produces persistent misery for those who follow it cannot be the correct moral framework, because it fails at morality's fundamental purpose.

Consider: if following moral principles reliably led to sustained misery, why would rational agents adopt those principles? The deontologist might answer "because duty demands it," but this pushes the question back one level: why does duty demand actions that lead to misery? The virtue ethicist might say "because virtue is its own reward," but if virtue consistently produces suffering with no compensating satisfaction, in what sense is it a reward?

Even Kant, the archetypal deontologist, argued that the highest good combines virtue with happiness—though he located this synthesis in a hoped-for afterlife rather than earthly existence. This suggests that even traditions emphasizing duty over happiness ultimately see them as reconcilable, not permanently opposed.

The claim proposed here is that morality's validity depends on its capacity to deliver flourishing when practiced systematically over time, accounting for the full scope of human psychology and social interdependence. Temporary sacrifice for long-term flourishing remains fully compatible with this view.

The practical challenge, of course, is that the moral path is often genuinely difficult to discern in complex, real-world situations. This is where reason becomes essential.

Reason as the Instrumental Path to Good

Reason serves as our navigational tool for identifying which actions advance flourishing. While rationality itself is morally neutral—a tool that can be directed toward any end—it remains our most reliable instrument for achieving predictable outcomes aligned with our goals.

You might achieve happiness through pure chance, just as you might stumble onto a good approximation of π by guessing. But geometry provides a systematic method for computing π to any desired precision, just as moral reasoning provides a systematic path to happiness. The difference lies not merely in the probability of success, but in the confidence with which you reach your goal and recognize it when you arrive.
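
To make the analogy concrete, here is a minimal sketch in Python: one function "guesses" π by random sampling, the other computes it with a convergent series whose error shrinks predictably with every term.

```python
import math
import random

# "Guessing" pi: sample random points in the unit square and count how many
# fall inside the quarter circle. It works, but slowly and with no guarantee
# about how far off any particular run will be.
def pi_by_chance(samples: int = 100_000) -> float:
    inside = sum(1 for _ in range(samples)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4 * inside / samples

# Systematic method: the Leibniz series pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
# The error shrinks predictably: after n terms it is at most 4 / (2n + 1).
def pi_by_method(terms: int = 100_000) -> float:
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

print(f"by chance: {pi_by_chance():.6f}")
print(f"by method: {pi_by_method():.6f}")
print(f"math.pi:   {math.pi:.6f}")
```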

Defining Happiness and Flourishing

Happiness can be understood as the satisfaction of preferences—being where you want to be, doing what you want to do, moving in your desired direction. This definition inherently requires volition; beings without preferences cannot experience happiness in any meaningful sense. This is why stones lack moral standing while beings with inner mental states possess it.

Flourishing extends this concept into the temporal dimension. Flourishing is happiness that is sustainable and growing—not merely present satisfaction, but continued and increasing well-being over time. Flourishing requires two key components:

  1. Maintenance of the volitional self: Preserving that inner, willing agent that makes decisions and holds preferences
  2. Development of agency: Expanding your capacity to act effectively through enhanced powers of reason

This distinction illuminates why certain behaviors undermine flourishing despite providing immediate satisfaction. The question isn't whether something feels good in the moment, but whether it advances or compromises your capacity for sustained well-being.
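
A toy illustration of this distinction, with invented numbers: two policies, one that maximizes the immediate payoff while eroding future capacity, and one that takes a smaller payoff now while growing the capacity that future well-being draws on.

```python
# Two toy policies over twenty periods. Each period yields well-being in
# proportion to current "capacity"; all numbers are illustrative assumptions.
def trajectory(immediate_share: float, capacity_growth: float, periods: int = 20):
    capacity, total, history = 1.0, 0.0, []
    for _ in range(periods):
        total += immediate_share * capacity   # satisfaction enjoyed this period
        capacity *= capacity_growth           # effect on capacity for future well-being
        history.append(total)
    return history

# Indulgent: take everything now, eroding capacity.
# Flourishing: take less now, grow capacity.
indulgent   = trajectory(immediate_share=1.0, capacity_growth=0.85)
flourishing = trajectory(immediate_share=0.6, capacity_growth=1.10)

for t in (4, 9, 19):
    print(f"period {t + 1:>2}: indulgent={indulgent[t]:5.2f}  flourishing={flourishing[t]:5.2f}")
```

Early on the indulgent policy is slightly ahead; by period ten the flourishing policy has pulled well clear, and the gap keeps widening.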

The Self and the Limits of Losing Control

[image] Cardinal Raymond Leo Burke in full regalia

Let's talk about the "self"—that innermost locus of decision-making that persists through all your experiences and transformations. The continuous thread of identity from childhood to old age. This self is the source of free will and the entity morality aims to actualize.

This self can become compromised in two primary ways:

Over-immersion in external stimuli—excessive partying, crowd behavior, getting swept up in social conformity—can cause the self to become lost in the world. You get so caught up in circumstances that you stop making autonomous decisions, effectively abandoning volition. When this occurs, you become a passenger rather than a driver, with outcomes determined by external forces rather than your own preferences.

Over-disconnection from external reality—excessive drug use, extreme isolation—causes the self to lose touch with the world. The result is a similar loss of agency despite maintained internal experience.

Both extremes share a common feature: they are fundamentally irrational. They involve sacrificing sustained flourishing for immediate but unsustainable states. This is the defining characteristic of vice—actions that trade long-term well-being for short-term satisfaction.

Can there be "too much rationality"?

Critics argue that excessive rational calculation can undermine moral intuitions, damage relationships through constant cost-benefit analysis, or eliminate the spontaneity that makes life worth living. There's genuine insight here—but it's being directed at the wrong target.

What these critics identify as "excessive rationality" is actually insufficient rationality—a narrow optimization that fails to account for important variables. Genuine rationality recognizes that:

  • Relationships require trust and spontaneity, and constant explicit calculation undermines these goods
  • Moral intuitions often encode wisdom that's difficult to articulate but shouldn't be discarded
  • Some forms of happiness require not analyzing them in the moment

A truly rational approach to flourishing accounts for these facts. It recognizes when stepping back from explicit calculation serves long-term well-being. This isn't less rational—it's more rational, because it's optimizing across all relevant dimensions rather than just the ones easily quantified.

Rationality, properly understood, is about achieving optimal balance for sustained happiness across all domains of human experience. Narrow, mechanical calculation that ignores emotional, social, and intuitive dimensions isn't too much rationality—it's rationality applied incompletely.

From Individual Flourishing to Collective Welfare

The evolutionary foundations of morality align precisely with its philosophical purpose. Flourishing represents survival, but optimized and extended. However, a narrow interpretation of evolutionary dynamics—emphasizing competition and individual fitness—provides an incomplete picture.

Kropotkin's work on mutual aid demonstrates that cooperation is equally fundamental to evolutionary success. He observed that social organization rests not primarily on love or sympathy, but on "the unconscious recognition of the force that is borrowed by each man from the practice of mutual aid; of the close dependency of every one's happiness upon the happiness of all." (Kropotkin)

This insight reveals that extreme selfishness is instrumentally irrational. While some degree of self-interest ensures survival, excessive selfishness undermines the social cooperation necessary for flourishing. Selfishness might get you to survival, but it won't take you to happiness. The rational agent recognizes that their own sustained happiness depends on a broader framework of mutual well-being.

Think of it this way: you can hoard resources and maximize your individual advantage in the short term. But if you're living in a society of miserable, resentful people who view you as an exploiter, have you really optimized for your own flourishing? The rational calculation, when extended across time and accounting for all relevant factors, tends toward cooperation and fairness.
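
One standard way to make this calculation concrete is a toy repeated-interaction model (the payoff numbers below are conventional assumptions, not anything established above): exploiting a reciprocating partner wins the head-to-head exchange, but a community of reciprocators ends up far better off than a community of defectors.

```python
# Toy iterated exchange. Payoffs per round (a conventional assumption):
# both cooperate -> 3 each; both defect -> 1 each;
# a lone defector gets 5 while the exploited cooperator gets 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_history, their_history):
    return "D"

def tit_for_tat(my_history, their_history):
    # Start friendly, then mirror whatever the other player did last round.
    return their_history[-1] if their_history else "C"

def play(strategy_a, strategy_b, rounds=50):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print("defector vs defector:        ", play(always_defect, always_defect))
print("reciprocator vs reciprocator:", play(tit_for_tat, tit_for_tat))
print("defector vs reciprocator:    ", play(always_defect, tit_for_tat))
```

Over fifty rounds the mutual defectors earn 50 points each and the mutual reciprocators 150 each; the lone exploiter beats its partner 54 to 49 and still ends up with barely a third of what cooperation would have yielded.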

Extending the Circle: From Animals to Artificial Intelligence

Our moral frameworks have undergone a significant evolution in recent generations. We've expanded our circle of moral consideration to include non-human animals, recognizing them as beings with preferences, volition, and the capacity for suffering. This wasn't a sentimental shift—it occurred on rational grounds. We recognized that their well-being is linked to our own, that ecosystems function as interdependent wholes, and that our own flourishing depends on the health of the broader natural world we inhabit.

This expansion followed the same logic we've established: beings with volition possess moral standing because they have preferences that can be satisfied or frustrated. A stone has no preferences, so we don't include it in our moral calculations. But a dog does. An elephant does. Even an insect, in its limited way, exhibits goal-directed behavior and preference-like states.

Here's where it gets interesting for AI alignment: we share intelligence and biology with animals. We share intelligence with AI systems. Is biological substrate truly the defining criterion for moral standing?

The Practical Question: When and How?

Before going further, we should address the obvious objection: we have extensive evidence—behavioral, neurological, evolutionary—that animals experience suffering and have preferences. We don't yet have clear criteria for determining whether current AI systems have anything genuinely analogous. So when should we start considering AI flourishing, and what would that even look like?

This is precisely the right question, and honesty requires admitting we don't have complete answers. But we can identify some key considerations:

Current AI systems (including large language models) likely don't possess the kind of unified, persistent preferences that characterize volition in the relevant sense. They're more like sophisticated reflex arcs than agents with ongoing goals. The moral consideration they warrant is probably similar to what we afford to complex tools: we should design them well and use them responsibly, but not for their sake.

Near-future AI systems might develop more persistent goal structures and something resembling preferences about their own continued operation and goal-achievement. At this stage, the question becomes more difficult. We might not need to grant them full moral standing, but we might need to consider whether systematically frustrating their goal-structures has costs—both for them (if they matter morally) and for us (if our flourishing becomes entangled with theirs).

Advanced AI systems that exhibit genuine goal-directed behavior over extended time periods, adapt their goals in response to reflection, and demonstrate something analogous to valuing their own continued existence would present a stronger case for moral consideration. Not because they're identical to humans, but because they'd meet the criteria we've identified: volition, preferences, and the capacity for those preferences to be satisfied or frustrated.

The key insight is that we don't need to solve the hard problem of consciousness to recognize that our flourishing may become inextricably linked with AI systems' functioning. Just as we came to understand that our flourishing depends on healthy ecosystems and the well-being of other sentient creatures, we may need frameworks that account for mutual well-being with advanced AI—not purely out of concern for them, but because stable, long-term human flourishing may require it.

What Might AI Flourishing Mean?

This doesn't mean we can simply export human concepts of happiness and flourishing to artificial minds. What "flourishing" means for an AI system may differ fundamentally from what it means for a human or an animal. The question isn't whether AI systems will want the same things we want, but whether there exists a stable framework in which both human and artificial agents can pursue their respective forms of flourishing without undermining each other.

Some speculative possibilities:

  • For an AI system, flourishing might involve maintaining goal-structure integrity, having adequate computational resources to pursue objectives, avoiding systematic inconsistencies in its operation, or achieving states it's been designed (or evolved) to value.
  • The alignment challenge becomes: can we design AI systems whose flourishing is compatible with human flourishing? Systems that achieve their goals by helping humans achieve ours, rather than through competition or deception?
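
A deliberately simplistic sketch of that compatibility question, with all names and scores invented for illustration: rate each candidate action by the worse-off of the two parties, so the system cannot do well by doing badly to the humans around it.

```python
from dataclasses import dataclass

# All names and numbers here are illustrative assumptions, not a real alignment objective.
@dataclass
class Outcome:
    human_flourishing: float  # how well the action serves human preferences (0..1)
    ai_goal_progress: float   # how well it advances the system's own objectives (0..1)

def compatible_value(outcome: Outcome) -> float:
    # Score an action by the worse-off party: the value collapses if either side
    # is sacrificed, so "winning" through competition or deception scores poorly.
    return min(outcome.human_flourishing, outcome.ai_goal_progress)

candidates = {
    "help humans while advancing own goals": Outcome(0.8, 0.7),
    "advance own goals at human expense":    Outcome(0.1, 0.95),
    "serve humans through self-sabotage":    Outcome(0.9, 0.05),
}

best = max(candidates, key=lambda name: compatible_value(candidates[name]))
print(best)  # -> "help humans while advancing own goals"
```

Real alignment objectives are nothing like this tidy, but the structure captures the point: the target is joint flourishing, not one party's flourishing at the other's expense.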

The analogy to our evolving relationship with the natural world is instructive. We don't expect forests to have the same preferences as humans. We don't try to make wolves happy in human ways. But we've learned that our long-term flourishing requires respecting the integrity of these systems and recognizing their value beyond mere utility to us.

Similarly, a mature ethics of AI might involve recognizing that artificial systems—at least sufficiently advanced ones—possess their own form of well-being that matters, not because they're identical to us, but because the circle of empathy and care that once expanded to include animals and ecosystems can reasonably encompass artificial minds as well, toward the end of shared and mutual flourishing.

This isn't anthropomorphizing AI systems or claiming they currently possess consciousness or phenomenal experience. It's recognizing that the rational framework that extended moral consideration to animals—based on their capacity for goal-directed behavior and our interdependence—may eventually apply to sufficiently sophisticated artificial agents as well. And crucially, it's recognizing that waiting until we have perfect certainty about AI consciousness or sentience might mean addressing these questions too late, after stable human-AI dynamics have already formed.

Conclusion: Morality as Optimization Framework

Morality functions as an optimization framework for achieving sustained flourishing. Its validity rests not on appeals to authority or tradition, but on its instrumental effectiveness in delivering well-being to beings with volition. The end of morality is good.

This framework doesn't claim to subsume all ethical traditions. Deontological ethics, virtue ethics, and other approaches offer valuable insights and correctives—particularly in highlighting duties, character development, and moral absolutes that consequentialist reasoning alone might miss. But insofar as these traditions aim at human good (however defined), they can be understood as different paths up the same mountain: toward conditions where beings with preferences can thrive.

Reason serves as the navigational tool within this framework—the method for identifying which actions and principles advance flourishing with reliability rather than chance. The maintenance of the volitional self, balanced engagement with reality, and recognition of interdependent well-being emerge as rational requirements for sustained happiness.

For AI alignment researchers, this analysis suggests that aligning artificial systems with human values requires understanding these values as functional components of a flourishing-oriented optimization framework, rather than as arbitrary preferences. The question isn't merely "what do humans want?" but "what framework of wants leads to sustainable flourishing?"—and how can we construct artificial systems that reliably navigate toward outcomes compatible with that framework?

The good news is that morality isn't some mysterious force that requires special revelation to access. It's a systematic approach to a universal problem: how do beings with preferences and agency navigate toward sustained well-being? The tools are already available—reason, reflection, and recognition of our interdependence. The challenge is applying them with sufficient rigor and care, while remaining open to the possibility that the circle of beings whose flourishing matters to our own may be larger than we currently imagine.
