Renormalizable Morality
Thinking about moral systems more like we think about successful theories of particle physics can help to weed out bad ideas.
Moral philosophy is complicated. The ancient version of it based on divine commandments was a crude attempt but good enough to do one core thing we need morality for: averting societal collapse.
By the time we understood enough about human nature and where it came from (natural selection) to start improving on ancient moral systems, there was a problem. Those systems had become entrenched. The priests, and school masters, and politicians didn’t want to hear about anyone’s new-fangled theories of morality.
The people whose job it is to think about such theories were not raised in a vacuum either. An intellectual cocoon had formed around a basic set of assumptions. And a big one of those was about the limited role of morality in daily life.
Religion got us to the point where we were not lying and cheating and murdering our way through life, maybe writing a check or two every year to some random charities, and that’s about it. The bulk of your existence—pursuing a career, romance, recreation, socializing—is not stuff that normal people think is the province of morality. That’s just selfish stuff you have to figure out on your own.
What you spend the vast majority of your time and resources on is seen as amoral at best. Of course the moral ideal would be to live an absolutely spare, ascetic life, channeling the majority of what you make to faceless people (or animals) oceans away from you and your own interests. But we’re just fallen humans, and nobody ever does that.
Some really vigilant types might give away 15% of what they make, but why not 20% or 25%? Those numbers are certainly possible for most people, if you’re willing to moderately reduce your standard of living. If you’re really committed, you can live a very comfortable life, by historical standards, and still give away anything above $20k/year or so. In principle, why even stop there? Do you really need more than two pairs of pants—more than a Nigerian child needs to not get malaria?
Now, back in reality, you’re not going to give away 80% of your income. Come anywhere close to it and your family will be planning an intervention. Start talking about how the moral reasoning behind your position is totally straightforward, and they’ll bring a psychiatrist with them.
Imagine someone thinking he might arrive at a realistic donation percentage through moral reasoning. He’d be laughed out of the Effective Altruist meet-up. Everyone just accepts that insofar as we have a working theory of morality, the percentage it would advise is far above what you’re actually going to give. It’s understood that a moral theory is not the kind of thing that you can actually apply to real life in that way.
This problem is not new, of course. It exists in the traditional, religious version of altruism. It’s not discussed too much there either, but you can see a glancing acknowledgement of it in the doctrine of “Render unto Caesar,” which says that if the state forces you to do something you think is wrong, just obey and accept it as the price of doing business in real life. You can think of your own voracious, amoral ego as a kind of internal Caesar whose power you cannot reasonably resist. In real life, you’re going to pay your taxes, and you’re going to spend almost all your remaining time and money on pursuing your own selfish interests. Don’t be weird.
The same mentality has been ported over to more rational, secular theories of altruism like old-school Utilitarianism or recent variants like EA. Most people don’t bat an eye about this. But they should. You’re supposed to be rational. You’re not supposed to just be content with inconsistencies. But you accept a moral theory whose big implication—that something like 80% of your actual life is morally suspect—you then just blithely ignore.
The usual response is that humans are imperfect, deal with it. But this is not merely imperfect. It’s 80% of the way to the polar opposite of what you’re supposed to be doing. Human nature is a given, you say, and we’re not going to change it. Well, shouldn’t your moral theory take into account that minor little fact—the fundamental nature of human beings? Isn’t this a pretty big clue that the problem is not you, it’s this particular moral theory, the core of which you have inherited, essentially unchanged, from some Iron Age tribes in the Middle East?
Why are we so inured to this obvious failure of our moral theory? For most people, it’s because they don’t think of it as a theory at all—i.e., as a cognitive tool for understanding a set of observable facts. They see morality as just an instruction set that you’re given as a kid, and if you don’t at least pretend to follow it, people will find you disgusting. When a bundle of ideas sneaks into people’s minds like this, it’s critical to remember that it is just a theory and must be evaluated in the way we do all other theories. We need to mark out the set of facts it’s trying to explain and see how well it performs. No epistemological get-out-of-jail-free cards.
So let’s switch from a set-of-instructions mindset to more of a physics mindset. Physicists are quite familiar with theories like this that get some key things right (like averting societal collapse) while also leading to absurd conclusions when you push them a bit further. A classic example of this was Enrico Fermi’s theory of weak nuclear interactions. People had observed a process called “beta decay” where a neutron can decay into a proton, an electron, and an anti-neutrino. And the Fermi theory worked great to explain it.
But when you use the theory to calculate what happens on smaller and smaller length scales, below the scale relevant to beta decay, the answers get crazier and crazier. Mathematically, the theory “diverges”: it predicts scattering probabilities that grow without bound, eventually exceeding one, as you fully zoom in. What to do? At first, it seems obvious that this inconsistency means you have to throw the whole theory out. You thought you had a good explanation of beta decay, but too bad. Back to the drawing board.
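You can see the breakdown coming from dimensional analysis alone. A sketch (the numbers are the standard textbook values for the Fermi coupling, not anything derived in this essay):

```latex
% Fermi's coupling constant carries units of inverse energy squared:
G_F \approx 1.17 \times 10^{-5}\,\mathrm{GeV}^{-2}
% So a dimensionless scattering amplitude must grow with energy,
\mathcal{A} \sim G_F E^2 ,
% and probabilities blow past 1 once the energy reaches roughly
E \sim G_F^{-1/2} \approx 300\,\mathrm{GeV}.
```

Electroweak theory resolves this by replacing Fermi’s point-like interaction with the exchange of W bosons, whose mass (about 80 GeV) sits safely below that breakdown scale.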
This view, it turns out, is overly draconian. A while later, Steven Weinberg hit on an “electroweak” quantum field theory that explained beta decay without falling into these divergences. And then it was seen that Fermi’s theory emerges as a certain limiting case of electroweak theory, one applicable on the scales relevant to beta decay.
The Fermi theory was thus understood as an “effective theory” in physics lingo, valid in certain contexts but producing absurd conclusions if you push it too far. Effective theories can be quite valuable. They’re usually less complicated than the deeper theories that underlie them. The basic objects in the effective theory turn out, in light of the deeper theory, to be made up of other even more fundamental objects. But the effective theory is just simple enough. It retains enough of what matters to explain what we observe at big enough scales.
Christian morality is an effective theory that works well at the scale that matters most for society as a whole—the scale of whether it collapses or not. But then push the theory a little. Apply it at the individual level in a modern society where we need to make all kinds of complicated, long-term life choices. Then the theory breaks down. Its predictions diverge, offering absurdly self-sacrificial answers like that you should give away 80% of your income.
Modern versions of altruism fix a couple of the technical problems with the old religious version, like reliance on supernatural cosmology. But the new theories still suffer from the same basic divergence problem. And people have become so accustomed to ignoring or explaining away this elephant that no one cares.
It’s as if a whole ideology got built up around the Fermi theory and its sacred notion of protons and neutrons as fundamental particles. The concept of “fundamental particles” would be viewed as just synonymous with those appearing in the Fermi theory. Occasionally someone would broach the possibility that neutrons might be made out of other particles, and that understanding them might solve the divergence problem. But for centuries these people would be branded as anti-social cranks. Neutrons 👏 are 👏 fundamental 👏.
Eventually there might be some kind of revolution within physics out of which a new orthodoxy could emerge. If we got lucky, Fermi’s theory would be desacralized. Maybe then people would start acknowledging the divergence problem and making some actual progress. All hail the standard model of particle physics, discovered in the year of our Lord 2274!
Of course this is silly, given the actual completion of the standard model around 1974. There was a nugget of truth in Fermi’s theory, with its neutron fundamentality. But physicists didn’t sacralize the neutron. They entertained the idea that neutrons may be composed of other, more-fundamental particles subject to new kinds of forces that were weak enough not to have been detected before. That was the environment in which people like Steven Weinberg were free to work on new ideas. And so he was able to complete the electroweak theory, which became a pillar of the standard model.
Likewise, altruism should not be sacralized by those working within the human sciences. Let us not disqualify, from jump, the idea of separating altruism into more fundamental components—some of which might turn out to be effective for humanity, and some of which might not.
The part about not cheating, or robbing, or murdering people is very effective. Also the part about not being an automaton, allowing yourself to care deeply for others, and working together for long-run gains. Good stuff. You need it as part of a universal morality that everyone can practice and keep society stable. Also, it’s totally in line with the millions of years of evolution leading to this strong psychological matrix that formed in our brains around the values of cooperation and love and raising children.
But then there’s the part about caring equally for some distant stranger’s child as you would for your own child. That is just manifestly silly. Every nook and cranny of our evolved nature as human beings screams out against that proposition. The moment you consider it as a simple truth claim, rather than a sacred commandment, it just dissolves into an irrational puddle without the slightest plausibility.
Let us not confuse this egalitarian commandment with the idea that one should have a generalized concern for all human beings. That idea is perfectly defensible. All things equal, there is a natural solidarity you will feel with anyone facing the same broad problem as you—the problem of life, of securing food and shelter, of making friends, of having kids and protecting them, etc. A similar sense carries down to other forms of life too. Dogs in particular have exploited this aspect of us and, in return, give us tons of joy. Life is beautiful.
Therefore, you should sacrifice your child’s life if it would save two random children in Nigeria. Record screech. No. That’s an obvious non sequitur. (And let’s not even broach the whole shrimp thing again.)
Everyone knows this. Everyone knows that everyone knows it. For the love of Pinker, can the moral philosophy community just step out of its categorical wonderland and complete this totally obvious recursive chain? We have to let it be socially acceptable to acknowledge the gaping hole—the mess of divergence problems—at the heart of altruism as a moral theory.
The one plausible concern holding this up (societal collapse) is what’s resolved by understanding altruism as an effective theory. Once it is broken up into more fundamental components, we can make sure to retain the ones that promote a basic level of social stability and peace. And we can throw out the divergent, self-sacrificial ones. Indeed the quantum field theory story has some additional lessons for us here.

As physicists gained a deeper understanding of these theories in the 1970s, they started to glimpse what was really going on with those divergences. It had already been known that while the vast majority of possible quantum field theories have divergences, a tiny subset of them happen to be “renormalizable.” The term refers to a certain behavior of these theories where, as you go to smaller and smaller scales, the calculations can still be “normalized”: the divergences can be absorbed into a finite number of redefined parameters (masses and coupling strengths), so that every measurable prediction comes out finite.
The old perspective on this was that actual divergences couldn’t exist in nature, so the true theory must be a renormalizable one. The awkward fact, though, was that even the renormalizable ones still have divergences buried inside them. It is just that when it comes to any actual experiment you can perform, these internal divergences wind up cancelling out, leaving a finite answer.
But then something important was noticed. Suppose you start out at a really small scale—say, many undiscovered layers of particles down below quarks—and you define some random (even unrenormalizable) theory involving a much more complicated set of fundamental particles on that scale. If you then calculate what kind of effective theory will emerge from this on much larger scales, it will be precisely one of the renormalizable ones. In other words, the composite particles that emerge will always look similar in important ways to the quarks and electrons we actually observe.
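The suppression at work here can be stated in one schematic formula. (This is the standard Wilsonian picture; Λ stands for whatever tiny scale the deeper theory lives at, and nothing here is specific to this essay.)

```latex
% An interaction term of mass dimension d contributes to physics at energy E
% with a strength suppressed by powers of the deep scale \Lambda:
\text{effect} \sim \left( \frac{E}{\Lambda} \right)^{d-4}
% For d > 4 (the non-renormalizable terms), this vanishes as E/\Lambda \to 0.
% Only the d \le 4 terms survive at large distances, and those are precisely
% the renormalizable interactions.
```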
What this means is that there’s not a lot of reason to believe quarks and electrons themselves are truly fundamental. They’re the kinds of things that will always come up, whatever nature really looks like at the tiniest scales. The standard model itself is likely just an effective theory. And the whole quest for that perfect mathematical form—the “Theory Of Everything” (TOE) that would be the last word in particle physics—is probably in vain.
That quest for the TOE is like the EA quest for a perfect moral theory, which could finally show us how to get a clear, rational answer for the size of your “expanding circle” of moral concern. Every specific attempt to do this, over centuries, has fallen apart, spewing out the same anti-evolutionary, self-sacrificial divergences. At a certain point you have to recognize this quest as a degenerating research program.
The solution is not to throw up your hands and declare that moral theory is just inapplicable to real life. It’s to first acknowledge that all plausible theories are merely effective ones (in the physics sense), but that some of them are simply more effective than others. When you roll up your sleeves and look at what each moral theory implies, you’ll find that some of them are unrenormalizable and destined to blow up into absurdities when applied to your actual life. But some moral theories are renormalizable. They don’t blow up. They still make sense at the scales you care about—your career, your love life, your recreation.
For those who might continue down this intellectual path, there is still one daunting stretch to get past. If altruism really must go because it’s unrenormalizable, what we’re left with is some form of egoism. Social undesirability alert! But let’s be a little introspective and realize that we’ve already partaken in a good dollop of egoism just by setting foot on such a path.
Effective Altruism was motivated by an intolerance for the sloppy reasoning and lackluster results produced for centuries by Vibe Altruism. The Vibe Altruists tell you to relax, and don’t be so weird. Morality is just that thing about being nice and getting along with people, so you can concentrate on getting into Brown and ultimately landing a sweet product manager gig. It’s not about going all intellectual Columbine whenever someone mentions a charity they donated to.
The point of EA was to expose that kind of superficial thought process for what it is and give some pause to the vibers. The movement was kicked off by a deep sense of cognitive dissatisfaction—a sense that conventional wisdom on this topic is out of order conceptually, chafing against your rational desire to put things into coherent systems. Like all cognitive progress, it’s driven by an order-craving, Aspie kind of urge.
Let us now admit it. Where that urge comes from is not exactly a heartfelt care for and connection with others. The source is more like: No, I will not just go with the flow. It is my system of values. It is my mind. Leave yours in shallow puddles of disorder. But I will order mine according to my knowledge, the evidence I think is robust, and the results of my own reasoning. EA was founded on a kind of epistemological pride and even selfishness.
Sorry boys, you’ve already begun it. Now look at the face of your girlfriend, or of your giggling, dimpled little child. Continue down the path and you will eventually realize that this face is rationally more valuable to you than two strangers on the other side of the planet could ever be. And then it will be time for sleeve-rolling and moving a bit further afield on the meta-ethical landscape than you might have thought necessary before.
I don't think that, starting from evolved moral predispositions, we need to get rid of altruism altogether. My common sense says there is a biological desire to save a drowning child if we come across one. There is just a mismatch between the small communities we evolved to live in and the current global village. And of course that desire is stronger the closer that child is to us. There is no impartiality in anything evolved.
While normative ethics is distinct from the evolution of morality, it seems clear to me that you need an understanding of moral evolution to build a sound normative ethics:
https://forum.effectivealtruism.org/posts/aCEuvHrqzmBroNQPT/the-evolution-towards-the-blank-slate