Human morality is not at root a matter of dogma or subjective preference. There is a scientific way to approach moral problems and identify our most basic values, starting from our nature as biological—and cognitive—beings. While acknowledging the social role played by religious moral systems, this leads us to a much broader, more agentic conception of human flourishing made possible within modern societies.
People are tribal. Something deep in our natures drives us to find a group that feels like a good fit, join it, and then go to war against other groups. As civilization progresses, our interests and feuds become more abstract, so our groups become less about kinship or even nationality and more about a shared set of ideas—a worldview. You can observe these changes over millennia but also over decades. Our politics—i.e., the domain in which the clashes play out at any moment in history—is increasingly about questions of morality.
Many think this means the clashes are irreconcilable. They think statements about morality are unprovable and occupy a totally different realm than factual statements about the world. From our earliest childhood experiences, learning about what is seems very different from learning about how we ought to behave. And this “is-ought gap” has become something that natural and social scientists usually take for granted. How, they ask, could you possibly derive moral principles like “Don’t steal!” from science — from things you can test in a lab?
Occasionally some philosopher will object and say that you can in fact derive moral ideas from mere facts. What usually happens, then, is a descent into semantic debates about what the concepts of “morality” and “values” even mean. Civilizational clashes hang in the balance, but we can’t seem to reduce the dispute to anything empirical or testable.
The usual response to this is that it’s just too abstract. Leave it to philosophy seminars, and don’t expect any practical insights for the problems society faces today. What if, instead, it’s not abstract enough? If we go one level deeper, more abstract, I believe there’s a different route that leads right to the bedrock of empirical data—of science.
It begins with an epistemic question. Do moral concepts mean anything at all? Are notions like “value” legitimate even as concepts? Or are they just floating abstractions like “fairy” or “phlogiston” that don’t actually refer to any objects in reality? If legitimate, they must help us to organize some set of facts that we can perceive in the world. Which facts then, and how?
Value is, at root, a biological concept. We don’t have to delve into questions of human morality to see why we need this concept in its broadest form. So let’s first consider values on a biological level, in terms of what all life has in common, not specifically in terms of humans with complex brains making choices.
Sensing Values
The most basic organisms are clearly just little machines. A bacterium detects some decaying organic matter and acts to break it down, assimilating its energy-rich components. It’s not hard to imagine a quantum mechanical simulation, atom by atom, of how this happens.
And even at a higher level of analysis, something like that of thermodynamics, there is a lot that can be understood about this process. The organic matter possesses a biological order. Technically speaking, this order is a kind of low-entropy thermodynamic resource that is used up by the bacterium, which releases higher-entropy streams of carbon dioxide, water, and heat along the way. If you step back, forgetting that there are life-forms involved here, the bacterium looks like a gigantic catalyst molecule. It’s just accelerating what was already a spontaneous chemical reaction — the reaction of hydrocarbons gradually disintegrating into bits, their biological order being annihilated.
In this process the bacterium has gained some internal order, which is to say it has decreased its internal entropy. But the entropy of its surroundings (the stuff it didn’t eat plus its own waste products) has increased by a greater amount. The total entropy of the universe marches onward and upward. You can think of the internal entropy decrease as a kind of compensation the bacterium receives from the universe for the thermodynamic service it provides of increasing the universe’s total entropy. The universe is kind of making a deal with the bacterium, saying: “Look, I know you’re actually gaining some internal order by being what you are and doing what you do, but you’re demolishing much more order outside you—so I, the universe, am still winning at the end of the day.”
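This bookkeeping can be written out schematically. The display below is nothing more than the second law applied to the organism-plus-surroundings system, not a new biological quantity:

```latex
\Delta S_{\text{total}}
= \underbrace{\Delta S_{\text{organism}}}_{<\,0}
+ \underbrace{\Delta S_{\text{surroundings}}}_{>\,0}
> 0
\quad\Longrightarrow\quad
\Delta S_{\text{surroundings}} > \bigl|\Delta S_{\text{organism}}\bigr| .
```

The “deal” holds exactly when the surroundings’ entropy gain more than covers the organism’s internal entropy drop.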
Imagine for a moment an opposite kind of thermodynamic service provider that, instead of receiving an entropy decrease for itself, receives an entropy increase. Each time it eats, this anti-bacterium itself gets more disordered, internally. That’s a recipe for quick death, as more and more disorder would build up inside it and ultimately make basic metabolic functioning impossible. Such anti-bacteria, were they somehow to come into existence, would quickly die out, succumbing to normal bacteria in the competition for survival. This is the logic of natural selection. Organisms that pursue things conducive to their own biological order — their lives — multiply and evolve. Those that do not, die out. It is useful to have a concept for the things that an organism pursues in order to increase its own biological order: we call these things the organism’s values.
Strictly speaking, this is an analogy rather than a precise application of thermodynamics. Entropy is a subtle concept, and there are already a number of different flavors of it that have arisen in related but distinct contexts, all the way from Clausius’ original thermodynamics, to Boltzmann’s statistical mechanics, to the information theories of Shannon and Kolmogorov. Someday we might find the precise definition of a new, biological flavor of entropy, perhaps linking it to these other flavors.
A formal theory of life like this would be analogous to the formal theories of non-biological order conceived in these other areas. Perhaps such a theory could explain how life emerged from inorganic matter. Presumably it would also give a very concrete picture of the physical basis of values, even more detailed than the picture offered by modern biology.
But this latter picture is already quite powerful. How does an organism, even a simple little one like a bacterium, “know” what to value so as to sustain its own kind of biological order? Modern biology has a direct answer: it “knows” by means of its genetic code. The values conducive to its life are programmed into its genes. Of course a bacterium’s genome doesn’t have specific instructions to pursue any one particular piece of organic matter. It has a set of instructions that cause it to act in certain ways (react to certain kinds of stimuli according to certain genetic algorithms) that tend to result in its consumption of organic matter.
This is good, because if the bacterium needed specific instructions tailored to the exact conditions of each and every feeding opportunity, it would have to carry around an enormously large genome. So large it never could have evolved in the first place.
The genetic code is, in effect, written on a programming layer that uses abstraction — the stuff of rules, heuristics, and models that tend to “work” even if they are not perfect or exhaustive reproductions of reality. Abstraction allows organisms to economize on genetic complexity. Put another way, for the fixed level of complexity possible to an organism at a certain point in its species’ evolutionary history, using abstraction in its genetic code allows it to be more fit—able to deal with more kinds of environments, to pursue more kinds of food sources, to better sustain its biological order in general. This genetic kind of abstraction maximizes biological fitness for a limited genetic storage capacity.
We currently know something about how these effective abstractions work in the genomes of many different kinds of organisms. We know this one particular gene sequence codes for the production of that one kind of protein, which contributes to this trait or behavior, which enables the organism to better deal with these kinds of conditions in its environment. Today we have unlocked only a small fraction of how such genetic abstractions work across the biological world. In computational terms, we only understand a little bit of the code, and there are probably tons of basic features of the programming language itself we haven’t figured out yet.
It is very hard in general to achieve a precise, genetic explanation of any given large-scale trait or behavior of an organism. A trait is typically the cumulative effect of small contributions from very large numbers of genes that are each multitasking on many other traits as well. Still, if there is a systematic tendency for, say, a bacterium to find nutrients by swimming along the concentration gradient of certain chemicals it can sense, there’s almost certainly something in its genome controlling that behavior. The cumulative genetic pattern resulting in that stimulus-response is effectively a module in the larger program. And it’s been optimized over eons of evolution to pick up on regularities in nature to help it find more food.
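To make the idea of such a module concrete, here is a toy sketch of the bacterium’s run-and-tumble strategy in simplified one-dimensional form. The nutrient field and the step rule are invented for illustration; the point is that a fixed conditional rule, with no explicit representation of a gradient, reliably carries the organism toward food:

```python
import random

def concentration(x):
    # Hypothetical one-dimensional nutrient field, peaking at x = 0.
    return 1.0 / (1.0 + abs(x))

def forage(x0, steps=200, seed=0):
    """Run-and-tumble sketch: keep swimming in the same direction while
    the sensed concentration is rising; tumble to a random new direction
    when it falls. No gradient is ever computed directly."""
    rng = random.Random(seed)
    x, heading = x0, rng.choice([-1, 1])
    sensed = concentration(x)
    for _ in range(steps):
        x += heading
        now = concentration(x)
        if now < sensed:                 # worse than before: tumble
            heading = rng.choice([-1, 1])
        sensed = now
    return x
```

Started far from the peak, the walker drifts toward it; the “value” here is nothing more than the conditional rule linking sensed change to motor output.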
Perhaps someday we will be able to translate a bacterium's genome into a higher-level, readable programming language, and we will understand exactly how that gradient-following module works. Whether or not we humans ever achieve this understanding, the actual genetic pattern constituting that module is, in fact, a physical referent of—a thing in reality referred to by—the concept of value.
In one sense it is obvious that there are some kinds of physical facts like this, pointed to by value. There must be, if biology is ultimately just atoms and molecules. But it is instructive to think concretely about how those facts are manifested in atoms. The concept of value thus acquires a set of physically real referents just as a much simpler concept like color has physical referents. No one worries about the fact-color distinction. What things in reality does color refer to? Just open your eyes.
Likewise, once we can really understand the bacterium’s genome, you could identify a specific genetic pattern like the one causing it to move toward higher concentrations of a particular chemical. You would thus have identified a set of physical features of the bacterium that encodes a particular one of its values. You could go to a lab, peer into a microscope, and see a value.
This genetic pattern would probably be very subtle, and you wouldn’t actually see it like you would see a very small splotch of color. You would see it like you “see” an electron, which itself is just a special kind of pattern appearing in certain quantum fields. You cannot directly see an electron under a microscope either. But you can see a whole bunch of things in a lab, and you can use a lot of math to infer the existence of quantum fields with certain kinds of patterns, one of which you call an electron. Likewise you can go to a lab and see, base pair by base pair, the entire bacterial genome, and you could (if you knew enough about it) infer certain kinds of patterns within that genome that are the physical referents of a bacterium’s values.
The goal of the science making this possible would of course not be to resolve an issue in moral philosophy. The goal would be to understand more about bacteria. And it would be impossible to even begin doing this without first having the concept of value. Just like it would have been impossible to identify the physical basis of colors in terms of different wavelengths of light without first having the concept of color. Nevertheless, discovering such a specific physical basis surely put the lie to anyone who thought colors were arbitrary mental phenomena disconnected from the physical natures of colored objects. And likewise for values.
Value is about as central to biology as color is to optics. Darwin could not have conceived the theory of evolution without first having the concept of value, whether or not he used that particular term. Evolution is about organisms that, unlike rocks or clouds, pursue certain things because those things maximize their fitness — i.e., sustain and expand (including through reproduction) the organism’s biological order. Pondering such a category of things for a given organism — its values — is among the very first steps in a long intellectual path leading to the theory of natural selection. It is not so much that you need to derive the concept of value through some elaborate deduction, any more than you need to derive the concept of color. Value exists at the beginning. It’s where biology starts.
A moral skeptic might simply deny that this biological concept of value connects with moral values for human beings. Perhaps the values of simple organisms can be understood in essentially the same way we understand things like color. But can moral values? Let’s move up the biological complexity ladder and see.
Perceiving Values
More complex organisms generally have more genes and more external and internal modes of action. They evolved more organs (eyes, ears, etc.) to receive richer signals about the external world. They evolved brains to store, organize, and process these signals (memory, vision, etc.). And they evolved more behaviors to leverage this extra information processing in pursuit of their values. Each advance can be traced back in terms of which values it helped the organism pursue. The more important the value to an organism’s fitness, the bulkier and more sophisticated the internal structures devoted to it may become. Structural complexity is driven by value capture.
This is the Whig view of evolution, saying that some species are just inherently more sophisticated than others. Such a view is often called naive and unscientific, because every lineage alive today has been evolving for exactly the same period of time, since life began, so they’re all equally “as evolved.” But clearly different environments lead to different effective rates of evolution. In general, benign, stable environments produce less evolutionary change. Moreover, genome size can be taken as a crude measure of evolutionary sophistication, though more involved measures than just counting base pairs could yield better results.
If one day we arrive at a formal theory of life, we might be able to rigorously measure and compare the levels of algorithmic complexity encoded by different genomes, reflecting a broad range of different body plans, brain types, survival strategies, etc. It would be a shocking coincidence if such a measure were to reveal that the evolutionary sophistication of a lineage depends only on the time-span of that lineage. Scientifically, as a hypothesis, that idea is very improbable. The anti-Whig view itself seems to be rooted less in fidelity to the science than in susceptibility to an anti-human ideological bias — a kind of biological relativism coming from the same impulse behind the idea of cultural relativism.
More complex organisms, in fact, not only tend to be better at pursuing basic values, they acquire new kinds of values. Better memory, for example, means a fox can explore more territory for new prey. Better vision means a bird can discriminate more effectively between different patterns on the plumage of potential mates. Such things might increase fitness only indirectly. The mere exercise of a new ability to recognize plumage patterns may not by itself produce more offspring, but it may be used in conjunction with certain new mate selection behaviors to produce more offspring. The plumage-recognition value exists for the sake of another more basic value, mate selection, which itself exists for the sake of producing more and fitter offspring. More complex sets of interdependent values emerge in a natural hierarchy, all serving the ultimate goal of the organism’s overall fitness.
A major divider in this complexity zoo is the emergence of consciousness.
Perception, the ability to experience the world in terms of unitary objects rather than just fleeting sensations, is one key aspect of consciousness. This ability appears to be a prerequisite for the later development of concepts and abstract thought.
Another key aspect of consciousness is the experience of pleasure and pain. As the information processing power of the brain expands, it executes increasingly complex algorithms fed by richer perceptual signals. The decision-making part of the organism becomes more sophisticated, more autonomous, more differentiated from the organism’s sensory inputs and motor functions. Decision is ruled by a new mechanism—the pleasure/pain mechanism, which constitutes a new executive layer in the nervous system. Many different sub-algorithms may assess different aspects of a perceived object, and the new mechanism processes it all together into one conclusion: pleasure (value), pain (disvalue), or indifferent (neutral). This pleasure/pain mechanism allows a quick response to complex, multi-faceted stimuli that are affecting the animal immediately. It can also be combined with the brain’s predictive processing for subtler evaluations involving projected future benefits or dangers.
A fox sees something scurry in the grass; he smells it; he hears it make some sounds. He perceives all this as a unitary object. His brain begins an evaluation process—chug, chug, chug — and all results are quickly summed up into a single anticipation of pleasure, pain, or indifference. This leads directly to action: pounce on the prey (value), avoid the menace (disvalue), or disregard the nuisance (neutral).
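A minimal sketch of this summing-up, with made-up assessors and a made-up threshold (nothing here is a claim about real fox neurology), might look like this:

```python
def evaluate(percept, assessors, threshold=0.5):
    # Each sub-assessor scores one aspect of the percept; the executive
    # layer sums them into a single pleasure/pain/neutral verdict.
    score = sum(assess(percept) for assess in assessors)
    if score > threshold:
        return "pleasure"   # pursue
    if score < -threshold:
        return "pain"       # avoid
    return "neutral"        # disregard

# Hypothetical assessors for the scurrying-thing percept:
looks_like_prey  = lambda p: 1.0 if p.get("small") else -0.5
smells_dangerous = lambda p: -2.0 if p.get("musky") else 0.0

assessors = [looks_like_prey, smells_dangerous]
```

With these toy numbers, a small scurrying thing evaluates to pleasure (pounce), while the same thing with a musky scent flips the overall verdict to pain (avoid), even though the prey-assessor’s score never changed.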
The larger computational power involved in a brain with consciousness makes possible more flexibility, broader scope, and more accurate prediction. And this leads again to new values embedded in a more complicated value hierarchy, all still rooted in the ultimate value for an organism, the reason that the phenomenon of organisms pursuing things came about in the first place: achieving a state of higher fitness, of greater biological order.
Up to this point, what has been described might as well be entirely mechanical and unreflective. These genomes are a lot more complicated than the bacterium’s, but a fox’s values are ultimately still just genetic patterns you could, in principle, infer from things measured in the lab today. One limitation of this kind of biological system is that the only way to change an organism’s values is slowly, through genetic evolution. It might be objected that a mother fox, by teaching her pup, is bringing about an actual change in its values. Without the mother, the pup would not have the same, adapted set of values. But what the mother fox teaches the pup is itself genetically programmed in the mother. So, to be more precise, there is a value-determination process that is genetically programmed and can only be changed from generation to generation, and just by a little bit each time.
It would be nice for an individual organism to be able to identify new kinds of values directly, in real time, within its own lifespan. Actually, it wouldn’t just be nice, it would be an enormous fitness enhancement to pull that off. The most straightforward mechanism for this would seem to be an ability for the organism to purposefully change its own genes. But this isn’t the only way. Another way is to interpose a whole new kind of executive layer on top of pleasure/pain. When the organism evaluates some object, simple pleasure and pain signals would be passed as inputs into this new layer. The new layer could weigh these feelings and pass judgment on the object — value, disvalue, or neutral. But it wouldn’t only have access to feelings. It could access other mental resources like raw perceptual data, memories, and ideas. Indeed it would have the ability to make the organism act to gather more data, generate new memories, and form new ideas.
Fundamentally, this new layer— a cognitive layer — is a means of directing awareness, both externally and internally. It controls what the organism focuses its awareness on and what degree of intensity, of cognitive effort, the organism exerts.
A waiter brings a basket of warm, nice-smelling bread to your table. You experience a desire to eat it, and then also an apprehensive feeling. You have a choice: focus on that apprehension or ignore it and concentrate on the pleasant qualities of the bread. You choose to examine the apprehension: “Oh yes, I told myself I was going to cut down on empty calories.” Now you have to weigh the anticipated enjoyment of the bread against the long-run projected health benefit of avoiding it. A thought occurs to you: “This one piece of bread will have such a small long-run health impact, who cares?”, which begins to heighten the anticipated enjoyment. But you experience some apprehension about that thought. Again you have a choice: ignore the apprehension and just grab the bread, or focus on this thought. You focus on it and another thought occurs: “I could always use that excuse, then I’ll always just wind up eating the bread, and there will be a big long-run health impact…”
In this case, the genetically programmed value of satiating your hunger is overridden by a cognitive layer that has accessed memories, which themselves resulted from other cognitive activity, built up over years of reflecting on people’s eating habits and health outcomes, pondering different ideas about nutrition, looking at observational data from studies, etc.
The feeling of hunger, the desire for the bread, occurred spontaneously as a result of your genetic programming, which encodes some kind of effective abstraction representing that value. But you did not automatically respond to this value. Your mind intervened, identifying this desire with the actual abstraction hunger that you have constructed mentally. You were thus able to compare and weigh the value of satiating your hunger against other values that you have created abstract, mental representations of as well.
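The structure of that intervention can be caricatured in code. The numbers and the “belief” below are pure invention; the point is only the layering, with the felt desire entering as one input among several rather than dictating the outcome:

```python
def cognitive_judgment(feeling, considerations):
    """Second executive layer: the raw feeling score is weighed together
    with explicitly held ideas retrieved by directed attention.
    considerations is a list of (weight, direction) pairs, with
    direction +1 favoring the act and -1 opposing it."""
    total = feeling + sum(w * d for w, d in considerations)
    return "act" if total > 0 else "refrain"

bread_desire = 2.0            # spontaneous, genetically driven signal
beliefs = [(3.0, -1)]         # "cut down on empty calories"
decision = cognitive_judgment(bread_desire, beliefs)
```

Here the belief outweighs the desire and the decision is to refrain; strip the belief away and the same machinery simply acts on the feeling, just as the simpler pleasure/pain layer would.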
Conceiving Values
An organism with this kind of conceptual consciousness creates within its brain, and over a period of just years, a whole system of abstractions mirroring the effective abstractions that took eons to become encoded in its genome. Indeed, such mental abstractions don’t just mirror its genetic survival algorithms, they take in the results of these algorithms as just one input to a master process of cognitive valuation. The benefits of those simple algorithms are still available to conceptual consciousness as signals (feelings), but a much more powerful, predictive faculty is also available when needed. This faculty is an evolved one, subject to genetic evolution, but it is also responsive to the much more flexible forces of cultural evolution. And the faculty is creative, itself a driver of cultural evolution. It is a microcosm of the evolutionary process — contained within the mental history of a single organism.
Cognition, as with other advances before it, both enables organisms to more effectively pursue their existing values and brings about new values for them to pursue. In a certain sense the new values, like listening to a nice piece of music, may be in competition with the old ones. You could have spent that time hunting rodents in your backyard. But the new values are unavoidable byproducts of this spectacularly successful new evolutionary tool, the mind. Thus your sustenance is not just berries and furs, it’s now also Mozart and Carl Sagan. It’s not just your bare physical subsistence as a primitive organism, it’s your flourishing as a human being — your career, your hobbies, your romantic interests, your friendships, your appreciation of art, your curiosity.
Of course it is possible for a given individual to become fixated on one of these new values, to his own detriment. A young man may choose to spend all his time playing video games alone in his parents’ basement. Just like a peacock may devote so many metabolic resources to developing elaborate plumage that it winds up shortening its own lifespan and decreasing its total reproductive success. But this is a localized, maladaptive outcome, one bad little ditch within a vast landscape of evolutionary possibilities. It doesn’t alter the fact that the whole structure of our bodies and brains has been honed over eons to maximize the individual’s biological fitness. The reason a bird has the plumage it has, and that a human has the playful curiosity he has, is to promote its individual flourishing.
Consciousness, as a new piece of biological technology, creates powerful new ways of flourishing beyond what is possible to non-conscious organisms, while bringing with it many new ways for organisms to malfunction. And likewise for conceptual consciousness beyond other simpler forms of consciousness.
Abstract, conceptual thinking—i.e., cognition — is a biological technology that allows us to effectively process and improve on our pre-programmed genetic survival algorithms. The cognitive faculty re-implements effective abstractions encoded in our genetic hardware, now using a kind of flexible software layer on top of it. While natural selection would take millennia to accomplish a major restructuring of our survival algorithms, cognition does it in years or even seconds with the flash of a new insight.
Evolution works by the amplification of small differences over many, many repeated trials. For instance, a particular gene may have, on average, a tiny impact on disease resistance, hence on survival or reproductive success for any one person. But added up over thousands of years and millions of people, the statistical effect can be powerful, driving up the frequency of that gene (allele) in the whole population. And if enough such genes are favored in this way, average health can increase very slowly, generation by generation.
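The arithmetic of that amplification is easy to exhibit with the standard one-locus haploid selection model (the 0.1% advantage and the starting frequency below are made-up numbers chosen for illustration):

```python
def allele_frequency(p0, s, generations):
    """Deterministic selection at one locus: carriers of the allele have
    relative fitness 1 + s, and the frequency p is updated each
    generation by p' = p(1 + s) / (1 + p*s), where 1 + p*s is the
    population's mean fitness."""
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)
    return p

# A barely measurable 0.1% fitness edge, starting at 1% frequency,
# takes over the population given enough generations.
final = allele_frequency(0.01, 0.001, 10_000)
```

In this model the odds p/(1-p) grow by a factor of (1+s) every generation, so an advantage invisible in any one lifetime compounds into near-fixation over evolutionary time.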
Compare this to the realization that sanitary drinking water promotes health for people in general, and for growing children in particular. After some human first created this knowledge, anyone could be taught that carefully controlling your child’s drinking water might make the difference between that very child becoming a crippled genetic dead-end, or a vigorous, attractive adult. Your lineage wouldn’t have to wait eons to develop a new water-awareness instinct.
Knowing what to eat and drink is of course just the beginning of how thinking enhances our existence. With all of the new ways of life made possible by cognition, questions multiply at increasingly abstract levels. What kind of person should you date? How much time should you devote to sustaining your friendships? What kind of career should you pursue?
Consider a typical thought process provoked by the career question, for example. Becoming a doctor would be lucrative. But perhaps you’re squeamish about blood, so it’s not right for you. However, you could overcome that squeamishness. Then again, why should making money be a consideration in what you do with your life? Although, even if you don’t care about money, your friends do and probably so do potential lovers. You don’t want to wind up as a penny-pinching loser.
One key distinction among these questions is that some of them hinge on things very specific to you, and others apply more broadly. Clearly your own idiosyncrasies will play a big role in finding the right answers for you, but it would be nice not to have to reinvent the wheel every time a deep question arises, one that we all face in essentially the same way.
Morality is the field of inquiry concerned with answering those general-interest questions about what kind of life to lead. The ability of an individual to even pose such questions is the result of conceptual consciousness. Bats cannot ponder abstract questions about different survival strategies. But the underlying reason such questions exist — that some strategies are better for your life than others — is the same for humans as for bats. Bats can only answer these questions, as a species, by evolving over eons. Humans can answer them quickly and individually, by thinking.
It is possible for someone’s erroneous thinking to cause him harm, like in the case of the basement gamer. From the cognitive-biological perspective, the possibility for a person to choose badly is not fundamentally different from the possibility for an organism to mutate badly as its genetic code is first formed during its parents’ reproductive process. The fact that evolutionary malfunction is possible in either of these ways is not grounds to discard the key result of the theory of natural selection: that fitness maximization is the organizing principle behind all biological structure and activity.
The same fundamental standard we use to evaluate whether or not a given mutation is biologically successful is what we can use to evaluate human choices. A mutation that makes a bird over-invest in plumage is a biological failure. The mutation is fitness-destructive; it inhibits the bird from flourishing. A choice that tends to inhibit the human choice-maker from flourishing is just another kind of biological failure.
Of course not all choices are equally important. A bad choice about what to eat for dinner is usually not that big a deal. There are broader kinds of failures that are bad not just because of your own idiosyncrasies but because of your nature as a human being.
We have a special category for this — we call it a moral failure — to distinguish it from other biological failures that are either not under the choice-maker’s mental control or not broad enough to apply to humans in general.
For instance, choosing a career just for the money is wrong. It is immoral. Not because of some religious or societal axiom that the pursuit of money is bad. It’s immoral mainly because choosing a career that holds no interest for you is likely to make you unhappy and bad at your job. It is just a mistake you are making. You are responding to certain readily available facts, like the need to support yourself materially, but ignoring other subtler facts, like that success in a demanding field requires sustained concentration over years, which will likely become excruciating when you’re uninterested in the actual content of the field. Choosing a career that doesn’t interest you, under normal circumstances, is wrong because it is detrimental to your flourishing as a human being.
Choice is a special, executive capacity emerging within conceptual consciousness that enables ideas to bring about action. By focusing your awareness on an idea (e.g., that eating too much bread is unhealthy) you allow that idea to cause you to act in a certain way (to avoid being tempted by the bread). Likewise, the executive capacity associated with pleasure/pain enables an organism’s feelings to bring about action. Both of these capacities emerged for the same reason: to maximize fitness. The capacity to choose no more detaches an organism from its fundamental goal of flourishing than does the capacity to feel and respond to pleasure. Both of these capacities can misfire, but they both exist because of — indeed for the sake of — their ability to promote flourishing.
Misconceiving Values
My aim here is not to methodically analyze which values a human can pursue to best promote his own flourishing as a cognitive being. It’s just to argue that this bio-cognitive view of morality is a natural extension of the Darwinian program.
The crux of this view, and the key meta-ethical insight behind it, is not new. The iconoclastic writer Ayn Rand hit on it in the 1950s. Because of her political views, a classical liberalism that was considered beyond the pale at the time, she found few friends among the intellectuals who would have been receptive to an evolutionary-biology frame for these questions. They dismissed her as a crude reactionary, like the old “Social Darwinists,” without any serious attempt to understand what she was really trying to do intellectually.
Even before that, it was a tragic error that Social Darwinism was considered the moral-political expression of evolutionary biology. The very name of the doctrine shines a light on this error—the idea that the distinguishing characteristic of human beings is our social nature. Clearly society is a major aspect of human life. But the thing that fundamentally distinguishes us from lower animals is the human mind. Ours is not the morality of ants.
Social Darwinism asserted that the dumb and incompetent among us deserve to lose the competition for survival, and so should rot in poverty and have their lineages die off for the good of the species. But when you recognize the role of the mind in human society, it’s clear that there is something very different about how a human flourishes and how a lower animal flourishes. Animals compete with each other for a fixed pool of resources; the strong triumph over the weak by obtaining a greater share of that pool. Humans compete with each other to create new resources—to increase the size of the pool overall, benefiting everyone. The strong—i.e., those most capable of creating resources—are the biggest benefactors of the weak. Jeff Bezos became rich by creating enormous benefits for our entire society.
This has been true in varying degrees over the course of human history, but never more than it is today. The profits of a modern corporation (i.e., what is set aside for the owners) are a small fraction of its total production. Moreover, corporate profits are not even the best indicator of how much the owners of a corporation take for themselves. Most of those profits wind up being funneled back into other productive enterprises, inside or outside that corporation. The real measure of how much the rich take for themselves is their own personal consumption spending, and that constitutes an even smaller fraction of total production. In relative terms, what they themselves consume is a tiny fraction of what they create.
In exchange for this minor reward, a small cadre of thinkers — the most productive scientists, engineers, entrepreneurs — have dramatically multiplied what can be produced with each hour of regular labor. If the median worker earns $20/hour today, compared to less than the equivalent of $1/hour in ages past, it is not because today’s worker possesses 20 times greater natural capacity. It is because the productivity of his labor has been enormously amplified by a number of key inventions, which relied on key scientific discoveries, and were integrated into the larger economy by key entrepreneurial insights.
In industrial societies today, it is the economically strong (past and present) who have not only raised the living standards of everyone else by orders of magnitude, but who made the latter’s existence possible in the first place. A world population of 8 billion people has been sustained disproportionately because of the intellectual activity of a tiny fraction of this population over the last few centuries.
This is not to discount the role played by everyone else—the whole spectrum from manual laborers up to the highest echelon of producers. Such breakthroughs rely for their implementation on a huge network of actors—employees, customers, cops, plumbers, etc.—each contributing their own pieces to the larger structure. But it’s just a historical fact that productive contributions follow a lopsided distribution, like the 80-20 rule but even more lopsided.
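The lopsidedness of such a distribution can be made concrete. The sketch below assumes a Pareto model of productive output; the shape parameter and the 80/20 calibration are standard illustrative choices, not figures from the text. It computes what share of total output the top fraction of contributors accounts for:

```python
import math

def top_share(q, alpha):
    """Share of total output produced by the top q fraction of contributors,
    assuming output follows a Pareto distribution with shape parameter alpha."""
    return q ** (1 - 1 / alpha)

# The classic 80/20 rule corresponds to alpha = log(5)/log(4), about 1.16.
alpha_8020 = math.log(5) / math.log(4)
print(round(top_share(0.20, alpha_8020), 2))   # 0.8: top 20% produce 80%

# A heavier tail ("even more lopsided") concentrates output further:
print(round(top_share(0.01, 1.05), 2))         # ~0.8: top 1% produce ~80%
```

The second call shows how quickly concentration escalates as the tail gets heavier: with a shape parameter of 1.05, a mere hundredth of the population accounts for roughly the same share that a fifth of it does under the classic 80/20 rule.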
The critical point is that, however lopsided this distribution, every echelon gains enormous net benefits from the other echelons. Jeff Bezos would not be a billionaire if there were no package-stuffers for him to hire, and the package-stuffers would be relegated to brutal lives on the farm, terminating at age 40 on average, were there no entrepreneurs to pump out a stream of productivity-enhancing innovations, decade by decade, for the last few hundred years.
Conceptual consciousness and its ultimate expressions in modern science and technology have led to a new natural order for human beings where our flourishing occurs not at each other’s expense but to our mutual benefit. Not all the time, but vastly more now than in past ages. This is not the zero-sum dynamics of Social Darwinism. It is a Cognitive Darwinism dominated by positive-sum enterprises.
Why then did no Cognitive Darwinian movement emerge in the 19th century? It doesn’t seem that the impediment has to do with the subtlety of these ideas. The science of evolutionary biology itself required much more exacting techniques of inference than what is needed here. Something else must have been blocking straightforward moral inferences from well-established knowledge in biology and economics.
Morality, by the time of Darwin, was of course not a green field. It had been cultivated for millennia by a different kind of intellectual system: religion. By then, however, religious belief was already on a clear path of decline. And some anti-religious thinkers like Nietzsche, who would prove influential down the road, were groping toward a morality with non-religious foundations.
Indeed Nietzsche’s worldview was inspired, in part, by Darwin’s scientific discoveries. Nietzsche held that what is good for any creature is founded in its evolved, biological nature. Unfortunately, he did not recognize the major, universal role of cognition in our nature. Instead, he thought there were two different basic kinds of humans—the master-type and the slave-type—with fundamentally different psychological natures and needs. Naturally then, two different moral systems would be appropriate for them as well. What is good for the master is to rule the slaves, and what is good for the slave is to trick the master into not doing so. This is a zero-sum world, where the goal is to obtain power over others—a world still wedded to the primeval law of kill-or-be-killed. Nietzsche’s worldview had not assimilated the radical break in human history kicked off by the Enlightenment and the industrial revolution.
Perhaps the most influential attempt to define a set of moral ideals from within the Enlightenment program was that of Kant, Nietzsche’s avowed nemesis. Kant did understand there to be a universal human nature, supporting a single coherent moral system. But he was premature in defining it, without the crucial context provided by Darwin. What resulted was an Enlightenment strawman—a pretense of rationally derived morality that was really just a repackaging of ancient religious doctrines about self-abnegation and obedience.
Judeo-Christian morality, unlike Kant’s, does not make an appeal to your rationality. It starts by declaring God as your omniscient master and simply demands your obedience to him. It does not offer any reason why its edicts will lead to a better life for you here on Earth. It just implants in your mind, from childhood, the threat that disobedience will result in your banishment to a realm of eternal torture.
Why did such a cruel and unattractive concept of morality ever gain currency in the first place? Two basic reasons stand out: (i) some of its rules — e.g., against interpersonal violence — were socially beneficial, and (ii) the alternative for the largely illiterate societies of the time was anarchy and chaos.
In the pre-industrial world, it was not nearly as clear — and it was often actually false — that the best survival strategy was to create new values and engage with others in mutually beneficial trade. Today life is a long sequence of prisoner’s dilemmas, where the accumulated pay-offs to cooperation (value creation and trade) far outweigh the limited opportunities for defectors (con-artists, thieves, etc.). But this was not the pay-off matrix for most of human history. Cooperation paid less. So people defected more, generating more negative externalities both for their immediate victims and for everyone else. The result was lower-trust, more security-conscious societies.
Religious dogma addressed this ancient externality problem by creating artificial negative payoffs inside the minds of the defectors. Fear of an all-knowing judge who would be tallying up all your bad deeds proved effective in reducing violence. It did not require subtle, long-range thinking about costs and benefits, both material and psychological. All it required was the ability to memorize a few commandments and to hold them sacred.
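One way to picture this mechanism: the threat of divine judgment charges an artificial cost against every defection, inside the defector’s own head. The payoffs below are illustrative assumptions; the sketch only shows how such an internal penalty can flip the rational choice.

```python
# One-shot payoffs to the row player (illustrative, assumed numbers).
# With no internal cost, defecting ("D") strictly dominates cooperating ("C").
PAYOFF = {("C","C"): 3, ("C","D"): 0, ("D","C"): 4, ("D","D"): 1}

def best_response(other_move, hell_penalty=0.0):
    """Best move against other_move, with an internalized penalty
    (fear of an all-knowing judge) subtracted from any defection."""
    coop   = PAYOFF[("C", other_move)]
    defect = PAYOFF[("D", other_move)] - hell_penalty
    return "C" if coop >= defect else "D"

print(best_response("C"))                   # D: defection is tempting
print(best_response("C", hell_penalty=2))   # C: the artificial cost flips it
print(best_response("D", hell_penalty=2))   # C: cooperation even vs a defector
```

No long-range calculation is asked of the believer; the penalty does the work, which is exactly why a memorized commandment held sacred could substitute for economic foresight.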
The dogma encoded a kind of cultural wisdom hard won through prior ages of even greater violence and misery. Similarly, animals hold in their genes a kind of biological wisdom won over eons of evolutionary trial and error. They are not consciously aware of this wisdom, but it operates inside them to their own benefit. In the same way, ancient believers were not conceptually aware of the wisdom encoded in their religion. They didn’t understand where it came from or how to validate it. But they held it fixed in their heads as a set of unprocessed, context-less absolutes, and it operated within them to produce social benefits.
The downside of religious dogma was that it constituted an inflexible mental system that would, itself, periodically spin out of control to everyone’s detriment. Irrational superstitions, moral panics, and the suppression of innovative thinkers — these were the inevitable side-effects of religion.
But an emerging scientific worldview posed a threat to this ancient order. It caused people to question the metaphysical picture Christianity offered as the source of its moral code. The social fabric was put at risk. Fortunately, science also offered a solution. The higher productivity created by technology made a life of honest work and trade much more obviously attractive. It also created more leisure time to instruct children calmly and methodically, with less resort to the expedient of other-worldly threats. We increasingly didn’t need the old tales of fire and brimstone to discourage anti-social deceit and violence.
This left us in a peculiar cognitive position, however. For ages, our moral education consisted in simple commands, ingrained from early childhood. Do this. Don’t do that. This is good. That is bad. Moral concepts were taught to mean nothing other than edicts pronounced by some authority. And when a child thought to apply that dangerous question “Why?” to such an edict, all we had to fall back on was the old tales. Shrugging off these tales, you could only retain the edicts by constructing a separate realm of “morality” to house them—a realm disconnected from facts about the physical world, from evidence, from science.
A cognitive environment was created where a giant chasm separated the thing known as morality from the rest of our knowledge. Philosophers called this the “is-ought gap.”
The is-ought gap is the disorientation experienced by someone who knows only the morality of commandments, when he is asked to rationally justify his values. For him, moral reasoning is about deducing which action to take from a fixed set of moral axioms, which are themselves not susceptible to rational proof. Such axioms — e.g., “Thou shalt not kill,” Kant’s “categorical imperative,” or “the greatest good for the greatest number” — are cognitive dead-ends with no connections reaching into the physical world of facts and evidence.
A biologically founded moral system escapes this trap. It does not attempt to derive values from the nature of our divine souls, or else from the most abstract qualities of disembodied choice-making minds. It discovers the values contained in our nature as human beings, a very specific nature determined by billions of years of evolution, and millions of years differentiating us from even our closest cousin species.
Our massively complex brains have spawned a host of new values, especially in relation to the much more complicated, long-range relationships we have with each other. These new values arose alongside religion, the original killer app of morality, which was necessary among ancient humans to clamp down on anti-social violence. But over the last few hundred years, industrial civilization has obviated that original app while also creating new problems whose solution requires a major operating system update.
Our lives have become more complex in the modern world, with much more variability over more years of a typical lifespan. It’s not just a matter of entering your father’s vocation and finding a healthy spouse anymore. People need to identify their own values, in terms of career, friendships, romance, and other long-range pursuits.
The original killer app for violence reduction, in its updated non-dogmatic version, still represents an important moral value. But it’s now just one among many. Indeed an exclusive fixation on this one value — not harming others — detracts from your ability to apply moral reasoning in all the fast-multiplying, new aspects of that one life you are best-positioned to understand and to improve: your own. This fixation on altruism came about, and was socially necessary, in a disharmonious world where the individual’s own interests were often served by deceit and violence. But in today’s world, harming others is generally an indication of malfunction on the part of the aggressor, who doesn’t sufficiently understand his own long-run interests, both materially and psychologically.
The cognitive revolution has led to a technological society of unprecedented wealth. It has also increased demands on us to integrate our own specialized plans into this new culture of discovery and production. But our moral code has not kept up with these developments. It is still fixated on a talisman conceived by some ancient shaman conniving to gain control of an unruly tribe — the talisman of self-sacrifice. We need a new code appropriate for the modern world.
Our biological understanding of human nature, including the massive positive-sum implications of conceptual consciousness, points toward such a code. It doesn’t require obeying commandments or believing incredible stories. It does require thinking deeply about the logic of one’s own long-run flourishing, hence quite a bit of cognitive effort.
Those who can’t or won’t summon this effort can get along just by doing what everyone else expects of them on a basic do-no-harm level. For those who can and will summon the effort, however, modernity offers a cornucopia of different ways to flourish — and of ways to flounder. Morality is the science that helps you tell the difference between these ways of living, before you’re forced to learn it the hard way.