End War Or Mosquitoes?

Malaria may have killed half of all the people that ever lived. (more)

Over one million people die from malaria each year, mostly children under five years of age, with 90% of malaria cases occurring in Sub-Saharan Africa. (more)

378,000 people worldwide died a violent death in war each year between 1985 and 1994. (more)

Over the last day I’ve done two Twitter polls, one of which was my most popular poll ever. Each poll was on whether, if we had the option, we should try to end a big old nemesis of humankind. One was on mosquitoes, the other on war:

In both cases the main con argument is a worry about unintended side effects. Our biological and social systems are both very complex, with each part having substantial and difficult to understand interactions with many other parts. This makes it hard to be sure that an apparently bad thing isn’t actually causing good things, or preventing other bad things.

Poll respondents were about evenly divided on ending mosquitoes, but over 5 to 1 in favor of ending war. Yet mosquitoes kill many more people than do wars, mosquitoes are only a small part of our biosphere with only modest identifiable benefits, and war is a much larger part of key social systems with much easier to identify functions and benefits. For example, war drives innovation, deposes tyrants, and cleans out the inefficient institutional cruft that accumulates during peacetime. All these considerations favor ending mosquitoes, relative to ending war.

Why then is there so much more support for ending war, relative to mosquitoes? The proximate cause seems obvious: in our world, good people oppose both war and also ending species. Most people probably aren’t thinking this through, but are instead just reacting to this surface ethical gloss. Okay, but why is murderous nature so much more popular than murderous features of human systems? Perhaps in part because we are much more eager to put moral blame on humans, relative to nature. Arguing to keep war makes you seem like an ally of deeply evil humans, while arguing to keep mosquitoes only makes you an ally of an indifferent nature, which makes you far less evil by association.

How To Prep For War

In my last two posts I’ve noted that while war deaths have fallen greatly since the world wars, the magnitude and duration of this fall isn’t that far out of line with previous falls over the last four centuries, falls that have always been followed by rises, as part of a regular cycle of war. I also noted that the theory arguments that have been offered to explain why this trend will long continue, in a deviation from the historical pattern, seem weak. Thus there seems to be a substantial and neglected chance of a lot more war in the next century. I’m not the only one who says this; so do many war experts.

If a lot more war is coming, what should you do personally, to help yourself, your family, and your friends? (Assuming your goal is mainly to personally survive and prosper.) While we can’t say that much specifically about future war’s style, timing, or participants, we know enough to suggest some general advice.

1. Over the last century most war deaths have not been battle deaths, and the battle death share has fallen. Thus you should worry less about dying in battle, and more about other ways to die.

2. War tends to cause the most harm near where its battles happen, and near concentrations of supporting industrial and human production. This means you are more at risk if you live in or near the nations that participate in the war, and in those nations near dense concentrations and travel routes, that is, near major cities and roads.

3. If there are big pandemics or economic collapse, you may be better off in more isolated and economically self-sufficient places. (That doesn’t include outer space, which is quite unlikely to be economically self-sufficient anytime soon.) Of course there is a big tradeoff here, as these are the places we expect to do less well in the absence of war.

4. Most of your expected deaths may happen in scenarios where nukes are used. There’s a big literature on how to prepare for and avoid harms from nukes, so I’ll just refer you to that. Ironically, you may be more at risk from being hurt by nukes in places that have nukes to retaliate with. But you might be more at risk from being enslaved or otherwise dominated if your place doesn’t have nukes.

5. Most of our computer systems have poor security, and so are poorly protected against cyberwar. This is mainly because software firms are usually more eager to be first to market than to add security, which most customers don’t notice at first. If this situation doesn’t change much, then you should be wary of depending too much on standard connected computer systems. For essential services, rely on disconnected, non-standard, or high-security-investment systems.

6. Big wars tend to induce a lot more taxation of the rich, to pay for wars. So have your dynasty invest more in having more children, relative to fewer richer kids, or invest in assets that are hidden from tax authorities. Or bother less to invest for the long run.

7. The biggest wars so far, the world wars and the thirty years war, have been driven by strong ideologies, such as communism and Catholicism. So help your descendants avoid succumbing to strong ideologies, while also avoiding the appearance of publicly opposing locally popular versions. And try to stay away from places that seem more likely to succumb.

8. While old ideologies still have plenty of fire, the big new ideology on the block seems related to woke identity. While this seems to inspire sufficiently confident passions for war, it seems far from clear who would fight who and how in a woke war. This scenario seems worth more thought.

Big War Remains Possible

The following poll suggests that a majority of my Twitter followers think war will decline; in the next 80 years we won’t see a 15 year period with a war death rate above the median level we’ve seen over the last four centuries:

To predict a big deviation from the simple historical trend, one needs some sort of basis in theory. Alas, the theory arguments that I’ve heard re war optimism seem quite inadequate. I thus suspect much wishful thinking here.

For example, some say the world economy today is too interdependent for war. But interdependent economies have long gone to war. Consider the world wars in Europe, or the American civil war. Some say that we don’t risk war because it is very destructive of complex fragile physical capital and infrastructure. But while such capital was indeed destroyed during the world wars, the places most hurt rebounded quickly, as they had good institutional and human capital.

Some note that international alliances make war less likely between alliance partners. But they make war more likely between alliances. Some suggest that better info tells us more about rivals today, and so we are less likely to misjudge rival abilities and motives. But there still seems plenty of room for errors here as “brinkmanship” is a key dynamic. Also, this doesn’t prevent powers from investing in war abilities to gain advantages via credible threats of war.

Some point to a reduced willingness by winners to gain concrete advantages via the ancient strategies of raping and enslaving losers, and demanding great tribute. But we still manage to find many other motives for war, and there are no fundamental obstacles to reviving ancient strategies; tribute is still quite feasible, as is slavery. Also, the peak war periods so far have been associated with ideology battles, and we still have plenty of those.

Some say nuclear weapons have made wars less likely. But that applies only between pairs of nations both of which have nukes, which isn’t most nation pairs. Pairs of nations with nukes can still fight big wars, there are more such pairs today than before, over 80 years there’s plenty of time for some pair to pick a fight, and nuke war casualties may be enormous.

I suspect that many are relying on modern propaganda on our moral superiority over our ancestors. But while we mostly count humans of the mid twentieth century as morally superior to humans from prior centuries, that was the period of peak war mortality.

I also suspect that many are drawing conclusions about war from long term trends regarding other forms of violence, as in slavery, crime, and personal relations, as well as from apparently lower public tolerance for war deaths and overall apparent disapproval and reluctance regarding war. But just before World War I we had also seen such trends:

Then, as now, Europe had lived through a long period of relative peace, … rapid progress … had given humanity a sense of shared interests that precluded war, … world leaders scarcely believed a global conflagration was possible. (more)

The world is vast, eighty years is a long time, and the number of possible global social & diplomatic scenarios over such a period is enormous. So it seems crazy to base predictions of future war rates on inside-view calculations from particular current stances, deals, or inclinations. The raw historical record, and its large long-term fluctuations, should weigh heavily on our minds.

Will War Return?

Usually, I don’t get worked up about local short term trends; I try to focus on global long term trends, which mostly look pretty good (at least until the next great era comes). But lately I’ve seen some worrying changes to big trends. For example, while for over a century IQ has risen and death rates have fallen, both steadily, in the last two decades IQ has stopped rising in most rich nations, and in the U.S. death rates have started rising. Economic growth also seems to have slowed, though not stopped, world-wide.

Added to these are some worrisome long term trends. Global warming continues. Fertility has been falling for centuries. Rates of innovation per innovator have been falling greatly for perhaps a century. And since the end of the world wars, inequality and political polarization have been increasing.

One good-looking trend that hasn’t reversed lately is a falling rate of violence, via crime, civil war, and war between nations. But this graph of war deaths over the last 600 years makes me pause:

Yes, war death rates have fallen since the world wars, but those wars were a historical peak. And though the pattern is noisy, we seem to see a roughly half century cycle, a cycle that is perhaps increasing in magnitude. So we have to wonder: are we now near a war cycle nadir, with another war peak coming soon?

The stakes here are hard to exaggerate. If war is coming back soon, the next peak might make for record high death tolls. And the easiest way to imagine achieving that is via nukes. If war may come back soon with a vengeance, we must consider preparing for that possibility.

Not only have we seen fewer war deaths since the world wars, we’ve also seen a great reduction in social support for military virtues, values, and investments. Compared to our ancestors, we glorify soldiers less, and do less to steel non-soldiers to sacrifice for war. (E.g., see They Shall Not Grow Old.) In contrast, ancient societies were in many ways organized around war, offering great status and support for warriors. They even supported soldiers raping, pillaging, exterminating, and enslaving enemies.

Yes, trying to create more local social support for war might well help create the next rise of war. Which could be a terrible thing. (Yes my even talking about this could help cause it, but even here I prioritize honesty.) However, if preparing more sooner for war helps nations to win or at least survive the next war peak, do you really want it to be only other nations who gain that advantage?

Given the stakes here, it seems terribly important to better understand the causes of the recent decline in war deaths. I’ve proposed a farmers-returning-to-foragers story, whose simplest version predicts a continuing decline. But I’m far from confident of that simplest version, which would not have predicted the world wars as a historical peak. Please fellow intellectuals, let’s figure this out!

Beware Nggwal

Consider the fact that this was a long standing social equilibrium:

During an undetermined time period preceding European contact, a gargantuan, humanoid spirit-God conquered parts of the Sepik region of Papua New Guinea. … Nggwal was the tutelary spirit for a number of Sepik horticulturalist societies, where males of various patriclans were united in elaborate cult systems including initiation grades and ritual secrecy, devoted to following the whims of this commanding entity. …

a way of maintaining the authority of the older men over the women and children; it is a system directed against the women and children, … In some tribes, a woman who accidentally sees the [costumed spirit or the sacred paraphernalia] is killed. … it is often the responsibility of the women to provide for his subsistence … During the [secret] cult’s feasts, it is the senior members who claim the mantle of Nggwal while consuming the pork for themselves. …

During the proper ritual seasons, Ilahita Arapesh men would wear [ritual masks/costumes], and personify various spirits. … move about begging small gifts of food, salt, tobacco or betelnut. They cannot speak, but indicate their wishes with various conventional gestures, …
Despite the playful, Halloween-like aspects of this practice … 10% of the male masks portrayed [violent spirits], and they were associated with the commission of ritually sanctioned murder. These murders committed by the violent spirits were always attributed to Nggwal.

The costumes of the violent spirits would gain specific insignia after committing each killing, … “Word goes out that Nggwal has “swallowed” another victim; the killer remains technically anonymous, even though most Nggwal members know, or have a strong inkling of, his identity.” … are universally feared, and nothing can vacate a hamlet so quickly as one of these spooks materializing out of the gloom of the surrounding jungle. … Nggwal benefits some people at the expense of others. Individuals of the highest initiation level within the Tambaran cult have increased status for themselves and their respective clans, and they have exclusive access to the pork of the secret feasts that is ostensibly consumed by Nggwal. The women and children are dominated severely by Nggwal and the other Tambaran cult spirits, and the young male initiates must endure severe dysphoric rituals to rise within the cult. (more)

So in these societies, top members of secret societies could, by wearing certain masks, literally get away with murder. These societies weren’t lawless; had these men committed murder without the masks, they would have been prosecuted and punished.

Apparently many societies have had such divisions between an official legal system that was supposed to fairly punish anyone for hurting others, alongside less visible but quite real systems whereby some elites could far more easily get away with murder. Has this actually been the usual case in history?

Pay More For Results

A simple and robust way to get others to do useful things is to “pay for results”, i.e., to promise to make particular payments for particular measurable outcomes. The better the outcomes, the more someone gets paid. This approach has long been used in production piece-rates, worker bonuses, sales commissions, CEO incentive pay, lawyer contingency fees, sci-tech prizes, auctions, and outcome-contracts in PR, marketing, consulting, IT, medicine, charities, development, and in government contracting more generally.

Browsing many articles on the topic, I mostly see either dispassionate analyses of its advantages and disadvantages, or passionate screeds warning against its evils, especially re sacred sectors like charity, government, law, and medicine. Clearly many see paying for results as risking too much greed, money, and markets in places where higher motives should reign supreme.

Which is too bad, as those higher motives are often missing, and paying for results has a lot of untapped potential. Even though the basic idea is old, we have yet to explore a great many possible variations. For example, many of the social reforms that I’ve considered promising over the years can be framed as paying for results. For example, I’ve liked science prizes, combinatorial auctions, and:

  1. Buy health, not health care – Get an insurer to sell you both life & health insurance, so that they lose a lot of money if you are disabled, in pain, or dead. Then if they pay for your medical expenses, you can trust them to trade those expenses well against lower harm chances.
  2. Fine-insure-bounty criminal law system – Catch criminals by paying a bounty to whomever proves that a particular person did a particular crime, require everyone to get crime insurance, have fines as the official punishment, and then let insurers and clients negotiate individual punishments, monitoring, freedoms, and co-liabilities.
  3. Prediction & decision markets – There’s a current market probability, and if you buy at that price you expect to profit if you believe a higher probability. In this way you are paid to fix any error in our current probabilities, via winning your bets. We can use the resulting market prices to make many useful decisions, like firing CEOs. 
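The betting incentive in that last item can be sketched with a tiny calculation (a made-up illustration, not from the post; the prices and beliefs here are hypothetical):

```python
def expected_profit(market_price, your_probability, shares=1.0):
    """Expected profit from buying `shares` of a binary contract that
    pays 1 if the event happens, at `market_price` per share, given
    that you believe the event's true probability is `your_probability`."""
    return shares * (your_probability - market_price)

# If the market price is 30% but you believe 45%, buying profits in
# expectation; if you believe 20%, you'd rather sell at that price.
assert expected_profit(0.30, 0.45) > 0
assert expected_profit(0.30, 0.20) < 0
```

So traders profit in expectation exactly when they move the price toward what they think is the true probability, which is the sense in which the market pays people to fix errors in its current probabilities.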

We have some good basic theory on paying for results. For example, paying your agents for results works better when you can measure the things that you want sooner and more accurately, when you are more risk-averse, and when your agents are less risk-averse. It is less useful when you can watch your agents well, and you know what they should be doing to get good outcomes.

The worst case is when you are a big risk-neutral org with lots of relevant expertise who pays small risk-averse individuals or organizations, and when you can observe your agents well and know roughly what they should do to achieve good outcomes, outcomes that are too complex or hidden to measure. In this case you should just pay your agents to do things the right way, and ignore outcomes.

In contrast, the best case for paying for results is when you are more risk-averse than your agents, you can’t see much of what they do, you don’t know much about how they should act to best achieve good outcomes, and you have good fast measures of the outcomes you want. So this theory suggests that ordinary people trying to get relatively simple things from experts tend to be in good situations for paying for results, especially when those experts can collect together into large more-risk-neutral organizations.
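The risk-aversion part of this tradeoff can be illustrated with a small sketch (my illustration, not from the post; it assumes a standard CARA utility form and made-up numbers):

```python
import math

def certainty_equivalent(payoffs, probs, risk_aversion):
    """Certainty equivalent of a random payment for an agent with CARA
    utility u(x) = 1 - exp(-a*x); risk_aversion = 0 means risk-neutral."""
    if risk_aversion == 0:
        return sum(p * x for p, x in zip(probs, payoffs))
    eu = sum(p * (1 - math.exp(-risk_aversion * x))
             for p, x in zip(probs, payoffs))
    return -math.log(1 - eu) / risk_aversion

# A results-based contract: pay 100 if the project succeeds (60% chance).
payoffs, probs = [100.0, 0.0], [0.6, 0.4]
risk_neutral_value = certainty_equivalent(payoffs, probs, 0.0)   # 60.0
risk_averse_value = certainty_equivalent(payoffs, probs, 0.05)

# A risk-averse agent values the risky contract at well below its mean,
# so a principal using pay-for-results must pay them a risk premium.
assert risk_averse_value < risk_neutral_value
```

The gap between the two valuations is the extra cost of loading outcome risk onto a risk-averse agent, which is why pay-for-results works best when agents are close to risk-neutral, such as large organizations.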

For example, when selling a house or a car, the main outcome you care about is the sale price, which is quite observable, and you don’t know much about how best to sell to future buyers. So for you a good system is to hold an auction and give it to the agent who offers the highest price. Then that agent can use their expertise to figure out how to best sell your item to someone who wants to use it.

While medicine is complex and can require great expertise, the main outcomes that you want from medicine are simple and relatively easy to measure. You want to be alive, able to do your usual things, and not in pain. (Yes, you also have a more hidden motive to show that you are willing to spend resources to help allies, but that is also easy to measure.) Which is why relatively simple ways to pay for health seem like they should work. 

Similarly, once we have defined a particular kind of crime, and have courts to rule on particular accusations, then we know a lot about what outcomes we want out of a crime system: we want less crime. If the process of trying to detect or punish a criminal could hurt third parties, then we want laws to discourage those effects. But with such laws in place, we can more directly pay to catch criminals, and to discourage the committing of crimes. 

Finally when we know well what events we are trying to predict, we can just pay people who predict them well, without needing to know much about their prediction strategies. Overall, paying for results seems to still have enormous untapped potential, and I’m doing my part to help that potential be realized. 


Why Age of Em Will Happen

In some technology competitions, winners dominate strongly. For example, while gravel may cover a lot of roads if we count by surface area, if we weigh by vehicle miles traveled then asphalt strongly dominates as a road material. Also, while some buildings are cooled via fans and very thick walls, the vast majority of buildings in rich and hot places use air-conditioning. In addition, current versions of software systems also tend to dominate over older versions. (E.g., Windows 10 over Windows 8.)

However, in many other technology competitions, older technologies remain widely used over long periods. Cities were invented ten thousand years ago, yet today only about half of the population lives in them. Cars, trains, boats, and planes have taken over much transportation, yet we still do plenty of walking. Steel has replaced wood in many structures, yet wood is still widely used. Fur, wool, and cotton aren’t used as often as they once were, but they are still quite common as clothing materials. E-books are now quite popular, but paper book sales are still growing.

Whether or not an old tech still retains wide areas of substantial use depends on the average advantage of the new tech, relative to the variation of that advantage across the environments where these techs are used, and the variation within each tech category. All else equal, the wider the range of environments, and the more diverse is each tech category, the longer that old tech should remain in wide use.

For example, compare the set of techs that start with the letter A (like asphalt) to the set that start with the letter G (like gravel). As these are relatively arbitrary sets that do not “cut nature at its joints”, there is wide diversity within each category, and each set is all applied to a wide range of environments. This makes it quite unlikely that one of these sets will strongly dominate the other.

Note that techs that tend to dominate strongly, like asphalt, air-conditioning, and new software versions, more often appear as a lumpy change, e.g., all at once, rather than via a slow accumulation of many changes. That is, they more often result from one or a few key innovations, and have some simple essential commonality. In contrast, techs that have more internal variety and structure tend more to result from the accumulation of more smaller innovations.

Now consider the competition between humans and computers for mental work. Today human brains earn more than half of world income, far more than the costs of computer hardware and software. But over time, artificial hardware and software have been improving, and slowly commanding larger fractions. Eventually this could become a majority. And a key question is then: how quickly might computers come to dominate overwhelmingly, doing virtually all mental work?

On the one hand, the ranges here are truly enormous. We are talking about all mental work, which covers a very wide range of environments. And not only do humans vary widely in abilities and inclinations, but computer systems seem to encompass an even wider range of designs and approaches. And many of these are quite complex systems. These facts together suggest that the older tech of human brains could last quite a long time (relative of course to relevant timescales) after computers came to do the majority of tasks (weighted by income), and that the change over that period could be relatively gradual.

For an analogy, consider the space of all possible non-mental work. While machines have surely been displacing humans for a long time in this area, we still do many important tasks “by hand”, and overall change has been pretty steady for a long time period. This change looked nothing like a single “general” machine taking over all the non-mental tasks all at once.

On the other hand, human minds are today stuck in old bio hardware that isn’t improving much, while artificial computer hardware has long been improving rapidly. Both these states, of hardware being stuck and improving fast, have been relatively uniform within each category and across environments. As a result, this hardware advantage might plausibly overwhelm software variety to make humans quickly lose most everywhere.

However, eventually brain emulations (i.e. “ems”) should be possible, after which artificial software would no longer have a hardware advantage over brain software; they would both have access to the same hardware. (As ems are an all-or-nothing tech that quite closely substitutes for humans and yet can have a huge hardware advantage, ems should displace most all humans over a short period.) At that point, the broad variety of mental task environments, and of approaches to both artificial and em software, suggests that ems may well stay competitive on many job tasks, and that this status might last a long time, with change being gradual.

Note also that as ems should soon become much cheaper than humans, the introduction of ems should initially cause a big reversion, wherein ems take back many of the mental job tasks that humans had recently lost to computers.

In January I posted a theoretical account that adds to this expectation. It explains why we should expect brain software to be a marvel of integration and abstraction, relative to the stronger reliance on modularity that we see in artificial software, a reliance that allows those systems to be smaller and faster built, but also causes them to rot faster. This account suggests that for a long time it would take unrealistically large investments for artificial software to learn to be as good as brain software on the tasks where brains excel.

A contrary view often expressed is that at some point someone will “invent” AGI (= Artificial General Intelligence). Not that society will eventually have broadly capable and thus general systems as a result of the world economy slowly collecting many specific tools and abilities over a long time. But that instead a particular research team somewhere will discover one or a few key insights that allow that team to quickly create a system that can do most all mental tasks much better than all the other systems, both human and artificial, in the world at that moment. This insight might quickly spread to other teams, or it might be hoarded to give this team great relative power.

Yes, under this sort of scenario it becomes more plausible that artificial software will either quickly displace humans on most all jobs, or do the same to ems if they exist at the time. But it is this scenario that I have repeatedly argued is pretty crazy. (Not impossible, but crazy enough that only a small minority should assume or explore it.) While the lumpiness of innovation that we’ve seen so far in computer science has been modest and not out of line with most other research fields, this crazy view postulates an enormously lumpy innovation, far out of line with anything we’ve seen in a long while. We have no good reason to believe that such a thing is at all likely.

If we presume that no one team will ever invent AGI, it becomes far more plausible that there will still be plenty of job tasks for ems to do, whenever ems show up. Even if working ems only collect 10% of world income soon after ems appear, the scenario I laid out in my book Age of Em is still pretty relevant. That scenario is actually pretty robust to such variations. As a result of thinking about these considerations, I’m now much more confident that the Age of Em will happen.

In Age of Em, I said:

Conditional on my key assumptions, I expect at least 30 percent of future situations to be usefully informed by my analysis. Unconditionally, I expect at least 5 percent.

I now estimate an unconditional 80% chance of it being a useful guide, and so will happily take bets based on a 50-50 chance estimate. My claim is something like:

Within the first D econ doublings after ems are as cheap as the median human worker, there will be a period where >X% of world income is paid for em work. And during that period Age of Em will be a useful guide to that world.

Note that this analysis suggests that while the arrival of ems might cause a relatively sudden and disruptive transition, the improvement of other artificial software would likely be more gradual. While overall rates of growth and change should increase as a larger fraction of the means of production comes to be made in factories, the risk is low of a sudden AI advance relative to that overall rate of change. Those concerned about risks caused by AI changes can more reasonably wait until we see clearer signs of problems.