Open Policy Evaluation

Hypocrisy is a tribute vice pays to virtue. La Rochefoucauld, Maximes

In some areas of life, you need connections to do anything. Invitations to parties, jobs, housing, purchases, business deals, etc. are all gained via private personal connections. In other areas of life, in contrast, invitations are made open to everyone. Posted for all to see are openings for jobs, housing, products to buy, business investment, calls for proposals for contracts and grants, etc. The connection-only world is often suspected of nepotism and corruption, and “reforms” often take the form of requiring openings to be posted so that anyone can apply.

In academia, we post openings for jobs, school attendance, conference attendance, journal publications, and grant applications for all to see, even though most people know that you’ll actually need personal connections to have much of a chance at many of these things. People seem to want to appear willing to consider an application from anyone. They allow some invitation-only conferences, talk series, etc., but usually insist that such things are incidental, not central, to their profession.

This preference for at least an appearance of openness suggests a general strategy of reform: find things that are now only gained via personal connections, and create an alternate open process whereby anyone can officially apply. In this post, I apply this idea to: policy proposals.

Imagine that you have a proposal for a better policy, to be used by governments, businesses, or other organizations. How can you get people to listen to your proposal, and perhaps endorse it or apply it? You might try to use personal connections to get an audience with someone at a government agency, political interest group, think tank, foundation, or business. But that’s stuck in the private connection world. You might wait for an agency or foundation to put out an open call for proposals, seeking a solution to exactly the problem your proposal solves. But for any one proposal idea, you might wait a very long time.

You might submit an article to an open conference or journal, or submit a book to a publisher. But if they accept your submission, that mostly won’t be an endorsement of whether your proposal is good policy by some metric. Publishers are mostly looking at other criteria, such as whether you have an impressive study using difficult methods, or whether you have a book thesis and writing style that will attract many readers.

So I propose that we consider creating an open process for submitting policy proposals to be evaluated, in the hope of gaining some level of endorsement and perhaps further action. This process won’t judge your submission on wit, popularity, impressiveness, or analytical rigor. Its key question is: is this promising as a policy proposal to actually adopt, for the purpose of making a better world? If it endorses your proposal, then other actors can use that as a quality signal regarding which policy proposals to consider.

Of course how you judge a policy proposal depends on your values. So there might be different open policy evaluators (OPE) based on different sets of values. Each OPE needs to have some consistent standards by which they evaluate proposals. For example, economists might ask whether a proposal improves economic efficiency, libertarians might ask if it increases liberty, and progressives might ask whether it reduces inequality.

Should the evaluation of a proposal consider whether there’s a snowball’s chance in hell of its being actually adopted, or even officially considered? That is, whether it is in the “Overton window”? Should evaluators consider whether you have so far gained sufficient celebrity endorsements to make people pay attention to your proposal? Well, those are choices of evaluation criteria. I’m personally more interested in evaluating proposals regardless of who has supported them, and regardless of their near-term political feasibility, much as academics say we do today with journal article submissions. But that’s just me.

An OPE seems valid and useful as long as its actual choices of which policies it endorses match its declared evaluation criteria. Then it can serve as a useful filter, between people with innovative policy ideas and policy customers seeking useful ideas to consider and perhaps implement. If you can find OPEs who share your evaluation criteria, you can consider the policies they endorse. And of course if we ever end up having many of them, you could focus first on the most prestigious ones.

Ideally an OPE would have funding from some source to pay for its evaluations. But I could also imagine applicants having to pay a fee to have their proposals considered.

Stubborn Attachments

Tyler Cowen’s new book, Stubborn Attachments, says many things. But his main claims are, roughly, 1) we should care much more about people who will live in the distant future, and 2) promoting long-run economic growth is a robust way to achieve that end. As a result, we should try much harder to promote long-run economic growth.

Now I don’t actually think his arguments are that persuasive to those inclined to disagree. On 1), the actions of most people suggest that they don’t actually care much about the distant future, and there exist quite consistent preferences (including moral preferences) to represent this position. (Also, I have to wonder how much Tyler cares, as in the 20 years I’ve known him I’ve often worked on distant future issues, and he’s shown almost no interest in such things.)

On 2), while Tyler mainly argues for econ growth by pointing to good trends over the last few centuries, many people see bad trends as outweighing the good, and many others see recent trends as temporary historical deviations. Tyler also doesn’t consider that future techs which speed population growth could cut the connection observed recently between total and per-capita growth; I describe such a scenario in my book Age of Em.

Tyler being Tyler, he is generally vague and gives himself many outs to avoid criticism. For example, he says that rights should take priority over growth, but he doesn’t specify those rights. He says he only advocates growing “wealth plus” which includes any good thing you could want, so don’t complain that growth will hurt a good thing. He notes that the priority on growth can justify the usual intuition excusing limited redistribution, but doesn’t mention that this won’t at all excuse not doing everything possible to promote growth. He says he isn’t committed to econ growth being possible forever, but only to a finite chance of eternal growth. Yet focusing all policy on trying to increase growth within some tiny-chance eternal growth scenario is overwhelmingly likely to seem a huge mistake later.

However, as I personally happen to agree with his main claims, at least the way I phrased them, I’d rather focus on their implications, which Tyler severely neglects. The following are the only “concrete” things he says about how exactly to promote long term econ growth:

For some more concrete recommendations, I’ll suggest the following: a) Policy should be more forward-looking and more concerned about the more distant future. b) Governments should place a much higher priority on investment than is currently the case, in both the private sector and the public sector. … c) Policy should be more concerned with economic growth, properly specified, and policy discussion should pay less heed to other values. … d) We should be more concerned with the fragility of our civilization. … e) We should be more charitable on the whole, but we are not obliged to give away all of our wealth. … f) We can embrace much of common sense morality with the knowledge that it is not inconsistent with a deeper ethical theory. … g) When it comes to most “small” policies affecting the present and the near-present only, we should be agnostic.

More “investment” and “growth”, that’s it?! We actually know of many more specific ways to promote long-term growth, but they mostly come at substantial costs. I don’t know how much you actually support faster long-term growth until I hear which such policies you’ll support.

Simple options include moving taxes away from investment and toward consumption. For example, eliminate taxes on bequests. We might even subsidize investment. If successful, such policies will naturally result in much more wealth inequality; are you okay with that?

We might also more strongly promote innovation. For example, we could subsidize it or more directly pay for it. We might less protect dying industries and firms against newcomers, and make it easier for all firms to fire failing workers. We could impose fewer complex regulations that make it hard to change business practices. We could make international treaties to better coordinate our promotion of innovation across nations, such as perhaps via stronger intellectual property rights.

Today in the US, the House takes a shorter view than the Senate, as House members are elected every two years while Senators are elected every six years. And they all take shorter views just before an election. So to promote longer-term political views, we could give elected representatives longer terms. Maybe twenty-year Congress terms. Would presidents for life actually be a good idea?

As Tyler notes, most people discount the future much less when comparing one distant future date to another. And so one way to get more future-oriented choices is to let people commit way ahead of time, while they still have long views. That is, have law enforce more long-term commitments. This can include stronger marriages wherein divorce is harder, and non-compete clauses and other terms that make it harder to leave jobs. We might even revive ancient practices like oaths of fealty, letting parents commit their kids to particular life choices, and letting people sell themselves into slavery. Are you okay with these?

Tyler says free markets naturally neglect future people:

When it comes to non-tradable and storable assets, markets do not reflect the preferences of currently unborn individuals. The branch of economics known as welfare economics holds up perfect markets as a normative ideal, yet future generations cannot contract in today’s markets. If we were to imagine future generations engaging in such contracting, current decisions might run more in their favor. Circa 2018, the future people of 2068 can’t express their preferences across a lot of the choices we are making today, such as how rapidly to boost future wealth or how much to mitigate the risk of serious catastrophes.

This is wrong, and is like saying that car firms neglect car customers because they don’t consult each one when designing new cars. It is enough that car firms can roughly anticipate the distribution of customer preferences, even if they don’t consult each one ahead of time. Similarly, one can make deals with future people by creating organizations today that will be empowered to offer deals later to future people. For example, you might give money to an organization to invest and then later pay people to hold an event wherein they celebrate your life. This deal can work even if you and they are never alive at the same time.

The main problem is that law has placed many obstacles explicitly designed to prevent the “dead hand of the past” from controlling current choices. For example, the terms in wills are greatly limited. Many advocate adding even more obstacles. Which greatly limits the kinds of deals that people today can make with future people. People today would save more to support deals with future people if they were more free to set the terms of such deals. And that would make future people matter more today.

An important special case is the possibility of creating very long-lived organizations that spend little and mostly reinvest the returns on their assets. If we let people create such organizations and didn’t severely steal from or tax them, the fact that investment rates of return can typically exceed economic growth rates implies that their assets could grow as a fraction of world assets, up until they came to own most world assets, and controlled the allocation of most world investment. From that point on their holdings would continue to grow and drive down market rates of return, in effect making the future matter more to investors.
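This compounding logic is easy to sketch numerically. The numbers below are purely hypothetical (a 5% fund return versus 3% growth for everything else, starting from a 0.01% share), chosen only to show how a small return-versus-growth gap eventually dominates:

```python
# Toy sketch (hypothetical numbers): when a reinvesting fund's return r
# exceeds the growth rate g of everything else, its share of all assets
# climbs toward 1, however small its starting share.

def fund_share(initial_share=1e-4, r=0.05, g=0.03, years=100):
    """Fraction of total assets held by the fund after `years`."""
    fund = initial_share
    rest = 1.0 - initial_share
    for _ in range(years):
        fund *= 1 + r   # fund reinvests all of its returns
        rest *= 1 + g   # the rest of the world grows more slowly
    return fund / (fund + rest)

print(fund_share(years=100))  # still a small share after a century
print(fund_share(years=500))  # a majority share after five centuries
```

With these illustrative rates, the fund’s share is still well under 1% after a century, but crosses a majority of all assets within roughly five centuries; this is exactly the sort of trajectory that rules like the rule against perpetuities are designed to block.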

Anticipating this sort of scenario, laws have created things like the “rule against perpetuities”, to make it hard for organizations to grow by reinvesting most of their asset returns. Law has thus gone out of its way to prevent the natural tendency of free markets over the long run to create lots of capital and low rates of return on capital, which would make future people matter a lot more. Are you willing to let such long-lived autonomous investment organizations grow and come to dominate and control most world investment, including most real estate, including housing, and most firms, including most jobs? Even if they are not accountable to anyone else and yet their preferences also carry a lot of weight in political systems?

So who is persuaded to adopt these many policies to create longer views, to make distant future people matter more? Will you endorse no taxes on savings including bequests, no protection of out-of-date firms and jobs, only simple and flexible regulation, strong worldwide intellectual property, presidents for life, parents committing kids, selling yourself into slavery, and complete freedom of the terms of wills, including allowing long-lived investment orgs that control most world asset use? Or will you declare that all these options somehow violate someone’s rights? Or will you admit that while it sounded good to talk about caring for future people, once you realize the costs you’d rather keep the usual policies, and toss those future folk under the proverbial bus? After all, what have future folk done for us lately?

Added 10:30p: Radical life extension would also be a way to promote a longer view, if it were feasible. I’m skeptical about feasibility for bio-humans anytime soon, but eventually when ems are feasible they automatically allow very long lifespans, and give long views for sufficiently slow ems.

Moral Choices Express Preferences

Tyler Cowen has a new book, Stubborn Attachments. In my next post I’ll engage his book’s main claim. But in this post I’ll take issue with one point that is to him relatively minor, but is to me important: the wisdom of the usual economics focus on preferences:

Sometimes my fellow economists argue that “satisfying people’s preferences” is the only value that matters, because in their view it encapsulates all other relevant values. But that approach doesn’t work. It is not sufficiently pluralistic, as it also matters whether our overall society encompasses standards of justice, beauty, and other values from the plural canon. “What we want” does not suffice to define the good. Furthermore, we must often judge people’s preferences by invoking other values external to those preferences. …

Furthermore, if individuals are poorly informed, confused, or downright inconsistent— as nearly all of us are, at times— the notion of “what we want” isn’t always so clear. So while I am an economist, and I will use a lot of economic arguments, I won’t always side with the normative approach of my discipline, which puts too much emphasis on satisfying preferences at the expense of other ethical values. … We should not end civilization to do what is just, but justice does sometimes trump utility. And justice cannot be reduced to what makes us happy or to what satisfies our preferences. …

In traditional economics— at least prior to the behavioral revolution and the integration with psychology— it was commonly assumed that what an individual chooses, or would choose, is a good indicator of his or her welfare. But individual preferences do not always reflect individual interests very well. Preferences as expressed in the marketplace often appear irrational, intransitive, spiteful, or otherwise morally dubious, as evidenced by a wide range of vices, from cravings for refined sugar to pornography to grossly actuarially unfair lottery tickets. Given these human imperfections, why should the concept of satisfying preferences be so important? Even if you are willing to rationalize or otherwise defend some of these choices, in many cases it seems obvious that satisfying preferences does not make people happier and does not make the world a better place.

Tyler seems to use a standard moral framework here, one wherein we are looking at others and trying to agree among ourselves about what moral choices to make on their behalf. (Those others are not included in our conversation.) When we look at those other people, we can use the choices that they make to infer their wants (called “revealed preferences”), and then we can then make our moral choices in part to help them get what they want.

In this context, Tyler accurately describes common morality, in the sense that the moral choices of most people do not depend only on what those other object people want. Common moral choices are instead often “paternalistic”, giving people less of what they want in order to achieve other ends and to satisfy other principles. We can argue about how moral such choices actually are, but they clearly embody a common attitude to morality.

However, if these moral choices that we are to agree on satisfy some simple consistency conditions, then formally they imply a set of “revealed preferences”.  (And if they do not actually satisfy these conditions, we can see them as resulting from consistent preferences plus avoidable error.) They are “our” preferences in this moral choice situation. Looked at this way, it is just not remotely true that “ ‘What we want’ does not suffice to define the good” or that “Justice cannot be reduced to … what satisfies our preferences.” Our concepts of the good and justice are in fact exactly described by our moral preferences, the preferences that are revealed by our various consistent moral choices. It is then quite accurate to say that our moral preferences encapsulate all our relevant moral values.
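The formal claim here, that choices satisfying simple consistency conditions imply a set of revealed preferences, can be illustrated with a toy sketch. The options and observed choices below are hypothetical, and `revealed_ranking` is just an illustrative helper, not a standard algorithm name:

```python
# Illustrative sketch (hypothetical options): consistent pairwise choices
# reveal a preference ordering; an intransitive cycle means no consistent
# preferences exist, i.e. the "avoidable error" case.

def revealed_ranking(choices):
    """choices: list of (winner, loser) pairs from observed decisions.
    Returns options ranked best-first, or None if the choices contain
    a preference cycle and so reveal no consistent ordering."""
    options = {o for pair in choices for o in pair}
    beats = {o: set() for o in options}
    for winner, loser in choices:
        beats[winner].add(loser)
    ranking = []
    remaining = set(options)
    while remaining:
        # a "best" option is one no other remaining option was chosen over
        best = [o for o in remaining
                if not any(o in beats[other] for other in remaining if other != o)]
        if not best:
            return None  # cycle: choices are inconsistent
        ranking.extend(sorted(best))
        remaining -= set(best)
    return ranking

consistent = [("liberty", "equality"), ("equality", "growth"), ("liberty", "growth")]
cyclic = [("liberty", "equality"), ("equality", "growth"), ("growth", "liberty")]
print(revealed_ranking(consistent))  # a well-defined ordering
print(revealed_ranking(cyclic))      # None: no consistent preferences
```

When the pairwise choices are transitive, a full ranking pops out, and that ranking is the “revealed preferences” in question; when they cycle, we must model the chooser as having consistent preferences plus some error.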

Furthermore, the usual economics framework is wise and insightful because we in fact quite often disagree about moral choices when we take moral action. This framework that Tyler seems to use above, wherein we first agree on which acts are moral and then we act, is based on an often quite unrealistic fiction. We instead commonly each take moral actions in the absence of agreement. In such cases we each have a different set of moral preferences, and must consider how to take moral action in the context of our differing preferences.

At this point the usual economists’ framework, wherein different agents have different preferences, becomes quite directly relevant. It is then useful to think about moral Pareto improvements, wherein we each get more of what we want morally, and moral deals, where we make verifiable agreements to achieve moral “gains from trade”. The usual economist tools for estimating and calculating our wants and the location of win-win improvements then seem quite useful and important.

In this situation, we each seek to influence the resulting set of actual moral choices in order to achieve our differing moral preferences. We might try to achieve this influence via preaching, threats, alliances, wars, or deals; there are many possibilities. But whatever we do, we each want any analytical framework that we use to help us in this process to reflect our actual differing moral preferences. Yes, preferences can be complex, must be inferred from limited data on our choices, and yes we are often “poorly informed, confused, or downright inconsistent.” But we rarely say “why should the concept of satisfying [my moral] preferences be so important?”, and we are not at all indifferent to instead substituting the preferences of some other party, or the choice priorities of some deal analyst or assistant like Tyler. As much as possible, we seek to have the actual moral choices that result reflect our moral preferences, which we see as a very real and relevant thing, encapsulating all our relevant moral values.

And of course we should expect this sort of thing to happen all the more in a more inclusive conversation, one where the people about whom we are making moral choices become part of the moral “dealmaking” process. That is, when it is not us trying to agree among ourselves about what we should do for them, but instead all of us talking together about what to do for us all. In this more political case, we don’t at all say “my preferences are poorly informed, confused, and inconsistent and hardly matter so they don’t deserve that much consideration.” Instead we each focus on causing choices that better satisfy our moral preferences, as we understand them. In this case, the usual economist tools and analytical frameworks based on achieving preferences seem quite appropriate. They deserve to sit center stage in our analysis.

On the Future by Rees

In his new book, On the Future, aging famous cosmologist Martin Rees says aging famous scientists too often overreach:

… scientists don’t improve with age—that they ‘burn out’. … There seem to be three destinies for us. First, and most common, is a diminishing focus on research. …

A second pathway, followed by some of the greatest scientists, is an unwise and overconfident diversification into other fields. Those who follow this route are still, in their own eyes, ‘doing science’—they want to understand the world and the cosmos, but they no longer get satisfaction from researching in the traditional piecemeal way: they over-reach themselves, sometimes to the embarrassment of their admirers. This syndrome has been aggravated by the tendency for the eminent and elderly to be shielded from criticism. …

But there is a third way—the most admirable. This is to continue to do what one is competent at, accepting that … one can probably at best aspire to be on a plateau rather than scaling new heights.

Rees says this in a book outside his initial areas of expertise, a book that has gained many high profile fawning uncritical reviews, a book wherein he whizzes past dozens of topics just long enough to state his opinion, but not long enough to offer detailed arguments or analysis in support. He seems oblivious to this parallel, though perhaps he’d argue that the future is not “science” and so doesn’t reward specialized study. As the author of a book that tries to show that careful detailed analysis of the future is quite possible and worthwhile, I of course disagree.

As I’m far from prestigious enough to get away with a book like his, let me instead try to get away with a long, probably ignored blog post wherein I take issue with many of Rees’ claims. While I of course also agree with much, I’ll focus on disagreements. I’ll first discuss his factual claims, then his policy/value claims. Quotes are indented; my responses are not.

FACTS

Social media are now globally pervasive. … Those in deprived parts of the world are aware of what they are missing. This awareness will trigger greater embitterment, motivating mass migration or conflict, if these contrasts are perceived to be excessive and unjust. … Citizens of these privileged nations are becoming far less isolated from the disadvantaged parts of the world. Unless inequality between countries is reduced, embitterment and instability will become more acute because the poor, worldwide, are now, via IT and the media, far more aware of what they’re missing.

There is little evidence that mere awareness of inequality induces violent conflict. And I’m pretty sure that the poor knew they were poor before. This seems mostly wishful thinking, a threat to induce redistribution. (When I merely compared this common sort of income-oriented threat to sex-oriented threats, many accused me of supporting sexual violence. Few will complain that Rees is advocating violence here.)

We can’t confidently forecast lifestyles, attitudes, social structures, or population sizes even a few decades hence.

We can actually predict future population pretty well, as death rates are quite predictable, and birth rates have followed pretty predictable trends. Most human social structures, like families, firms, cities, nations, are pretty stable over decades. We can be pretty sure most structures in these systems won’t be that different twenty years hence.

Human beings themselves—their mentality and their physique—may become malleable through the deployment of genetic modification and cyborg technologies. This is a game changer. When we admire the literature and artefacts that have survived from antiquity, we feel an affinity, across a time gulf of thousands of years, with those ancient artists and their civilisations. But we can have zero confidence that the dominant intelligences a few centuries hence will have any emotional resonance with us—even though they may have an algorithmic understanding of how we behaved.

It is fine to worry about future changes, but the mere possibility of malleability seems far from sufficient to conclude that our descendants will have no “emotional resonance with us”. Existing mental and social structures have huge inertia, and at each point the incentives will be to adopt changes that match well with existing structures. I foresee a lot of resonance.

This century is special. It is the first when one species, ours, is so empowered and dominant that it has the planet’s future in its hands. … This century is the first in which one species—ours—can determine the biosphere’s fate. I didn’t think we’d wipe ourselves out. But I did think we’d be lucky to avoid devastating breakdowns. … This is the first era in which humanity can affect our planet’s entire habitat: the climate, the biosphere, and the supply of natural resources.

We have always had the planet’s future in our hands. Rates of change have increased during the industrial era, but humans have long been changing the climate, biosphere, and natural resources. This century isn’t unique here.

Back in 2003 I was worried about these hazards and rated the chance of bio error or bio terror leading to a million deaths as 50 percent by 2020. I was surprised at how many of my colleagues thought a catastrophe was even more likely than I did. More recently, however, psychologist/author Steven Pinker took me up on that bet, with a two-hundred-dollar stake. … Bio error and bio terror are possible in the near term—within ten or fifteen years. … The public is still in denial about two kinds of threats: harm that we’re causing collectively to the biosphere, and threats that stem from the greater vulnerability of our interconnected world to error or terror induced by individuals or small groups. … The emergent threat from technically empowered mavericks is growing. … If there is indeed a growing risk of conflicts triggered by ideology or perceived unjust inequality, it will be aggravated by the impact of new technology on warfare and terrorism.

His 2003 prediction seems crazy huge to me, and I and many others would have been happy to bet him, if he was willing to bet folks other than celebrities. As I posted on recently, we see little evidence that individuals or small groups actually cause more harm to the world than before.

Demographers predict continuing urbanisation, with 70% of people living in cities by 2050. Even by 2030, Lagos, São Paulo, and Delhi will have populations greater than thirty million. Preventing megacities from becoming turbulent dystopias will be a major challenge to governance.

There is little evidence that big cities are “becoming turbulent dystopias”.

European villages in the mid-fourteenth century continued to function even when the Black Death almost halved their populations; the survivors were fatalistic about a massive death toll. In contrast, the feeling of entitlement is so strong in today’s wealthier countries that there would be a breakdown in the social order as soon as hospitals overflowed, key workers stayed at home, and health services were overwhelmed. This could occur when those infected were still a fraction of 1 percent.

In general we see little breakdown in social order in big temporary crises. Social order would stay fine with one percent infected.

Earlier [SETI] searches … didn’t find anything artificial. But they were very limited—it’s like claiming that there’s no life in the oceans after analysing one glassful of seawater.

Actually, any one glass of seawater typically holds much life; that would indeed tell you there’s life in the ocean.

Even if intelligence were widespread in the cosmos, we may only ever recognise a small and atypical fraction of it. Some ‘brains’ may package reality in a fashion that we can’t conceive. Others could be living contemplative energy-conserving lives, doing nothing to reveal their presence.

I’m skeptical that there’s much life hiding without doing anything. The competitive gains to metabolism and structure seem strong, and useful living metabolism and structure should be noticeably different than the dead versions we now see.

Whereas there are many composers whose last works are their greatest, there are few scientists for whom this is so.

Actually, a recent Nature paper found “From artists to scientists, anyone can have a successful streak at any time.”

We might be able to download our thoughts and memories into a machine. If present technical trends proceed unimpeded, then some people now living could attain immortality—at least in the limited sense that their downloaded thoughts and memories could have a life span unconstrained by their present bodies. Those who seek this kind of eternal life will, in old-style spiritualist parlance, ‘go over to the other side’. We then confront the classic philosophical problem of personal identity. If your brain were downloaded into a machine, in what sense would it still be ‘you’?

This isn’t wrong, but as author of a book that tries to get past these tired aspects, I’m disappointed he isn’t aware there’s much more to say.

VALUES

There are some who promote a rosy view of the future, enthusing about improvements in our moral sensitivities as well as in our material progress. I don’t share this perspective. … The gulf between the way the world is and the way it could be is wider than it ever was. … The plight of the ‘bottom billion’ in today’s world could be transformed by redistributing the wealth. … The digital revolution generates enormous wealth for an elite group of innovators and for global companies, but preserving a healthy society will require redistribution of that wealth. … But various types of human enhancements are possible. … As with so much technology, priorities are unduly slanted towards the wealthy. … The criterion for a progressive government should be to provide for everyone the kind of support preferred by the best-off—the ones who now have the freest choice.

It seems crazy to say a world or an era isn’t good because you think with distribution it could be much better. Redistribution has many subtle effects and problems, and so larger versions may just not be feasible. It also seems crazy infeasible to give everyone whatever the rich prefer; most likely that violates the budget constraint.

The planning horizon for infrastructure and environmental policies needs to stretch fifty or more years into the future. If you care about future generations, it isn’t ethical to discount future benefits at the same rate as you would if you were a property developer planning an office building. … Appliances and vehicles could be designed in a more modular way so that they could be readily upgraded by replacing parts rather than by being thrown away. … Effective action needs a change in mind-set. We need to value long-lasting things—and urge producers and retailers to highlight durability.

If we are going to value the future more, we should do it consistently across all our choices, including when planning office buildings. Office buildings are also there to provide future benefits to people. It is incoherent to discount the future more for some kinds of choices than for others.

African cultural preferences may lead to a persistence of large families as a matter of choice even when child mortality is low. If this happens, the freedom to choose your family size, proclaimed as one of the UN’s fundamental rights, may come into question when the negative externalities of a rising world population are weighed in the balance. …

Population isn’t directly an externality. It is connected to the externality of innovation, but in that case more population is good. Natural resources like land, minerals, oil, and water are mostly covered by property rights, and so populations don’t cause externalities merely by consuming such things. There can be negative externalities associated with fishing and polluting commonly used biospheres like oceans, but that is all the more reason to create more property rights in such things.

I was once interviewed by a group of ‘cryonics’ enthusiasts – I told them I’d rather end my days in an English churchyard than a Californian refrigerator. They derided me as a ‘deathist’—really old fashioned. I was surprised to learn later that some academics (I’m glad to say not from my university) had signed up for ‘cryonics’.… It is hard for most of us mortals to take this aspiration seriously; moreover, if cryonics had a real prospect of success, I don’t think it would be admirable either.… the corpses would be revived into a world where they would be strangers—refugees from the past. … ‘thawed-out corpses’ would be burdening future generations by choice; so, it’s not clear how much consideration they would deserve.

Retirees today “burden” the world around them in the sense that they aren’t productively working. And they live in a world where they are relative strangers, which is why they often create retirement communities so they can be around more people like themselves. Is it not admirable for people to enjoy retirement; would they be more admirable if they died on ice floes instead? Cryonics patients today are happy to pay cash for their future revival and living expenses, just as retirees pay for their retirement via savings, but the legal system doesn’t make that easy.

AI system… is likely to create public concern if the system’s ‘decisions’ have potentially grave consequences for individuals. If we are sentenced to a term in prison, recommended for surgery, or even given a poor credit rating, we would expect the reasons to be accessible to us—and contestable by us. If such decisions were entirely delegated to an algorithm, we would be entitled to feel uneasy.

Actually you don’t know why you get the credit rating you do; that is an opaque algorithm. You know some things that might influence it, but that’s a very different thing. Many medical choices made on your behalf are also based on opaque algorithms. Your life is full of inaccessible non-contestable opaque algorithms that influence what happens to you. Wake up and look around you!

If the machines are zombies, we would not accord their experiences the same value as ours, and the posthuman future would seem bleak. But if they are conscious, why should we not welcome the prospect of their future hegemony?

I might not be one of them, but a lot of people disagree with this pretty strongly. It would be better to engage them in arguments here.

By attacking mainstream religion, rather than striving for peaceful coexistence with it, [hardline atheists] weaken the alliance against fundamentalism and fanaticism. They also weaken science.

There’s a lot to be said for speaking the truth simply and clearly. If that weakens science, so be it, some may reasonably say.

The space environment is inherently hostile for humans. … Pioneer explorers will have a more compelling incentive … [to] harness the super-powerful genetic and cyborg technologies that will be developed in coming decades. These techniques will be, one hopes, heavily regulated on Earth, on prudential and ethical grounds, but ‘settlers’ on Mars will be far beyond the clutches of the regulators. … Posthumans … won’t need an atmosphere. … So it’s in deep space—not on Earth, or even on Mars—that nonbiological ‘brains’ may develop powers that humans can’t even imagine. … We are perhaps near the end of Darwinian evolution, but a faster process, artificially directed enhancement of intelligence, is only just beginning. It will happen fastest away from the Earth—I wouldn’t expect (and certainly wouldn’t wish for) such rapid changes in humanity here on Earth.

Due to agglomeration externalities, social and economic activity will stay here on Earth up until the point where Earth growth is forced to slow because the Earth is filling up. A great deal of posthuman change can happen before that time, and while those changes may give a few extra advantages in space, they will also give great competitive advantages here on Earth. So it is hard to see why regulation of posthuman changes should differ much in space versus here on Earth. This seems an attempt to reassure readers that posthuman changes needn’t bother them, when few such reassurances can actually be offered.

Vulnerable World Hypothesis

I’m a big fan of Nick Bostrom; he is way better than almost all other future analysts I’ve seen. He thinks carefully and writes well. A consistent theme of Bostrom’s over the years has been to point out future problems where more governance could help. His latest paper, The Vulnerable World Hypothesis, fits in this theme:

Consider a counterfactual history in which Szilard invents nuclear fission and realizes that a nuclear bomb could be made with a piece of glass, a metal object, and a battery arranged in a particular configuration. What happens next? … Maybe … ban all research in nuclear physics … [Or] eliminate all glass, metal, or sources of electrical current. … Societies might split into factions waging civil wars with nuclear weapons, … end only when … nobody is able any longer to put together a bomb … from stored materials or the scrap of city ruins. …

The ​vulnerable world hypothesis​ [VWH] … is that there is some level of technology at which civilization almost certainly gets destroyed unless … civilization sufficiently exits the … world order characterized by … limited capacity for preventive policing​, … limited capacity for global governance.​ … [and] diverse motivations​. … It is ​not​ a primary purpose of this paper to argue VWH is true. …

Four types of civilizational vulnerability. … in the “easy nukes” scenario, it becomes too easy for individuals or small groups to cause mass destruction. … a technology that strongly incentivizes powerful actors to use their powers to cause mass destruction. … counterfactual in which a preemptive counterforce [nuclear] strike is more feasible. … the problem of global warming [could] be far more dire … if the atmosphere had been susceptible to ignition by a nuclear detonation, and if this fact had been relatively easy to overlook …

two possible ways of achieving stabilization: Create the capacity for extremely effective preventive policing.​ … and create the capacity for strong global governance. … While some possible vulnerabilities can be stabilized with preventive policing alone, and some other vulnerabilities can be stabilized with global governance alone, there are some that would require both. …

It goes without saying there are great difficulties, and also very serious potential downsides, in seeking progress towards (a) and (b). In this paper, we will say little about the difficulties and almost nothing about the potential downsides—in part because these are already rather well known and widely appreciated.

I take issue a bit with this last statement. The vast literature on governance shows both many potential advantages of and problems with having more relative to less governance. It is good to try to extend this literature into futuristic considerations, by taking a wider longer term view. But that should include looking for both novel upsides and downsides. It is fine for Bostrom to seek not-yet-appreciated upsides, but we should also seek not-yet-appreciated downsides, such as those I’ve mentioned in two recent posts.

While Bostrom doesn’t in his paper claim that our world is in fact vulnerable, he released his paper at a time when many folks in the tech world have been claiming that changing tech is causing our world to become more vulnerable over time to analogues of his “easy nukes” scenario. Such people warn that it is becoming easier for smaller groups and individuals to do more damage to the world via guns, bombs, poison, germs, planes, computer hacking, and financial crashes. And Bostrom’s book Superintelligence can be seen as such a warning. But I’m skeptical, and have yet to see anyone show a data series displaying such a trend for any of these harms.

More generally, I worry that “bad cases make bad law”. Legal experts say it is bad to focus on extreme cases when changing law, and similarly it may go badly to focus on very unlikely but extreme-outcome scenarios when reasoning about future-related policy. It may be very hard to weigh extreme but unlikely scenarios suggesting more governance against extreme but unlikely scenarios suggesting less governance. Perhaps the best lesson is that we should make it a priority to improve governance capacities, so we can better gain upsides without paying downsides. I’ve been working on this for decades.

I also worry that existing governance mechanisms do especially badly with extreme scenarios. The history of how the policy world responded badly to extreme nanotech scenarios is a case worth considering.

Added 8am:

Kevin Kelly in 2012:

The power of an individual to kill others has not increased over time. To restate that: An individual — a person working alone today — can’t kill more people than say someone living 200 or 2,000 years ago.

Anders Sandberg in 2018:

World Government Risks Collective Suicide

If your mood changes every month, and if you die in any month where your mood turns to suicide, then to live 83 years you need to have one thousand months in a row where your mood doesn’t turn to suicide. Your ability to do this is aided by the fact that your mind is internally divided; while in many months part of you wants to commit suicide, it is quite rare for a majority coalition of your mind to support such an action.

In the movie Lord of the Rings, Denethor Steward of Gondor is in a suicidal mood when enemies attack the city. If not for the heroics of Gandalf, that mood might have ended his city. In the movie Dr. Strangelove, the crazed General Ripper “believes the Soviets have been using fluoridation of the American water supplies to pollute the `precious bodily fluids’ of Americans” and orders planes to start a nuclear attack, which ends badly. In many mass suicides through history, powerful leaders have been able to make whole communities commit suicide.

In a nuclear MAD situation, a nation can last unbombed only as long as no one who can “push the button” falls into a suicidal mood. Or into one of a thousand other moods that in effect lead to misjudgments and refusals to listen to reason, and that eventually lead to suicide. This is a serious problem for any nuclear nation that wants to live long relative to the number of people who can push the button, times the timescale on which moods change. When there are powers large enough that their suicide could take down civilization, the risk of power suicide becomes a risk of civilization suicide. Even if the risk is low in any one year, over the long run this becomes a serious risk.

This is a big problem for world or universal government. We today coordinate on the scale of firms, cities, nations, and international organizations. However, the fact that we also fail to coordinate to deal with many large problems on these scales shows that we face severe limits in our coordination abilities. We also face many problems that could be aided by coordination via world government, and future civilizations will be similarly tempted by the coordination powers of central governments.

But, alas, central power risks central suicide, either done directly on purpose or as an indirect consequence of other broken thinking. In contrast, in a sufficiently decentralized world when one power commits suicide, its place and resources tend to be taken by other powers who have not committed suicide. Competition and selection is a robust long-term solution to suicide, in a way that centralized governance is not.

This is my tentative best guess for the largest future filter that we face, and that other alien civilizations have faced. The temptation to form central governments and other governance mechanisms is strong, to solve immediate coordination problems, to help powerful interests gain advantages via the capture of such central powers, and to slake the ambition thirst of those who would lead such powers. Over long periods this will seem to have been a wise choice, until suicide ends it all and no one is left to say “I told you so.”

Divide the trillions of future years over which we want to last by the increasingly short periods over which moods and sanity change, and you see a serious problem, made worse by the lack of a sufficiently long view to make us care enough to solve it. For example, if the suicide mood of a universal government changed once a second, then it would need about 10^20 non-suicide moods in a row to last a trillion years.
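This arithmetic can be made concrete; here is a minimal Python sketch, where the per-mood suicide risk p is a made-up illustrative number:

```python
import math

SECONDS_PER_YEAR = 365.25 * 24 * 3600        # ~3.16e7 seconds
moods = 1e12 * SECONDS_PER_YEAR              # one mood per second, for a trillion years

# Survival requires every single mood to be non-suicidal:
#   P(survive) = (1 - p) ** moods
p = 1e-15                                    # an absurdly tiny per-mood suicide risk
log_p_survive = moods * math.log1p(-p)       # log of the survival probability

print(f"moods needed in a row: {moods:.1e}")   # ~3.2e19, i.e. about 10^20
print(f"log P(survive) = {log_p_survive:.0f}")  # hugely negative: doom is near-certain
```

Even with a per-second suicide risk this tiny, the survival probability over a trillion years is effectively zero, which is the point of the argument.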

Social Media Lessons

Women consistently express more interest than men in stories about weather, health and safety, natural disasters and tabloid news. Men are more interested than women in stories about international affairs, Washington news and sports. (more)

Tabloid newspapers … tend to be simply and sensationally written and to give more prominence than broadsheets to celebrities, sports, crime stories, and even hoaxes. They also take political positions on news stories: ridiculing politicians, demanding resignations, and predicting election results. (more)

Two decades ago, we knew nearly as much about computers, the internet, and the human and social sciences as we do today. In principle, this should have let us foresee broad trends in computer/internet applications to our social lives. Yet we seem to have been surprised by many aspects of today’s “social media”. We should take this as a chance to learn; what additional knowledge or insight would one have to add to our views from two decades ago to make recent social media developments not so surprising?

I asked this question Monday night on twitter and no one pointed me to existing essays on the topic; the topic seems neglected. So I’ve been pondering this for the last day. Here is what I’ve come up with.

Some people did use computers/internet for socializing twenty years ago, and those applications do have some similarities to applications today. But we also see noteworthy differences. Back then, a small passionate minority of mostly young, nerdy, status-aspiring men sat at desks in rare off hours to send each other text, via email and topic-organized discussion groups, as on Usenet. They tended to talk about grand topics, like science and international politics, and were often combative and rude to each other. They avoided centralized systems, participating instead in many decentralized venues under separate identities; it was hard to see how popular any one person was across all these contexts.

In today’s social media, in contrast, most everyone is involved, text is more often displaced by audio, pictures, and video, and we typically use our phones, everywhere and at all times of day. We more often forward what others have said rather than saying things ourselves, the things we forward are more opinionated and less well vetted, and are more about politics, conflict, culture, and personalities. Our social media talk is also more in these directions, is more noticeably self-promotion, and is more organized around our personal connections in more centralized systems. We have more publicly visible measures of our personal popularity and attention, and we frequently get personal affirmations of our value and connection to specific others. As we talk directly more via text than voice, and date more via apps than asking associates in person, our social interactions are more documented and separable, and thus protect us more from certain kinds of social embarrassment.

Some of these changes should have been predictable from lower costs of computing and communication. Another way to understand these changes is that the pool of participants changed, from nerdy young men to everyone. But the best organizing principle I can offer is: social media today is more lowbrow than the highbrow versions once envisioned. While over the 1800s culture separated more into low versus high brow, over the last century this has reversed, with low displacing high: more informal clothes, pop music displacing classical, movies displacing plays and opera. Social media is part of this trend, a trend that tech advocates, who sought higher social status for themselves and their tech, didn’t want to see.

TV news and tabloids have long been lower status than newspapers. Text has long been higher status than pictures, audio, and video. More carefully vetted news is higher status, and neutral news is higher status than opinionated rants. News about science, politics, and the world is higher status than news about local culture and celebrities, which is higher status than personal gossip. Classic human norms against bragging and self-promotion reduce the status of those activities, and of visible indicators of popularity and attention.

The mostly young male nerds who filled social media two decades ago, and who tried to look forward, envisioned highbrow versions made for people like themselves. Such people like to achieve status by sparring in debates on the topics that fill high status traditional media. As they don’t like to admit they do this for status, they didn’t imagine much self-promotion or detailed tracking of individual popularity and status. And as they resented loss of privacy and strong concentrations of corporate power, they imagined decentralized systems with effectively anonymous participants.

But in fact ordinary people don’t care as much about privacy and corporate concentration, they don’t as much mind self-promotion and status tracking, they are more interested in gossip and tabloid news than high status news, they care more about loyalty than neutrality, and they care more about gaining status via personal connections than via grand-topic debate sparring. They like wrestling-like bravado and conflict, are less interested in accurate vetting of news sources, like to see frequent personal affirmations of their value and connection to specific others, and fear being seen as lower status if such things do not continue at a sufficient rate.

This high to lowbrow account suggests a key question for the future of social media: how low can we go? That is, what new low status but commonly desired social activities and features can new social media offer? One candidate that occurs to me is: salacious gossip on friends and associates. I’m not exactly sure how it can be implemented, but most people would like to share salacious rumors about associates, perhaps documented via surveillance data, in a way that allows them to gain relevant social credit from it while still protecting them from being sued for libel/slander when rumors are false (which they will often be), and at least modestly protecting them from being verifiably discovered by their rumor’s target. That is, even if a target suspects them as the source, they aren’t sure and can’t prove it to others. I tentatively predict that eventually someone will make a lot of money by providing such a service.

Another solid if less dramatic prediction is that as social media spreads out across the world, it will move toward the features desired by typical world citizens, relative to features desired by current social media users.

Can You Outsmart An Economist?

Steven Landsburg’s new book, Can You Outsmart An Economist?, discusses many interesting questions. For example, in this nice and real example, median wages for all workers rose only 3% from 1980–2005, yet they rose 15% or more for each race/sex subgroup, because the relative group sizes changed:
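The mechanism is Simpson’s paradox; a toy Python illustration with made-up numbers (and means rather than medians, for simplicity):

```python
# (wage, share of workforce) for two hypothetical groups
w1980 = {"A": (100, 0.80), "B": (40, 0.20)}
w2005 = {"A": (115, 0.647), "B": (46, 0.353)}   # each group's wage is up exactly 15%

def overall(groups):
    # workforce-weighted average wage
    return sum(wage * share for wage, share in groups.values())

m80, m05 = overall(w1980), overall(w2005)
print(f"1980: {m80:.1f}, 2005: {m05:.1f}, change: {100 * (m05 / m80 - 1):.1f}%")
# each group's wage rose 15%, yet the overall average rose only ~3%,
# because the lower-wage group grew from 20% to 35% of the workforce
```

The overall figure lags every subgroup because composition shifted toward the lower-wage group, just as in the book’s real data.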

Taking the article title as a challenge, however, I have to point out the one place where I disagreed with the book. Landsburg says:

In a recent five-year period on the Maryland stretch of I-95, a black motorist was three times as likely as a white motorist to be stopped and searched for drugs. Black motorists were found to be carrying drugs at pretty much exactly the same rate as whites. (A staggeringly high one-third of stopped blacks and the same staggeringly high one-third of stopped whites were caught with drugs in their cars.) This was widely reported in the news media as clear-cut evidence of racial discrimination. … If you believe that people respond to incentives, then you must believe that if blacks were stopped at the same lower rate that whites were, more of them would have carried drugs. …

If [police] were single-mindedly out to maximize arrests, they’d start by focusing their attention on the group that’s most inclined to carry drugs—in this case, blacks. … If blacks are still carrying more drugs than whites, the police shift even more of their focus to blacks, leading the gap to close a bit more. This continues until whites and blacks are carrying drugs in equal proportions. … If you want to maximize deterrence, you’ll concentrate more on stopping whites, because there are more whites in the population to deter, … which would deter more whites from carrying drugs—and then the average white motorist would carry fewer drugs than the average black.

I’m with him until that last sentence. I think he is assuming that each choice to carry drugs or not is made independently, that each choice is deterred independently via a perceived chance of being stopped, that potential carriers know only the average chance that someone in their group is stopped, and that police can’t usefully vary the stopping chance within groups.

If a perceived stopping chance could be chosen independently for each individual, then to maximize deterrence overall that chance would be set somewhat differently for each individual, according to their differing details. But the constraint that everyone in a group must share the same stopping chance will prevent this detailed matching, making it a bit harder to deter drug carrying in that group. This is a reason that, all else equal, police motivated by deterrence may try a little less hard to deter larger groups, which have more internal variation.

Landsburg instead argues that you’ll put more effort into deterring the larger group, apparently just because there is a larger overall benefit from deterring a larger group. Yes, of course, deterring a group twice as large could produce twice the deterrent benefit in terms of the effect on the overall drug-carrying crime rate. But that comes at twice the cost in terms of all those traffic stops. I don’t see how there is a larger benefit relative to cost from focusing deterrence efforts on larger groups.
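Landsburg’s equalization dynamic for arrest-maximizing police (the part I agree with) can be sketched as a toy simulation, with all parameters hypothetical: drivers carry less when stopped more often, and police keep shifting stops toward whichever group currently carries more.

```python
def carry_rate(p_stop, base, sensitivity=0.6):
    """Fraction of a group carrying drugs, falling as the stop probability rises."""
    return max(0.0, base - sensitivity * p_stop)

p_stop = {"A": 0.10, "B": 0.10}     # initial stop probabilities, equal by assumption
base = {"A": 0.5, "B": 0.4}         # group A is more inclined to carry

for _ in range(500):
    c = {g: carry_rate(p_stop[g], base[g]) for g in p_stop}
    gap = c["A"] - c["B"]
    p_stop["A"] += 0.1 * gap        # shift stops toward the higher-carry group
    p_stop["B"] -= 0.1 * gap

# In equilibrium the two groups carry at (nearly) equal rates,
# even though group A is stopped far more often.
```

This reproduces the quoted empirical pattern: equal carry rates across groups alongside very unequal stop rates.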

How To Fund Prestige Science

How can we best promote scientific research? (I’ll use “science” broadly in this post.) In the usual formulation of the problem, we have money and status that we could distribute, and researchers have time and ability that they might apply. They know more than we do, but we aren’t sure who is how good, and they may care more about money and status than about achieving useful research. So we can’t just give things to anyone who claims they would use them to do useful science. What can we do? We actually have many options.

A relatively easy case is science that might be useful for improving a product or service in the near future. In this case providers of such goods or services can have good incentives to find the best people and to give them good incentives to help improve their offerings. At least this can work well if researchers can keep their improvements secret within their organizations for a long enough duration.

A related case is where we can generally distinguish discrete inventions (e.g., creations, insights, or techniques) that have commercial value, and where we can identify which small group is responsible for creating each one. In this case, we can set up a system of intellectual property, wherein we give each inventor a property right to control the use of their invention. This can create incentives to develop commercially-valuable inventions that are hard to keep secret, and this works even when providers are not very good at identifying whom to hire to create such improvements. This does, however, impose substantial transaction and enforcement costs.

Another relatively easy case is where we have something particular that we want accomplished, like figuring out how to find a ship’s longitude at sea. At least this is easy when only one small group is responsible for each accomplishment, and we can later both verify that the accomplishment happened and identify who caused it. In this case we can offer a prize to whomever causes this accomplishment. Prizes were actually a far more common method of science funding centuries ago, during and after the scientific revolution.

Another very common method from centuries ago was to subsidize complements to scientific research, complements not very useful for other purposes. So long ago science patrons would subsidize scientific journals, libraries, meeting places, equipment, etc. Using a similar approach, today governments often offer research tax credits. This method requires that someone else also fund science via better targeted incentives. Long ago aristocratic scientists funded their own research, for example.

Sometimes what we want are not particular accomplishments, but accurate estimates on particular questions, and increases in that accuracy over time. For example, we might want to know if most dark matter is made of axions, or if adopting the death penalty reduces the crime rate. We want the best current estimate on such things, and then want to learn more over time. For questions in areas where we will eventually know the answers, subsidized prediction markets can both induce accurate-as-possible current estimates, and incentives to increase that accuracy over time, even when credit for such increases should be spread widely over thousands of contributors. It is enough that each trader is rewarded for moving the price toward its final, more accurate position. These methods haven’t been used much so far for such purposes, but their potential seems large.
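One standard mechanism with this reward-for-moving-the-price property is the logarithmic market scoring rule (LMSR); here is a minimal sketch with a made-up price path:

```python
import math

def lmsr_reward(p_before, p_after, outcome):
    """Trader's payoff (liquidity parameter b = 1) for moving the market
    probability from p_before to p_after, settled once the truth is known."""
    if outcome:
        return math.log(p_after / p_before)
    return math.log((1 - p_after) / (1 - p_before))

path = [0.5, 0.6, 0.55, 0.8, 0.95]   # hypothetical probability path; event occurs
rewards = [lmsr_reward(a, b, True) for a, b in zip(path, path[1:])]

# Rewards telescope: the traders collectively earn exactly what one perfectly
# informed trade from 0.5 to 0.95 would have earned, and each trader gains
# only if their own trade moved the price toward the truth.
assert abs(sum(rewards) - math.log(0.95 / 0.5)) < 1e-12
```

Note that the second trade, which moved the price away from the eventual answer, earns a negative reward; this is how credit spreads correctly over many contributors.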

For many topics and areas of science, the above methods seem awkward and difficult to apply. If these methods were all we had, then of course we’d make do as best we could. But in fact we have another general way to promote science, one that actually gets disproportionate status and attention. Students, media interviewers, and grant funders all crave public affiliation with prestigious scientists, and in effect pay scientists and their institutions for such affiliations. Scientists and the larger world collectively create prestige rankings of papers, people, and institutions, including schools, journals, media, and patrons. In this context, individual scientists try to do impressive things, including scientific research, to gain prestige. They also play politics, in order to slant the prestige consensus in their favor.

This prestige system has many problems. For example, mutual admiration societies often form to reinforce each other’s prestige votes, even when these groups don’t actually seem very impressive to outsiders. Even so, this system has long been big and influential, and it can claim substantial credit for many of our most famous and lauded scientific achievements.

In the rest of this post I will make a proposal that I hope can improve this general prestige system. I don’t propose to replace the other better grounded ways to fund science that I described above. If you can use one of them, you probably should. But the demand for prestige affiliation is strong, and may never go away, so let’s see if we can’t find a better way to harness that demand to encourage scientific research.

I don’t propose to turn the focus of this prestige system away from prestige; that seems a bridge too far. I propose instead to turn its attention to a better proxy of true prestige: careful future historian evaluations of deserved prestige. Today, we make crude prestige judgments based on past accomplishments and past prestige judgments by others. But we should see these judgments as noisy and error-prone, relative to what future historians could produce if many good ones spent a lot of time looking at any one paper, person, or institution. When science makes progress, we learn better which claims to believe, and we can then better credit prestige to those who supported such claims.

In particular, I propose that we create speculative markets which estimate the future prestige of each scientific paper, person, project, and institution, and that we treat market prices socially as our main consensus on how prestigious such things are. The historians who make these judgments will themselves be evaluated by yet-further-future historians. Let me explain each of these in turn.

For each scientific paper, there is a (perhaps small) chance that it will be randomly chosen for evaluation in, say, 30 years. If it is chosen, then at that time many diverse science evaluation historians (SEH) will study the history of that paper and its influence on future science, and will rank it relative to its contemporaries. To choose this should-have-been prestige-rank, they will consider how important was its topic, how true and novel were its claims, how solid and novel were its arguments, how influential it actually was, and how influential it would have been had it received more attention.

Different independent groups of SEH using different approaches and unaware of each other might be used, with their median choice becoming the official rank. Large areas of related papers, etc. might be judged together to reduce judging costs. Future SEH can similarly be asked to rank individual people, projects, or institutions.

Assets are created that pay if a paper is selected to be evaluated, and that pay in proportion to a key prestige parameter: some monotonic function (perhaps the logarithm) of the rank of that paper relative to its contemporaries. As the number of contemporary papers is known, the max and min of this prestige parameter are known. Similar assets can be created regarding the prestige of a person, project, or institution.
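As one concrete (assumed, not prescribed) convention, such an asset’s payoff could map rank 1..n to a bounded logarithmic score:

```python
import math

def prestige_payoff(rank, n, selected):
    """Payoff of one prestige asset for a paper ranked `rank` (1 = best)
    among n > 1 contemporaries: zero unless the paper is selected for
    evaluation, else a logarithmic score in [0, 1], decreasing in rank."""
    if not selected:
        return 0.0
    return math.log(n + 1 - rank) / math.log(n)

# The best-ranked paper pays 1.0 and the worst pays 0.0,
# so the bounds of the prestige parameter are known up front.
```

Any strictly monotonic function of rank would do; a logarithm just concentrates payoff differences among top-ranked papers.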

Using these assets, markets can be created wherein anyone can trade in the prestige of a paper conditional on that paper being later evaluated. Yes, traders have to wait a long time for a final payoff. But they can sell their assets to someone else in the meantime, and we do regularly trade 30-year bonds today. Some care will have to be taken to make sure the base asset being bet is stable, but this seems quite feasible.

The prices of such markets are visible and recommended as best current estimates of the prestige of such things. Markets can also be created that estimate prestige conditional on favorable treatment by current institutions, to advise the choices of such institutions. For example, one can estimate the prestige of a paper conditional on it being published in a particular journal, or the prestige of a person conditional on that person being hired into a particular job. A journal might publish the submissions expected to be most prestigious, given publication.

Scientific institutions should be embarrassed if market price estimates of prestige differ greatly from other markers of prestige, such as jobs, awards, publications, grants, etc. They should work to reduce these differences, both by changing who gets jobs, grants, etc., and by trading to change market prices. In addition to the usual funding of journals, jobs, grants, libraries, etc., science funding could now also take the form of subsidizing historian judging, subsidizing market trading via market makers, and financing hedge funds to trade in these markets. Science funding could also support the collection of whatever data historians say is useful for making their later evaluations. This may include prediction market price histories on the chance that key experiments would replicate, that key claims are true, etc.

The SEH who judge papers, etc. from 30 years ago should themselves be judged by possible re-evaluation attempts by SEH another 30 years into the future, and so on to infinity. That is, there should be market prices estimating how much future SEH would agree with the evaluations of particular current SEH, and those prices should be used to rank and choose SEH today. Ideally most SEH are young and expect to be alive in 30 years when they may be judged, personally have a lot of money riding on that judgement, and face a big chance (say 1/3) of that happening. It should be a scandal to see clear differences in prestige estimates for the same thing evaluated at different times in the future. Of course other historians who do ordinary history research could be judged in the usual way.

The approach I’ve just described assumes a common prestige ranking of topics and people across all space and time. In contrast, some funders may want to promote science within a topic area or nation without subjecting their choices to uncertainty about how generic future SEH will judge their topic area or nation. For example, someone might want to fund work on cancer in the US regardless of how important future SEH judge US cancer research to be. If so, one might vary the above approach to tie current choices to market estimates of future historians’ rankings relative to a particular chosen topic area or nation. Generic SEH can probably produce such rankings at a relatively small additional cost, as long as they are already doing a general evaluation of some paper, person, project, or institution.

Okay, I’ve outlined a pretty grand vision here. What needs to happen next to explore this idea? One very useful exercise would be to hire historians to try to evaluate and rank scientific work from 30 to 300 years ago. We’d like to see how such rankings vary with time elapsed, topic areas, effort levels, and different random teams assigned. This would give us a much better idea of what timescales and effort levels to try in such a system.

(The 30 year time duration is for illustration purposes. The duration should probably be tied to overall rates of economic and scientific change, and thus get shorter if such rates speed up.)

Non-Conformist Influence

Here is a simple model that suggests that non-conformists can have more influence than conformists.

Regarding a one-dimensional choice x, let each person i take a public position x_i, and let the perceived mean social consensus be m = Σ_i w_i x_i, where w_i is the weight that person i gets in the consensus. In choosing their public position x_i, person i cares about getting close to both their personal ideal point a_i and to the consensus m, via the utility function

U_i(x_i) = -c_i (x_i - a_i)^2 - (1 - c_i)(x_i - m)^2.

Here c_i is person i’s non-conformity, i.e., their willingness to have their public position reflect their personal ideal point, relative to the social consensus. When each person simultaneously chooses their x_i while knowing all of the a_i, w_i, c_i, the (Nash) equilibrium consensus is

m = [Σ_i w_i c_i a_i (c_i + (1-c_i)(1-w_i))^-1] · [1 - Σ_j w_j (1-c_j)(1-w_j) (c_j + (1-c_j)(1-w_j))^-1]^-1

If each w_i << 1, then the relative weight that each person’s ideal point a_i gets in the consensus is close to w_i c_i. So how much their ideal point a_i counts is roughly proportional to their non-conformity c_i times their weight w_i. So all else equal, non-conformists have more influence over the consensus.
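As a numerical sanity check on the equilibrium claim, here is a short simulation of my own construction. It compares the closed-form consensus against iterated best responses, where each player's best response is x_i = (c_i a_i + (1-c_i)(1-w_i) m) / D_i with D_i = c_i + (1-c_i)(1-w_i):

```python
import random

def closed_form_m(w, c, a):
    """Nash consensus m: the sum over i of w_i c_i a_i / D_i,
    divided by 1 - sum over j of w_j (1-c_j)(1-w_j) / D_j,
    where D_i = c_i + (1 - c_i)(1 - w_i)."""
    D = [ci + (1 - ci) * (1 - wi) for wi, ci in zip(w, c)]
    num = sum(wi * ci * ai / Di for wi, ci, ai, Di in zip(w, c, a, D))
    den = 1.0 - sum(wi * (1 - ci) * (1 - wi) / Di
                    for wi, ci, Di in zip(w, c, D))
    return num / den

def iterated_m(w, c, a, iters=500):
    """Repeatedly apply each player's best response
    x_i = (c_i a_i + (1-c_i)(1-w_i) m) / D_i.  The induced map on m
    is a contraction when weights sum to one, so this converges."""
    x = list(a)
    for _ in range(iters):
        m = sum(wi * xi for wi, xi in zip(w, x))
        x = [(ci * ai + (1 - ci) * (1 - wi) * m) /
             (ci + (1 - ci) * (1 - wi))
             for wi, ci, ai in zip(w, c, a)]
    return sum(wi * xi for wi, xi in zip(w, x))
```

With random weights normalized to sum to one, the two computations agree to high precision, which also confirms that low-c_i (conformist) players contribute little of their ideal point a_i to m.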

Now it is possible that others will reduce the weight w_i that they give non-conformists with high c_i in the consensus. But this is hard when c_i is hard to observe, and as long as this reduction is not fully (or more than fully) proportional to their increased non-conformity, non-conformists continue to have more influence.

It is also possible that extremists, who pick x_i that deviate more from those of others, will be directly down-weighted. (This happens with the weights w_i = k/|x_i - x_m| that produce a median x_m, for example.) This makes more sense in the more plausible situation where x_i, w_i are observable but a_i, c_i are not. In this case, it is the moderate non-conformists, who happen to agree more with others, who have the most influence.

Note that there is already a sense in which, holding constant their weight w_i, an extremist has a disproportionate influence on the mean: a 10 percent change in the quantity x_i - m changes the consensus mean m twice as much when that quantity x_i - m is twice as large.
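To spell out that arithmetic (a small check of my own, using only m = Σ_j w_j x_j with everyone else's position held fixed): moving x_i by d moves m by w_i d, so changing the gap x_i - m by a fraction f requires d(1 - w_i) = f(x_i - m), and the induced shift in m is f w_i (x_i - m)/(1 - w_i), linear in the gap.

```python
def mean_shift(w_i: float, gap: float, frac: float = 0.10) -> float:
    """Shift in the consensus mean m when person i moves x_i just
    enough to change their gap (x_i - m) by the fraction `frac`,
    everyone else held fixed.  From m = sum_j w_j x_j:
    dm = w_i * dx and dx - dm = frac * gap,
    so dm = frac * w_i * gap / (1 - w_i)."""
    return frac * w_i * gap / (1.0 - w_i)
```

Doubling the gap doubles the resulting shift in m, as claimed.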