Fine Grain Futarchy Zoning Via Harberger Taxes

“Futarchy” is my proposed system of governance, which approves a policy change when conditional prediction markets give a higher expected outcome conditional on that change. In a city setting, one might be tempted to use a futarchy where the outcome is the total property value of all land in and near that city. After all, if people don’t like being in this city, and are free to move elsewhere, city land won’t be worth much; the more attractive a city is as a place to be, the more its property will be worth.

Yes, we have problems measuring property values. Property is only traded infrequently, sale prices show a marginal, not a total, value, much land is never offered for sale, sales prices are often obscured by non-cash contributions, and regulations and taxes change sales and use. (E.g., rent control.) In addition, we expect at least some trading noise in the prices of any financial market. As a result, simple futarchy isn’t much help for decisions whose expected consequences for property values are smaller than its price noise level. And yes, there are other things one might care about besides property values. But given how badly city governance often actually goes, we could do a lot worse than to just consistently choose policies that maximize a reasonable estimate of city property value. The more precise such property estimates can be, the more effective such a futarchy could be.

Zoning is an area of city policy that seems especially well suited to a futarchy based on total property value. After all, the main reason people say that we need zoning is because using some land in some ways decreases how much people are willing to pay to use other land. For example, people might not want their home next to a bar, liquor store, or sex toy store, and so are willing to pay less to buy (or rent) next to such a place. So choosing zoning rules to maximize total property value seems especially promising. (Similar mechanisms could also be used for other policies that limit or change property use.)

I’ve also written before favorably on Harberger taxes (which I once called “stability rents”). In this system, owners of land (and property tied to that land) must set and may continuously adjust a declared property “value”; they are taxed per unit time as a percentage of that momentary declared value, and must always agree to sell their property at their currently declared value. This system has great advantages in inducing property to be held by those who can gain the most value from it, including via greatly lowering the transaction costs of putting together big property packages. With this system, there’s no more need for eminent domain.
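To make these mechanics concrete, here is a minimal sketch in Python; the class name, the 7% tax rate, and the method names are my own illustrative assumptions, not part of any actual proposal:

```python
class HarbergerProperty:
    """A property under a Harberger tax: the owner posts a declared
    value, is taxed on it per unit time, and must sell at it."""

    def __init__(self, owner, declared_value, tax_rate=0.07):
        self.owner = owner
        self.declared_value = declared_value
        self.tax_rate = tax_rate  # fraction of declared value owed per year

    def tax_due(self, years):
        # Tax accrues continuously on the momentary declared value.
        return self.declared_value * self.tax_rate * years

    def set_value(self, who, new_value):
        # Only the current owner may adjust the declared value.
        assert who == self.owner
        self.declared_value = new_value

    def buy(self, buyer, payment):
        # Anyone may take the property by paying the declared value.
        assert payment >= self.declared_value
        seller, self.owner = self.owner, buyer
        return seller  # the seller receives the payment
```

A low declared value invites a forced sale, while a high one raises the tax bill, so owners are pushed toward declaring something near their true value.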

I’ve just noticed a big synergy between futarchy for zoning and Harberger taxes. The reason is that such taxes allow the creation of prices which support a much finer grain accounting of the net value of specific zoning changes. Let me explain.

First, Harberger taxes create a continuous declared value on each property all the time, not just a few infrequent sales prices. This creates a lot more useful data. Second, these declared values better approximate the value that people place on property; the higher an actual value, the higher an owner will declare his or her taxable value to be, to avoid the risk of someone taking it away. Thus the sum total of all declared property values is a decent estimate of total city property value. Third, it is possible to generalize the Harberger tax system to create zoning-conditional property ownership and prices.

That is, relative to current zoning rules, one can define a particular alternative zoning scenario, such as changing the zoning of a particular area from residential to commercial. For such a scenario, one might declare a particular period during which a decision will be made on it. Given a defined scenario, one can create conditional ownership: I own this property if (and when) this zoning change is made, but not otherwise. The usual ownership then becomes conditional on no zoning changes soon.

With conditional ownership, conditional owners can declare conditional values; you can buy my property under this condition if you pay this declared amount of conditional cash. For example, I might offer to make a conditional sale of my property for $100,000, and you might agree to that sale, but this sale only happens if a particular zoning change is approved. Taxes paid are based on the actual zoning scenario that obtains.

The whole Harberger tax system can be generalized to support such conditional trading and prices. In the simple system, each property has a declared value set by its owner, and anyone can pay that amount at any time to become the new owner. In the generalized system, each property also has a declared value for each approved alternative zoning scenario. By default, alternative declared values are equal to the ordinary no-zoning-change declared value, but property owners can set them differently if they want, either higher or lower. Anyone can make a scenario-conditional purchase of a property from its current (conditional) owner at its scenario-conditional declared value. Physical control of a property only transfers if and when that zoning scenario is actually approved. There’s probably a delay between announcing a new approved scenario and allowing conditional trades in it, to give time to set declared values.
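Here is a hedged sketch of this generalization, extending the toy class above; the scenario identifiers and default behavior are assumptions for illustration:

```python
class ConditionalHarbergerProperty(HarbergerProperty):
    """Adds a declared value and a conditional owner per approved
    zoning scenario; defaults equal the no-change declared value."""

    def __init__(self, owner, declared_value, tax_rate=0.07):
        super().__init__(owner, declared_value, tax_rate)
        self.scenario_values = {}  # scenario -> conditional declared value
        self.scenario_owners = {}  # scenario -> conditional owner

    def value_under(self, scenario):
        # By default, a scenario's value is the no-change value.
        return self.scenario_values.get(scenario, self.declared_value)

    def set_conditional_value(self, who, scenario, value):
        # Only the current conditional owner may set this value.
        assert who == self.scenario_owners.get(scenario, self.owner)
        self.scenario_values[scenario] = value

    def conditional_buy(self, buyer, scenario, payment):
        # Takes physical effect only if the scenario is approved.
        assert payment >= self.value_under(scenario)
        self.scenario_owners[scenario] = buyer

    def approve(self, scenario):
        # On approval, conditional ownership and value become actual.
        self.owner = self.scenario_owners.get(scenario, self.owner)
        self.declared_value = self.value_under(scenario)
        self.scenario_values.clear()
        self.scenario_owners.clear()
```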

Having declared values for all properties under all scenarios gives us even more data with which to estimate total city property value, and in particular helps with estimating the difference in total city property value due to a zoning change. To a first approximation, we can just add up all the conditional declared values, and compare that sum to one from the no-change declared values. If the former sum is consistently and clearly higher than the latter sum during the declared decision period for this proposal, that seems a strong argument for adopting this zoning proposal. At least if the news that this zoning proposal seems likely to be approved at current declared values has been spread widely enough to give owners sufficient time to express their actual conditional declared values.

Actually, to calculate the net property value difference that a zoning change makes, we need only sum over the properties whose conditional declared values differ from their no-change declared values. For small local zoning changes, this might only be a small number of properties within a short distance of the main changes. As a result, this system seems capable of giving useful advice on very small and local zoning changes, in dramatic contrast to a futarchy based on prices estimating total city property values. For example, it might even be able to say if a particular liquor store should be allowed at a particular location. As promised, this new system offers much finer grain accounting of the net value of specific zoning changes.
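Under the same toy assumptions, this accounting reduces to a sum over only the properties whose conditional values differ:

```python
def net_value_of_change(properties, scenario):
    """Net city property value gain from a zoning scenario: sum of
    (conditional - no-change) declared values. Properties whose
    conditional value equals their no-change value contribute zero,
    so only locally affected properties need be examined."""
    return sum(p.value_under(scenario) - p.declared_value
               for p in properties)
```

For example, if allowing a liquor store raises its own lot’s declared value by $50,000 but lowers two neighboring declared values by $15,000 each, the net effect is +$20,000, an argument for approval (these numbers are invented purely for illustration).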

Note that in this simple system, losers are not compensated by winners for zoning rule changes, even though we can actually identify winners and losers. There are probably variations on this system where winners do actually compensate losers via transfers, though I haven’t yet figured out how to arrange that. Simply doing a transfer based on declared values won’t work, as then declared values will be changed to include expected transfer amounts.

We are close to a workable system, but not quite there yet. This is because we face the problem of owners temporarily inflating their declared values conditional on a zoning change that they seek to promote. This might tip the balance to get a change approved, and then after approval they could cut their declared values back down to something reasonable, and only pay a small extra tax for that small decision period. The Harberger tax system has a strong discipline against declaring overly low values, but less so for overly high values.

A solution to this problem is to correct declared values using prices for purely financial assets that represent claims on all future tax revenue from the Harberger tax on a particular property. That is, each property will pay a tax over time; we could divert that revenue into a particular account, and an asset holder could own the right to spend a fraction of the funds from that account. Such assets could be bought and sold in financial markets, and could also be made conditional on particular zoning scenarios. As such assets are easy to create and duplicate, the usual speculation pressures should make it hard to distort these prices much in any direction.

A plan to temporarily inflate the declared value of a property shouldn’t do much to the market price for a claim to part of all future tax revenue from that property. So when conditional and no-change prices for such tax revenue assets are available regarding a property, it is probably better to use these (scaled by the right factor) instead of the declared property values on that property when calculating the net effect of a zoning change on city property values.
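One way to see the “right factor”: if we treat the tax stream as a perpetuity at a constant discount rate (a simplifying assumption of mine, not the post’s), an observed asset price can be inverted into an implied property value:

```python
def implied_value(asset_price, tax_rate=0.07, discount_rate=0.05):
    """Invert the perpetuity formula
        price = tax_rate * value / discount_rate.
    Speculators pricing claims on a property's future tax stream
    will largely ignore a brief inflation of its declared value,
    so this implied value is harder to manipulate."""
    return asset_price * discount_rate / tax_rate

# E.g., a tax-revenue claim trading at $140,000 implies a property
# value of $100,000 here, whatever the owner momentarily declares.
```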

So that’s the plan for using futarchy and Harberger taxes to pick zoning changes. Instead of just one declared value per property, we allow owners to specify declared values conditional on each approved zoning change scenario, and allow conditional purchases as well. By default, conditional values equal no-change values. During a scenario’s decision period, we add up its conditional values, and if those clearly and consistently exceed the no-change values, even after correcting suspicious inflations with tax revenue asset prices, then the zoning proposal should be adopted.

Thanks to Alex Tabarrok & Keller Scholl for their feedback.

Added 25Jan: One complaint people have about a Harberger tax is that owners would feel stressed to know that their property could be taken at any time. Here’s a simple fix. When someone takes your property at your declared value you can pay 1% of that value to get it back, if you do so quickly. But then you’d better raise your declared value or someone else could do the same thing the next day or week. You pay 1% for a fair warning that “your value is too low!”
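A sketch of this fix in terms of the toy class above; the fee fraction and the length of the reclaim window are parameters one would tune:

```python
RECLAIM_FEE = 0.01    # fraction of declared value paid to undo a taking
RECLAIM_DAYS = 7      # assumed window counting as "quickly"

def reclaim(prop, old_owner, days_since_sale, fee_paid):
    """The old owner pays a small fee to undo a recent forced sale;
    they should then raise their declared value, or someone else
    can simply repeat the taking."""
    assert days_since_sale <= RECLAIM_DAYS
    assert fee_paid >= RECLAIM_FEE * prop.declared_value
    prop.owner = old_owner
```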

Distant Future Tradeoffs

Over the last day on Twitter, I ran three similar polls. One asked:

Software design today faces many tradeoffs, e.g., getting more X costs less Y, or vice versa. By comparison, will distant future tradeoffs be mostly same ones, about as many but very different ones, far fewer (so usually all good features X,Y are feasible together), or far more?

Four answers were possible: mostly same tradeoffs, as many but mostly new, far fewer tradeoffs, and far more tradeoffs. The other two polls replaced “Software” with “Physical Device” and “Social Institution.”

I now see these four answers as picking out four future scenarios. A world with fewer tradeoffs is Utopian, where you can get more of everything you want without having to give up other things. In contrast, a world with many more tradeoffs is more Complex. A world where most of the tradeoffs are like those today is Familiar. And a world where the current tradeoffs are replaced by new ones is Radical. Using these terms, here are the resulting percentages:

The polls got from 105 to 131 responses each, with an average entry percentage of 25%, so I’m willing to believe differences of 10% or more. The most obvious results here are that only a minority foresee a familiar future in any area, and answers vary greatly; there is little consensus on which scenarios are more likely.

Beyond that, the strongest pattern I see is that respondents foresee more complexity, relative to a utopian lack of tradeoffs, at higher levels of organization. Physical devices are the most utopian, social institutions are the most complex, and software sits in the middle. The other possible result I see is that respondents foresee a less familiar social future. 

I also asked:

Which shapes the world more in the long run: the search for arrangements allowing better compromises regarding many complex tradeoffs, or fights between conflicting groups/values/perspectives?

In response, 43% said search for tradeoffs while 30% said value conflicts, and 27% said hard to tell. So these people see tradeoffs as mattering a lot.  

These respondents seriously disagree with science fiction, which usually describes relatively familiar social worlds in visibly changed physical contexts (and can’t be bothered to have an opinion on software). They instead say that the social world will change the most, becoming the most complex and/or radical. Oh brave new world, that has such institutions in it!

How Does Brain Code Differ?

The Question

We humans have been writing “code” for many decades now, and as “software eats the world” we will write a lot more. In addition, we can also think of the structures within each human brain as “code”, code that will also shape the future.

Today the code in our heads (and bodies) is stuck there, but eventually we will find ways to move this code to artificial hardware. At which point we can create the world of brain emulations that is the subject of my first book, Age of Em. From that point on, these two categories of code, and their descendant variations, will have near equal access to artificial hardware, and so will compete on relatively equal terms to take on many code roles. System designers will have to choose which kind of code to use to control each particular system.

When designers choose between different types of code, they must ask themselves: which kinds of code are more cost-effective in which kinds of applications? In a competitive future world, the answer to this question may be the main factor that decides the fraction of resources devoted to running human-like minds. So to help us envision such a competitive future, we should also ask: where will different kinds of code work better? (Yes, non-competitive futures may be possible, but harder to arrange than many imagine.)

To think about which kinds of code win where, we need a basic theory that explains their key fundamental differences. You might have thought that much has been written on this, but alas I can’t find much. I do sometimes come across people who think it obvious that human brain code can’t possibly compete well anywhere, though they rarely explain their reasoning much. As this claim isn’t obvious to me, I’ve been trying to think about this key question of which kinds of code win where. In the following, I’ll outline what I’ve come up with. But I still hope someone will point me to useful analyses that I’ve missed.

In the following, I will first summarize a few simple differences between human brain code and other code, then offer a deeper account of these differences, then suggest an empirical test of this account, and finally consider what these differences suggest for which kinds of code will be more cost-effective where.

Differences

The code in our heads is the product of learning over our lifetimes, inside a biological brain system that has evolved for eons. Though brain code was designed mainly for old problems and environments, it represents an enormous investment into a search for useful code. (Even if some parts seem simple, the whole system is not.) In contrast, the artificial code that we’ve been writing started from almost nothing a few decades ago.

Our brain code seems to come in a big tangled package that cannot easily be broken into smaller units that can usefully function independently. While it has identifiable parts, connections are dense everywhere; brains seem less modular than artificial code. Relatedly, brains seem much more robust to local damage, perhaps in part via having more redundancy. Brains seem designed for the case where communication is relatively fast and cheap within a brain, but between brains it is far more expensive, slow, and unreliable.

The code in our head does not take much advantage of many distinctions that we often use in artificial code. In our artificial systems, we gain many advantages by separating hardware from software, learning from doing, memory from processing, and memory addresses from content. But it seems that evolution just couldn’t find a way to represent and act on such distinctions.

Artificial code seems to “rot” more quickly. That is, as we adapt it to changing conditions, it becomes more fragile and harder to usefully change. As a result, most of the code we now use is not an edited version of the first code that accomplished each task. Instead, we repeatedly re-write similar systems over from scratch. In contrast, while the parts of brain code that we learn over a lifetime do seem to slowly rot, in that we become less mentally flexible with age, we see little evidence of rot in the older evolved brain systems that we use to learn.

Our brain code is designed for hardware with many parallel but slow computing units, while our artificial code has so far mostly been designed for fewer fast and more sequential units. That is, brains calculate many things all at once slowly, while most artificial code calculates one thing at a time.

Our brains do some pre-determined tasks quickly in parallel. Such as simultaneously recognizing both visual and auditory signs of a predator. However, when we humans work on the tasks that most display our versatile generality, the sort of tasks that are most likely to matter in the future, our brains mostly function both slowly and sequentially. That is, we accomplish such tasks by proceeding step by step, and at each step our whole brain works in parallel by adding up many small contributions to that step.

Even so, the power and generality that often results from this process is truly stunning, being far beyond anything we know how to achieve with artificial code, no matter how much hardware we use and how many man-years we devote to writing it. This generality is why human brains still earn the vast majority of “wages” in our economy. Artificial code is very useful but gets paid much less in the aggregate, and in this sense is still far less useful than brain code. (“AI” software, artificial software intended to more directly mimic brain software, earns a much smaller fraction of aggregate wages.)

While the code in our heads resulted largely from simple variation and selection of whole organisms, human brains use more directed processes to generate the artificial code that we write. For example, one common procedure is to repeatedly have a brain imagine the results of particular small sections of code being executed in particular contexts, and repeatedly edit code until such imagined executions produce desired outcomes. This is commonly interleaved with actual execution of trial code on test cases. This process works better with more modular code expressed in terms of logical concepts that have sharp boundaries and implications, as it is easier for our brains to predict what happens in such contexts. Some parts of artificial code are generated via statistical analysis of large datasets.

When we have invested in having a kind of code actually do a task, such investments tend to give an advantage to that kind of code in continuing to do that task. Also, since each type of code can more easily connect to, or coordinate with, other code of its own type, each type gains an advantage at a task when more of the tasks it must coordinate with are also done by that type. Thus each type of code has momentum, continuing on where it was, and it naturally clumps together, especially in the most highly clumped sections of the network of tasks.

Easy Implications

So what do these various considerations imply about which kinds of code win where? We can identify a few weak “all else equal” tendencies.

Brain code has at least a small temporary advantage on tasks that have been done by brain code lately, and that must coordinate with many other tasks done by brain code. The fact that brain code requires an entire brain when being general suggests that artificial code is more cost-effective on small simple problems where brains do not have special abilities. At least if hardware costs to run code are important relative to costs to write code. Artificial code also seems much cheaper to run when a relatively simple sequential algorithm is sufficient, as brain code uses a whole brain to execute simple sequential computations. Artificial code also has advantages when lots of fast precise communication is desired across scopes larger than a brain.

The fact that brain code was designed for old problems suggests that it has advantages on old and long-lasting problems, relative to new problems. On the one hand, brain code being old and old systems being less flexible suggests using artificial code when great adaptability is required beyond the range for which brains were designed. On the other hand, the fact that artificial code rots more quickly suggests that artificial code has advantages when problems or contexts change quickly, which would force new code soon in any case, but disadvantages for stable long-lasting tasks.

In addition to these relatively simple and shallow implications, I have found a somewhat deeper perspective that seems useful. Let me explain.

Code Principles

Two key principles for managing code are: abstraction and modularity. With abstraction, you cut redundancy, via doing the same tasks but replacing many similar systems with a smaller number of more abstracted systems. Abstracted systems are smaller, in a code-length sense, though they may cost more in hardware to execute their code. By avoiding redundancy, abstracted systems make more efficient use of efforts to modify them.

With modularity, you try to limit the dependencies between subsystems, so that changes to one subsystem force fewer changes to other subsystems. Better ways to integrate and organize systems, which better “carve nature at its joints”, allow more effective modularity, and thus fewer design change cascades, wherein a change in one place forces many changes elsewhere.
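A toy illustration of both principles (the example domain is mine, not the text’s):

```python
# Before abstraction: two near-duplicate routines, so any fix or
# improvement must be made twice.
def total_rent(props): return sum(p.rent for p in props)
def total_tax(props):  return sum(p.tax for p in props)

# After abstraction: one smaller system does the same tasks, so
# effort spent modifying it is used more efficiently.
def total(props, field):
    return sum(getattr(p, field) for p in props)

# Modularity: callers depend only on this narrow interface, so its
# internals (a loop today, a database query tomorrow) can change
# without forcing a cascade of changes elsewhere.
```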

It can take a lot of work to search for better design architectures that better facilitate modularity. Given the usual rate at which artificial code rots, only a limited amount of work here is justified. Sometimes systems that have partially rotted are “refactored”, by changing their high level architecture. Though expensive, such refactoring can cut the level of system rot, and is often cost-effective in terms of delaying a complete system rewrite. Better abstractions tend to promote better organization, which induces more effective, more-slowly-rotting modularity.

A Deep Difference

Today, humans wanting code to do a task will first search for close-enough pre-existing code that might be lightly modified to do the job. Absent that, they will think for a bit, open a blank screen, and start typing. They will connect their new code to existing code as convenient, but will be wary of creating too many dependencies that reduce modularity. They will think and search a bit for good ways to organize this new system, including good abstractions, but will put only limited effort into this. That is, to manage code complexity, humans tend to make new code that is highly modular, but not very well organized.

In contrast, as evolution designed and redesigned brains, it faced strong limits on the amount of brain hardware available. Brains take precious volume and are energetically expensive. And because evolution never managed to separate hardware and software, this hardware limit created strong limits on the amount of software possible. Limits far more restrictive than the memory limits imposed on humans who write code today. To add new software to brains, evolution could only a) add more hardware, b) delete old software, or c) seek more efficient representations and abstractions in order to save space.

Thus evolution just could not take the usual human approach of just opening a blank screen and writing new highly-modular but sloppily-organized code. Evolution instead had to keep searching hard for better ways to organize and integrate existing code, including better abstractions. This was a much more expensive process, but as it played out over eons it resulted in code that was much better organized and integrated, though less modular.

This perspective helps us to understand why brain code seems less modular than artificial code, why brain code doesn’t rot as fast and is more robust to damage, why it is harder to usefully break brain code into small units to do small tasks, why brain code is better at being more general, and why artificial code is more sequential. It also helps us understand why the usual focused process for having brains make artificial code works reasonably well: brains know enough to predict how small chunks of code will behave, but only when that code is relatively modular.

This perspective also helps us to understand why abstraction is one of the brain’s key organizing principles. As I’ve said, human brains

collect things at similar levels of abstraction. The rear parts of our brains tend to focus more on small near concrete details while the front parts of our brain tend to focus on big far abstractions. In between, the degree of abstraction tends to change gradually.

Another important if less well understood organizing principle of brains is to separate a left and a right brain, perhaps to separate systems of credit assignment that don’t mix well. That is, to separate bottom-up processing that searches for fewer big things to explain many details, from top-down processing that searches for details to best achieve abstract goals and to fill in details of abstract expectations.

In sum, a deeper perspective can help us to understand how brain and artificial code differ, and thus which kinds of code can win where: Brain code is better integrated and abstracted, but less modular, than artificial code.

Code-Cubed

Let me suggest a way to test this perspective, via data that should already be available, but which I haven’t yet found. In addition to brain code written by evolution, and artificial code written by brains, we can also consider “code-cubed”, i.e., code that is written by artificial code. This is “cubed” because it is written by code which is written by brain code, which is written by a non-code evolutionary process. Such code-cubed can obviously be written much more quickly than can ordinary code, at least at low levels of quality. But how else does it differ?

A large well-integrated brain that focuses its whole effort on thinking about particular chunks of code should produce in that artificial code a substantial degree of coherence and integration, at least on the scale of those chunks. However, when more modular and less well integrated artificial code writes code, that resulting code-cubed should be less well integrated. And as artificial code can write code much faster than can humans, and has plenty of empty memory available, artificial code will be tempted all the more to rely on new code and modularity to help manage complexity in the code it writes.

Thus this perspective suggests that code-cubed is even more modular, less well organized, and rots more quickly, than ordinary artificial code written by humans. For example, when we change the source code or compiler for a system, we then typically re-execute that compiler on the source code, in effect re-writing from scratch. We do this instead of trying to edit the old compiled code in order to match the new source or new compiler. Thus when we want code that can be usefully modified over longer periods of change, we should prefer ordinary artificial code to code-cubed.
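A toy instance of code written by code (the generator and its output are hypothetical): when the field list changes, we re-run the generator rather than hand-edit its output, just as we recompile rather than edit compiled code:

```python
def generate_accessors(fields):
    """Code writing code: emit the source of a class with one
    getter per field in the given list."""
    lines = ["class Record:"]
    for f in fields:
        lines.append(f"    def get_{f}(self): return self._{f}")
    return "\n".join(lines)

# If the 'source' field list changes, we regenerate from scratch
# instead of editing the previously emitted class.
exec(generate_accessors(["owner", "declared_value"]))
```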

What Wins Where

So in a future world where all types of code have access to the same cheap artificial hardware, and where competition pushes each application to use the type of code that is most locally cost-effective there, where should we expect to find brain code, where artificial code, and where code-cubed?

Brain code represents an enormous investment into a large tangled but well integrated and abstracted package that is hard to understand and modify, but has so far shown unparalleled power when applied to stable general broad tasks. This suggests that it may have a long future in applications that play to its strengths. (Long at least in terms of my usual favorite parameter: number of economic doubling times.)  Yes, eventually fully artificial code may become well-integrated, but if ems are possible before then, the descendants of brain code may then have become even better organized and designed.

Most human organizations today are hierarchical, with low level activity focused more on relatively narrow contexts that matter less, need faster responses, and evolve more rapidly. In contrast, higher level activity allows slower responses, has more stable considerations, and must consider broader scopes and implications. Most artificial code today is also hierarchical, with low level code that tends to have a more narrow focus and contexts that change more rapidly, compared to high level code that must consider a wider range of more stable contexts, inputs, and other systems. In both types of systems, lower level tasks are more naturally modular, depending on fewer other tasks.

As smaller more focused more modular tasks that change more rapidly are better suited to artificial code, while less modular tasks that must consider wider context are better suited to brain code, we should expect artificial code to be more common in low organization levels, and brain code to be more common at high organization levels. That is, brains will manage the big pictures, while artificial code manages details.

Among the tasks that humans do today, we can also distinguish more vs. less tangled tasks. Tangled tasks are closer to the more tangled center of a network of which tasks must coordinate with which other tasks. While tasks at higher organization levels do tend to be more tangled, some low level tasks are also highly tangled. Brains also have advantages in these more tangled tasks, and once they are entrenched in such tasks are harder to displace from them.

Youth As Abundance

Many technologies and business practice details have changed greatly over the last few centuries. And looking at the specifics of who did what when, much of this change looks like selection and learning. That is, people tried lots of things, some of these worked, and then others copied the winning practices. The whole pattern looks much like a hard to predict random walk.

Many cultural attitudes and values have also changed greatly over those same few centuries. However, the rate, consistency, and predictability of much of this change makes it hard to tell a similar story of selection and learning. This change instead looks more like how many of our individual human behaviors change over our lifespans – the execution of a previously developed strategy. We need not as individuals learn to explore more when young, and exploit more when old, if our genetic and cultural heritage can just tell us to make these changes.

The idea is that some key context, like wealth, has been changing steadily over the last few centuries, and our attitudes have changed steadily in response to that changing context. Just as individuals naturally change their behaviors as they age, cultures may naturally change their attitudes as they get rich. In addition to wealth, other plausibly triggering context factors include increasing health, peace, complexity, work structure, social group size, and alienation from nature.

Even if wealth isn’t the only cause, it seems a big cause, and it likely causes, and is caused by, other key causes. It also seems quite plausible for humanity to have learned to change our behavior in good times relative to bad times. Note that good time behavior overlaps with, but isn’t quite the same as, how individual behavior changes as individuals get rich, but their society doesn’t. The correlation between individual behavior and wealth is probably influenced a lot by selection: some behaviors tend more to produce individual wealth. Selection has less to do with how a society’s behaviors change as it gets rich.

I’ve written before on a forager vs. farmer account of attitude changes over the last few centuries. Briefly, the social pressures that turned foragers into farmers depended a lot on fear, conformity, and religion, which are complemented by poverty. As we get rich those pressures feel less compelling to us, and we create fewer such pressures on others. I think this forager-farmer story is helpful, but in this post I want to outline another complementary story: neoteny. One of the main ways that humans are different from other animals is our neoteny; we retain youthful features and behaviors longer into life. This helps us to be more flexible and also learn more.

Being young is in many ways like living in a rich society. Young people have more physical energy, face less risk of physical damage, and have fewer responsibilities. Which is a lot like being rich. In a rich society you tend to live longer, making you effectively younger at any given calendar age. And when young, it makes more sense to be more playful, to learn and explore new possibilities rather than just exploit old skills and possibilities, and to invest more in social connections and in showing off, such as via art, music, stories, or sport. All these also make more sense in good times, when resources are plentiful.

If living in a rich society is a lot like being young, then it makes sense to act more youthful during good times. And so humanity might have acquired the heuristic of thinking and acting more youthful in good times. And that right there can help explain a lot of changes in attitudes and behaviors over the last few centuries. I don’t think it explains quite as many as the back-to-foragers story, but it is quite plausible a priori. Not that the forager story is that implausible, but still, priors matter.

From 2006 to 2009, Bruce Charlton wrote a series of articles exploring the idea that people are acting more youthful today:

A child-like flexibility of attitudes, behaviours and knowledge is probably adaptive in modern society because people need repeatedly to change jobs, learn new skills, move to new places and make new friends. (more)

Yes, the world changes more quickly in the industrial era than it did in the farming era, but that rate of change hasn’t increased much in the last century. So this one-time long-ago change in the social rate of change seems a poor explanation for the slow steady trend toward more youthful behavior we’ve seen over the last century. More neoteny as a response to increasing wealth makes more sense to me.

When Wholes Become Parts

Here’s a nice simple general principle to describe many kinds of systems. When once self-sufficient wholes join together to become parts of a new whole, the parts get simpler and also more different from one another:

The emergence of a higher level entity with functional capabilities is ordinarily accompanied by the loss of part types within the lower-level organisms that constitute it. Thus … cells in multicellular organisms will have fewer part types than free-living protists. … The lower-level organisms are transformed into differentiated parts within the higher-level entity. Along with this, as size increases, parts emerge at an intermediate scale, between the lower level organisms and the higher-level entity. …

In the evolution of multicellularity, cells are transformed from organisms into differentiated parts. Then, as the size of the multicellular entity increased, cells combined to form larger parts, intermediate in scale between a cell and the multicellular organism as a whole. … Cells in metazoans and land plants have fewer part types on average than free-living protists. … found a power law relationship between size and number of cell types in multicellular organisms. Also, the degree of morphological, physiological, and/or behavioral differentiation in insect societies increases with colony size.

From: Daniel McShea and Carl Anderson. (2005) “The Remodularization of the Organism”, in Werner Callebaut and Diego Rasskin-Gutman, eds., Modularity: Understanding the Development and Evolution of Natural Complex Systems, pp. 185-206, MIT Press, May.

That is, while each cell might in essence need legs, eyes, a mouth, and a stomach, when cells join together they can each live without such things, and they may specialize in order to become part of a leg, eye, etc. for the new organism.

This has an obvious implication for our future. As we humans join together into larger more complex social organizations, our descendants will likely also become simpler and more differentiated. Of course there are limits on how fast these things can happen; even today the cells in each organism have a great many parts, and remain similar to each other in a great many ways. Change will likely be much faster after ems become possible.

Have A Thing

I’m not into small talk; I prefer to talk to people about big ideas. I want to talk big ideas to people who are smart, knowledgeable, and passionate about big ideas, and where it seems that convincing them about something on a big idea has a decent chance of changing their behavior in important ways.

Because of this, I prefer to talk to people who “have a thing.” That is, who have some sort of abstract claim (or question) which they consider important and neglected, for which they often argue, and which intersects somehow with their life hopes/plans. When they argue, they are open to and will engage counter-arguments. They might push this thing by themselves, or as part of a group, but either way it matters to them and they will represent it personally.

People with a thing allow me to engage a big idea that matters to someone, via someone who has taken the time to learn a lot about it, and who is willing to answer many questions about it. Such a person creates the hope that I might change their actions by changing their mind, or that they might convince me to change my life hopes/plans. I may convince them that some variation is more promising, or that some other thing fits better with the reasons they give. Or I might know of a resource, such as a technique or a person, that could help them with their thing.

Yes, in part this is all because I’m a person with many things. So I can relate better to such people. And after I engage their thing, there’s a good chance that they will listen to and engage one of my things. Even so, having a thing is handy for many people who are different from me. It lets you immediately engage many people in conversation in a way that makes them likely to remember you, and be impressed by you if you are in fact impressive.

Yes, having a thing can be off-putting to the sort of people who like to keep everything mild and low-key, and make sure that their talk has little risk of convincing them to do something that might seem weird or passionate. But I consider this off-putting effect to be largely a gain, in sorting out the sort of people I’m less interested in.

Now having a thing won’t save you if you are a fool or an idiot. In fact, it might make that status more visible. But if you doubt you are either, consider having a thing.

Added 11p: Of course beware of two failure modes of people with things: 1) not noticing when others don’t want to hear about your thing, and 2) not listening enough to criticism that shows you are wrong about your thing.

Gender Is Big

Consider the possibility of discrimination against the left-handed. Such discrimination might make efficiency sense in contexts where expensive-to-change complementary equipment is designed for the right-handed. Such as pilots. In other contexts, one might justify mild discrimination based on weak correlations, such as between handedness and intelligence, gender, and health. But these other factors tend to be directly observable, and correlations are weak. So stronger correlations of handedness with success, especially where not explained by these other correlations, are suspicious.

What do we suspect? One possibility is a political equilibrium wherein an established group of insiders arbitrarily favors people like them against outsiders. We might especially suspect this if we saw people rewarding others for discriminating against the left-handed, as something like this would need to be part of an insiders-favoring political equilibrium. It is plausible, though not obvious, that disrupting such an insider-favoring equilibrium is good for the world. So we might consider prohibiting or at least hindering discrimination against left-handers. (One might also just think we are in a bad choice out of multiple equilibria, and not blame insiders so much.)

This all makes sense as a way to think about discrimination for what are arguably relatively minor, or small, features such as height or hair-length. But now consider gender. It seems to me that the above framework is far less useful for gender, as gender is not remotely a small feature.

For most people, their main long-term spouse is the most important relationship in their life. And most care greatly about the gender of that spouse. It isn’t just ordinary “straight cis” people who think this way. Gay/lesbians also mostly agree that the genders differ greatly in important features, and they have a strong preference for one end of the gender spectrum. In part because others care about gender, most people also care greatly about how others see their own gender. Most transgender people also care a lot (almost by definition) about how others see their gender; they just make unusual choices about that. So most everyone agrees that most everyone cares a lot about the genders of their associates, and the genders that others assign to them.

Some may postulate gender as an innate atomic feature of the universe of human concerns, so that when we desire that an associate have a certain gender that has nothing to do with their many other associated features. But that seems crazy to me. Much more plausibly, what we like about a gender is strongly tied to the set of associated features that tend to go along with that gender. That is, we like the package of features that “are” a gender. In this case, the fact that we strongly care about genders suggests that different genders differ greatly in many features that are important to us. These features probably include habits, attitudes, preferences, and abilities. Gender is big, and it matters a lot.

Because gender is big, we expect it to correlate substantially with many features that we care about when assigning people to roles. But this means that even strong correlations of gender with success in particular roles are at best only a weak cause for suspicion about insider-favoring or other bad equilibria. There are just too many other good reasons to expect to see large gender-role correlations.

Now you might argue that today’s large correlations between gender and important features are largely a legacy resulting from a bad past. And change takes time. So creating pressures for low gender-role correlations today will push us to move faster toward a better future, even if that costs us today in terms of matching people to roles well.

However, the prospects for a world anytime soon where different genders correlate little with other important features seem to me quite low. (As low as the chance that communist governments would rapidly “wither away” to produce “true” communism.) Yes, gender correlations have changed across societies and across time, but almost always there have been strong correlations between gender and important things. The fact that societies with weaker gender roles have more strongly gendered personalities also (weakly) suggests to me that we fundamentally want genders to differ, even if we aren’t that stuck on most particular differences. We want gender to be big; we want to love and be loved by people that differ from us in big known ways.

Thus I don’t think we can well justify anti-discrimination efforts today that suppress gender-role correlations, at least in terms of disrupting insider-favoring or other bad equilibria, or in terms of promoting a low-gender-differences future. But I do see some other justifications, which I may write about in future posts.

It seems to me that our public discussion about gender has for a while been somewhat in denial about the likely long continuation of strong gender correlations with important features. If the genders continue to act differently on average, then observers will naturally form gendered expectations based on such behavior. That is, there will be gender roles. We can and should talk about what we want those gender roles to be, but we can’t do that until we admit that such roles will exist.

Replication Markets Team Seeks Journal Partners for Replication Trial

An open letter, from myself and a few colleagues:

Recent attempts to systematically replicate samples of published experiments in the social and behavioral sciences have revealed disappointingly low rates of replication. Many parties are discussing a wide range of options to address this problem.

Surveys and prediction markets have been shown to predict, at rates substantially better than random, which experiments will replicate. This suggests a simple strategy by which academic journals could increase the rate at which their published articles replicate. For each relevant submitted article, create a prediction market estimating its chance of replication, and use that estimate as one factor in deciding whether to publish that article.

We the Replication Markets Team seek academic journals to join us in a test of this strategy. We have been selected for an upcoming DARPA program to create prediction markets for several thousand scientific replication experiments, many of which could be based on articles submitted to your journal. Each market would predict the chance of an experiment replicating. Of the already-published experiments in the pool, approximately one in ten will be sampled randomly for replication. (Whether submitted papers could be included in the replication pool depends on other teams in the program.) Our past markets have averaged 70% accuracy; the work is listed at the Science Prediction Market Project page, and has been published in Science, PNAS, and Royal Society Open Science.

While details are open to negotiation, our initial concept is that your journal would tell potential authors that you are favorably inclined toward experiment article submissions that are posted at our public archive of submitted articles. By posting their article, authors declare that they have submitted their article to some participating journal, though they need not say which one. You tell us when you get a qualifying submission, we quickly tell you the estimated chance of replication, and later you tell us of your final publication decision.

At this point in time we seek only an expression of substantial interest that we can take to DARPA and other teams. Details that may later be negotiated include what exactly counts as a replication, whether archived papers reveal author names, how fast we respond with our replication estimates, what fraction of your articles we actually attempt to replicate, and whether you privately give us any other quality indicators obtained in your reviews to assist in our statistical analysis.

Please RSVP to: Angela Cochran, PM, acochran@replicationmarkets.com, 571 225 1450

Sincerely, the Replication Markets Team

Thomas Pfeiffer (Massey University)
Yiling Chen, Yang Liu, and Haifeng Xu (Harvard University)
Anna Dreber Almenberg & Magnus Johannesson (Stockholm School of Economics)
Robin Hanson & Kathryn Laskey (George Mason University)

Added 2p: We plan to forecast ~8,000 replications over 3 years, ~2,000 within the first 15 months.  Of these, ~5-10% will be selected for an actual replication attempt.

Umpires Shouldn’t Be On Teams

There are many complex issues to consider when choosing between public vs private provision of a good or service. But one issue seems to me to clearly favor the private option: rights. If you want to make rights-enforcing rules that are actually followed, you are better off having courts or regulators enforcing rules on a competitive private industry.

Consider this excellent 2015 AJPS paper:

Many regulatory policies—especially health, safety, and environmental regulations—apply to government agencies as well as private firms. … Unlike profit‐maximizing firms, government agencies face contested, ambiguous missions and are politically constrained from raising revenue to meet regulatory requirements. At the same time, agencies do not face direct competition from other firms, rarely face elimination, and may have sympathetic political allies. Consequently, the regulator’s usual array of enforcement instruments (e.g., fines, fees, and licensure) may be potent enough to alter behavior when the target is a private firm, but less effective when the regulated entity is a government agency. …

The ultimate effect of regulatory policy turns not on the regulator’s carrots and sticks, but rather on the regulated agency’s political costs of compliance with or appeal against the regulator, and the regulator’s political costs of penalizing another government. One implication of this theory is that public agencies are less likely than similarly situated private firms to comply with regulations. Another implication is that regulators are likely to enforce regulations less vigorously against public agencies than against private firms because such enforcement is both less effective and more costly to the regulator. …

We find that public agencies are more likely than private firms to violate the regulatory requirements of the [US] Clean Air Act and the Safe Drinking Water Act. Moreover, we find that regulators are less likely to impose severe punishment for noncompliance on public agencies than on private firms. (more)

See also:

There is evidence … that [public entities] are [better] able to delay or avoid paying fines when penalties are assessed. (more)

Public sector employees experienced a higher incidence rate of work-related injuries and illnesses than their private industry counterparts. (more)

I’ve tried but failed to find stats on public vs private relative rates of abuse, harassment, bribery, embezzlement, nepotism, and test cheating. (Can you find more?) But I’d bet they’d also show government agencies violating such rules at higher rates.

This perspective seems very relevant to criminal justice reform. Our status quo criminal justice system embodies enormous inefficiencies and injustices, but when I propose changes that involve larger roles for private actors, I keep hearing “yes that might be more efficient, but won’t private actors create more rights violations?” But the above analysis suggests that this gets the comparison exactly wrong!

Yes of course, if you compare a public org that has a rule with a private actor to whom no such rule applies, you may get more rule “violations” with the latter. And yes, enforcement of central rules can be expensive and limiting, so sometimes it makes sense to use private competition as a substitute for central rules, and so impose fewer rules on private actors. But once we allow ourselves to choose which rules to impose, private orgs seem just overall better for enforcing rules.

Note that when a government agency directly contracts with a specific private organization, using complex flexible terms and monitoring, as in military procurement, the above theory predicts that this contractor will look much more like an extension of the government agency for the purpose of rule enforcement. Rule enforcement gains come instead from private orgs that compete to be chosen by the public, or that compete to win simple public prizes where public orgs do not have so much discretion over terms that they can pick winners, but get blamed for rights violations of losers.

It is these independent private actors that I seek to recruit to reform criminal justice. We will get more, not less, enforcement of rules that protect rights, when the umpires who enforce rights are less affiliated with the teams who can violate them.

Most Progress Not In Morals

Everyone without exception believes his own native customs, and the religion he was brought up in, to be the best. (Herodotus, 440 BC)

Over the eons, we humans have greatly increased our transportation abilities. Long ago, we mostly walked everywhere. Then over time, we accumulated more ways to move ourselves and goods faster, cheaper, and more reliably, from boats to horses to gondolas to spaceships. Today, for most points A and B, our total cost to move from A to B is orders of magnitude cheaper than it would be via walking.

Even so, walking remains an important part of our transport portfolio. While we are able to move people who can’t walk, such as via wheelchairs, that is expensive and limiting. Yet while walking still matters, improvements in walking have contributed little to our long term gains in transport abilities. Most gains came instead from other transport methods. Most walking gains even came from other areas. For example, we can now walk better due to better boots, lighting, route planners, and paved walkways. Our ability to walk without such aids has improved much less.

As with transport, so with many other areas of life. Our ancient human abilities still matter, but most gains over time have come from other improvements. This applies to both physical and social tech. That is, to our space-time arrangements of physical materials and objects, and also to our arrangements of human actions, info and incentives.

Social scientists often use the term “institutions” broadly to denote relatively stable components of social arrangements of actions, info, and incentives. Some of the earliest human institutions were language and social norms. We have modestly improved human languages, such as via expanded syntax forms and vocabulary. And over history humans have experimented with a great range of social norms, and also with new ways to enforce them, such as oaths, law, and CCTV.

We still rely greatly on social norms to manage small families, work groups, and friend groups. As with walking, while we could probably manage such groups in other ways, doing so would be expensive and limiting. So social norms still matter. But as with our walking, relatively little of our gains over time has come from improving our ancient institution of social norms.

When humans moved to new environments, such as marshes or arctic tundra, they had to adapt their generic walking methods to these new contexts. No doubt learning and innovation were involved in that process. Similarly, we no doubt continue to evolve our social norms and their methods of enforcement to deal with changing social contexts. Even so, social norm innovation seems a small part of total institutional innovation over the eons.

With walking, we seem well aware that walking innovation has only been a small part of total transport innovation. But we humans were built to at least pretend to care a lot about social norms. We consider opinions on and adherence to norms, and the shared values they support, to be central to judging who are “good” or “bad” people, and who counts as “our people”. So we make norms central to our political fights. And we put great weight on norms when evaluating which societies are good, and whether the world has gotten better over time.

Thus each society tends to see its own origin, and the changes which led to its current norms, as enormously important and positive historical events. But if we stand outside any one society and consider the overall sweep of history, we can’t automatically count these as big contributions to long term innovation. After all, the next society is likely to change norms yet again. Most innovation is in accumulating improvements in all those other social institutions.

Now it is true that we have seen some consistent trends in attitudes and norms over the last few centuries. But wealth has also been rising, and having human attitudes be naturally conditional on wealth levels seems a much better explanation of this fact than the theory that after a million years of human evolution we suddenly learned how to learn about norms. Yes it is good to adapt norms to changing conditions, but as conditions will likely change yet again, we can’t count that as long term innovation.

In sum: most innovation comes in additions to basic human capacities, not in tweaks to those original capacities. Most transport innovation is not in improved ways to walk, and most social institution innovation is not in better social norms. Even if each society would like to tell itself otherwise.