Long Legacies And Fights In An Uncaring Universe

What can one do today to have a big predictable influence on the long-term future? In this post I’ll use a simple decision framework, wherein there is no game or competition, one is just trying to influence an uncaring universe. I’ll summarize some points I’ve made before. In my next post I’ll switch to a game framework, where there is more competition to influence the future.

Most random actions fail badly at this goal. That is, most parameters are tied to some sort of physical, biological, or social equilibrium, where if you move a parameter away from its current setting, the world tends to push it back. Yes there are exceptions, where a push might “tip” the world to a new rather different equilibrium, but in spaces where most points are far from tipping points, such situations are rare.

There is, however, one robust way to have a big influence on the distant future: speed up or slow down innovation and growth. The extreme version of this is preventing or causing extinction; while quite hard to do, this has enormous impact. Setting that aside, as the world economy grows exponentially, any small change to its current level is magnified over time. For example, if one invents something new that lasts, then that future world is more able to make more inventions faster, etc. This magnification grows into the future until the time when growth rates must slow down, such as when the solar system fills up, or when innovations in physical devices run out. By speeding up growth, you can prevent the waste of all the negentropy that is, and will continue to be, destroyed until our descendants manage to wrest control of such processes.
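
To make the magnification concrete, here is a minimal sketch in Python; the growth rate, horizon, and boost size are all made-up illustrative numbers, not estimates.

```python
# Sketch: a lasting innovation worth a small fraction of today's economy
# compounds along with the economy until growth must slow.
# The 3% rate and 300-year horizon are illustrative assumptions.
import math

growth_rate = 0.03          # assumed annual growth rate of world economy
years_until_slowdown = 300  # assumed years until growth must slow

boost_today = 0.001  # innovation adds 0.1% to today's economic level
magnified = boost_today * math.exp(growth_rate * years_until_slowdown)

print(f"A {boost_today:.1%} boost today grows to about {magnified:.0f}x "
      f"the size of today's whole economy by the slowdown.")
```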

Alas making roughly the same future happen sooner versus later doesn’t engage most people emotionally; they are much more interested in joining a “fight” over what character the future will take at any given size. One interesting way to take sides while still leveraging growth is to fund a long-lived organization that invests and saves its assets, and then later spends those assets to influence some side in a fight. The fact that investment rates of return have long exceeded growth rates suggests that one could achieve disproportionate influence in this way. Oddly, few seem to try this strategy.
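
The arithmetic behind this invest-and-wait strategy is simple; here is a minimal sketch, with the rates of return and growth as assumptions rather than estimates.

```python
# Sketch: a fund earning return r in an economy growing at g gains
# relative weight at rate r - g. The 5% and 3% rates are assumptions.
import math

r = 0.05      # assumed long-run investment rate of return
g = 0.03      # assumed long-run economic growth rate
years = 200

relative_gain = math.exp((r - g) * years)
print(f"After {years} years, the fund's weight relative to the "
      f"economy has grown by a factor of about {relative_gain:.0f}.")
```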

Another way to leverage growth to influence future fights is via fertility: have more kids who themselves have more kids, etc. While this is clearly a time-tested strategy, we are in an era with a puzzling disinterest in fertility, even among those who claim to seek long-term influence.

Another way to join long-term fights is to add your weight to an agglomeration process whereby larger systems slowly gain over smaller ones. For example, if the nations, cities, languages, and art genres with more participants tend to win over time, you can ally with one to help tip the balance. Of course this influence only lasts as long as these things themselves last. For example, if you push for short vs. long hair in the current fashion change, that effect may only last until the next fashion cycle.

Pushing for the creation of a particular world government seems an extreme example of this agglomeration effect. A world government might last a very long time, and retain features from those who influenced its source nations and early structure.

One way to have more influence on fights is to influence systems that are plastic now but will become more rigid later. This is the logic behind persuading children while they are still ignorant and gullible, before they become ignorant and stubbornly unchanging adults. Similarly one might want to influence a young but growing firm or empire. This is also the logic behind trying to be involved in setting patterns and standards during the early days of a new technology. I heard people say this explicitly back when Xanadu was trying to influence the future web. People who influenced the early structure of AM radio and FAX machines had a disproportionate influence, though such influence greatly declines when such systems themselves later decline.

The farming and industrial revolutions were periods of unusually high amounts of change, and we may encounter another such revolution in a century or so. If so, it might be worth saving and collecting resources in preparation for the extra influence available during this next great revolution.

Intellectual Status Isn’t That Different

In our world, we use many standard markers of status. These include personal connections with high status people and institutions, power, wealth, popularity, charisma, intelligence, eloquence, courage, athleticism, beauty, distinctive memorable personal styles, and participation in difficult achievements. We also use these same status markers for intellectuals, though specific fields favor specific variations. For example, in economics we favor complex game theory proofs and statistical analyses of expensive data as types of difficult achievements.

When the respected intellectuals for topic X tell the intellectual history of topic X, they usually talk about a sequence over time of positions, arguments, and insights. Particular people took positions and offered arguments (including about evidence), which taken together often resulted in insight that moved a field forward. Even if such histories do not say so directly, they give the strong impression that the people, positions, and arguments mentioned were selected for inclusion in the story because they were central to causing the field to move forward with insight. And since these mentioned people are usually the high status people in these fields, this gives the impression that the main way to gain status in these fields is to offer insight that produces progress; the implication is that correlations with other status markers are mainly due to other markers indicating who has an inclination and ability to create insight.

Long ago when I studied the history of science, I learned that these standard histories given by insiders are typically quite misleading. When historians carefully study the history of a topic area, and try to explain how opinions changed over time, they tend to credit different people, positions, and arguments. While standard histories tend to correctly describe the long term changes in overall positions, and the insights which contributed to those changes, they are more often wrong about which people and arguments caused such changes. Such histories tend to be especially wrong when they claim that a prominent figure was the first to take a position or make an argument. One can usually find lower status people who said basically the same things before. And high status accomplishments tend to be given more credit than they deserve in causing opinion change.

The obvious explanation for these errors is that we are hypocritical about what counts for status among intellectuals. We pretend that the point of intellectual fields is to produce intellectual progress, and to retain past progress in people who understand it. And as a result, we pretend that we assign status mainly based on such contributions. But in fact we mostly evaluate the status of intellectuals in the same way we evaluate most everyone, not changing our markers nearly as much as we pretend in each intellectual context. And since most of the things that contribute to status don’t strongly influence who actually offers positions and arguments that result in intellectual insight and progress, we can’t reasonably expect the people we tend to pick as high status to typically have been very central to such processes. But there’s enough complexity and ambiguity in intellectual histories to allow us to pretend that these people were very central.

What if we could make the real intellectual histories more visible, so that it became clearer who caused what changes via their positions, arguments, and insight? Well then fields would have the two usual choices about how to respond to hypocrisy exposed: raise their behaviors to meet their ideals, or lower their ideals to meet their behaviors. In the first case, the desire for status would drive much stronger efforts to actually produce insights that drive progress, making plausible much faster rates of progress. In this case it could well be worth spending half of all research budgets on historians to carefully track who contributed how much. The factor of two lost in all that spending on historians might be more than compensated for by intellectuals focusing much more strongly on producing real insight, instead of on the usual high-status-giving imitations.

Alas I don’t expect many actual funders of intellectual activity today to be tempted by this alternative, as they also care much more about achieving status, via affiliation with high status intellectuals, than they do about producing intellectual insight and progress.

Bottom Boss Prediction Market

Sheryl Sandberg and Rachel Thomas write:

Women continue to be vastly underrepresented at every level. For women of color, it’s even worse. Only about one in five senior leaders is a woman, and just one in twenty-five is a woman of color. Progress isn’t just slow—it’s stalled.

Women are doing their part. They’ve been earning more bachelor’s degrees than men for over 30 years. They’re asking for promotions and negotiating salaries as often as men. And contrary to conventional wisdom, women are not leaving the workforce at noticeably higher rates to care for children—or for any other reason. …

At the entry level, when one might expect an equal number of men and women to be hired, men get 54% of jobs, while women get 46%. At the next step, the gap widens. Women are less likely to be hired and promoted into manager-level jobs; for every 100 men promoted to manager, only 79 women are. As a result, men end up holding 62% of manager positions, while women hold only 38%.

The fact that men are far more likely than women to get that first promotion to manager is a red flag. It’s highly doubtful that there are significant enough differences in the qualifications of entry-level men and women to explain this degree of disparity. More probably, it’s because of performance bias. Research shows that both men and women overestimate men’s performance and underestimate women’s. …

By the manager level, women are too far behind to ever catch up. … Even if companies want to hire more women into senior leadership—and many do—there are simply far fewer of them with the necessary qualifications. The entire race has become rigged because of those unfair advantages at the start. …

Companies need to take bold steps to make the race fair. This begins with establishing clear, consistent criteria for hiring and reviews, because when they are based on subjective impressions or preferences, bias creeps in. Companies should train employees so they understand how unconscious bias can affect who’s hired and promoted—and who’s not. (more)

I don’t hold much hope for cutting all subjective judgements from hiring. Most jobs are just too complicated to reduce all useful candidate quality signals to objective measures. But I do have hopes of creating less biased subjective judgements, via (you guessed it) prediction markets. In the rest of this post, I’ll outline a vision for how that could work.

If the biggest problem is that not enough women are promoted to their first-level (bottom boss) management position, then let’s make prediction markets focused on that problem. For whatever consortium of firms join my proposed new system, let them post to that consortium a brief description of all candidates being considered for each of their open first-level management jobs. Include gender and color as two of the descriptors.

Then let all employees within that consortium bet, for any job candidate X, on the chance that if candidate X is put into a particular management job, then that candidate will be promoted to a one-level-higher management job within Y (five?) years. (Each firm decides what higher level jobs count, at that firm or another firm. And perhaps the few employees likely to actually hire those higher-level managers should not be allowed to bet on anyone whom they might hire.)

Firms give each consortium employee say $100 to bet in these markets, and let them keep any winnings. (Firms perhaps also create a few specialist traders with much larger stakes and access to deep firm statistics on hiring and performance.) Giving participants their stake avoids anti-gambling law problems, and focusing on first level managers avoids insider trading law problems.

It would also help to give participants easy ways to bet on all pools of job candidates with particular descriptors. Say all women, or all women older than thirty. Then participants who thought market odds to be biased against identifiable classes of people could easily bet on such beliefs, and correct for such biases. Our long experience with prediction markets suggests that such biases would likely be eliminated; but if not, at least participants would be financially rewarded and punished for seeing versus not seeing the light.
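
Here is a minimal sketch of how one such market might be run, using a logarithmic market scoring rule (LMSR) market maker; the candidate name, liquidity parameter, and trade size are illustrative assumptions, and the called-off-if-never-hired condition is handled by refunding bets.

```python
# Sketch: one "promoted within 5 years, if hired" market per candidate,
# run by an LMSR automated market maker. Numbers are illustrative.
import math

class ConditionalPromotionMarket:
    def __init__(self, candidate, liquidity=100.0):
        self.candidate = candidate
        self.b = liquidity
        self.q = {"promoted": 0.0, "not promoted": 0.0}  # shares outstanding

    def _cost(self):
        return self.b * math.log(sum(math.exp(v / self.b)
                                     for v in self.q.values()))

    def price(self, outcome):
        """Current market probability of the outcome, given hiring."""
        total = sum(math.exp(v / self.b) for v in self.q.values())
        return math.exp(self.q[outcome] / self.b) / total

    def buy(self, outcome, shares):
        """Returns dollar cost; all bets refunded if candidate never hired."""
        before = self._cost()
        self.q[outcome] += shares
        return self._cost() - before

m = ConditionalPromotionMarket("Candidate X")
cost = m.buy("promoted", 30)  # an employee spends part of her $100 stake
print(f"paid ${cost:.2f}; market now estimates a "
      f"{m.price('promoted'):.0%} chance of promotion if hired")
```

A pool bet on a descriptor like “all women” would then just be a basket of small bets, one in each matching candidate’s market.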

It seems reasonable for these firms to apply modest pressure on those filling these positions to put substantial weight on these market price estimates about candidates. Yes, there may be hiring biases at higher levels, but if the biggest problem is at the bottom boss level then these markets should at least help. Yes, suitability for further promotions is not the only consideration in picking a manager, but it is an important one, and it subsumes many other important considerations. And it is a nice clearly visible indicator that is common across many divisions and firms. It is hard to see firms going very wrong because they hired managers a bit more likely to be promoted if hired.

In sum: if the hiring of bottom bosses is now biased against women, but a prediction market on promotion-if-hired would be less biased, then pushing hirers to put more weight on these market estimates should result in less bias against women. Compared to simply pushing hirers to hire more women, this approach should be easier for hirers to accept, as they’d be more willing to acknowledge the info value of the market estimates.

Bets As Signals of Article Quality

On October 15, I talked at the Rutgers Foundations of Probability Seminar on Uncommon Priors Require Origin Disputes. While visiting that day, I talked to seminar host Harry Crane about how the academic replication crisis might be addressed by prediction markets, and by his related proposal to have authors offer bets supporting their papers. I mentioned to him that I’m now part of a project that will induce a great many replication attempts and set up prediction markets about them beforehand, and that we would love to get journals to include our market prices in their review process. (I’ll say more about this when I can.)

When the scheduled speaker for the next week’s slot of the seminar cancelled, Crane took the opening to give a talk comparing our two approaches (video & links here). He focused on papers for which it is possible to make a replication attempt, and said “We don’t need journals anymore.” That is, he argued that we should not use which journal is willing to publish a paper as a signal of paper quality, but that we should instead use the signal of what bet authors offer in support of their paper.

That author betting offer would specify what would count as a replication attempt, and as a successful replication, and include an escrowed amount of cash and betting odds which set the amount a challenger must put up to try to win that escrowed amount. If the replication fails, the challenger wins both of these amounts, minus the cost of doing the replication attempt; if it succeeds, the authors win the challenger’s stake.
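
A quick sketch of the challenger’s side of such a bet, with made-up dollar amounts and probabilities, shows what an author’s offer signals:

```python
# Sketch: a challenger's expected profit from accepting an author's bet.
# Escrow, odds, replication cost, and probabilities are made-up numbers.
def challenger_expected_profit(escrow, odds, replication_cost, p_replicates):
    stake = escrow / odds   # challenger's stake implied by the offered odds
    return ((1 - p_replicates) * escrow   # wins escrow if replication fails
            - p_replicates * stake        # loses stake if it succeeds
            - replication_cost)           # pays for the attempt either way

# An author offering $10,000 at 9-to-1 odds signals confidence: challenging
# only pays if you think the replication chance is well below ~90%.
for p in (0.5, 0.8, 0.95):
    profit = challenger_expected_profit(10_000, 9, 2_000, p)
    print(f"P(replicates) = {p:.0%}: expected profit ${profit:,.0f}")
```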

In his talk, Crane contrasted his approach with an alternative in which the quality signal would be the odds in an open prediction market of replication, conditional on a replication attempt. In comparing the two, Crane seems to think that authors would not usually participate in setting market odds. He lists three advantages of author bets over betting market odds: 1) Author bets give authors better incentives to produce non-misleading papers. 2) Market odds are less informed because market participants know less than paper authors about their paper. 3) Relying on market odds allows a mistaken consensus to suppress surprising new results. In the rest of this post, I’ll respond.

I am agnostic on whether journal quality should remain as a signal of article quality. If that signal goes away, then we are talking about which other signals can be useful, and how useful. And if that signal remains, then we can talk about other signals that might be used by journals to make their decisions, and also by other observers to evaluate article quality. But whatever signals are used, I’m pretty sure that most observers will demand that a few simple easy-to-interpret signals be distilled from the many complex signals available. Tenure review committees, for example, will need signals nearly as simple as journal prestige.

Let me also point out that these two approaches of market odds or author bets can also be applied to non-academic articles, such as news articles, and also to many other kinds of quality signals. For example, we could have author or market bets on how many future citations or how much news coverage an article will get, whether any contained math proofs will be shown to be in error, whether any names or dates will be shown to have been misreported in the article, or whether coding errors will be found in supporting statistical analysis. Judges or committees might also evaluate overall article quality at some distant future date. Bets on any of these could be conditional on whether serious attempts were made in that category.

Now, on the comparison between author and market bets, an obvious alternative is to offer both author bets and market odds as signals, either to ultimate readers or to journals reviewing articles. After all, it is hard to justify suppressing any potentially useful signal. If a market exists, authors could easily make betting offers via that market, and those offers could easily be flagged for market observers to take as signals.

I see market odds as easier for observers to interpret than author bet offers. First, author bets are more easily corrupted via authors arranging for a collaborating shill to accept their bet. Second, it can be hard for observers to judge how author risk-aversion influences author odds, and how replication costs and author wealth influence author bet amounts. For market odds, in contrast, amounts take care of themselves via opposing bets, and observers need only judge any overall differences in wealth and risk-aversion between the two sides, differences that tend to be smaller, vary less, and matter less for market odds.

Also, authors would usually participate in any open market on their paper, giving those authors bet incentives and making market odds include their info. The reason authors will bet is that other participants will expect authors to bet to puff up their odds, and so other participants will push the odds down to compensate. So if authors don’t in fact participate, the odds will tend to look bad for them. Yes, market odds will be influenced by views other than those of authors, but when evaluating papers we want our quality signals to be based on the views of people other than paper authors. That is why we use peer review, after all.

When there are many possible quality metrics on which bets could be offered, article authors are unlikely to offer bets on all of them. But in an open market, anyone could offer to bet on any of those metrics. So an open market could show estimates regarding any metric for which anyone made an offer to bet. This allows a much larger range of quality metrics to be available under the market odds approach.

While the simple market approach merely bets conditional on someone making a replication attempt, an audit lottery variation that I’ve proposed would instead use a small fixed percentage of amounts bet to pay for replication attempts. If the amount collected is insufficient, then it and all betting amounts are gambled so that either a sufficient amount is created, or all these assets disappear.
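
Here is a minimal sketch of that audit lottery, under the assumption that the gamble is at fair odds, so that expected funding equals the amount collected:

```python
# Sketch of the audit lottery: a fixed fraction of bet volume is collected
# to fund replications; if it falls short, the pot is gambled at fair odds
# so that either a full replication budget appears or it all vanishes.
# (The original proposal also gambles the bettors' stakes; omitted here.)
# The fee fraction and costs below are illustrative assumptions.
import random

def audit_lottery(bet_volume, fee_fraction, replication_cost, rng=random):
    collected = fee_fraction * bet_volume
    if collected >= replication_cost:
        return collected                 # enough to pay for a replication
    p = collected / replication_cost
    if rng.random() < p:                 # fair gamble: E[payout] = collected
        return replication_cost          # the replication gets funded
    return 0.0                           # the pot disappears

print(audit_lottery(bet_volume=50_000, fee_fraction=0.05,
                    replication_cost=10_000))
```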

Just as 5% significance is treated as a threshold today for publication evaluation, I can imagine particular bet reliability thresholds becoming important for evaluating article quality. News articles might even be filtered, or show simple icons, based on a reliability category. In this case the author-bet and market approaches would tend to merge.

For example, an article might be considered “good enough” if it had no more than a 5% chance of being wrong, if checked. The standard for checking this might be if anyone was currently offering to bet at 19-1 odds in favor of reliability. For as long as the author or anyone else maintained such offers, the article would qualify as at least that reliable, and so could be shown via filters or icons as meeting that standard. For this approach we don’t need to support a market with varying prices; we only need to keep track of how much has been offered and accepted on either side of this fixed odds bet.
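
A minimal sketch of that fixed-odds bookkeeping, with the 19-to-1 odds from above and a made-up minimum open offer size:

```python
# Sketch: an article carries a "95% reliable" badge for as long as someone
# maintains open offers to bet at 19-to-1 odds in favor of reliability
# (19/20 = 95% implied). The $1,000 minimum is an illustrative assumption.
class ReliabilityBadge:
    ODDS = 19
    MIN_OPEN_OFFER = 1_000.0

    def __init__(self):
        self.open_offers = 0.0   # dollars offered at 19-to-1, not yet taken

    def post_offer(self, amount):
        self.open_offers += amount

    def accept(self, amount):
        self.open_offers -= min(amount, self.open_offers)

    def qualifies(self):
        return self.open_offers >= self.MIN_OPEN_OFFER

badge = ReliabilityBadge()
badge.post_offer(5_000)   # the author backs the article
badge.accept(4_500)       # a skeptic takes most of that offer
print(badge.qualifies())  # False: open offers fell below the threshold
```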

Rationality Requires Common Priors

Late in November 2006 I started this blog, and a month later on Christmas Eve I reported briefly on the official publication (after 8 rejections) of my paper Uncommon Priors Require Origin Disputes. That was twelve years ago, and now Google Scholar tells me that this paper has 17 cites, which is about 0.4% of my 3933 total cites, a figure which I’d say greatly underestimates its value.

Recently I had the good fortune to be invited to speak at the Rutgers Seminar on Foundations of Probability, and I took that opportunity to raise awareness about my old paper. Only about ten folks attended (a famous philosopher spoke nearby at the same time), but this video was taken:

In the video my slides are at times dim, but a sharp version can be seen here. Let me now try to explain why my topic is important, and what my result is.

In economics, the most common formal model of a rational agent, by far, is that of a Bayesian. This standard model is also very common in business, political science, statistics, computer science, and many other fields. As there is actually a family of related models, we can use this space to argue about what it means to be “rational”. People argue over various particular proposed “rationality constraints” which limit this space of possibilities to varying degrees.

In economics, the standard model starts with a large (finite) state space, wherein each state resolves all relevant uncertainty; every interesting question is completely answered once you know which state is the true state. Each agent in this model has a prior function which assigns a probability to each state in this space. For any given time and situation, an agent’s info can be expressed as a set; at any state, each agent has an info set of states where they know that the true state is somewhere within that set, but don’t know where within that set. Any small piece of info is also expressible as a set; to combine info, you intersect sets.

Given a state space, prior, and info, an agent’s expectation or belief is given by a weighted average, using their prior and conditioned on their info set. That is, all variations in agent beliefs across time or situation are to be explained by variations in their info. We usually assume that info is cumulative, so that each agent knows everything that they have ever known in the past. In order to predict actions, in addition to beliefs, the most common approach is to assume agents maximize expected utility, where each agent has another function that assigns a numerical utility value to each possible state.
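
Here is a minimal sketch of this standard setup, with a made-up three-state space and made-up numbers:

```python
# Sketch: a finite state space, a prior, info sets, and the resulting
# belief (a prior-weighted average conditioned on the info set).
states = ["rain", "snow", "sun"]
prior = {"rain": 0.5, "snow": 0.3, "sun": 0.2}
X = {"rain": 10.0, "snow": 4.0, "sun": 0.0}   # some quantity of interest

def belief(info_set, prior, X):
    """Expectation of X under the prior, conditioned on the info set."""
    total = sum(prior[s] for s in info_set)
    return sum(prior[s] * X[s] for s in info_set) / total

# The agent learns only that it is precipitating: info set {rain, snow}.
print(belief({"rain", "snow"}, prior, X))  # (0.5*10 + 0.3*4)/0.8 = 7.75

# Combining two pieces of info means intersecting their sets.
precip = {"rain", "snow"}          # "it is precipitating"
not_rain = {"snow", "sun"}         # "it is not raining"
print(belief(precip & not_rain, prior, X))  # only "snow" is left: 4.0
```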

Some people study ways to relax these assumptions, such as by using a set of priors instead of a single prior, by seeking computationally feasible approximations, or by allowing agents to forget info they once knew. Other people focus on adding stronger assumptions. For example, when a situation has a natural likelihood function giving the chances of particular outcomes assuming particular parameter settings, we usually assume that each agent’s prior agrees with this likelihood. Some people offer arguments for why particular priors are natural for particular situations. And models also usually assume that differing agents have the same prior.

One key rationality question is when it is reasonable to disagree with other people. Most intellectuals see disagreement as rational, and are surprised to learn that theory often says otherwise. This issue turns crucially on the common prior assumption. Given uncommon priors, it is easy to disagree, but given common priors it is hard to escape the conclusion that it is irrational to knowingly disagree, in the following sense of “foresee to disagree.” Assume you are now estimating some number X, and also now estimating some other person’s future estimate of X, an estimate that they will make at some future time. There is a difference now between these two numbers, and you will now clearly tell that other person the sign of this difference. They will then take this sign into account when making their future estimate.

In this situation, for standard Bayesians, this sign must equal zero; you can’t both warn them that you expect their estimate will be too high relative to your estimate, and then also still expect them to remain too high. They will instead listen to your warning and correct enough based on it. This sort of result holds nearly exactly for many slight weakenings of the standard rationality assumptions, but not if we assume big prior differences. And we have seen clearly in the lab, and in real life, that humans can in fact often “foresee to disagree” in this sense.
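
The core of this result is just the law of iterated expectations under a common prior. Here is a sketch of the key step, in generic notation of my own rather than any paper’s:

```latex
% Let D = X - E_j[X], where E_j[X] = E_p[X | F_j] is j's future estimate
% and j's future info F_j includes i's announcement. Let A be the event
% "i announced that E_i[D] > 0", which lies in both F_i and F_j.
% Computing E_p[D 1_A] two ways under the common prior p:
E_p[D\,\mathbf{1}_A] = E_p\!\big[\mathbf{1}_A\,E_p[D \mid F_j]\big] = 0,
\qquad
E_p[D\,\mathbf{1}_A] = E_p\!\big[\mathbf{1}_A\,E_p[D \mid F_i]\big] > 0
\ \text{ unless } p(A) = 0.
% So the announcement can only be made on a probability-zero event:
% common-prior Bayesians cannot foresee to disagree.
```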

Humans do foresee to disagree, while Bayesians with common priors do not. So are humans rational or irrational here? To answer that question, we must study the arguments for and against common priors. Not just arguments that particular aspects of priors should be common, or that they should be common in certain simple situations. No, here we need arguments that entire prior functions should or should not be the same. And you can look long and hard without finding much on this topic.

Some people simply declare that differing beliefs should only result from differing information, but others are not persuaded by this. Some people note that as expected utility is a sum over products of probability and utility, one can arbitrarily rescale each probability and utility together, holding constant that product, and get all the same decisions. So one can assume common priors without loss of generality, as long as one is free enough to change utility functions. But of course this makes uncommon priors also without loss of generality. And we are often clear that we mean different things by probabilities and utilities, and thus are not free to vary them arbitrarily. If it means something different to say that an event is unlikely than it means to say that that event’s outcome differences are less important to you, then probabilities mean something different from utilities.
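
That rescaling argument fits in a few lines; here is a sketch, with the positive weights c_s chosen arbitrarily:

```latex
% Sketch: rescale probabilities and state-dependent utilities together.
% For any weights c_s > 0, with Z = \sum_t c_t p(t), define
p'(s) = \frac{c_s\,p(s)}{Z},
\qquad
u'(a,s) = \frac{Z}{c_s}\,u(a,s).
% Then expected utility is unchanged for every act a:
\sum_s p'(s)\,u'(a,s) = \sum_s p(s)\,u(a,s).
% Choosing c_s = q(s)/p(s) converts any prior p into any target prior q,
% so common (or uncommon) priors alone have no behavioral content unless
% utilities are pinned down separately.
```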

And so finally we get to my paper, Uncommon Priors Require Origin Disputes, which is one of the few papers I have ever seen to give a concrete argument on common priors. Most everyone who hears it seems persuaded, yet it is rarely mentioned when people summarize what we know about rationality in Bayesian frameworks. If you read the rest of this post, at least you will know.

My argument is pretty simple, though I needed a clever construction to let me say it formally. If the beliefs of a person are described in part by a prior, then that prior must have come from somewhere. My key idea is to use beliefs about the origins of priors to constrain rational priors. For example, if you knew that a few minutes ago someone stuck a probe into your brain and randomly changed your prior, you would probably want to reverse that change. So not all causal origins of priors seem equally rational.

However, there’s one big obstacle to reasoning about prior origins. The natural way to talk about origins is to make and use some sort of probability distribution over different possible priors, origin features, and other events. But in every standard Bayesian model, the priors of all agents are common knowledge. That is, priors are all the same in all possible states, so no one can have any degree of uncertainty about them, or about what anyone else knows about them. Everyone is always completely sure about who has what priors.

To evade this obstacle, I chose to embed a standard model within a larger standard model. So there is a model and a pre-model. While the ordinary model has ordinary states and priors, the pre-model has pre-states and pre-priors. It is in the pre-model that we can reason about the causal origins of the priors of the model.

The pre-states of the pre-model are simply pairs of an ordinary state and an ordinary prior assignment that says which agents get which priors. So a pre-prior is a probability distribution over the set of all combinations of possible states in the ordinary model, and possible prior assignments for that ordinary model. Each agent would initially know nothing about anything, including about ordinary states or who will get which prior. Their pre-prior would summarize their beliefs in this state of ignorance. Then at some point all agents would have learned which prior they and the other agents will be using. From this point forward, agent info sets are entirely within an ordinary model, where their prior is common knowledge and gives them ordinary beliefs about ordinary states. So from this point on, an ordinary model is sufficient to describe everyone’s beliefs.

The key pre-rationality constraint that I propose is to have pre-priors agree with priors when they can condition on the same info. So if we condition an agent’s pre-prior on the assignment of who gets which priors, and then ask for the probability of some ordinary event, we should get the same answer as when we simply ask their prior for the probability of that ordinary event. And merely inspecting the form of this simple key equation is enough to draw my key conclusion: within any single pre-prior that satisfies the pre-rationality condition, all ordinary events are conditionally independent of other agents’ priors, given that agent’s prior.
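
In symbols (my notation here is a reconstruction, not necessarily the paper’s): write p̃_i for agent i’s pre-prior and π = (π_1, …, π_n) for the assignment of priors to agents.

```latex
% Pre-rationality: conditioning the pre-prior on the full prior assignment
% must reproduce the agent's own assigned prior, for every state s:
\tilde p_i(s \mid \pi_1, \dots, \pi_n) = \pi_i(s).
% The right side depends only on \pi_i, so conditioning further on the
% other agents' priors \pi_{-i} changes nothing:
\tilde p_i(s \mid \pi_i, \pi_{-i}) = \tilde p_i(s \mid \pi_i).
% That is: under \tilde p_i, ordinary events are conditionally independent
% of other agents' priors, given agent i's own prior.
```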

So, within a pre-prior, an agent believes that ordinary events and their own prior are informative about each other; priors are different when events are different, and in the sensible way. But also within this pre-prior, each agent believes that the priors of other agents are not otherwise informative about ordinary events. The priors of other agents can only predict ordinary events by predicting the prior of this agent; absent that connection, ordinary events and other priors do not predict each other.

I summarize this as believing that “my prior had special origins.” My prior was created via a process that caused it to correlate with other events in the world, but the priors of other agents were not created in this way. And of course this belief that you were made special is hard to square with many common beliefs about the causal origins of priors. This belief is not consistent with your prior being encoded in your genes via the usual processes of genetic inheritance and variation. It is similarly not consistent with many common theories of cultural inheritance and variation.

The obvious and easy way to not believe that your prior resulted from a special unusual origin process is to have common priors. And so this pre-rationality constraint can be seen as usually favoring common priors. I thus have a concrete argument that Bayesians should have common priors, an argument based on the reasonable rationality consideration that not all causal origins of priors are equally rational. If priors should be consistent with plausible beliefs about their causal origins, then priors must typically be common.

Dominance Hides in Prestige Clothing

21 months ago, I said: 

We like to give others the impression that we personally mainly want prestige in ourselves and our associates, and that we only grant others status via the prestige they have earned. But let me suggest that, compared to this ideal, we actually want more dominance in ourselves and our associates than we like to admit, and we submit more often to dominance. In the following, I’ll offer three lines of evidence for this claim. First consider that we like to copy the consumer purchases of people that we envy, but not of people we admire for being “warm” and socially responsible. … Second, consider the fact that when our bosses or presidents retire and leave office, their legitimate prestige should not have diminished much. … Yet others usually show far less interest in associating with such retirees. … For my third line of evidence, … for long term mates we more care about prestige features that are good for the group, but for short term mates, we care more about dominance features that are more directly useful to us personally. (more)

Today I’ll describe a fourth line of evidence: when ranking celebrities, we don’t correct much for the handicaps that people face. Let me explain.

Dominance is about power, while prestige is about ability. Now on average having more ability does tend to result in having more power. But there are many other influences on power besides individual ability. For example, there’s a person’s family’s wealth and influence, and the power they gained via associating with powerful institutions and friends.  

As I know the world of intellectuals better than other worlds, let me give examples from there. Intellectuals who go to more prestigious schools and who get better jobs at more prestigious institutions have clear advantages in this world. And those whose parents were intellectuals, or who grew up in more intellectual cultures, had advantages. Having more financial support and access to better students to work with are also big helps. But when we consider which intellectuals to most praise and admire (e.g., who deserves a Nobel prize), we mainly look at the impact they’ve had, without correcting much for these many advantages and obstacles.

Oh sure, when it is we ourselves who are judged, we are happy to argue that our handicaps should be corrected for. After all, most of us don’t have as many advantages as do the most successful people. And we are sometimes willing to endorse correcting for handicaps for politically allied groups. So if we feel allied with the religious and politically conservative, we may note that they tend to face more obstacles in intellectual worlds today. And if we feel allied with women or ethnic minorities, we may also endorse taking into account the extra obstacles that they often face.

But these corrections are often half-hearted, and they seem the exceptions that prove a rule: when we pick our intellectual heroes, we don’t correct much for all these handicaps and advantages. We mainly just want powerful dominant heroes. 

In acting, music, and management, being good looking is a big advantage. But while we tend to say that we disapprove of this advantage, we don’t correct for it much when evaluating such people. Oscar awards mostly go to the pretty actors, for example.

Challenge Coins

Imagine you are a king of old, afraid of being assassinated. Your king’s guard tells you that they’ve got you covered, but too many kings have been killed in your area over the last century for you to feel that safe. How can you learn of your actual vulnerability, and of how to cut it?

Yes, you might make prediction markets on whether you will be killed, and make such markets conditional on various policy changes, to find out which policies cut your chance of being killed. But in this post I want to explore a different solution.

I suggest that you auction off challenge coins at some set rate, say one a month. Such coins can be resold privately to others, so that you don’t know who holds them. Each coin gives the holder the right to try a mock assassination. If a coin holder can get within X meters of you, with a clear sight of a vulnerable part of you, then they need only raise their coin into the air and shout “Challenge Coin”, and they will be given N gold coins in exchange for that challenge coin, and then set free. And if they are caught where they should not be, then they can pay the challenge coin to instead be released from whatever would be the usual punishment for that intrusion. If authorities can find the challenge coin, such as on their person, this trade can be required.

Now for a few subtleties. Your usual staff and people you would ordinarily meet are not eligible to redeem challenge coins. Perhaps you’d also want to limit coin redeemers to people who’d be able to kill someone; perhaps if requested they must kill a cute animal with their bare hands. If a successful challenger can explain well enough how they managed to evade your defenses, then they might get 2N gold coins or more. Coin redeemers may be suspected of being tied to a real assassin, and so they must agree to open themselves to being investigated in extra depth; if still deemed suspicious enough, they might be banned from ever using a challenge coin again. But they still get their gold coins this time. Some who issue challenge coins might try to hide transmitters in them, but holders could just wrap coins in aluminum foil and dip them in plastic to limit odor emissions. I estimate that challenge coins are legal, and not prohibited by asset or gambling regulations.

This same approach could be used by the TSA to show everyone how hard it is to slip unapproved items past TSA security. Just reveal your coin and your unapproved item right after you exit TSA security. You could also use this approach to convince an audience that your accounting books are clean; anyone with a coin can point to any particular item in your books, and demand an independent investigation of that item, paid for at the coin-issuer’s expense. If the item is found to not be as it should, the coin holder gets the announced prize; otherwise they just lose their coin.

In general, issuing challenge coins is a way to show an audience what rate of detection success (or security failure) results from what level of financial incentives. (The audience will need to see data on the rates of coin sales and successful vs. unsuccessful redemptions.) We presume that the larger the payoff to a successful challenge, the higher the fraction of coins that successfully result in a detection (or security failure).
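
As a sketch of what that audience calculation might look like, here is one simple model; the counts and the uniform-prior Beta-posterior setup are my own illustrative assumptions:

```python
# Sketch: estimate the security-failure rate implied by challenge coin
# data, via a simple Beta posterior. All counts are made-up numbers.
coins_redeemed = 3    # successful mock assassinations at this payoff level
coins_retired = 47    # coins known to be spent or expired without success

alpha, beta = 1 + coins_redeemed, 1 + coins_retired  # uniform Beta(1,1) prior
mean_failure_rate = alpha / (alpha + beta)

print(f"Estimated chance a motivated coin holder gets through: "
      f"{mean_failure_rate:.1%}")
```

Repeating this estimate at different payoff levels N traces out the incentive-vs-failure-rate curve that the audience wants to see.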