It’s a weird world, this simulated universe we’re part of. It’s dangerous to go poking around too deep under the covers. What do we do with the coincidences? Do we chalk them up to nothing more than that, or do we look for deeper meaning? I can know, and as what’s now called a data scientist I do know, that the odds of strange things happening are 100 percent. Everything that happens to us is highly unlikely. At the same time, I also know this: what we know (or think we know) about how the universe works is just a tiny fraction of what we could know. I know that our information is incomplete, inaccurate, biased by the preconceptions of those we learn from, and likely to be overturned within years or decades. I know that all of our models are simplified, and that we have many reports of things that do not fit the established, mainstream scientific view.
So in this context, to what extent do I decide that the odd thing that happened is chance within the existing consensus understanding of the universe, and to what extent do I wonder if something else, some un- or under-documented force, is in effect? And suppose that I hold that alternative hypothesis as highly unlikely, but the same kind of odd thing happens again. As a good data scientist, I also know to update my prior beliefs with this new information, which pushes me more in the direction of belief in something more than chance at a time when science shrugs and says, “it’s chance”.
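The updating described here is just Bayes’ rule applied repeatedly. Here’s a minimal sketch with entirely made-up numbers (the prior of 1 percent and the likelihoods are illustrative assumptions, not claims about any real event):

```python
# Minimal sketch of Bayesian belief updating with illustrative numbers.

def update(prior, p_given_h, p_given_not_h):
    """Return P(H | evidence) via Bayes' rule."""
    numerator = prior * p_given_h
    return numerator / (numerator + (1 - prior) * p_given_not_h)

# Start out holding the "something more than chance" hypothesis
# as highly unlikely (an assumed prior of 1%).
belief = 0.01

# Suppose each odd event is five times more likely under the
# hypothesis (0.5) than under pure chance (0.1) -- invented numbers.
for _ in range(3):  # the same kind of odd thing happens three times
    belief = update(belief, p_given_h=0.5, p_given_not_h=0.1)

print(round(belief, 3))  # a once-fringe belief is now better than even odds
```

Three repetitions of a modestly diagnostic event push a 1 percent prior past 50 percent, which is exactly the uncomfortable position described above: the math says “take it seriously” while consensus still says “it’s chance”.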
Suppose I say No Way to the chance hypothesis and decide this is signal, not noise. There’s a problem with taking the leap: I don’t know where I’m leaping to. If I decide an event or a particular sequence of events is occurring more often than expected, that tells me exactly nothing about why. Or how. This may be something I can test, or it may not. I think we so often struggle with odd experiences because we have no reference class to fit them into.
Let’s go right to an example that’s sure to set off many, if not all, of the people who read this: the Clinton Body Count (CBC). For those unaware of the theory (I won’t call it a “conspiracy” theory, as whatever its merits, adding the “c” word carries no meaning other than showing an intention to scorn (or to be seen as scorning)), it goes like this: an unusually large number of people associated with the Clintons meet unlikely, untimely deaths. Especially if they are in a position to have dirt on the Clintons or have gone against them.
How do we evaluate this claim? Of course, any direct evidence of people being killed for knowing too much would be the strongest evidence: a confession, a paper trail, phone records of conversations with a known hit man. If we want to build a case based solely on the unlikelihood of these all being chance events, we have two major hurdles (and bear with me, because these resemble the same problems I have in evaluating my own odd experiences): first, we need to draw outlines around our groups so we can assess predicted versus observed odds, and second, we need to justify the particular form our conclusion takes.
Let’s begin by trying to draw outlines. The Clintons are extraordinarily well networked. Within two hops you could probably get to almost every prominent politician, journalist, Democratic Party staffer, or major donor. How big is this set? One way to draw the boundary would be to take every death that looks suspicious to you, then add in that category of person. So if we want to add the former head of security for Clinton-Gore to the list (official cause of death, suicide), we need to also add everyone who’s worked as head of security for a Clinton campaign.
Maybe you’ve already noticed problems with this approach. For starters, how big should this category be? Head of security seems too specific, but if we include anyone who worked security for the Clintons, even one-day rent-a-cops for specific events, this seems too broad. Where do we draw the boundary for this group? More generally, we’ve introduced bias by including only those categories in which someone has died. For a proper evaluation, we’d need to pick every category which *would* be included if someone in that category were to die.
Even if we come up with a justifiable set of clearly defined groups of Clinton associates, we need to establish a baseline of deaths for people in these groups. That’s going to be tricky, given the number of moving parts: groups overlap, people enter and exit groups as time goes by, age matters, range of years matters. We need a justifiable starting time and ending time for our window. Are we looking at all deaths, or only deaths that fall into certain categories (e.g., plane crashes)? By chance, if we look at 20 categories, one of them is likely to rise to the generally accepted level of significance (though for a claim like this, the significance level would have to be much higher to conclude shenanigans). We can still limit ourselves to one category, but then we’ll have to find a truly extraordinary death rate to be convinced.
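The two arithmetic points above — that scanning many categories manufactures “significance,” and that a large network produces plenty of deaths on its own — can be sketched in a few lines. The category count, threshold, group size, and mortality rate are all invented for illustration:

```python
# Sketch of the multiple-comparisons problem: scan 20 independent
# death categories and the chance that at least one clears the
# conventional p < 0.05 bar by luck alone is substantial.
alpha = 0.05
categories = 20
p_at_least_one = 1 - (1 - alpha) ** categories
print(f"P(>=1 'significant' category by chance): {p_at_least_one:.2f}")

# And the baseline question: among, say, 1,000 loosely connected
# people tracked over 10 years at a rough annual death rate of 0.5%
# (all numbers invented), how many deaths should we *expect*?
people, years, annual_rate = 1000, 10, 0.005
expected_deaths = people * years * annual_rate
print(f"Expected deaths with no foul play at all: {expected_deaths:.0f}")
```

With these assumed numbers, there’s roughly a two-in-three chance that some category looks “significant” purely by accident, and dozens of deaths are expected in the network with no foul play whatsoever — which is why the bar for concluding shenanigans has to be set so much higher.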
Now we hit the second problem, so often overlooked both by those who posit unpopular theories and by those who prematurely shoot down those who question official narratives. Establishing that something out of the ordinary (perhaps even way out of the ordinary) has happened isn’t the same as having an explanation for what happened. So even if we manage to solve all the epistemic issues in evaluating the CBC versus some appropriate reference group, and we find effectively no chance that so many deaths could have happened by chance alone, this doesn’t prove that the Clintons are murderers. It only proves something extremely odd. Sloppy theorists make the jump from “this is clearly screwy” to “must be aliens”, while sloppy (or dishonest) thinkers on the other side see someone pointing out an anomaly and attack a specific and (in their minds) unlikely cause of that screwiness. They jump from hearing “aren’t crop circles odd when you examine them closely?” to replying “if aliens wanted to communicate with us they could do it directly”.
So even if we find clear statistical evidence of non-random death patterns, that’s not enough to pin it on anyone. And here we enter the strange catch-22 of finding anomalies: the more evident they are, the more uncomfortable we become with the lack of a well-grounded alternative theory. There’s an old joke (why it has to be old should be clear) about a confused traveler standing on a street corner, studying a map. A local approaches, looks over the traveler’s shoulder at the map, and quickly realizes it’s for a different city. He points this out to the traveler, who replies, “I know, but it’s the only map I have”. It’s like we don’t want to give up the map we have, no matter how strong the evidence it’s the wrong one, until we have a new one to take its place. And if we distrust the new map, we defend the old one even more. The more evidence you show a traveler that he isn’t in Cincinnati, the more he’ll demand proof that he’s in another specific city, and perhaps a solution for getting to Cincinnati. Absent these, he’ll stick with the map he has, thank you very much.
The problem is that whether or not our world is a simulation in the computer sense, it is most certainly simulated in our own heads. We perceive this underlying world, and in perceiving it we simulate it in our heads. We model the universe we perceive, and this model is filled with all the components of a good simulation: it has objects, events, and consequences. These internal simulations often do a good job of tracking our external world (or at the least, they can appear to be wonderfully consistent). My internal simulation tells me that if I raise the cup of coffee to my lips, the liquid will be warm but not hot. And sure enough, it is. Most of the time, our simulated worlds model the world we perceive with great accuracy, and can not only predict most banal consequences (walking out the door leaves the room), but can also let us identify situations with highly unpredictable outcomes (if we go out to a bar tonight looking for action, who might we go home with).
In my own mental model of the world, where I try to match up my internal simulation to the perceived workings of a reality (itself simulated or not), I struggle as much as the traveler who realizes he’s got the wrong map, or a CBC believer who’s yet to find a Clinton confederate with a smoking gun.