Nothing kills communication like jargon: it signals the tribe you belong to. Jargon makes the distinction between insiders and outsiders painfully clear. One particular piece of jargon that has always bothered me is the concept of a “model.” I suppose this has been on my mind recently because I keep hearing people relay the famous quote from George Box, “All models are wrong, but some are useful.” This adage is hard to escape in Statistics, and like all maxims it becomes trite when overused. I am most annoyed when presenters throw it into a lecture as a legal caveat to preempt criticism of their work….
…But, getting back on topic, what exactly did Box mean by a model? We use this term all the time. Taking a blunt view of Statisticians, all we really do is build models. Of course other scientists also build models; we don’t have a monopoly yet (insert evil laugh). My definition of a model, admittedly crude, is: a description of either an object or a process. Now some descriptions are better than others. A detailed blueprint is a more useful description for building a skyscraper than a poem. This is why mathematical models are so prevalent: they cut directly to a quantitative description without ambiguity. Models don’t need to be equations; they can take many different forms, for example a computer program. The important thing is that it is a description.
Joyce’s second blog post discusses two camps of modeling. There are those who want the model to be interpretable, and those who do not care about the form of the model but only want it to achieve some result, say winning at Go or chess. Both are valid descriptions, but they illuminate different aspects of the same object. Neither of them is right and neither of them is wrong. The only flawed assumption is believing that the only correct description is your own model.
My research deals specifically with what are called surrogate models. These are models that are built and calibrated to produce the same results as another model. Now why would anyone want to do this? It seems meta and academic. Well, you’re not wrong! But there are very good reasons to do this. Simpler models, assuming they have enough fidelity, are easier to analyze and understand without losing relevant information. When thinking about surrogate models I always remember the short story “On Exactitude in Science” by Jorge Luis Borges, which describes an empire whose cartographers were so proficient that their maps were the same size as the empire itself, every detail of the terrain reproduced exactly. Obviously such a map, although accurate, is rather unwieldy. A cut-down version would suffice for most practical purposes. The tricky issue is how to perform the trimming.
My surrogate modeling falls into a gray region between the two camps Joyce describes. Often the surrogate model takes the form of some inscrutable Gaussian Process model, and the model being approximated is a computer simulation built up from scientific knowledge. The simulation is understandable but slow, whereas the surrogate is the reverse. The Gaussian Process model is not a better description of the reality that the computer code is simulating, but it does make certain information available to us that would otherwise be locked away in a computer program running until the end of time. In my case, one model is not enough to describe everything. I believe this plurality is true across Statistics and the other sciences. We must be flexible so that we are not dogmatically stuck at the expense of progress.
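To make the idea concrete, here is a minimal sketch of a Gaussian Process surrogate. Everything in it is illustrative, not from my actual research: the “slow simulation” is just a toy one-dimensional function, and the kernel and its length scale are arbitrary choices. The point is the pattern: run the expensive model at a few design points, then build a cheap interpolator that stands in for it elsewhere.

```python
import numpy as np

def slow_simulation(x):
    # Stand-in for an expensive scientific simulation (hypothetical toy function).
    return np.sin(3 * x) + 0.5 * x

def rbf_kernel(a, b, length_scale=0.5):
    # Squared-exponential covariance between two sets of 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

# Run the expensive model at a handful of design points.
X_train = np.linspace(0.0, 2.0, 10)
y_train = slow_simulation(X_train)

# GP posterior mean weights: (K + sigma^2 I)^{-1} y.
sigma2 = 1e-6  # tiny jitter for numerical stability (code output is noise-free)
K = rbf_kernel(X_train, X_train) + sigma2 * np.eye(len(X_train))
alpha = np.linalg.solve(K, y_train)

def surrogate(x_new):
    # Cheap prediction that stands in for re-running the slow simulation.
    return rbf_kernel(np.atleast_1d(x_new), X_train) @ alpha

x_test = np.array([0.7, 1.3])
print(surrogate(x_test))        # fast approximation
print(slow_simulation(x_test))  # what the slow code would have said
```

In a real application the design points themselves are chosen carefully (that is much of the work), and the GP also supplies an uncertainty estimate at each prediction, which is part of what makes it useful as a surrogate.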
Isaac is a PhD Candidate whose research interests include epidemiology, differential equation modeling, and reinforcement learning. His current research focuses on pursuit-evasion and cooperative reinforcement learning. We asked a fellow Laber Labs colleague to ask Isaac a probing question.
On the table in front of you are two boxes. One is clear and contains $1000. The other is opaque and contains either $1 million or nothing. You have two choices: take only the opaque box, or take both boxes.
The catch is, before you were asked to play this game, a being called Omega, who has nearly perfect foresight, predicted what you would do. If Omega predicted you would take one box, they put $1 million in the opaque box. If Omega predicted you would take two boxes, they put nothing in the opaque box.
Do you choose one or two boxes?
Both boxes. What you assume about Omega’s foresight and objectives leads to different conclusions. If I believe that Omega is more likely to be right than wrong when predicting my actions, then I would choose only the opaque box in order to maximize my expected reward. But if I assume that Omega is a rational being who believes I am a rational being and whose goal is to maximize the chance of being correct, then she will know that I am going to choose the opaque box with 100% certainty and will predict it. But I, knowing that Omega will predict this, would maximize my reward by instead choosing both boxes. Omega will know this and adjust accordingly. I would then have no reason to switch back to picking only the opaque box, because it would have nothing in it. Instead, I would settle for taking both boxes and realizing a $1000 prize, and Omega would be correct in her foresight. Moral of the story: $1000 on the table beats gambling for $1 million with an omniscient being.
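The first half of that reasoning can be made quantitative. Suppose Omega predicts correctly with some probability p (a free assumption; the puzzle itself only says the foresight is nearly perfect). Then the expected reward of each strategy is a one-line calculation, and there is a break-even accuracy above which one-boxing wins:

```python
def one_box_ev(p):
    # You get the $1 million only when Omega correctly foresaw one-boxing.
    return p * 1_000_000

def two_box_ev(p):
    # You always get the clear $1000, plus the $1 million when Omega guessed wrong.
    return 1_000 + (1 - p) * 1_000_000

# Break-even: p * 1e6 = 1000 + (1 - p) * 1e6  =>  p = 0.5005.
for p in (0.5, 0.5005, 0.9):
    print(p, one_box_ev(p), two_box_ev(p))
```

So under the naive expected-value view, Omega only has to beat a coin flip by a twentieth of a percent before one-boxing looks better; the game-theoretic reasoning in the answer above is what pushes back toward two boxes.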
This is Isaac’s second post! To learn more about his research, check out his first article here!