April 3, 2018
Conor Artman, PhD Candidate
Many TV shows and movies characterize the act of law enforcement chasing cybercriminals as a classic “cat and mouse” game, but it can be more akin to a “shark and fish” game. If you’ve ever watched anything during Shark Week, or any other docu-drama on ocean biology (think Blue Planet), you might have seen a bird’s-eye view of sharks repeatedly dive-bombing through massive clouds of fish. Each time they dive, the fish somehow form a shark-shaped gap, and from our vantage point, it looks like the shark doesn’t really catch anything. In reality, the shark scrapes by with a few of the stragglers in the school and dives time after time for more fish to eat. If we liken law enforcement to the shark, this kind of pursuit and evasion more accurately depicts its efforts to discover and unwind a criminal network. Now imagine that the shark is doing this blind and may or may not start out in the middle of the school of fish. If we can’t see the fish, how can we find them? How can we catch them?
Working with the Lab for Analytic Sciences (LAS), we are using an approach known as agent-based models (ABMs) to help find answers to these questions and to provide strategies for law enforcement to find and disrupt criminal networks. This project is a collaboration with specialists whose backgrounds range from anthropology and forensic psychology to work as intelligence analysts in the FBI.
Agent-based modeling is a “bottom-up” approach. Typically, one defines an environment, agents, and behavioral rules for the agents. An “agent” is often defined to be the smallest autonomous unit acting in a system: this could be a cubic foot of air in the atmosphere, a single machine in a factory, or a single person in a larger economy. Agents follow rules that can be as simple as a series of if-else statements, with no agent adaptation, or as sophisticated as Q-learning, which allows each agent to learn over time. The goal is to recreate the appearance of complex real-world phenomena. We call complex aggregate behavior that arises from interactions among micro-level agents “emergent” behavior. The broad idea with ABMs is to run many simulations and, once a particular simulation attains stable emergent behavior, calibrate the ABM to data.
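To make this concrete, here is a minimal sketch in Python of the usual ingredients of an ABM: an environment, agents, simple if-else behavioral rules, and a loop that advances the simulation so aggregate behavior can emerge. The “fish and shark” setup below is entirely hypothetical and not our actual model.

```python
import random

class Agent:
    """Smallest autonomous unit in the system (a hypothetical 'fish')."""
    def __init__(self, position):
        self.position = position

    def step(self, shark_position):
        # Simple if-else behavioral rule: flee the shark if it is close,
        # otherwise drift randomly.
        if abs(self.position - shark_position) < 5:
            self.position += 1 if self.position > shark_position else -1
        else:
            self.position += random.choice([-1, 0, 1])

class Environment:
    """Holds the agents and advances the simulation one step at a time."""
    def __init__(self, n_agents):
        self.agents = [Agent(random.randint(0, 100)) for _ in range(n_agents)]
        self.shark_position = 50

    def step(self):
        for agent in self.agents:
            agent.step(self.shark_position)
        # The shark lunges toward the densest part of the school.
        mean_pos = sum(a.position for a in self.agents) / len(self.agents)
        self.shark_position += 1 if mean_pos > self.shark_position else -1

env = Environment(n_agents=200)
for t in range(100):
    env.step()

# Emergent behavior: the fish tend to clear a gap around the shark.
near_shark = sum(abs(a.position - env.shark_position) < 5 for a in env.agents)
print(f"Agents within 5 units of the shark after 100 steps: {near_shark}")
```

Even with rules this crude, a shark-shaped gap in the school tends to open up, which is the kind of emergent behavior the “bottom-up” approach is after.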
But why do we have to jump to using ABMs in the first place? Can’t we use real-world data or try to set up natural experiments? Human self-awareness creates notoriously difficult problems for anyone trying to model human behavior. Richard Feynman bluntly illustrated this idea: “Imagine how much harder physics would be if electrons had feelings!” Feynman implies that the volatility in modeling human behavior stems from emotions, but I don’t believe that is really the problem. The problem is that human decision-making processes change over time. Human beings learn to adapt to the task at hand, which isn’t inherently problematic; plenty of physical systems exhibit adaptive behavior. The trouble comes in when human beings willfully improve their ability to learn. How should someone trying to come up with good rules for human behavior make adaptive rules for making rules? Ideally, social scientists would like to emulate the success of physical scientists by starting with simple rules and then trying to generate the behavior they are interested in observing from those rules. Unfortunately, this approach has had mixed success at best.
As a result, social scientists often take one of two approaches. The first is a “take what you can get” approach. Scientists build a statistical model based on their field’s theories. From there, they run these models on observational data (data collected outside of a randomized experiment) to find empirical evidence for or against their theories. The downside is that disentangling causes from effects can be difficult in this approach. For instance, ice cream purchases and murder rates have a strong positive correlation. But does it make sense to say ice cream causes murder, or murder causes ice cream? Of course not! There’s a factor common to both of them that makes it look like they’re related: heat. As heat increases in cities, murder rates often increase, and so do ice cream purchases. Statisticians refer to this concept as “confounding”: ice cream purchases and murder rates in this example are said to be “confounded” with heat. As a result, if we don’t have the correct insights or the right data, the relationships we see in observational data can be confounded with many things at once, and we may not be able to tell.
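To see confounding in action, here is a toy simulation (all numbers invented) that generates ice cream purchases and murder rates that both depend on heat but not on each other. The raw correlation is strong, yet it essentially vanishes once heat is accounted for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

heat = rng.normal(size=n)                      # common cause (standardized temperature)
ice_cream = 2.0 * heat + rng.normal(size=n)    # depends on heat only
murders = 1.5 * heat + rng.normal(size=n)      # depends on heat only, not on ice cream

# Strong "raw" correlation, even though neither variable causes the other.
print("corr(ice cream, murders):", np.corrcoef(ice_cream, murders)[0, 1])

# Adjust for the confounder by regressing heat out of both variables;
# the correlation of the residuals is roughly zero.
resid_ice = ice_cream - np.polyval(np.polyfit(heat, ice_cream, 1), heat)
resid_mur = murders - np.polyval(np.polyfit(heat, murders, 1), heat)
print("corr after adjusting for heat:", np.corrcoef(resid_ice, resid_mur)[0, 1])
```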
The second approach uses experiments to formalize questions of cause and effect. The rationale is that the world is a perfect working model of itself. So, the problem isn’t that we do not have a perfect working model, but that we do not understand the perfect working model. This means that if we could make smaller working versions of the world in a controlled experimental setting, then we should be able to gain some ground in understanding how the perfect model works.
However, for studying illicit networks, we often do not have good observational data available: criminals don’t want to be caught, so they try to make their activities difficult to observe. Similarly, it is usually impossible to perform experiments. To illustrate, if we wanted to study the effect of poverty on crime, there is no way for us as scientists to randomize poverty in an experiment.
A third approach says to simply simulate! If you can construct a reliable facsimile of the environment in which your phenomenon exists, then the data generated from the facsimile may help your investigation. In some environments, this works great. For instance, in weather forecasting simulations, climatologists can apply well-developed theories of atmospheric chemistry and physics to get informative simulated data. Unfortunately, this may not be a great deal of help if we do not already have a strong theoretical foundation from which to work.
As a result, we try to pool information from experts and the data we have available to build a simplified version of criminal agents, and we tweak our simulation until it produces data that look similar to what we see in the real world (as judged by our content specialists and the data we have available). From there, we do our best to make informed decisions based on the results of the simulation. ABMs have their own issues, and they may not be the ideal way to look at a problem. But we hope that they’ll give us insight into what an optimal strategy for finding and disrupting networks might look like, so that we can prevent crime in the future.
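One simple way to “tweak the simulation until it looks like the real world” is to search over the simulator’s parameters and keep the values whose simulated summary statistics land closest to the observed ones, a crude cousin of approximate Bayesian computation. The sketch below is purely illustrative: the simulate function and the observed statistic are hypothetical stand-ins, not our actual model or data.

```python
import random

def simulate(recruitment_rate, n_steps=50, seed=None):
    """Hypothetical stand-in for an ABM: returns one summary statistic
    (final network size) for a given parameter value."""
    rng = random.Random(seed)
    size = 1
    for _ in range(n_steps):
        if rng.random() < recruitment_rate:
            size += 1
    return size

observed_size = 30  # made-up "real world" summary statistic

# Crude calibration: grid search for the parameter whose simulated
# output is, on average, closest to the observed statistic.
best_rate, best_gap = None, float("inf")
for rate in [i / 20 for i in range(1, 20)]:
    sims = [simulate(rate, seed=s) for s in range(200)]
    gap = abs(sum(sims) / len(sims) - observed_size)
    if gap < best_gap:
        best_rate, best_gap = rate, gap

print(f"Calibrated recruitment rate: {best_rate:.2f} (mean gap {best_gap:.2f})")
```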
Conor is a PhD Candidate whose research interests include reinforcement learning, dynamic treatment regimes, statistical learning, and predictive modeling. His current research focuses on pose estimation for predicting online sex trafficking. We asked a fellow Laber Labs colleague to ask Conor a probing question.
Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice during the experiment, Beauty will be awakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake: if the coin comes up heads, Beauty will be awakened and interviewed on Monday only; if it comes up tails, she will be awakened and interviewed on both Monday and Tuesday.
In either case, she will be awakened on Wednesday without interview and the experiment ends. Any time Sleeping Beauty is awakened and interviewed she will not be able to tell which day it is or whether she has been awakened before. During the interview Beauty is asked: “What is the probability that the coin landed heads?” What would your answer be? Please explain.
I think you could approach this problem from lots of perspectives, depending on how you conceptualize randomness and uncertainty, and how you conceptualize how people actually think versus what we say they should think.
On one hand, speaking purely from the perspective of Sleeping Beauty, I think there’s an argument to be made that the probability is still just ½. If, from Sleeping Beauty’s perspective, you gain no information from being awakened, you could say, “Well, it was 50-50 before we started, and since I get no information, it’s equivalent to asking me this question before we even started the experiment.” On the other hand, you could think of this experiment in terms of a long-run repeated average, or you could even think of it in a more Bayesian way. So I think the point of this question is to give an example of the tension between human heuristic reasoning about uncertainty and precisely converting that intuition into useful statements about the world. (So that’s neat.)
If you ask me what I would personally think, given that I’ve just presumably awakened in some place where I can’t tell what day it is, I might say, “Well, I know I’m awake with an interviewer, so it’s definitely Monday or Tuesday. From my perspective, I can’t tell the difference between awakening on Monday via heads, Monday via tails, or Tuesday via tails. Only one of these three versions of the world corresponds to heads, so if you absolutely must have me give a guess for the probability of heads for this experiment to continue, I think one reasonable guess is 1 out of 3.”
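One way to make the “long-run repeated average” reading concrete is a quick Monte Carlo (my own sketch, not part of the original puzzle): simulate many runs of the experiment and ask what fraction of awakenings follow a heads flip. Per experiment, heads comes up about half the time; per awakening, it comes up about a third of the time, which is exactly the tension between the two answers.

```python
import random

random.seed(0)
n_experiments = 100_000

heads_awakenings = 0
total_awakenings = 0
for _ in range(n_experiments):
    coin = random.choice(["heads", "tails"])
    awakenings = 1 if coin == "heads" else 2   # heads: Monday only; tails: Monday and Tuesday
    total_awakenings += awakenings
    if coin == "heads":
        heads_awakenings += awakenings

# Per-experiment frequency of heads is about 1/2; per-awakening it is about 1/3.
print("P(heads | an awakening):", heads_awakenings / total_awakenings)
```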