Every child has learned the usual scientific method: the variable you change is the independent variable, and the variable that changes as a result is the dependent variable. To establish that the independent variable actually causes the change in the dependent variable, only one independent variable can be changed at a time while everything else is held constant between trials. If two independent variables are changed at once, any recorded change in the dependent variable cannot be attributed precisely to either one.
To put it in perspective, if a child wanted to test what helps grow a taller sunflower, they might change the amount of sunlight two sunflower plants receive or the amount of water they receive. If the child changed both sunlight and water in one trial and the flower grew taller than the control, it would be impossible to say which change caused the difference in height. At that point, both the amount of water and the amount of sunlight would only be correlated with taller growth rather than known to cause it.

The world of data is inherently different from the world of scientific experimentation, even if the two are not mutually exclusive. While experiments focus on establishing causation, controlled experimentation is highly impractical, if not physically impossible, with large streams of observational data. Instead, data investigations are mainly concerned with correlation. Simply put, if certain things are highly correlated with each other, changing some variables may influence a change in others. Of course, complex systems rarely work so simply, and this can lead to detrimental decisions, especially when it comes to policy.
One of the potential pitfalls of this mindset revolves around Goodhart’s Law, an adage named after economist Charles Goodhart. Anthropologist Marilyn Strathern summarizes it concisely: “When a measure becomes a target, it ceases to be a good measure.”
Extending the analogy to a science experiment, the measure plays the role of the independent variable while the target is akin to the dependent variable. The analogy is not exact because, under correlation, many measures can be correlated with a single target. It is normally fine to have many measures and one target, as a change in one or more measures can indicate a possible change in the target. For example, the populations of certain frog species are often taken as measures of general environmental health (the target). This obviously does not imply causation, as frogs themselves have only a fractional impact on the environment.
The issue occurs when the system turns the measure into the target. In the case of frogs, if a governmental agency created incentives based solely on the number of frogs in an area, the environment’s health would no longer be the target. Residents of that area might clean up the overall environment to provide more breeding ground for the frogs, but that is very unlikely. Instead, Goodhart’s Law, and by extension the economic theory of rational expectations, states that the overall system will optimize toward the target in the easiest, most efficient way possible. In this hypothetical scenario, people would simply breed more frogs and release them rather than make indirect changes that correlate weakly with the new target. In fact, much the same thing happened with cobras in colonial India: a bounty intended to reduce the cobra population led people to breed cobras for the reward.
In the hypothetical context of frogs, this might not seem so detrimental, but sub-optimal targets are ubiquitous in both business and politics and have wide-ranging consequences. A business that wishes to promote its best people might use sales numbers as the target for promotion to a managerial position, a situation all too recognizable as the Peter Principle. Or, on a broader scale, certain investments, such as housing mortgages, can be set as the target for safe, profitable investing, leading to the familiar 2008 housing market collapse.
In the context of government, educational policy can set test scores as the primary target. This is exactly what happened with the No Child Left Behind policy, which pushed school districts around the country to “teach to the test” rather than provide holistic learning and academic growth. Furthermore, if failure to meet the target results in punishment, it can throw entire systems into disarray and imbalance. In the case of education, underfunded and underperforming schools fell even further behind than before, and stories of cheating scandals flashed across the news.
There is a similar story with immigration, where the target has shifted from fixing the immigration system to barring immigrants altogether. Such a shift can lead to costly, misguided expenditures aimed at a target that has little to do with the central problem. The US economy, for example, depends heavily on immigrant labor in both manual and technical positions; stifling immigration as a whole would create larger problems.
At its core, Goodhart’s Law deals with a misappraisal of focus. Many of the problems that data scientists tackle do not have clear-cut targets, and even those that seem to may have complex ethical implications that span far beyond the target variable. It is essential to remember that most major problems in today’s society do not have simple fixes. One cannot simply turn a measure into a target and solve the fundamental problem without creating new ripples that must also be accounted for. This becomes increasingly true the more complex the problem, and it can hold even for seemingly simple ones. Therefore, it is always good to retain a grain of skepticism when looking at solutions and target variables.