For decades, economists would imagine how the world works, write down a theory describing their idea, and call it a day. If some statisticians came along and found support for the theory, well, great! But usually they didn’t, and that was fine too. As one old joke put it, if an idea worked in practice, economists would ask whether it worked in theory.
That began to change with the explosion of affordable information technology, which made it easier to gather and analyze data. By the ’90s, there was such a huge stock of untested theories and such a wealth of new data that it made more sense for young, smart economists to turn their efforts in empirical directions. Unlike in physics, where theory and experiment call for very different skill sets, most economists found they could switch from theory to data relatively easily. Prizes like the prestigious John Bates Clark Medal, awarded to rising stars of economics under age 40, started to flow to people whose work emphasized data.
But there’s a second shift in progress — a sort of Stage 2 of the data revolution in economics. The tools of economists are changing.
The core of economic theory, as it’s practiced today, is based on individual optimization. For example, economists often assume that businesses maximize profits or minimize costs. A model built on this kind of optimization is known as a structural model, because economists usually assume that optimizing behavior represents the deep, fundamental structure of the economy, just as everything in your body is made up of atoms and molecules. Fitting this kind of model to data is called structural estimation, and for a while it formed the core of empirical economics.
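To make “structural” concrete, here is a minimal sketch in Python (the firm, its cost curve, and all numbers are invented for illustration, not taken from any actual study): assume a price-taking firm with cost b·q² that picks output to maximize profit, so it supplies q* = p/(2b). Structural estimation then means inverting observed prices and quantities to recover the deep parameter b.

```python
import numpy as np

# Toy structural model (all numbers invented): a price-taking firm with
# cost c(q) = b * q**2 chooses output q to maximize profit p*q - c(q).
# The first-order condition p = 2*b*q implies the supply rule q* = p / (2*b).

rng = np.random.default_rng(0)

TRUE_B = 0.8                          # the "deep" structural parameter
prices = rng.uniform(5.0, 15.0, 200)  # observed market prices
# Observed quantities: optimal supply plus measurement noise
quantities = prices / (2 * TRUE_B) + rng.normal(0.0, 0.3, 200)

# Structural estimation: take the optimization structure as true, then
# choose b to fit the data. OLS through the origin on q = p * (1/(2b))
# recovers the slope 1/(2b), which we invert to get b.
slope = (prices @ quantities) / (prices @ prices)
b_hat = 1.0 / (2.0 * slope)
print(f"true b = {TRUE_B}, estimated b = {b_hat:.3f}")
```

The estimate of b is only as good as the assumed structure: the inversion from data back to parameter runs entirely through the optimization story.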
But structural estimation has its limitations. Since structural models are usually very complicated, the answers they give to simple questions — for example, “How many people will lose their jobs if we raise the minimum wage?” — can be very sensitive to the assumptions of the model. Tweak one assumption, and the answer might come out completely wrong.
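Here is one hedged illustration of that fragility, again with invented numbers and functional forms: suppose the true relationship between wages and employment is a constant-elasticity curve, while the analyst’s structural model assumes a straight line. Both fit the observed data about equally well, yet they give very different answers to the minimum-wage question.

```python
import numpy as np

# Fragility sketch (all numbers invented): the "true" labor demand curve
# is constant-elasticity, but the analyst's structural model assumes it
# is linear. Both fit the observed wage range well; the minimum-wage
# counterfactuals they imply diverge badly.

def true_demand(w, A=1000.0, eps=1.5):
    """True employment at wage w: L(w) = A * w**(-eps)."""
    return A * w ** (-eps)

rng = np.random.default_rng(1)
wages = rng.uniform(8.0, 12.0, 300)            # wages observed in the data
jobs = true_demand(wages) + rng.normal(0.0, 0.5, 300)

# Analyst's assumption: linear demand L(w) = a + b*w, fit by OLS.
X = np.column_stack([np.ones_like(wages), wages])
a_hat, b_hat = np.linalg.lstsq(X, jobs, rcond=None)[0]

# Counterfactual: raise the minimum wage from 12 to 15.
w0, w1 = 12.0, 15.0
pred_loss = -b_hat * (w1 - w0)                 # model-implied job loss
true_loss = true_demand(w0) - true_demand(w1)  # actual job loss
print(f"model predicts {pred_loss:.1f} jobs lost; the truth is {true_loss:.1f}")
```

In this made-up setup the misspecified model more than doubles the predicted job losses; a different wrong assumption could just as easily have understated them.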
So in recent years, many economists have been turning to an alternative approach, essentially chucking theory out the window. Instead of a complicated model about optimization and utility functions and blah blah blah, just look for a case where some kind of random change in the economy — a so-called natural experiment — offers a window into an important question. For example, you could study a random influx of refugees to answer the question of how immigration affects local labor markets. You don’t need a complicated theory of how workers and companies behave; all you need is a simple linear model of how X affects Y.
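Here is what that looks like in practice, as a minimal difference-in-differences sketch (one common quasi-experimental design; the cities, wage levels, and effect size below are all simulated, not real data):

```python
import numpy as np

# Difference-in-differences sketch (all data simulated): one "treated"
# city receives a sudden refugee inflow; comparison cities do not.
# The effect on local wages falls out of four group averages, with no
# model of how workers and companies behave.

rng = np.random.default_rng(2)
n = 500                # workers sampled per city per period
TRUE_EFFECT = -0.02    # invented: a 2% dip in log wages from the inflow

# Log wages = city level + shared time trend (+ treatment effect) + noise
treated_pre  = 2.50 + rng.normal(0.0, 0.10, n)
treated_post = 2.50 + 0.03 + TRUE_EFFECT + rng.normal(0.0, 0.10, n)
control_pre  = 2.40 + rng.normal(0.0, 0.10, n)
control_post = 2.40 + 0.03 + rng.normal(0.0, 0.10, n)

# Diff-in-diff: the treated city's change minus the control cities'
# change nets out both the shared trend and fixed city differences.
did = (treated_post.mean() - treated_pre.mean()) \
    - (control_post.mean() - control_pre.mean())
print(f"estimated effect: {did:+.3f} (true: {TRUE_EFFECT:+.3f})")
```

The entire “model” here is a comparison of averages, which is exactly the point: its credibility rests on the refugee inflow being as good as random, not on any assumptions about behavior.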
The chief evangelists of this approach are the economists Joshua Angrist and Jörn-Steffen Pischke. They have called the rise of natural experiments — also known as quasi-experimental methods — the “credibility revolution,” and their book on the subject is titled “Mostly Harmless Econometrics.” The implication is that quasi-experimental studies, because they are more humble than structural models, are also less likely to give us the wrong answers to our most important questions.
—
Post from Bloomberg