

Xavier Gabaix, New York University and NBER

Discussions of published papers are seldom read. Certainly they are almost never cited. So, how to make good use of those Macroeconomics Annual pages? I thought it might be useful to write a user-friendly introduction to the ideas underlying Acemoglu, Akcigit, and Kerr’s very interesting paper: the “macro from microshocks” approach that started with Long and Plosser (1983), which is tightly linked to something I have called the “granular” hypothesis (Gabaix 2011).1

In that view, the primitive shocks to the economy are not mysterious aggregate productivity shocks, or aggregate demand shocks, but understandable shocks to Nokia, Microsoft, demand for Boeing planes, and so forth. This may explain the behavior of significant macroaggregates, and in general be a great, fecund source of insight for macro.

I will explain why I think that this approach is promising; as a preview, the main reasons are as follows.

1. Microshocks may be important to understand the business cycle: rather than “there was a [mysterious] productivity shock,” we can have more concrete and understandable explanations like “there was a strike at General Motors,” “there was large demand for Boeing planes,” “Nokia lost market share,” and so forth. They may also be quite important quantitatively, with some estimates (reviewed below) attributing to them 30–50% of GDP fluctuations.

2. They allow us to understand the origins of volatility.

3. They are a source of (plausible) instruments for macroeconomics—something very rare and precious.

4. They may allow us to understand the nature of “multipliers.”

5. They allow us to trace how a shock propagates in the broad economy: this is what Acemoglu et al. do particularly well.

6. They ought to be useful for predictions.

7. They have a great promise for international transmission, for example, of Fed shocks to the exchange rate, exports, and so forth.

I like Acemoglu et al.’s paper a lot. It has useful analytics and interesting empirics. I trust that it will be imitated by other teams, with other data sets. Let me show how it fits into this broader context.

How Microshocks Affect Macro Outcomes

I start with a basic question. What is the impact of a micro TFP shock on the macroeconomy? Perhaps surprisingly, one can obtain a clean, definite answer.

What Is the Impact of a TFP Shock?

To see this, we need some notation. Consider a general economy with N goods. The representative consumer’s utility is u(c_1, …, c_N), and firm i (or “sector” i; I use the terms interchangeably, since firms are more concrete and help the intuition) produces

(1)  Gross output of firm i:  Y_i = e^{z_i} F_i(L_i, K_i, (X_{ij})_{j=1,…,N})

using labor L_i, capital K_i, and inputs X_{ij} from firm j. The net production of good i goes into consumption:

(2)  Net output of good i:  Y_i − Σ_j X_{ji} = c_i

and the value added of firm i is:

(3)  Value added of firm i:  V_i = p_i Y_i − Σ_j p_j X_{ij}

while the sales (value of gross output) of firm i are:

(4)  Sales of firm i:  S_i = p_i Y_i

In the end, GDP is:

(5)  GDP:  Y = Σ_i V_i = Σ_i p_i c_i

and the resource constraint is: Σ_i K_i = K, Σ_i L_i = L.

Now suppose that there is a TFP shock dz_i to firm i: what happens to GDP? We will calculate d ln TFP, which is also d ln Y when aggregate factors K and L are held constant.

One plausible answer might be “we need to know the whole input-output structure of the economy to know that.” This is indeed the impression one gets from reading Long and Plosser (1983), and the literature building directly on it.

However, this plausible answer is not correct.

Another plausible answer would be that it is the shock times the value-added share of firm i:

d ln TFP =? (V_i/Y) dz_i = (Value added of firm i / GDP) × (TFP shock to firm i).
That plausible answer is not correct either.

The correct answer is: the impact is the sales of the firm, divided by GDP, times the firm’s TFP shock.

(6)  d ln TFP = (S_{it}/Y_t) dz_{it} = (Sales of firm i / GDP) × (TFP shock to firm i).

Note that you do not need to know the whole input-output matrix for this. It is enough to know the sales of firm i: they summarize the input-output impacts in one neat, easily observable quantity, the sales of firm i over GDP. This is Hulten’s (1978) result. I give a compact proof in Gabaix (2011). The quick intuition is as follows. Imagine that there is no reallocation of inputs (capital, labor, intermediate inputs); this is warranted by the envelope theorem. Then, from (1) alone, firm i produces dz_i percent more of its output Y_i, which has a price (social value) p_i. So GDP has increased by p_i Y_i dz_i = S_i dz_i. In practice, there is also some reallocation of inputs, but that does not matter for welfare, that is, for TFP.
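To make this concrete, here is a minimal numerical sketch (a toy example of my own, not from the paper): a two-sector Cobb-Douglas economy in which firm 2 sells an intermediate input to firm 1, and a planner splits one unit of labor between them. Consistent with (6), the GDP impact of each firm’s TFP shock is its sales over GDP (its Domar weight), not its value-added share.

```python
import numpy as np

# Toy two-sector economy (my own illustration, not the paper's data):
# firm 2 makes an intermediate good:   y2 = exp(z2) * L2
# firm 1 makes the final good (= GDP): Y  = exp(z1) * L1**b1 * y2**b2
b1, b2 = 0.7, 0.3

def gdp(z1, z2, n_grid=200_001):
    """Planner's GDP: maximize final output over the labor split L1 + L2 = 1."""
    L1 = np.linspace(1e-9, 1 - 1e-9, n_grid)
    c = np.exp(z1) * L1**b1 * (np.exp(z2) * (1 - L1))**b2
    return c.max()

dz = 1e-4                       # small TFP shock
base = np.log(gdp(0, 0))
# Domar weights: S1/Y = 1 (firm 1's sales ARE final output), S2/Y = b2
# (firm 2's sales are the cost share b2 of firm 1's revenue).
# Value-added shares: V1/Y = b1 = 0.7, V2/Y = b2 = 0.3.
print((np.log(gdp(dz, 0)) - base) / dz)   # ≈ 1.0 = S1/Y, not V1/Y = 0.7
print((np.log(gdp(0, dz)) - base) / dz)   # ≈ 0.3 = S2/Y
```

The reallocation of labor is second order at the optimum (the envelope theorem at work), which is why the brute-force maximization reproduces Hulten’s sales weights.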

Of course, another important puzzle in macro is the source of comovement.2 For that, the input-output structure does matter, as the paper illustrates. The point here is solely that it does not matter to predict GDP (in this most basic frictionless model).

Now, if each of the N firms has a shock, we can contrast the direct impact:

(7)  (d ln Y_t)_{Direct impact} = Σ_{i=1}^N (V_{it}/Y_t) dz_{it}

with the total impact, keeping factor use (K, L) constant:

(8)  (d ln Y_t)_{Constant factors} = d ln TFP_t = Σ_{i=1}^N (S_{it}/Y_t) dz_{it}.

Now, if there is flexible factor use, for example flexible labor supply or capacity utilization, we find:

(9)  d ln Y_t = μ Σ_{i=1}^N (S_{it}/Y_t) dz_{it}

where μ ≥ 1 is a multiplier (e.g., from increased labor supply or capital utilization).3

In the “granular” approach, the shocks to the economy are as in (9): the underlying economy is not a smooth continuum; it is made of incompressible “grains” of economic activity (firms, or fine-grained sectors) that affect GDP.

We note another consequence. If shocks are uncorrelated, σ_{Yt}² = μ² Σ_i (S_{it}/Y_t)² var(dz_{it}) = μ² Σ_i (S_{it}/Y_t)² σ_i², so GDP volatility is:

(10)  σ_{Yt} = μ (Σ_i (S_{it}/Y_t)² σ_i²)^{1/2}.

For instance, if we have equally sized sectors, S_{it}/Y_t = 1/N, and equal volatility σ_i = σ, then:

(11)  σ_{Yt} = μσ/√N.
We shall use these formulas soon.
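As a quick numerical illustration of (10) and (11) (with made-up sales shares, purely for illustration): equal sizes give the 1/√N benchmark, while one dominant “grain” keeps aggregate volatility high.

```python
import numpy as np

mu, sigma = 1.0, 0.12        # multiplier and common micro volatility (assumed)
N = 4

w_equal = np.full(N, 1 / N)  # equally sized sectors: S_i/Y = 1/N
sigma_equal = mu * sigma * np.sqrt((w_equal**2).sum())
print(sigma_equal, mu * sigma / np.sqrt(N))   # (11): both give 0.06

w_granular = np.array([0.7, 0.1, 0.1, 0.1])   # one dominant firm
sigma_granular = mu * sigma * np.sqrt((w_granular**2).sum())
print(sigma_granular)        # ≈ 0.087: the big grain keeps volatility high
```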

What Are “Microeconomic Shocks,” Anyway?

This literature started with Long and Plosser (1983), which is a great model. But Long and Plosser had only N = 6 sectors, so some might argue that they did not have “microshocks” in their quantification. Indeed, Dupor (1999) and Horvath (2000) ask: What if you had N = 600 or six million sectors? Wouldn’t volatility go down to 0 very fast? Dupor’s answer is that, as in formula (11), you would get a very small volatility, decaying like 1/√N. Horvath had a proposal to get around the difficulty, based on the notion of sparse input-output matrices, but that potential explanation remained conceptually murky.

One solution was to have huge multipliers. For instance, Jovanovic (1987) proposes a model where the multiplier is of order √N. That does generate sizable fluctuations. However, empirical multipliers do not seem nearly that big.

Another possibility has been that of local, nonlinear effects: Bak et al. (1993) was a pioneering paper in that vein, and Nirei (2006) further developed the idea. However, they have yet to be confronted more systematically with the data.

In Gabaix (2011), I propose another take. Even with N = 10 million firms/industries, those effects survive. Why? Recall formula (10) above. With a thin-tailed distribution of sizes (e.g., S_i/Y = 1/N, or more generally S_i drawn from a finite-variance distribution):

(12)  σ_{Yt} ∼ μ σ̄ /√N,

which goes to 0 very fast as N grows.
However, the firm size distribution is fat-tailed; it is closely Zipf distributed: P(S > x) ∼ k/x^ζ with ζ ≃ 1 (Gabaix 1999, 2009; Axtell 2001). In that case, GDP volatility decays as follows:

(13)  σ_{Yt} ∼ μ σ̄ / ln N.

The decay is in ln N rather than √N. So even with 10 million firms/industries, those effects survive. That is because, plainly, there are big firms (and sectors).
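The contrast between the √N and ln N regimes is easy to see in a small Monte Carlo (a sketch with parameters of my choosing: a Zipf-like tail exponent of 1.05 versus a thin-tailed lognormal benchmark):

```python
import numpy as np

rng = np.random.default_rng(0)

def granular_vol(N, zipf, sigma_i=0.12, n_draws=400):
    """Average sigma_Y from (10), with mu = 1, over random draws of firm sizes."""
    vols = []
    for _ in range(n_draws):
        if zipf:
            S = rng.pareto(1.05, N) + 1.0    # fat tail, exponent ~1 (Zipf-like)
        else:
            S = rng.lognormal(0.0, 0.1, N)   # thin-tailed benchmark
        w = S / S.sum()                       # sales shares S_i/Y
        vols.append(sigma_i * np.sqrt((w**2).sum()))
    return np.mean(vols)

for N in (100, 10_000):
    print(N, granular_vol(N, zipf=True), granular_vol(N, zipf=False))
# Thin tails: volatility falls by a factor ~10 from N=100 to N=10,000, i.e., as 1/sqrt(N).
# Zipf tails: it falls far more slowly; the largest firms still dominate.
```

With thin tails the Herfindahl collapses and idiosyncratic shocks wash out; with a Zipf tail the largest “grains” keep the Herfindahl, and hence σ_Y, from vanishing.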

There is a plainer way to see that those effects can be big. Take (10). Empirically, the root-Herfindahl is (Σ_{i=1}^N (S_{it}/Y_t)²)^{1/2} ≃ 5.3%, and the micro-level TFP volatility is σ̄_i ≃ 12% (this is actually quite delicate to measure). More tentatively, take μ ≃ 2.6 (see Gabaix 2011 for justifications). Then, the GDP volatility generated by idiosyncratic shocks is:

(14)  σ_{Yt} ≃ 2.6 × 5.3% × 12% ≃ 1.7% per year.
This is clearly of the right order of magnitude.
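The back-of-the-envelope arithmetic, using the numbers quoted above:

```python
mu, root_herf, sigma_bar = 2.6, 0.053, 0.12   # values quoted in the text
sigma_Y = mu * root_herf * sigma_bar           # formula (10) with a common sigma_i
print(f"{sigma_Y:.1%}")                        # → 1.7%
```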

Another implication of granularity is to emphasize the potential importance of networks (Acemoglu et al. 2012; Carvalho 2014). Those large firm-level shocks propagate through networks, which allows us to trace interesting effects, as in Acemoglu et al.

A terminological note may be useful. Sometimes, authors contrast the “direct impact” (7) with the “network effects” (8). This is fine. But the whole “granular” impact is still (8) and even (9). Economically, there is nothing particularly special about the “direct” impact. To put it differently: networks are an expression of granularity, rather than an alternative to it. If all firms had small sales, the central limit theorem would hold, and idiosyncratic shocks would all wash out, as in (11) and its variants.

Still, the “multipliers” are interesting in themselves. I like the authors’ multiplier calculations (e.g., in Section IV.B). Note that as the ratio of sales to value added in the economy is about 2, in general we can expect the “production” multiplier (i.e., the one coming simply from the production function) to be about 2. When multipliers are larger, one would like to know more about their origins; see below.

Promises for the Granular Approach

I now list major reasons why this approach might be useful, as shown by Acemoglu et al.4

Granular Shocks May Be Important for Aggregate Fluctuations

Microshocks may be important to understand the business cycle, or perhaps even more the behavior of other macroaggregates, such as exports. In Gabaix (2011), I quantify that granular shocks account for about 1/3 of GDP volatility. Foerster, Sarte, and Watson (2011) find: “The role of idiosyncratic shocks increased considerably after the mid-1980s, explaining half of the quarterly variation in Industrial Production.” Atalay (2014) finds even bigger effects. Di Giovanni, Levchenko, and Mejean (2014) have great French data. In their finding, “the standard deviation of the firm-specific shocks’ contribution to aggregate sales growth amounts to 80% of the standard deviation of aggregate sales growth in the whole economy.” Carvalho and Grassi (2015) make conceptual progress on those issues too.

More research along these lines, quantifying the importance of microeconomic shocks, is under way.

Granular Shocks May Allow Us to Understand the Patterns in Macroeconomic Volatility

Di Giovanni and Levchenko (2012) want to understand export fluctuations. They find that the preponderance of large firms and their role in aggregate volatility can help explain two empirical regularities: (a) smaller countries are more volatile, and (b) more open countries are more volatile. In Carvalho and Gabaix (2013), we find that the “fundamental” volatility coming from (10) has good predictive power for actual GDP volatility, and indeed explains the great moderation and its undoing, via changes in the sectoral composition of the US economy.

All in all, the evidence is accumulating, made possible by new, disaggregated data (see also Atalay et al. 2011).

Microshocks Are a Source of Instruments for Macroeconomics

One potentially great feature of granular shocks is that they are a plausible source of instruments in macroeconomics. This is very important, as in business-cycle-frequency macro, instruments are woefully rare.

One example I like is Amiti and Weinstein (2013). They identify idiosyncratic bank-specific shocks and trace their impact on investment. In their finding, “We show that these [idiosyncratic] bank supply shocks explain 40% of aggregate loan and investment fluctuations.” This seems like a good way to make progress in seeing the impact of financial shocks.

In general, in future research, the use of idiosyncratic shocks as instruments for macro seems very fecund.

Microshocks Can Allow Us to Better Understand “Multipliers” and Propagation

Those shocks need multipliers, like the μ above. The plainest ones are labor supply and capacity utilization of capital. Another candidate is the relaxation of credit constraints. Fancier ones might work via news, expectations, or imitation. Which ones are more important, and when? It would be great to know.

In my view, that is where Acemoglu et al.’s method and findings might be extended most, empirically and conceptually. The authors find repeated evidence of changes in output rippling through the network, but we do not quite know whether they are quantitatively the ones expected from the Cobb-Douglas model, or whether there is something more.

We Can Show How a Shock Propagates in the Broad Economy

Another payoff is that we can trace how shocks propagate in the whole economy: in time, geographically, and through the network. This is something that Acemoglu et al. do particularly well.

Some other recent papers also do this kind of “tracing.” Barrot and Sauvagnat (2015) use well-identified shocks coming from natural disasters, with careful controls for causality via other, nonaffected suppliers/customers. Similarly, Carvalho et al. (2015) trace the impact of a Japanese earthquake.

I expect to see many more such papers. It would be great if they took up the challenge of tracing exactly why the shock propagates: which of the multipliers μ listed above matters.

In Principle, Microshocks Ought to Be Useful for Predictions

Are they? This is quite underexplored, in my opinion. In Gabaix (2011), I find that the lagged “granular residuals” do have extra predictive power, but it would be nice to know whether that is true more generally.

They Have Great Promise to Think about the International Transmission of Shocks

One would like to trace the impact of trade shocks, policy changes by central banks, and so forth, in the trade network. Little work on that has been done—though Acemoglu et al.’s example with the Chinese import demand shock is quite interesting. More on this would be welcome. This would also impact asset prices. Indeed, in models of imperfect finance (Gabaix and Maggiori 2015 and its online appendix) demand shocks for currencies also have “network” ripple effects on exchange rates—in theory. Investigating that empirically would be very nice.


In conclusion, this is a very exciting line of research, and Acemoglu et al. is a very useful step forward. For the reasons listed above, I am quite hopeful that we will make much progress on macro by continuing along that line.


In these economies, there is a danger of double counting. The authors do not make any such mistake, of course. But just for clarity, and because the issue is important for welfare, I take some time to illustrate the issue here. Suppose that there are just two sectors. Sector 1 is final consumption and sector 2 is just an intermediary good. There is 1 unit of labor, and we normalize the wage to 1. Production functions are:

y_1 = e^{z_1} L_1^{b_1} x_2^{b_2},   y_2 = e^{z_2} L_2,   with b_1 + b_2 = 1,

where x_2 = y_2 is the quantity of the intermediate good used by sector 1, so the aggregate production function is:

c = y_1 = e^{z_1 + b_2 z_2} L_1^{b_1} L_2^{b_2}.

The planner’s problem is:

max_{L_1, L_2} c   s.t.   L_1 + L_2 = 1.

The first-order condition equalizes marginal products, b_1 c/L_1 = b_2 c/L_2, so L_i = b_i, and Y = y_1 = e^{z_1 + b_2 z_2} b_1^{b_1} b_2^{b_2}, y_2 = e^{z_2} b_2.

We start from z_i = 0 initially. Hence, a shock dz_2 implies:

d ln y_1 = b_2 dz_2,   d ln y_2 = dz_2,

so the sum of all the increases in outputs is (1 + b_2) dz_2, but the GDP shock is:

d ln Y = d ln y_1 = b_2 dz_2.
Hence, for GDP, you want to calculate the final output, not the sum of all increases in outputs.
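Here is a closed-form numerical check of this example (assuming, as in the algebra above, y_1 = e^{z_1} L_1^{b_1} y_2^{b_2} and y_2 = e^{z_2} L_2 with b_1 + b_2 = 1; the labor allocation L_i = b_i does not move with the shock):

```python
import numpy as np

b2 = 0.3
b1 = 1 - b2

def outputs(z2, z1=0.0):
    """Optimal outputs with L1 = b1, L2 = b2 (the allocation is shock-invariant)."""
    y1 = np.exp(z1 + b2 * z2) * b1**b1 * b2**b2   # final good = GDP
    y2 = np.exp(z2) * b2                           # intermediate good
    return y1, y2

dz2 = 1e-6
y1a, y2a = outputs(0.0)
y1b, y2b = outputs(dz2)

dlnY = np.log(y1b / y1a)                            # GDP response
sum_increases = np.log(y1b / y1a) + np.log(y2b / y2a)

print(dlnY / dz2)            # b2 = 0.3: the correct (Hulten) answer
print(sum_increases / dz2)   # 1 + b2 = 1.3: double counting the intermediate
```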


Contact: . I thank Daron Acemoglu for helpful comments and Chenxi Wang for editorial assistance. For acknowledgments, sources of research support, and disclosure of the author’s material financial relationships, if any, please see

1. I build on earlier surveys on power laws (Gabaix 2009, 2016).

2. Simple calibrations indicate that, indeed, a large and realistic amount of comovement between sectors does arise from linkages (as in Carvalho and Gabaix 2013, section IV.B), though understanding the precise nature of comovement remains a fairly open question.

3. Note also that one may fear some double counting, as Σ_i(S_{it}/Y_t) is greater than one (in practice, it is about 2). But there is none: Domar weights summing to more than one is exactly what the economics requires.

4. I leave aside the interesting applications to financial stability already cited in Acemoglu et al. (Allen and Gale 2000; Acemoglu, Ozdaglar, and Tahbaz-Salehi 2015a; Elliott, Golub, and Jackson 2014).