We analyze how academic experts and nonexperts forecast the results of 15 piece-rate and behavioral treatments in a real-effort task. The average forecast of the experts closely predicts the experimental results, with a strong wisdom-of-crowds effect: the average forecast outperforms 96 percent of individual forecasts. Citations, academic rank, field, and contextual experience do not correlate with forecast accuracy. Experts as a group do better than nonexperts, but not when accuracy is defined as rank-ordering the treatments. Measures of effort, confidence, and revealed ability are somewhat predictive of forecast accuracy and allow us to identify “superforecasters” among the nonexperts.
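
To make the wisdom-of-crowds comparison concrete: the 96 percent figure amounts to computing the error of the mean forecast and asking what share of individual forecasters it beats. The following is a minimal sketch, assuming a mean-absolute-error accuracy measure and simulated data; the array names (`forecasts`, `actuals`) and all numerical parameters are hypothetical illustrations, not the paper's data or exact metric.

```python
import numpy as np

# Hypothetical data: forecasts[i, j] is forecaster i's prediction for
# treatment j; actuals[j] is the realized experimental result.
rng = np.random.default_rng(0)
n_forecasters, n_treatments = 200, 15
actuals = rng.uniform(1500, 2200, size=n_treatments)
forecasts = actuals + rng.normal(0, 150, size=(n_forecasters, n_treatments))

# Accuracy metric (illustrative): mean absolute error across treatments.
individual_mae = np.abs(forecasts - actuals).mean(axis=1)

# Wisdom of crowds: the error of the average forecast ...
crowd_mae = np.abs(forecasts.mean(axis=0) - actuals).mean()

# ... compared against each individual forecaster's error.
share_beaten = (individual_mae > crowd_mae).mean()
print(f"crowd MAE = {crowd_mae:.1f}, beats {share_beaten:.0%} of individuals")
```

Under these assumptions the mechanism is the familiar one: when forecasters are noisy but roughly unbiased, averaging cancels idiosyncratic error, so the crowd mean typically outperforms most individuals.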