The Ultimate (Evidence-Based) Guide to Recovery

The post-exercise recovery trend has produced mountains of new research. But can you trust the results?

So you want to recover more quickly after your workout, using the latest backed-by-science approach? Great news! According to a new meta-analysis of post-exercise recovery techniques, there are 1,693 studies evaluating the efficacy of various approaches. The problem, unfortunately, is that most of them are crap.

The new analysis, published in Frontiers in Physiology, by a research team led by Olivier Dupuy at the University of Poitiers in France, winnows the candidates down to 99 relatively high-quality studies covering the following techniques: “active recovery, stretching, massage, massage combined with stretching, the use of compression garments, electrostimulation, immersion, contrast water therapy, cryotherapy/cryostimulation, and hyperbaric therapy/stimulation.” And as is so often the case in sports science, what you make of the conclusions will depend on your perspective and expectations.

The first challenge is figuring out what we mean by “faster recovery.” Dupuy and his colleagues looked at several different outcomes, including perceived muscle soreness, perceived fatigue, inflammation (as measured by proxy blood markers like interleukin-6 and C-reactive protein), and muscle damage (as measured by the proxy blood marker creatine kinase). None of these is perfect.

Muscle soreness and fatigue have the obvious limitation of being subject to placebo effects. One of my favorite recovery studies is the Australian experiment finding that ice baths in water at 59 degrees Fahrenheit accelerated recovery compared to tepid baths at 95 degrees—but test subjects found that tepid baths with a special “recovery oil,” a substance they were told was an effective recovery aid, were even better. The catch, of course, is that the recovery oil was simply bath soap. Our expectations dictate our perceived recovery. The blood tests, meanwhile, are difficult to interpret and link to real-world outcomes like athletic performance.

With those caveats in mind, the overall result of the meta-analysis was that active recovery, massage, compression garments, immersion, contrast water therapy, and cryotherapy all had positive effects on perceived muscle soreness. No such luck for electrostimulation, hyperbaric therapy, and the other pretenders. The best results for muscle soreness and fatigue came from massage. The best results for inflammation came from massage and cold exposure.

If you squelch your skepticism for a moment, these results seem to line up reasonably well with the lived experience of athletes. Certainly among the athletes I know, massage and ice baths are recovery priorities (although compression has gained a lot of fans in recent years). The other stuff—electrostim, cryosaunas, and so on—has always seemed more marginal. Of course, it’s possible that massage and ice baths came out on top precisely because athletes already like them. Their preexisting beliefs created the apparent reality.

On that note, one of the interesting details in the new paper is something called a funnel plot, which offers a method of assessing whether a group of studies is skewed by factors like publication bias. If you’re an eager young master’s degree student looking into a new recovery technique, you may only have the time and resources for a small study with, say, a dozen subjects. The results with such a small sample amount to a coin toss: Even if the technique does nothing, you may end up with a strongly positive result—or a strongly negative one—purely by chance. The positive one seems exciting, so you submit it to a journal and share the results with the world. The negative one, on the other hand, suggests that the new technique is a waste of time, so you file the results away in a desk drawer and move on to something else.
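That coin-toss dynamic is easy to simulate. The sketch below is a toy model of my own, not anything from the paper: the dozen-subject sample size echoes the example above, but the zero true effect and the outcome scale are illustrative assumptions. It runs a thousand tiny studies of a technique that does nothing, then "publishes" only the positive ones:

```python
import random
import statistics

def simulate_study(n_subjects, true_effect=0.0, sd=1.0):
    """One small study of a recovery technique: draw each subject's
    outcome and return the observed average effect."""
    return statistics.mean(
        random.gauss(true_effect, sd) for _ in range(n_subjects)
    )

random.seed(42)

# 1,000 dozen-subject studies of a technique with zero real effect.
results = [simulate_study(12) for _ in range(1000)]

# Publication bias: only the exciting, positive results leave the desk drawer.
published = [r for r in results if r > 0]

print(f"mean effect, all studies:       {statistics.mean(results):+.3f}")
print(f"mean effect, published studies: {statistics.mean(published):+.3f}")
```

Run as-is, the full set of studies averages out to roughly zero, while the published subset shows a solidly positive "effect" for a technique that, by construction, does nothing.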

The problem with this eminently human sequence of events is that, over time, predominantly positive results get published without being counterbalanced by the negative ones that would reveal, on balance, that the technique does nothing. So you end up with an overly rosy view of whether things work. That’s one of the reasons Stanford University epidemiologist John Ioannidis famously argues that “most published research findings are false.”

The funnel plot offers a way of checking if this is what’s happening. You plot all the studies on a given topic, mapping the results onto a common scale. On the vertical axis, you plot the precision of the studies; the ones with lots of subjects, which presumably give you the most accurate estimate of the “real” effect, go at the top. The smaller, less-accurate ones go at the bottom. On the horizontal axis, you plot positive effects to the right, negative ones to the left.

If everything is kosher, you’d expect the precise studies at the top of the graph to cluster closely around the average result, while the less-precise studies at the bottom of the graph will tend to scatter more widely with a mix of negative and positive results relative to the average. But here’s the key point: there should be a roughly equal balance between less-precise studies that are unusually negative and unusually positive. That’s equivalent to saying that if you repeatedly flip a coin ten times, the average outcome will be five heads and five tails, but you’ll land on a rare outcome like two heads (and eight tails) roughly as often as its mirror image, two tails (and eight heads). The result will be a bunch of dots that form the shape of a triangle—or an upside-down funnel, which is why it’s called a funnel plot.
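Continuing the toy model (the sample sizes and the zero true effect are again my own illustrative assumptions), you can watch the funnel take shape by simulating honest, bias-free studies at a few precision levels and measuring how widely each band scatters:

```python
import random
import statistics

def observed_effect(n_subjects):
    """Observed mean effect in one honest study when the true effect is zero."""
    return statistics.mean(random.gauss(0.0, 1.0) for _ in range(n_subjects))

random.seed(7)

# Three precision bands: small, medium, and large studies, 200 of each.
studies = {n: [observed_effect(n) for _ in range(200)] for n in (10, 50, 250)}

# Funnel-plot logic: precision (say, sqrt(n)) goes on the y-axis, the
# observed effect on the x-axis. Without bias, every band straddles zero,
# and the scatter narrows as precision grows -- the upside-down funnel.
for n, effects in studies.items():
    print(f"n={n:3d}: mean {statistics.mean(effects):+.3f}, "
          f"scatter (stdev) {statistics.stdev(effects):.3f}")
```

The scatter shrinks roughly in proportion to 1/sqrt(n): the small-study band fans out along the bottom of the plot while the big studies huddle near the true effect at the top. Publication bias would erase the left half of that bottom band, which is exactly the lopsidedness the funnel plot is designed to expose.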

So, with that intro, here’s what the funnel plot looks like for all 57 studies that evaluated perceived muscle soreness:

[Figure: funnel plot of the 57 perceived-muscle-soreness studies. (Frontiers in Physiology)]

Uh-oh. You can see right away that the dots don’t conform to a nice funnel shape. The average outcome across all the studies (the vertical line) is slightly positive, just to the right of zero. But the most-precise studies cluster right around zero, while the less-precise studies are scattered almost entirely on the positive side of the graph. The worse the study, the more glowing its verdict on the recovery technique it’s testing. Take out all those massively positive but poor-quality studies, and the overall picture looks a lot closer to no benefit at all.

For what it’s worth, you can contrast this to the funnel plot for the studies that used blood tests for markers of inflammation or muscle damage. Here’s what that plot looked like:

[Figure: funnel plot of the studies using blood markers of inflammation or muscle damage. (Frontiers in Physiology)]

There are certainly a few outliers here that are skewed to the positive side, but there’s much more of a funnel shape.

It’s worth noting that a skewed funnel plot doesn’t always mean publication bias is at work. It could be that other characteristics systematically differ between the high-precision and low-precision studies. Maybe studies of elite athletes tend to be very small, while big studies tend to recruit sedentary college students with little exercise experience, who respond differently to recovery protocols. But it’s hard to look at that first funnel plot without concluding that serious skepticism is called for when evaluating any single recovery study, no matter how positive its results seem.

Where does this leave us, then? As I said at the top, that depends on your perspective. I’ve heard dozens of explanations for why, say, massage should enhance recovery, from the old (and long debunked) trope of “flushing out lactic acid” to more plausible mechanisms related to the stimulation of cellular repair sensors that respond to stress and strain. But I don’t think we really know what is and isn’t happening during a massage, let alone whether it’s enhancing recovery. The same goes for ice baths and everything else (except, as far as I can tell, sleep and refueling).

On the other hand, I’m not ready to tell athletes they should ditch their ice baths and massages. Those two choices in particular seemed to deliver the best results in the meta-analysis. And more generally, the thing about those placebo-riddled studies showing that various recovery techniques make athletes feel less sore and less fatigued is that, well, they felt less sore and less fatigued. That’s the goal, isn’t it? My own take is that the effects of most of this stuff are marginal at best. I wouldn’t spend a lot of time or money chasing whatever benefits they offer. But if you have a recovery routine that helps you feel better sooner after a big day on the trails, maybe the best advice I can offer is the old saying: Ask me no questions, and I’ll tell you no lies.


Lead Photo: Milles Studio/Stocksy
