You can find information about building muscle and burning fat here, there and everywhere.
Unfortunately, the media passion for pointless non-stories means that the vast majority of what passes for health and fitness news is either completely meaningless or plain stupid.
Most of these stories are a complete waste of your time.
In fact, they're worse than useless, because they distract you from the things that are really important and mislead you into thinking you've discovered something of value.
So, I've enlisted master detective Sherlock Holmes to help you separate the good stuff from the junk, future-proof yourself against bullshit, and avoid stumbling over the fitness myths of the past.
1. DON’T LET USELESS FACTS ELBOW OUT USEFUL ONES
You’ll often see a story in a newspaper or on TV about the health benefits of a certain food. A week later, you’ll read about the dangers of the very same food, with warnings by an expert to stay away from it.
On Monday, vitamin C is being touted as the cure for cancer. By Friday, it causes cancer. It’s easy to become so confused that you end up ignoring it all.
If you want to avoid, as Holmes would put it, the useless facts elbowing out the useful ones, you need to be very careful about the information you pay attention to.
Important differences in the way studies are performed often explain why different trials appear to throw up contradictory results. It's not that the research "keeps changing all the time," as some people like to think. However, by the time reports of a study reach your favorite newspaper or magazine, these differences have been glossed over or simply ignored.
Much of the problem is caused by PR companies sending an endless stream of press releases announcing “breakthrough” research to every TV channel, radio station and newspaper in the country.
Press releases are written purposely in a news format. They save journalists the time and trouble of researching the subjects on their own. Entire sections of a press release can be simply “cut and pasted” with little or no editing.
Sometimes as many as half the stories appearing in your favorite newspaper are based solely on press releases. Usually, they’re mixed right in with other stories. Unless you’ve done the research yourself, you won’t be able to tell the difference.
When you see “breakthrough” research being cited, remember that the source could simply be another company trying to sell you something, be it a product or idea.
Even press releases put out by some medical journals exaggerate the importance of findings. In a study of 127 press releases produced by nine journals, many failed to give the full statistical information needed to put the findings into context. Industry funding was acknowledged in only 22% of the studies that had received it.
In many cases, journalists report on research that’s still at an early stage, but present it as though firm conclusions have been drawn.
Scientific conferences are intended to provide a forum for researchers to present new work to colleagues. The presentations represent work in progress, and many projects fail to live up to their early promise. However, press coverage often gives the false impression that the findings are widely accepted.
The main goal of a newspaper or magazine is to sell more copies. They’ll achieve this with a sensational headline that reads something like “vitamin C causes cancer” rather than one that accurately reflects the findings of a study. True scientific breakthroughs are very rare, and progress is often painstakingly slow.
If you see or hear about a study that appears to contradict everything else on the subject, take a deep breath, step back and ask how the findings measure up to previous research on the topic.
One study is not really news. Think of it as a single piece of a jigsaw puzzle. It’s only when you put the pieces together by comparing several studies in a given field that you get an accurate picture of what’s really going on.
2. CONCENTRATE YOURSELF UPON DETAIL
The only real way to cut through information overload and protect yourself from being ripped off by fitness marketing hype is to learn a few of the basics about how to read fitness research.
Armed with a little knowledge, you’ll be able to tell instantly whether a magazine article, TV advertisement or newspaper story on diet and exercise is accurate or not.
Many of the studies you see referenced in magazines, newspapers and on the Internet are available online. Some are free to read, while others require a subscription to the journal in which the study was published or a one-off payment.
Sifting through the research may seem a daunting task. But it doesn’t have to be. Here are some important questions to ask.
Is the journal any good? Most professional journals are peer reviewed, which means that articles submitted for publication are scrutinized by a panel of experts to see if the information they provide is accurate.
The review process can last many months. During this time, the authors may have to revise their article based on feedback from the reviewers. Some studies are rejected altogether.
Research journals are also ranked according to something called their impact factor, which gives you a rough idea about the quality of the publication in question.
However, the fact that a study is published in a peer-reviewed journal with a high impact factor doesn’t mean you should accept the findings without question.
I’ve come across trials in peer-reviewed journals so poorly done that I wonder if anyone ever bothered to actually read them. Rather than being peer reviewed, some papers appear to have been “pal reviewed” by a buddy of one of the study authors. And some journals seem so desperate for new material that they’d probably publish your shopping list.
Dr. Ranjit Kumar Chandra, for example, published a study in the journal Nutrition claiming that his patented multi-vitamin formula could reverse memory problems in people over the age of 65.
However, the same study had been previously submitted to the British Medical Journal and subsequently rejected after a review by a statistical expert, who stated that the study had “all the hallmarks of having been completely invented.”
Dr. Richard Smith, the editor of the British Medical Journal at the time, said scientists who reviewed the paper had found the methods and statistical findings so unlikely that they wondered whether the study had actually been done.
During Chandra’s divorce trial it came to light he had about 120 bank accounts spread around the world, mostly in tax havens, housing $2 million. It’s believed most of the money came from studies he was paid to conduct but failed to complete.
Rod Whiteley has put together a great presentation on the subject of how to lie in sports medicine using statistics, which you can watch in the video below (and for those who don’t get sarcasm, he’s being sarcastic).
Always keep your skeptical hat on, no matter how reputable the source of information might first appear.
Was the research funded by a company with a financial interest in the outcome? Every study has to be paid for by somebody. Just because a trial has been funded by a company with a vested interest in the outcome doesn’t mean you should ignore it. However, be very cautious if the research is sponsored, especially if drugs or supplements are involved.
A five-year investigation into the inner workings of the National Institutes of Health by David Willman, a Pulitzer Prize winning reporter for the Los Angeles Times, reveals that more than $2.5 million in drug company consulting fees have been paid to top officials and scientists who oversee the clinical trials of drugs.
And that’s just the tip of the iceberg.
In the worst case of scientific fakery to come to light in two decades, a researcher who worked at the University of Vermont admitted that he fabricated data in 17 applications for federal grants to make his work seem more promising. This helped him win nearly $3 million in government funding.
And according to an article in The Observer, pharmaceutical giants hire ghostwriters to produce articles — then put doctors’ names on them. Many articles written by so-called independent academics may have been penned by writers working for agencies that receive huge sums from drug companies to plug their products.
Who were the test subjects? Were they old or young, trained or untrained?
Some strength-training studies involving sedentary or elderly groups show large gains in strength, several hundred percent in some cases. That may sound impressive, but the gains may come from individuals whose leg press improved from 10 pounds to 30 pounds. At 30 pounds, they're still very weak compared with younger people.
If you take an overweight beginner and get them to eat less and exercise more, they’re going to make rapid progress no matter what type of program they follow. But that doesn’t mean that the results will apply to someone with a few years of training under their belt who wants to go from “good” to “great” shape.
Was it properly controlled? Studies involving food supplements, sports drinks, energy bars and other performance aids should be double-blind and placebo-controlled.
A placebo is a “fake” supplement used to reduce the influence of faith and belief in a treatment on the results of a study. As you’ll see in the video below, the placebo effect can be very powerful indeed.
Double-blind means that neither the researchers nor the test subjects know who is getting the real supplement or drug and who is getting the placebo. If either side knows, it can have a big influence on the results.
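To make the idea concrete, here's a minimal sketch of how blinded assignment can work. Everything here is invented for illustration (a hypothetical 12-person supplement trial): a third party does the randomization and holds the code key, while everyone actually running the trial sees only neutral labels.

```python
import random

random.seed(7)

# Hypothetical 12-subject supplement trial (names and sizes are made up).
subjects = [f"S{i:02d}" for i in range(1, 13)]

# A third party randomizes and keeps the key. Everyone running the trial
# sees only the neutral codes "A" and "B" on identical-looking pills.
code_key = dict(zip(["A", "B"], random.sample(["supplement", "placebo"], 2)))

# Balanced assignment: shuffle the subjects, then split them in half.
random.shuffle(subjects)
assignments = {s: ("A" if i < 6 else "B") for i, s in enumerate(subjects)}

# Neither the subjects nor the researchers know which code is the placebo,
# so expectations can't bias dosing, measurement or reporting.
# The key is revealed only when it's time to analyze results by group.
groups = {s: code_key[code] for s, code in assignments.items()}
print(sorted(set(groups.values())))  # ['placebo', 'supplement']
```

The point of the neutral codes is that even the person handing out the bottles can't accidentally signal which group a subject is in.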
What type of study was it? There are three types of experiments that are usually done to evaluate the potential link between diet and health.
The first of these are metabolic studies, where researchers have complete control over a subject's diet for days or weeks at a time. These are used to determine whether a certain nutrient, food or diet affects certain biomarkers, such as cholesterol levels.
The problem here is that these trials rarely mimic real life and usually aren't long enough to tell us how diet affects health over the long term.
At the other end of the experimental spectrum are observational studies, where large numbers of healthy subjects are recruited and what they eat for months or years is recorded.
Unfortunately, this sort of study usually has a myriad of confounding factors, which aren't always measured. Put simply, these are hidden factors that vary between groups, and whose effects scientists may mistakenly attribute to the variables actually being measured.
The link between diet and heart disease, for example, has always been controversial. Large studies may show an association, or link, between various nutrients in the diet and an increase or decrease in the risk of heart disease.
But association does not mean causation.
For example, some studies show that people watching TV for more than four hours each day are more likely to be obese than those watching TV for less than one hour.
Does this mean that the TV is making you fat?
Are broadcasters secretly implanting subliminal messages in their programs compelling you to eat more?
Does your TV emit a special form of radiation developed in a top-secret government laboratory designed to create more compliant and docile citizens by turning them into couch potatoes?
If I were a conspiracy theorist, the answer would probably be yes.
But it’s far more likely that watching TV replaces physical activity. And it’s this drop in physical activity, rather than the TV itself, which is responsible for the weight gain.
In other words, there is an association between the hours spent watching TV and the amount of weight gained. But one is not necessarily causing the other.
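A toy simulation makes the point. In the sketch below, every coefficient is invented for illustration: low physical activity (the hidden confounder) drives both TV time and weight gain, and TV has no causal effect on weight at all. Yet the two still come out strongly correlated.

```python
import random

random.seed(42)

# Toy model: low physical activity (the hidden confounder) drives BOTH
# more TV hours and more weight gain. TV never touches weight directly.
# All coefficients are made up for illustration.
n = 1000
activity = [random.uniform(0, 10) for _ in range(n)]           # hours/week active
tv_hours = [8 - 0.6 * a + random.gauss(0, 1) for a in activity]
weight_gain = [5 - 0.4 * a + random.gauss(0, 1) for a in activity]

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# A clearly positive correlation appears even though TV hours play no
# causal role in the model: the shared driver is low activity.
print(f"TV vs weight gain: r = {correlation(tv_hours, weight_gain):.2f}")
```

If you "adjusted" for activity here, the TV-weight association would largely disappear; that's exactly the kind of statistical adjustment researchers apply when a confounder has actually been measured.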
In the video below, doctor and epidemiologist Ben Goldacre shows you the ways in which evidence can be distorted, from the blindingly obvious nutrition claims to the very subtle tricks of the pharmaceutical industry.
There are statistical adjustments you can apply to a study if a confounder is measured. However, not all confounding variables are measured, or even identified.
Finally, we have the "gold standard" randomized intervention trial. In these studies, which are usually extremely expensive and, as a consequence, very rare, one group of subjects is asked to change some aspect of their diet, such as eating less fat or more fruit (the intervention group). The other group is told to carry on as normal (the control group).
At the end of the trial, the results are analyzed to see if there is any difference in the number of cases of a certain disease in the intervention group compared to the control group.
One of the big problems here is compliance. At the beginning of the study, subjects in the intervention group are usually very conscientious about their diet. But as the study progresses they tend to slip.
Control subjects, meanwhile, may change their diets voluntarily in response to health messages and over a period of time there is convergence between the two groups.
To get meaningful data about the long-term effect of the diet, the study needs to last as long as possible. But the longer the study goes on the less difference there is between the two groups.
Changing one dietary ingredient also leads to a change in another. If your subjects cut back on their fat intake, for example, without making any other changes to their diet, their calorie intake will go down and they’ll lose weight.
So even if you did find that this group had a reduction in a disease biomarker, you don’t really know why. Was it because they reduced their intake of fat, cut back on their calorie intake, or lost weight?
And if you tell your subjects to keep their weight and fat intake stable by replacing the fat with another nutrient, such as protein, you don’t know if it was the decrease in fat or the increase in protein that was responsible for the results.
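The arithmetic is easy to check with made-up numbers. Suppose a subject eating 2,500 kcal a day cuts 30 g of fat (at roughly 9 kcal per gram) and changes nothing else:

```python
# Hypothetical numbers, just to show why "cut the fat" is never one change.
KCAL_PER_G_FAT = 9
KCAL_PER_G_PROTEIN = 4

baseline_kcal = 2500
fat_cut_g = 30                       # subjects cut 30 g of fat per day

kcal_removed = fat_cut_g * KCAL_PER_G_FAT
print(f"Calories removed by the fat cut: {kcal_removed} kcal/day")    # 270

# Option A: change nothing else -> total intake drops, weight drops too,
# so any benefit could come from the fat, the calories or the weight loss.
new_intake = baseline_kcal - kcal_removed
print(f"New daily intake: {new_intake} kcal")                         # 2230

# Option B: keep calories level by swapping in protein -> now you can't
# tell whether less fat or more protein drove the result.
protein_needed_g = kcal_removed / KCAL_PER_G_PROTEIN
print(f"Protein needed to replace those calories: {protein_needed_g:.1f} g")  # 67.5
```

Either way, you've changed at least two things at once, which is precisely the problem Professor Oliver describes.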
Professor Oliver, emeritus professor of cardiology at the University of Edinburgh, argues that it's "virtually impossible to design and conduct an adequate dietary trial. The alteration of one dietary ingredient invariably leads to a change in another or to other changes in lifestyle, and it's often difficult to assume that the effects of a prescribed dietary change are solely related to the diet under test."
How are the results presented? Something else to consider is the fact that most studies only report the average results for a group of people. Why is this a problem?
Simply looking at the average amount of muscle gained or fat lost can mask large differences in individual results. It’s a number that’s easily skewed by a few people whose results lie outside the normal range.
Here's a figure that shows the change in VO2max in subjects taking part in a laboratory-based endurance-training program for 20 weeks.
The average increase in VO2max was 384 milliliters of oxygen per minute. But the responses varied from no change at all to an increase of more than 1,000 milliliters per minute.
Some researchers have begun grouping subjects into high, medium and low responders, or even publishing the results for each individual subject. This still won’t tell us why certain individuals get better results than everyone else. But it is a step in the right direction.
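To see how a mean can hide non-responders, here's a quick sketch using invented numbers (these are illustrative, not the data from the study described above):

```python
import statistics

# Made-up VO2max changes (mL/min) for ten subjects on the same program.
# The values are invented to illustrate the point, not taken from any study.
responses = [0, 20, 50, 80, 150, 300, 400, 500, 900, 1040]

mean = statistics.mean(responses)
median = statistics.median(responses)
print(f"Average improvement: {mean:.0f} mL/min")    # 344
print(f"Median improvement:  {median:.0f} mL/min")  # 225
print(f"Range: {min(responses)} to {max(responses)} mL/min")

# Four of ten subjects improved by less than 100 mL/min, yet the group
# average suggests everyone gained a few hundred. A handful of high
# responders drags the mean well above the typical result.
low_responders = sum(1 for r in responses if r < 100)
print(f"Subjects below 100 mL/min: {low_responders} of {len(responses)}")
```

Reporting the mean alone would tell you nothing about those four low responders, which is why per-subject or responder-group reporting is a step in the right direction.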
3. FOLLOW FACT WHEREVER IT MAY LEAD
Knowledge is constantly changing and evolving. So when you’re trawling through research journals, there’s a good chance that you’ll come across data that challenges your current opinions.
Most people will choose to ignore it, or simply reinterpret the evidence in a way that supports their initial beliefs.
This is because of a phenomenon known as confirmation bias, one of the well-researched flaws in our mental machinery. Confirmation bias means we pay more attention to things that confirm our existing beliefs than to those that don't. We also tend to ignore, or place less importance on, information that contradicts something we already believe to be true.
Our natural tendency is to pay the most attention to people who agree with us. It feels good to have our opinions reflected back to us. As a result, we spend most of our time searching for information that supports our existing views.
There’s nothing wrong with changing your mind simply because you’re following fact wherever it leads you. Accepting that you were wrong simply means that you know more today than you did yesterday.
As Muhammad Ali once observed, “The man who views the world at 50 the same as he did at 20 has wasted 30 years of his life.”
SEE ALSO: THE FLAT BELLY CHEAT SHEET
If you want less flab and more muscle when you look down at your abs (or where they should be), check out The Flat Belly Cheat Sheet.
It's a “cut the waffle and just tell me what to do” PDF that tells you exactly how to get rid of belly fat. To get a copy of the cheat sheet sent to you, please click or tap here to enter your email address.