
Caveat Lector

A scientific paper is also selling.
Scientific papers are rarely
just data. They have an idea that they are trying to sell. Teach and
sell are
the two things you do in science. The reader has to be an educated
consumer, however. As a consumer, you should be suspicious of overselling. One
good indicator
is the use of value judgements as if they were scientific terms.
"Healthy" (or
"healthful") is not a scientific term.

If a study describes a diet as
"healthy," it is almost
guaranteed to be a flawed study.  If we knew which diets were
"healthy,"
we wouldn't have an obesity epidemic. A good example is the paper by
Appel
[86]
on the DASH diet
which has as its main conclusion:

"In
the setting of a
healthful diet, partial substitution of carbohydrate with either
protein or
monounsaturated fat can further lower blood pressure, improve lipid
levels, and
reduce estimated cardiovascular risk."

It's hard to know how healthful the
original diet could have
been if removing carbohydrate improved everything. In addition, not
only was this the "carbohydrate-rich diet used in the DASH trials" but it is one
"... currently advocated in several scientific reports." Another red flag is
when
they tell you how widely accepted their idea is.

Understatement is good. One of the more famous examples
is from the classic Watson & Crick paper of 1953
[87]
in which they proposed the DNA double helix structure. They said "It
has not
escaped our notice that the specific pairing we have postulated
immediately
suggests a possible copying mechanism for the genetic
material."  A study
that contains the word "healthy" is an infomercial. A paper that says
it is
"evidence-based" is patting itself on the back.

Looking for the figures.

Presentation in graphic form usually
means the author wants
to explain things to you, rather than snow you. The
Watson-Crick 
paper cited above had the diagram of the double-helix, which
essentially
became the symbol of modern biology.  It was drawn by Odile,
Francis
Crick's wife, who is described as being famous for her nudes, only one
of which
I could find on the internet. Odile's original DNA structure, however,
is still widely used in textbooks.

Figure 16-2. The Original DNA Structure from Watson and Crick, and Nude by Odile Crick

What's wrong with the literature? What's right?

What's right is that there are many
published papers in the
nutritional literature that are informative, creative and generally
conform to
the standards of good science. Naturally, like any scientific
literature, most
of the papers are pretty much routine, of specialized interest or, as
described
in one of the choices on the checklist for referees when you review
manuscripts: "of interest to other workers in the field." The problem is not the
mediocre papers but rather a surprising number of really objectionable ones.
The medical literature is full of papers bordering on fraud or, at least,
misrepresentation. There are many papers that are full of fundamental errors
and show a total lack of judgement in interpretation. Worse, many give you the
feeling
that somebody is going to get hurt because of bad medical advice that
follows
from the misinterpretations.

Most work in most fields, more or
less by
definition, is
mediocre. But unlike the preponderance of detail that makes
run-of-the-mill
papers so boring, papers in medical nutrition make drastic claims about
saving hundreds of thousands of lives by scaling up to the whole population
an effect that was barely detectable to begin with. If a conclusion is shaky,
applying it to everybody violates the definition of zero: X × 0 = 0 for every real X.
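
To make the arithmetic concrete, here is a minimal sketch in Python, with invented numbers (not from any actual trial), of how a negligible per-person effect becomes a dramatic headline when multiplied by the population:

# Invented numbers for illustration only; not from any actual trial.
risk_control = 0.0050   # event rate without the intervention (0.50%)
risk_treated = 0.0045   # event rate with the intervention (0.45%)

absolute_risk_reduction = risk_control - risk_treated             # 0.0005
relative_risk_reduction = absolute_risk_reduction / risk_control  # 10%, sounds big

us_population = 330_000_000
headline_lives_saved = absolute_risk_reduction * us_population

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")
print(f"Headline: {headline_lives_saved:,.0f} lives saved!")
# The headline is a tiny, statistically shaky per-person effect
# multiplied by a big number. If the true effect is zero: X * 0 = 0.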

It is difficult to face the fact that
so
much of the medical
literature is published by people who are not trained in science, which
means
they don't know the game, they haven't seen much of it and don't know
it when
they see it. Now, there is no formal training in science as such. You can learn
techniques but science is not about cyclotrons, it's about ideas. There is no reason
why a physician can't do real scientific thinking but, at the same time, there
is no reason why they can. An MD degree is not a guarantee of any expertise
outside one's area of specialty.

The irony is that the practice of
medicine
can be highly
scientific. Differential diagnosis and the experience behind recommending the right
drug are the kinds of things that are part of scientific disciplines. The
same
physician who will intuitively solve a medical mystery, though, will
assume
that, for scientific research, things are different, that there are
somehow
arbitrary rules and that brute force application of statistics will
tell you
whether what you did is true.

Chapters to follow detail the
failures,
ranging from the slightly
inaccurate – "association does not imply causality" (sometimes it does
and
sometimes it doesn't) to the idiotic – you must do intention-to-treat;
if you
assign people to take a drug and they don't take it, you have to
include their
data with those who did. ("If you only report compliers you introduce
bias").

And then there are "levels of
evidence,"
arbitrary rules that get
incorporated into tables, the top of which is always some kind of "gold
standard." The odd thing about levels of scientific evidence is that
nobody in
any physical science would recognize them. They are, in fact, the
creation of
people who are trying to do science but don't know it when they see it,
fundamentally amateurs who have arbitrary rules along the lines of the
apocryphal story about Mozart:

A man comes to Mozart and wants to
become a
composer. Mozart says that he will have to study theory
for a couple of years, that he should study orchestration and become
proficient at the piano, and goes on like this. Finally, the man says "but you
wrote your
first symphony when you were 8 years old."  Mozart says "Yes,
but I didn't
ask anybody."

So what are the mistakes? I will list
these and each of the
following chapters will discuss them. Some are general kinds of
mistaken
notions although most are misapplications of normally acceptable
practice,
particularly statistics.

Chapter 17: Observations generate hypotheses. Observational studies test hypotheses. Association implies causality. (Sometimes).

Chapter 18: Red meat and the new puritans. Studies in relative risk.

Chapter 19: Crimson slime. Making Americans afraid of red meat.

Chapter 20: Uses and misuses of group statistics. Bill Gates walks into a bar...

Chapter 21: The Oslo-diet-heart study. The credit it deserves.

Chapter 22: The seventh egg. More statistical shenanigans vs common sense.

Chapter 23: Intention-to-treat.

Chapter 24: The fiend that lies like truth. Summary on how to read the medical literature.

Summary

The book PDQ Statistics gives us the Golden Rule: in a
scientific paper, "the
onus is on the author to convey" (and the reader has the right to
expect) "an
accurate impression of what the data look like using graphs or standard
measures." To understand research in nutrition or, really, any science,
one has
to be prepared to question the "experts." If a paper does not adhere to
this
Golden Rule and is too quick to begin "the statistical shenanigans," it
"should
be viewed from the outset with considerable suspicion."

The bottom line is that you have to
expect real
communication from the author of a scientific paper. The problem, for many
people, is accepting that the best and the brightest can be at fault. But it
is not hard to find examples of experts making mistakes. The low
standards in
nutrition mean that you are substantially on your own. There is help.
The next
few chapters give you some principles and things to look for.


Chapter 17

Observational studies, association, causality.

"...789 deaths were reported in
Doll and Hill's original
cohort. Thirty-six of these were attributed to lung cancer. When these
lung
cancer deaths were counted in smokers versus non-smokers, the
correlation
virtually sprang out: all thirty-six of the deaths had occurred in
smokers. The
difference between the two groups was so significant that Doll and Hill
did not
even need to apply complex statistical metrics to discern it. The trial
designed to bring the most rigorous statistical analysis to the cause
of lung
cancer barely required elementary mathematics to prove his point."

Siddhartha Mukherjee, The Emperor of All Maladies.

Scientists don't like philosophy of
science. It is not just
that pompous phrases like "hypothetico-deductive systems" are such a
turn-off
but that we rarely recognize descriptions in philosophy articles as
what we
actually do. In the end, there is no definition of science any more
than there are
definitions for music or literature. Scientists have different styles
and it is
hard to generalize about actual scientific behavior. Research is a
human
activity and precisely because it puts a premium on creativity, it
defies
categorization. As the physicist Steven Weinberg put it, echoing
Justice
Stewart on pornography:

"There is no logical formula that
establishes a sharp
dividing line between a beautiful explanatory theory and a mere list of
data,
but we know the difference when we see it – we demand a simplicity and
rigidity
in our principles before we are willing to take them seriously." [88]

We know that what we see in the
current state of nutrition
is not it. This forces us to consider what science really is – what is
it that
makes nutritional medical literature so bad? If we can identify some
principles, maybe we can penetrate the mess and see how it could be
fixed. One
frequently stated principle is that "observational studies only
generate
hypotheses." There is the related idea that "association does not imply
causality," usually cited by those authors who want you to believe that
the
association that they found does imply causality.

These ideas are not exactly right or,
at least, they
insufficiently recognize that scientific experiments are not so easily
wedged
into categories like "observational studies." The principles are also
widely
invoked by bloggers and critics to discredit the continuing stream of
observational studies that make an association between the favored targets
(eggs, red meat, sugar-sweetened soda) and the prevalence of some
metabolic disease or cancer. In most cases, the original studies are
getting
what they deserve but the bills of indictment are not accurate and it
would be
better not to cite absolute statements of scientific principles. It is
not simply that these studies are observational studies but rather that
they are bad observational studies and, in many cases, the associations
that they find are so weak that the study really constitutes an argument
for a lack of causality. On the assumption that good experimental practice and
interpretation could be even roughly defined, I laid out in my blogpost
a few
principles that I thought were a better representation, if you can make
any
generalization, of what actually goes on in science:

Observations generate hypotheses.

Observational studies test hypotheses.

Associations do not necessarily imply causality.

In some sense, all science is associations.

Only mathematics is axiomatic (starts from absolute assumptions).

If you notice that kids who eat a lot of candy seem to be fat, or even
if you notice that you yourself get fat eating candy, that is an
observation. From this observation, you might come up with the hypothesis
that sugar causes obesity. An observation generates hypotheses. A test of
your hypothesis would be to carry out an observational study. For example,
you might try to see if there is an association between sugar consumption
and incidence of obesity. There are different ways of doing this – the
simplest epidemiologic approach is simply to compare the history of the
eating behavior of individuals (insofar as you can get it) with how fat
they are. When you do this comparison you are testing your hypothesis.
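
As a minimal sketch of what such a test looks like in practice (Python, with invented survey numbers), you could compute the correlation between reported sugar intake and fatness:

# Invented survey data: (grams of sugar per day, BMI) for ten respondents.
data = [(40, 22.1), (55, 23.4), (70, 24.0), (90, 26.2), (110, 27.5),
        (60, 25.1), (130, 29.8), (85, 24.9), (100, 28.0), (45, 21.7)]

def pearson_r(pairs):
    # Pearson correlation coefficient between the two columns of pairs.
    n = len(pairs)
    mean_x = sum(x for x, _ in pairs) / n
    mean_y = sum(y for _, y in pairs) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in pairs)
    sd_x = sum((x - mean_x) ** 2 for x, _ in pairs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for _, y in pairs) ** 0.5
    return cov / (sd_x * sd_y)

# A strong correlation is evidence bearing on the hypothesis: the
# observational study is testing the hypothesis, not generating it.
print(f"r = {pearson_r(data):.2f}")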

You must remember that there are an infinite number of other things
(meat consumption, TV hours, distance from the French bakery,
Grandfather's waist circumference) that you could have measured as the
independent variable. You have a hypothesis that it was candy. What about
all the others? Mike Eades described falling asleep as a child by trying
to think of everything in the world. You just can't test them all. As
Einstein put it, "your theory determines the measurement you make." If you
found associations with everything, would anything be causal?
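
A short simulation makes the point (standard-library Python; the numbers are arbitrary): test enough unrelated variables against the same outcome and some of them will "associate" by chance alone.

import random
from statistics import correlation  # Python 3.10+

random.seed(1)
n_subjects, n_variables = 50, 200
outcome = [random.random() for _ in range(n_subjects)]  # pure noise

# 200 candidate "exposures", each by construction unrelated to the outcome.
hits = sum(
    abs(correlation([random.random() for _ in range(n_subjects)], outcome)) > 0.28
    for _ in range(n_variables)
)
# |r| > 0.28 corresponds roughly to p < 0.05 for n = 50.
print(f"{hits} of {n_variables} pure-noise variables 'associate' with the outcome")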

Associations can predict causality.

In fact, association can be strong evidence for causation and frequently
an association can provide support for, if not absolute proof of, the
idea to be tested. Hypotheses generate observational studies, not the
other way around. A correct statement is that association does not
necessarily imply causation. In some sense, all science is observation and association.
Even
thermodynamics, that most mathematical and absolute of sciences, rests
on
observation. As soon as somebody builds a perpetual motion machine that
works,
it's all over.

Biological mechanisms, or perhaps all
scientific theories, are
never proved. By analogy with a court of law, you cannot be found
innocent,
only not guilty. That is why excluding a theory is stronger than
showing
consistency. The grand epidemiological study of macronutrient intake vs.
diabetes and obesity, that is, all of the data on what Americans ate in the
last forty years, shows that increasing carbohydrate is associated with
increased calories even under conditions where fruits and vegetables also
go up and fat, if anything, goes down. The data on dietary consumption and
disease in the whole population describes an observational study but it is
strong because it gives support to a lack of causal effect of increased
carbohydrate and decreased fat on positive outcome. Again, eliminating or
disproving a theory is always stronger than showing consistency. The
continued failure of the large expensive random controlled studies to show
any benefit from a reduction in dietary total or saturated fat is the kicker. It is now
clear that
prospective experiments (where you pick the population first and see
how people
do on your variable of interest) have shown in the past, and will
undoubtedly
continue to show, the same negative outcome. But will anybody give up
on
saturated fat? In a court of law, if you are found not guilty of child
abuse,
people may still not let you move into their neighborhood. My point
here is
that saturated fat should never have been indicted in the first place.

An association will tell you about causality if 1) the association is
strong, 2) there is a plausible underlying mechanism and 3) there is not
a more plausible explanation. The often-cited
correlation
between cardiovascular disease and number of TV sets does not imply
causality
because, although principle 1 is observed, there is no logical direct
underlying mechanism. Countries with a lot of TV sets have modern life
styles
that may predispose to cardiovascular disease. TV does not cause CVD.
Interestingly, in CVD, where there have been so many papers published and
where there is so much medical interest, it is not obvious that we yet
have a good underlying mechanism.
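
For a sense of what "strong" means here, a sketch with round numbers of roughly the right order of magnitude (illustrative, not exact figures from the literature):

def relative_risk(rate_exposed, rate_unexposed):
    # Ratio of disease rates in the exposed vs. the unexposed group.
    return rate_exposed / rate_unexposed

# Smoking and lung cancer: associations on the order of 10-20x.
print(relative_risk(0.0020, 0.0001))   # RR = 20
# A typical nutritional-epidemiology finding: on the order of 1.2x.
print(relative_risk(0.00012, 0.0001))  # RR = 1.2 (approximately)
# Hill's first consideration: an RR of 20 is hard to explain away by
# confounding or bias; an RR of 1.2 is not.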

Re-inventing the wheel. Me and Bradford Hill.

This chapter is a re-working of a
blogpost that I published
in 2013. The post included the previous paragraphs where I tried to lay
out a
few principles for dealing with the kind of observational studies that
you see
in the scientific literature. I was speaking off the top of my head,
trying to
describe the logic that scientists use in interpreting data. It was an
obvious
description of what is done in practice. I didn't think it was
particularly
original and, again, I don't think that there are any hard and fast
principles
in science. When I described what I had written to my colleague Gene
Fine, his
response was "aren't you re-inventing the wheel?" He meant that
Bradford Hill,
pretty much the inventor of modern epidemiology, had already
established these
and a couple of others as principles. Gene cited The Emperor of All
Maladies [89], an outstanding book on the history of cancer. I had, in
fact, read Emperor on his recommendation. I remembered Bradford Hill and
the description of the evolution of the ideas of epidemiology, population
studies and random controlled trials. The story is also told in James
LeFanu's The Rise and Fall of Modern Medicine [90], another captivating
history of medicine.

I thought of these as general
philosophical ideas, rather
than as grand scientific principles. Perhaps it is that we're just used
to it,
but saying that an association has to be very strong to imply causality
is
common sense and not in the same ballpark with the Pythagorean Theorem.
It's
something that you might say over coffee or in response to somebody's
blog.
Being explicit about it turns out to be very important but, like much
in
philosophy of science, it struck me as not of great intellectual
import. It all
reminded me of learning, in grade school, that the Earl of Sandwich had
invented the sandwich and thinking "this is an invention?" Woody Allen
thought
the same thing and wrote the history of the sandwich. He recorded the
Earl's
early failures – "In 1741, he places bread on bread with turkey on top.
This
fails. In 1745, he exhibits bread with turkey on either side. Everyone
rejects
this except David Hume."

In fact, Hill's principles are
important even if they do
seem obvious. Simple ideas are not always accepted. The concept of the
random controlled trial (RCT), in which you randomly assign people either
to the drug or behavior that you're testing or to a control group, obvious
to us now, was hard won. Proving that any particular environmental factor – diet, smoking,
pollution or toxic chemicals – was the cause of a disease and that, by
reducing
that factor, the disease could be prevented, turned out to be a very
hard sell,
especially to physicians whose view of disease may have been strongly
colored
by the idea of an infective agent.

The Rise and Fall of Modern Medicine describes Bradford Hill's two
important contributions [90]. He demonstrated that tuberculosis could be
cured by a combination of two drugs, streptomycin and PAS
(para-aminosalicylic acid). Even more important, he showed that tobacco causes lung cancer.
Hill was
Professor of Medical Statistics at the London School of Hygiene and
Tropical
Medicine but was not formally trained in statistics and, like many of
us,
thought of proper statistics as simply applied common sense.
Ironically, an early near-fatal case of tuberculosis had prevented him
from pursuing formal medical education. His
first monumental accomplishment was, in fact, to demonstrate how
tuberculosis
could be cured with the streptomycin-PAS combination. In 1948, Hill and
his
co-worker Richard Doll undertook a systematic investigation of the risk
factors
for lung cancer. His eventual success was accompanied by a description
of the
principles that allow you to say when association can be taken as
causation.

Wiki says: "in 1965, built upon the work of Hume and Popper, Hill
suggested several aspects of causality in medicine and biology..." but his approach was not formal –
he
never referred to his principles as criteria – he recognized them as
common
sense behavior and his 1965 presentation to the Royal Society of
Medicine is a
remarkably sober, intelligent document. Although it has been described as
an example of an article that, as here, is read more often in quotation
and paraphrase, it is worth reading the original even today:
http://epiville.ccnmtl.columbia.edu/assets/pdfs/Hill_1965.pdf
