Friday, April 29, 2011

Is Sugar Toxic?

April 13, 2011

On May 26, 2009, Robert Lustig gave a lecture called “Sugar: The Bitter Truth,” which was posted on YouTube the following July. Since then, it has been viewed well over 800,000 times, gaining new viewers at a rate of about 50,000 per month, fairly remarkable numbers for a 90-minute discussion of the nuances of fructose biochemistry and human physiology.
Lustig is a specialist on pediatric hormone disorders and the leading expert in childhood obesity at the University of California, San Francisco, School of Medicine, which is one of the best medical schools in the country. He published his first paper on childhood obesity a dozen years ago, and he has been treating patients and doing research on the disorder ever since.
The viral success of his lecture, though, has little to do with Lustig’s impressive credentials and far more with the persuasive case he makes that sugar is a “toxin” or a “poison,” terms he uses together 13 times through the course of the lecture, in addition to the five references to sugar as merely “evil.” And by “sugar,” Lustig means not only the white granulated stuff that we put in coffee and sprinkle on cereal — technically known as sucrose — but also high-fructose corn syrup, which has already become, without Lustig’s help, what he calls “the most demonized additive known to man.”
It doesn’t hurt Lustig’s cause that he is a compelling public speaker. His critics argue that what makes him compelling is his practice of taking suggestive evidence and insisting that it’s incontrovertible. Lustig certainly doesn’t dabble in shades of gray. Sugar is not just an empty calorie, he says; its effect on us is much more insidious. “It’s not about the calories,” he says. “It has nothing to do with the calories. It’s a poison by itself.”
If Lustig is right, then our excessive consumption of sugar is the primary reason that the numbers of obese and diabetic Americans have skyrocketed in the past 30 years. But his argument implies more than that. If Lustig is right, it would mean that sugar is also the likely dietary cause of several other chronic ailments widely considered to be diseases of Western lifestyles — heart disease, hypertension and many common cancers among them.
The number of viewers Lustig has attracted suggests that people are paying attention to his argument. When I set out to interview public health authorities and researchers for this article, they would often initiate the interview with some variation of the comment “surely you’ve spoken to Robert Lustig,” not because Lustig has done any of the key research on sugar himself, which he hasn’t, but because he’s willing to insist publicly and unambiguously, when most researchers are not, that sugar is a toxic substance that people abuse. In Lustig’s view, sugar should be thought of, like cigarettes and alcohol, as something that’s killing us.
This brings us to the salient question: Can sugar possibly be as bad as Lustig says it is?
It’s one thing to suggest, as most nutritionists will, that a healthful diet includes more fruits and vegetables, and maybe less fat, red meat and salt, or less of everything. It’s entirely different to claim that one particularly cherished aspect of our diet might not just be an unhealthful indulgence but actually be toxic, that when you bake your children a birthday cake or give them lemonade on a hot summer day, you may be doing them more harm than good, despite all the love that goes with it. Suggesting that sugar might kill us is what zealots do. But Lustig, who has genuine expertise, has accumulated and synthesized a mass of evidence, which he finds compelling enough to convict sugar. His critics consider that evidence insufficient, but there’s no way to know who might be right, or what must be done to find out, without discussing it.
If I didn’t buy this argument myself, I wouldn’t be writing about it here. And I also have a disclaimer to acknowledge. I’ve spent much of the last decade doing journalistic research on diet and chronic disease — some of the more contrarian findings, on dietary fat, appeared in this magazine — and I have come to conclusions similar to Lustig’s.
The debate over the health effects of sugar has gone on far longer than you might imagine, and its history is littered with erroneous statements and conclusions, because even the supposed authorities had no true understanding of what they were talking about. They didn’t know, quite literally, what they meant by the word “sugar” and therefore what the implications were.
So let’s start by clarifying a few issues, beginning with Lustig’s use of the word “sugar” to mean both sucrose — beet and cane sugar, whether white or brown — and high-fructose corn syrup. This is a critical point, particularly because high-fructose corn syrup has indeed become “the flashpoint for everybody’s distrust of processed foods,” says Marion Nestle, a New York University nutritionist and the author of “Food Politics.”
This development is recent and borders on humorous. In the early 1980s, high-fructose corn syrup replaced sugar in sodas and other products in part because refined sugar then had the reputation as a generally noxious nutrient. (“Villain in Disguise?” asked a headline in this paper in 1977, before answering in the affirmative.) High-fructose corn syrup was portrayed by the food industry as a healthful alternative, and that’s how the public perceived it. It was also cheaper than sugar, which didn’t hurt its commercial prospects. Now the tide is rolling the other way, and refined sugar is making a commercial comeback as the supposedly healthful alternative to this noxious corn-syrup stuff. “Industry after industry is replacing their product with sucrose and advertising it as such — ‘No High-Fructose Corn Syrup,’ ” Nestle notes.
But marketing aside, the two sweeteners are effectively identical in their biological effects. “High-fructose corn syrup, sugar — no difference,” is how Lustig put it in a lecture that I attended in San Francisco last December. “The point is they’re each bad — equally bad, equally poisonous.”
Refined sugar (that is, sucrose) is made up of a molecule of the carbohydrate glucose, bonded to a molecule of the carbohydrate fructose — a 50-50 mixture of the two. The fructose, which is almost twice as sweet as glucose, is what distinguishes sugar from other carbohydrate-rich foods like bread or potatoes that break down upon digestion to glucose alone. The more fructose in a substance, the sweeter it will be. High-fructose corn syrup, as it is most commonly consumed, is 55 percent fructose, and the remaining 45 percent is nearly all glucose. It was first marketed in the late 1970s and was created to be indistinguishable from refined sugar when used in soft drinks. Because each of these sugars ends up as glucose and fructose in our guts, our bodies react the same way to both, and the physiological effects are identical. In a 2010 review of the relevant science, Luc Tappy, a researcher at the University of Lausanne in Switzerland who is considered by biochemists who study fructose to be the world’s foremost authority on the subject, said there was “not the single hint” that H.F.C.S. was more deleterious than other sources of sugar.
The question, then, isn’t whether high-fructose corn syrup is worse than sugar; it’s what do they do to us, and how do they do it? The conventional wisdom has long been that the worst that can be said about sugars of any kind is that they cause tooth decay and represent “empty calories” that we eat in excess because they taste so good.
By this logic, sugar-sweetened beverages (or H.F.C.S.-sweetened beverages, as the Sugar Association prefers they be called) are bad for us not because there’s anything particularly toxic about the sugar they contain but just because people consume too many of them.
Those organizations that now advise us to cut down on our sugar consumption — the Department of Agriculture, for instance, in its recent Dietary Guidelines for Americans, or the American Heart Association in guidelines released in September 2009 (of which Lustig was a co-author) — do so for this reason. Refined sugar and H.F.C.S. don’t come with any protein, vitamins, minerals, antioxidants or fiber, and so they either displace other more nutritious elements of our diet or are eaten over and above what we need to sustain our weight, and this is why we get fatter.
Whether or not the empty-calories argument is true, it’s certainly convenient. It allows everyone to assign blame for obesity and, by extension, diabetes — two conditions so intimately linked that some authorities have taken to calling them “diabesity” — to overeating of all foods, or underexercising, because a calorie is a calorie. “This isn’t about demonizing any industry,” as Michelle Obama said about her Let’s Move program to combat the epidemic of childhood obesity. Instead it’s about getting us — or our children — to move more and eat less, reduce our portion sizes, cut back on snacks.
Lustig’s argument, however, is not about the consumption of empty calories — and biochemists have made the same case previously, though not so publicly. It is that sugar has unique characteristics, specifically in the way the human body metabolizes the fructose in it, that may make it singularly harmful, at least if consumed in sufficient quantities.
The phrase Lustig uses when he describes this concept is “isocaloric but not isometabolic.” This means we can eat 100 calories of glucose (from a potato or bread or other starch) or 100 calories of sugar (half glucose and half fructose), and they will be metabolized differently and have a different effect on the body. The calories are the same, but the metabolic consequences are quite different.
The fructose component of sugar and H.F.C.S. is metabolized primarily by the liver, while the glucose from sugar and starches is metabolized by every cell in the body. Consuming sugar (fructose and glucose) means more work for the liver than if you consumed the same number of calories of starch (glucose). And if you take that sugar in liquid form — soda or fruit juices — the fructose and glucose will hit the liver more quickly than if you consume them, say, in an apple (or several apples, to get what researchers would call the equivalent dose of sugar). The speed with which the liver has to do its work will also affect how it metabolizes the fructose and glucose.
In animals, or at least in laboratory rats and mice, it’s clear that if the fructose hits the liver in sufficient quantity and with sufficient speed, the liver will convert much of it to fat. This apparently induces a condition known as insulin resistance, which is now considered the fundamental problem in obesity, and the underlying defect in heart disease and in the type of diabetes, type 2, that is common to obese and overweight individuals. It might also be the underlying defect in many cancers.
If what happens in laboratory rodents also happens in humans, and if we are eating enough sugar to make it happen, then we are in trouble.
The last time an agency of the federal government looked into the question of sugar and health in any detail was in 2005, in a report by the Institute of Medicine, a branch of the National Academies. The authors of the report acknowledged that plenty of evidence suggested that sugar could increase the risk of heart disease and diabetes — even raising LDL cholesterol, known as the “bad cholesterol” — but did not consider the research to be definitive. There was enough ambiguity, they concluded, that they couldn’t even set an upper limit on how much sugar constitutes too much. Referring back to the 2005 report, an Institute of Medicine report released last fall reiterated, “There is a lack of scientific agreement about the amount of sugars that can be consumed in a healthy diet.” This was the same conclusion that the Food and Drug Administration came to when it last assessed the sugar question, back in 1986. The F.D.A. report was perceived as an exoneration of sugar, and that perception influenced the treatment of sugar in the landmark reports on diet and health that came after.
The Sugar Association and the Corn Refiners Association have also portrayed the 1986 F.D.A. report as clearing sugar of nutritional crimes, but what it concluded was actually something else entirely. To be precise, the F.D.A. reviewers said that other than its contribution to calories, “no conclusive evidence on sugars demonstrates a hazard to the general public when sugars are consumed at the levels that are now current.” This is another way of saying that the evidence by no means refuted the kinds of claims that Lustig is making now and other researchers were making then, just that it wasn’t definitive or unambiguous.
What we have to keep in mind, says Walter Glinsmann, the F.D.A. administrator who was the primary author on the 1986 report and who now is an adviser to the Corn Refiners Association, is that sugar and high-fructose corn syrup might be toxic, as Lustig argues, but so might any substance if it’s consumed in ways or in quantities that are unnatural for humans. The question is always at what dose a substance goes from being harmless to harmful. How much do we have to consume before this happens?
When Glinsmann and his F.D.A. co-authors decided no conclusive evidence demonstrated harm at the levels of sugar then being consumed, they estimated those levels at 40 pounds per person per year beyond what we might get naturally in fruits and vegetables — 40 pounds per person per year of “added sugars” as nutritionists now call them. This is 200 calories per day of sugar, which is less than the amount in a can and a half of Coca-Cola or two cups of apple juice. If that’s indeed all we consume, most nutritionists today would be delighted, including Lustig.
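As a rough check of that conversion (assuming the standard approximations of about 454 grams to the pound and about 4 calories per gram of sugar), the arithmetic works out as follows:

\[
\frac{40\ \text{lb/yr} \times 454\ \text{g/lb}}{365\ \text{days/yr}} \approx 50\ \text{g/day}, \qquad 50\ \text{g/day} \times 4\ \text{kcal/g} \approx 200\ \text{kcal/day}.
\]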
But 40 pounds per year happened to be 35 pounds less than what Department of Agriculture analysts said we were consuming at the time — 75 pounds per person per year — and the U.S.D.A. estimates are typically considered to be the most reliable. By the early 2000s, according to the U.S.D.A., we had increased our consumption to more than 90 pounds per person per year.
That this increase happened to coincide with the current epidemics of obesity and diabetes is one reason that it’s tempting to blame sugars — sucrose and high-fructose corn syrup — for the problem. In 1980, roughly one in seven Americans was obese and almost six million were diabetic; the obesity rates, at least, hadn’t changed significantly in the previous 20 years. By the early 2000s, when sugar consumption peaked, one in every three Americans was obese, and 14 million were diabetic.
This correlation between sugar consumption and diabetes is what defense attorneys call circumstantial evidence. It’s more compelling than it otherwise might be, though, because the last time sugar consumption jumped markedly in this country, it was also associated with a diabetes epidemic.
In the early 20th century, many of the leading authorities on diabetes in North America and Europe (including Frederick Banting, who shared the 1923 Nobel Prize for the discovery of insulin) suspected that sugar causes diabetes based on the observation that the disease was rare in populations that didn’t consume refined sugar and widespread in those that did. In 1924, Haven Emerson, director of the Institute of Public Health at Columbia University, reported that diabetes deaths in New York City had increased as much as 15-fold since the Civil War years, and that deaths increased as much as fourfold in some U.S. cities between 1900 and 1920 alone. This coincided, he noted, with an equally significant increase in sugar consumption — almost doubling from 1890 to the early 1920s — with the birth and subsequent growth of the candy and soft-drink industries.
Emerson’s argument was countered by Elliott Joslin, a leading authority on diabetes, and Joslin won out. But Joslin’s argument was fundamentally flawed. Simply put, it went like this: The Japanese eat lots of rice, and Japanese diabetics are few and far between; rice is mostly carbohydrate, which suggests that sugar, also a carbohydrate, does not cause diabetes. But sugar and rice are not identical merely because they’re both carbohydrates. Joslin could not know at the time that the fructose content of sugar affects how we metabolize it.
Joslin was also unaware that the Japanese ate little sugar. In the early 1960s, the Japanese were eating as little sugar as Americans were a century earlier, maybe less, which means that the Japanese experience could have been used to support the idea that sugar causes diabetes. Still, with Joslin arguing in edition after edition of his seminal textbook that sugar played no role in diabetes, it eventually took on the aura of undisputed truth.
Until Lustig came along, the last time an academic forcefully put forward the sugar-as-toxin thesis was in the 1970s, when John Yudkin, a leading authority on nutrition in the United Kingdom, published a polemic on sugar called “Sweet and Dangerous.” Through the 1960s Yudkin did a series of experiments feeding sugar and starch to rodents, chickens, rabbits, pigs and college students. He found that the sugar invariably raised blood levels of triglycerides (a technical term for fat), which was then, as now, considered a risk factor for heart disease. Sugar also raised insulin levels in Yudkin’s experiments, which linked sugar directly to type 2 diabetes. Few in the medical community took Yudkin’s ideas seriously, largely because he was also arguing that dietary fat and saturated fat were harmless. This set Yudkin’s sugar hypothesis directly against the growing acceptance of the idea, prominent to this day, that dietary fat was the cause of heart disease, a notion championed by the University of Minnesota nutritionist Ancel Keys.
A common assumption at the time was that if one hypothesis was right, then the other was most likely wrong. Either fat caused heart disease by raising cholesterol, or sugar did by raising triglycerides. “The theory that diets high in sugar are an important cause of atherosclerosis and heart disease does not have wide support among experts in the field, who say that fats and cholesterol are the more likely culprits,” as Jane E. Brody wrote in The Times in 1977.
At the time, many of the key observations cited to argue that dietary fat caused heart disease actually supported the sugar theory as well. During the Korean War, pathologists doing autopsies on American soldiers killed in battle noticed that many had significant plaques in their arteries, even those who were still teenagers, while the Koreans killed in battle did not. The atherosclerotic plaques in the Americans were attributed to the fact that they ate high-fat diets and the Koreans ate low-fat diets. But the Americans were also eating high-sugar diets, while the Koreans, like the Japanese, were not.
In 1970, Keys published the results of a landmark study in nutrition known as the Seven Countries Study. Its results were perceived by the medical community and the wider public as compelling evidence that saturated-fat consumption is the best dietary predictor of heart disease. But sugar consumption in the seven countries studied was almost equally predictive. So it was possible that Yudkin was right, and Keys was wrong, or that they could both be right. The evidence has always been able to go either way.
European clinicians tended to side with Yudkin; Americans with Keys. The situation wasn’t helped, as one of Yudkin’s colleagues later told me, by the fact that “there was quite a bit of loathing” between the two nutritionists themselves. In 1971, Keys published an article attacking Yudkin and describing his evidence against sugar as “flimsy indeed.” He treated Yudkin as a figure of scorn, and Yudkin never managed to shake the portrayal.
By the end of the 1970s, according to Sheldon Reiser, who studied the potentially deleterious effects of sugar in the diet at the U.S.D.A.’s Carbohydrate Nutrition Laboratory in Beltsville, Md., any scientist who did the same and talked about it publicly was endangering his reputation. “Yudkin was so discredited,” Reiser said to me. “He was ridiculed in a way. And anybody else who said something bad about sucrose, they’d say, ‘He’s just like Yudkin.’ ”
What has changed since then, other than Americans getting fatter and more diabetic? It wasn’t so much that researchers learned anything particularly new about the effects of sugar or high-fructose corn syrup in the human body. Rather the context of the science changed: physicians and medical authorities came to accept the idea that a condition known as metabolic syndrome is a major, if not the major, risk factor for heart disease and diabetes. The Centers for Disease Control and Prevention now estimate that some 75 million Americans have metabolic syndrome. For those who have heart attacks, metabolic syndrome will very likely be the reason.
The first symptom doctors are told to look for in diagnosing metabolic syndrome is an expanding waistline. This means that if you’re overweight, there’s a good chance you have metabolic syndrome, and this is why you’re more likely to have a heart attack or become diabetic (or both) than someone who’s not. Lean individuals, too, can have metabolic syndrome, and they are at greater risk of heart disease and diabetes than lean individuals without it.
Having metabolic syndrome is another way of saying that the cells in your body are actively ignoring the action of the hormone insulin — a condition known technically as being insulin-resistant. Because insulin resistance and metabolic syndrome still get remarkably little attention in the press (certainly compared with cholesterol), let me explain the basics.
You secrete insulin in response to the foods you eat — particularly the carbohydrates — to keep blood sugar in control after a meal. When your cells are resistant to insulin, your body (your pancreas, to be precise) responds to rising blood sugar by pumping out more and more insulin. Eventually the pancreas can no longer keep up with the demand or it gives in to what diabetologists call “pancreatic exhaustion.” Now your blood sugar will rise out of control, and you’ve got diabetes.
Not everyone with insulin resistance becomes diabetic; some continue to secrete enough insulin to overcome their cells’ resistance to the hormone. But having chronically elevated insulin levels has harmful effects of its own — heart disease, for one. Among the results are higher triglyceride levels and blood pressure, lower levels of HDL cholesterol (the “good cholesterol”) and a further worsening of the insulin resistance — this is metabolic syndrome.
When physicians assess your risk of heart disease these days, they will take into consideration your LDL cholesterol (the bad kind), but also these symptoms of metabolic syndrome. The idea, according to Scott Grundy, a University of Texas Southwestern Medical Center nutritionist and the chairman of the panel that produced the last edition of the National Cholesterol Education Program guidelines, is that heart attacks 50 years ago might have been caused by high cholesterol — particularly high LDL cholesterol — but since then we’ve all gotten fatter and more diabetic, and now it’s metabolic syndrome that’s the more conspicuous problem.
This raises two obvious questions. The first is what sets off metabolic syndrome to begin with, which is another way of asking, What causes the initial insulin resistance? There are several hypotheses, but researchers who study the mechanisms of insulin resistance now think that a likely cause is the accumulation of fat in the liver. When studies have been done trying to answer this question in humans, says Varman Samuel, who studies insulin resistance at Yale School of Medicine, the correlation between liver fat and insulin resistance in patients, lean or obese, is “remarkably strong.” What it looks like, Samuel says, is that “when you deposit fat in the liver, that’s when you become insulin-resistant.”
That raises the other obvious question: What causes the liver to accumulate fat in humans? A common assumption is that simply getting fatter leads to a fatty liver, but this does not explain fatty liver in lean people. Some of it could be attributed to genetic predisposition. But harking back to Lustig, there’s also the very real possibility that it is caused by sugar.
As it happens, metabolic syndrome and insulin resistance are the reasons that many of the researchers today studying fructose became interested in the subject to begin with. If you want to cause insulin resistance in laboratory rats, says Gerald Reaven, the Stanford University diabetologist who did much of the pioneering work on the subject, feeding them diets that are mostly fructose is an easy way to do it. It’s a “very obvious, very dramatic” effect, Reaven says.
By the early 2000s, researchers studying fructose metabolism had established certain findings unambiguously and had well-established biochemical explanations for what was happening. Feed animals enough pure fructose or enough sugar, and their livers convert the fructose into fat — the saturated fatty acid palmitate, to be precise, which supposedly gives us heart disease when we eat it by raising LDL cholesterol. The fat accumulates in the liver, and insulin resistance and metabolic syndrome follow.
Michael Pagliassotti, a Colorado State University biochemist who did many of the relevant animal studies in the late 1990s, says these changes can happen in as little as a week if the animals are fed sugar or fructose in huge amounts — 60 or 70 percent of the calories in their diets. They can take several months if the animals are fed something closer to what humans (in America) actually consume — around 20 percent of the calories in their diet. Stop feeding them the sugar, in either case, and the fatty liver promptly goes away, and with it the insulin resistance.
Similar effects can be shown in humans, although the researchers doing this work typically did the studies with only fructose — as Luc Tappy did in Switzerland or Peter Havel and Kimber Stanhope did at the University of California, Davis — and pure fructose is not the same thing as sugar or high-fructose corn syrup. When Tappy fed his human subjects the equivalent of the fructose in 8 to 10 cans of Coke or Pepsi a day — a “pretty high dose,” he says — their livers would start to become insulin-resistant, and their triglycerides would go up in just a few days. With lower doses, Tappy says, just as in the animal research, the same effects would appear, but it would take longer, a month or more.
Despite the steady accumulation of research, the evidence can still be criticized as falling far short of conclusive. The studies in rodents aren’t necessarily applicable to humans. And the kinds of studies that Tappy, Havel and Stanhope did — having real people drink beverages sweetened with fructose and comparing the effect with what happens when the same people or others drink beverages sweetened with glucose — aren’t applicable to real human experience, because we never naturally consume pure fructose. We always take it with glucose, in the nearly 50-50 combinations of sugar or high-fructose corn syrup. And then the amount of fructose or sucrose being fed in these studies, to the rodents or the human subjects, has typically been enormous.
This is why the research reviews on the subject invariably conclude that more research is necessary to establish at what dose sugar and high-fructose corn syrup start becoming what Lustig calls toxic. “There is clearly a need for intervention studies,” as Tappy recently phrased it in the technical jargon of the field, “in which the fructose intake of high-fructose consumers is reduced to better delineate the possible pathogenic role of fructose. At present, short-term-intervention studies, however, suggest that a high-fructose intake consisting of soft drinks, sweetened juices or bakery products can increase the risk of metabolic and cardiovascular diseases.”
In simpler language, how much of this stuff do we have to eat or drink, and for how long, before it does to us what it does to laboratory rats? And is that amount more than we’re already consuming?
Unfortunately, we’re unlikely to learn anything conclusive in the near future. As Lustig points out, sugar and high-fructose corn syrup are certainly not “acute toxins” of the kind the F.D.A. typically regulates and the effects of which can be studied over the course of days or months. The question is whether they’re “chronic toxins,” which means “not toxic after one meal, but after 1,000 meals.” This means that what Tappy calls “intervention studies” have to go on for significantly longer than 1,000 meals to be meaningful.
At the moment, the National Institutes of Health are supporting surprisingly few clinical trials related to sugar and high-fructose corn syrup in the U.S. All are small, and none will last more than a few months. Lustig and his colleagues at U.C.S.F. — including Jean-Marc Schwarz, whom Tappy describes as one of the three best fructose biochemists in the world — are doing one of these studies. It will look at what happens when obese teenagers consume no sugar other than what they might get in fruits and vegetables. Another study will do the same with pregnant women to see if their babies are born healthier and leaner.
Only one study in this country, by Havel and Stanhope at the University of California, Davis, is directly addressing the question of how much sugar is required to trigger the symptoms of insulin resistance and metabolic syndrome. Havel and Stanhope are having healthy people drink three sugar- or H.F.C.S.-sweetened beverages a day and then seeing what happens. The catch is that their study subjects go through this three-beverage-a-day routine for only two weeks. That doesn’t seem like a very long time — only 42 meals, not 1,000 — but Havel and Stanhope have been studying fructose since the mid-1990s, and they seem confident that two weeks is sufficient to see if these sugars cause at least some of the symptoms of metabolic syndrome.
So the answer to the question of whether sugar is as bad as Lustig claims is that it certainly could be. It very well may be true that sugar and high-fructose corn syrup, because of the unique way in which we metabolize fructose and at the levels we now consume it, cause fat to accumulate in our livers followed by insulin resistance and metabolic syndrome, and so trigger the process that leads to heart disease, diabetes and obesity. They could indeed be toxic, but they take years to do their damage. It doesn’t happen overnight. Until long-term studies are done, we won’t know for sure.
One more question still needs to be asked, and this is what my wife, who has had to live with my journalistic obsession on this subject, calls the Grinch-trying-to-steal-Christmas problem. What are the chances that sugar is actually worse than Lustig says it is?
One of the diseases that increases in incidence with obesity, diabetes and metabolic syndrome is cancer. This is why I said earlier that insulin resistance may be a fundamental underlying defect in many cancers, as it is in type 2 diabetes and heart disease. The connection between obesity, diabetes and cancer was first reported in 2004 in large population studies by researchers from the World Health Organization’s International Agency for Research on Cancer. It is not controversial. What it means is that you are more likely to get cancer if you’re obese or diabetic than if you’re not, and you’re more likely to get cancer if you have metabolic syndrome than if you don’t.
This goes along with two other observations that have led to the well-accepted idea that some large percentage of cancers are caused by our Western diets and lifestyles. This means they could actually be prevented if we could pinpoint exactly what the problem is and prevent or avoid that.
One observation is that death rates from cancer, like those from diabetes, increased significantly in the second half of the 19th century and the early decades of the 20th. As with diabetes, this observation was accompanied by a vigorous debate about whether those increases could be explained solely by the aging of the population and the use of new diagnostic techniques or whether it was really the incidence of cancer itself that was increasing. “By the 1930s,” as a 1997 report by the World Cancer Research Fund International and the American Institute for Cancer Research explained, “it was apparent that age-adjusted death rates from cancer were rising in the U.S.A.,” which meant that the likelihood of any particular 60-year-old, for instance, dying from cancer was increasing, even if there were indeed more 60-year-olds with each passing year.
The second observation was that malignant cancer, like diabetes, was a relatively rare disease in populations that didn’t eat Western diets, and in some of these populations it appeared to be virtually nonexistent. In the 1950s, malignant cancer among the Inuit, for instance, was still deemed sufficiently rare that physicians working in northern Canada would publish case reports in medical journals when they did diagnose a case.
In 1984, Canadian physicians published an analysis of 30 years of cancer incidence among Inuit in the western and central Arctic. While there had been a “striking increase in the incidence of cancers of modern societies” including lung and cervical cancer, they reported, there were still “conspicuous deficits” in breast-cancer rates. They could not find a single case in an Inuit patient before 1966; they could find only two cases between 1967 and 1980. Since then, as their diet became more like ours, breast cancer incidence has steadily increased among the Inuit, although it’s still significantly lower than it is in other North American ethnic groups. Diabetes rates in the Inuit have also gone from vanishingly low in the mid-20th century to high today.
Now most researchers will agree that the link between Western diet or lifestyle and cancer manifests itself through this association with obesity, diabetes and metabolic syndrome — i.e., insulin resistance. This was the conclusion, for instance, of a 2007 report published by the World Cancer Research Fund and the American Institute for Cancer Research — “Food, Nutrition, Physical Activity and the Prevention of Cancer.”
So how does it work? Cancer researchers now consider that the problem with insulin resistance is that it leads us to secrete more insulin, and insulin (as well as a related hormone known as insulin-like growth factor) actually promotes tumor growth.
As it was explained to me by Craig Thompson, who has done much of this research and is now president of Memorial Sloan-Kettering Cancer Center in New York, the cells of many human cancers come to depend on insulin to provide the fuel (blood sugar) and materials they need to grow and multiply. Insulin and insulin-like growth factor (and related growth factors) also provide the signal, in effect, to do it. The more insulin, the better they do. Some cancers develop mutations that serve the purpose of increasing the influence of insulin on the cell; others take advantage of the elevated insulin levels that are common to metabolic syndrome, obesity and type 2 diabetes. Some do both. Thompson believes that many pre-cancerous cells would never acquire the mutations that turn them into malignant tumors if they weren’t being driven by insulin to take up more and more blood sugar and metabolize it.
What these researchers call elevated insulin (or insulin-like growth factor) signaling appears to be a necessary step in many human cancers, particularly cancers like breast and colon cancer. Lewis Cantley, director of the Cancer Center at Beth Israel Deaconess Medical Center at Harvard Medical School, says that up to 80 percent of all human cancers are driven by either mutations or environmental factors that work to enhance or mimic the effect of insulin on the incipient tumor cells. Cantley is now the leader of one of five scientific “dream teams,” financed by a national coalition called Stand Up to Cancer, to study, in the case of Cantley’s team, precisely this link between a specific insulin-signaling gene (known technically as PI3K) and tumor development in breast and other cancers common to women.
Most of the researchers studying this insulin/cancer link seem concerned primarily with finding a drug that might work to suppress insulin signaling in incipient cancer cells and so, they hope, inhibit or prevent their growth entirely. Many of the experts writing about the insulin/cancer link from a public health perspective — as in the 2007 report from the World Cancer Research Fund and the American Institute for Cancer Research — work from the assumption that chronically elevated insulin levels and insulin resistance are both caused by being fat or by getting fatter. They recommend, as the 2007 report did, that we should all work to be lean and more physically active, and that in turn will help us prevent cancer.
But some researchers will make the case, as Cantley and Thompson do, that if something other than just being fatter is causing insulin resistance to begin with, that’s quite likely the dietary cause of many cancers. If it’s sugar that causes insulin resistance, they say, then the conclusion is hard to avoid that sugar causes cancer — some cancers, at least — radical as this may seem, and despite the fact that this suggestion has rarely, if ever, been voiced publicly before. For just this reason, neither of these men will eat sugar or high-fructose corn syrup, if they can avoid it.
“I have eliminated refined sugar from my diet and eat as little as I possibly can,” Thompson told me, “because I believe ultimately it’s something I can do to decrease my risk of cancer.” Cantley put it this way: “Sugar scares me.”
Sugar scares me too, obviously. I’d like to eat it in moderation. I’d certainly like my two sons to be able to eat it in moderation, to not overconsume it, but I don’t actually know what that means, and I’ve been reporting on this subject and studying it for more than a decade. If sugar just makes us fatter, that’s one thing. We start gaining weight, we eat less of it. But we are also talking about things we can’t see — fatty liver, insulin resistance and all that follows. Officially I’m not supposed to worry because the evidence isn’t conclusive, but I do.
Gary Taubes (gataubes@gmail.com) is a Robert Wood Johnson Foundation independent investigator in health policy and the author of “Why We Get Fat.” Editor: Vera Titunik (v.titunik-MagGroup@nytimes.com).

Sunday, April 24, 2011

Are the Gospels Mythical?

From the earliest days of Christianity, the Gospels' resemblance to certain myths has been used as an argument against Christian faith. When pagan apologists for the official polytheism of the Roman Empire denied that the death-and-resurrection myth of Jesus differed in any significant way from the myths of Dionysus, Osiris, Adonis, Attis, etc., they failed to stem the rising Christian tide. In the last two hundred years, however, as anthropologists have discovered all over the world foundational myths that similarly resemble Jesus' Passion and Resurrection, the notion of Christianity as a myth seems at last to have taken hold—even among Christian believers.

Beginning with some violent cosmic or social crisis, and culminating in the suffering of a mysterious victim (often at the hands of a furious mob), all these myths conclude with the triumphal return of the sufferer, thereby revealed as a divinity. The kind of anthropological research undertaken before World War II—in which theorists struggled to account for resemblances among myths—is regarded as a hopeless “metaphysical” failure by most anthropologists nowadays. Its failure seems, however, not to have weakened anthropology's skeptical scientific spirit, but only to have weakened further, in some mysterious way, the plausibility of the dogmatic claims of religion that the earlier theorists had hoped to supersede: if science itself cannot formulate universal truths of human nature, then religion—as manifestly inferior to science—must be even more devalued than we had supposed.

This is the contemporary intellectual situation Christian thinkers face as they read the Scriptures. The Cross is incomparable insofar as its victim is the Son of God, but in every other respect it is a human event. An analysis of that event—exploring the anthropological aspects of the Passion that we cannot neglect if we take the dogma of the Incarnation seriously—not only reveals the falsity of contemporary anthropology's skepticism about human nature. It also utterly discredits the notion that Christianity is in any sense mythological. The world's myths do not reveal a way to interpret the Gospels, but exactly the reverse: the Gospels reveal to us the way to interpret myth.

Jesus does, of course, compare his own story to certain others when he says that his death will be like the death of the prophets: “The blood of all the prophets shed since the foundation of the world may be required of this generation, from the blood of Abel to the blood of Zechariah” (Luke 11:50-51). What, we must ask, does the word like really mean here? In the death most strikingly similar to the Passion—that of the Suffering Servant in Isaiah, chapters 52–53—a crowd unites against a single victim, just as similar crowds unite against Jeremiah, Job, the narrators of the penitential psalms, etc. In Genesis, Joseph is cast out by the envious crowd of his brothers. All these episodes of violence have the same all-against-one structure.

Since John the Baptist is a prophet, we may expect his violent death in the New Testament to be similar, and indeed John dies because Herod's guests turn into a murderous crowd. Herod himself is as inclined to spare John's life as Pilate is to spare Jesus'—but leaders who do not stand up to violent crowds are bound to join them, and join them both Herod and Pilate do. Ancient people typically regarded ritual dancing as the most mimetic of all arts, solidifying the participants in a sacrifice against the soon-to-be-immolated victim. The hostile polarization against John results from Salome's dancing—a result foreseen and cleverly engineered by Herodias for exactly that purpose.

There is no equivalent of Salome's dancing in Jesus' Passion, but a mimetic or imitative dimension is obviously present. The crowd that gathers against Jesus is the same that had enthusiastically welcomed him into Jerusalem a few days earlier. The sudden reversal is typical of unstable crowds everywhere: rather than a deep-seated hatred for the victim, it suggests a wave of contagious violence.

Peter spectacularly illustrates this mimetic contagion. When surrounded by people hostile to Jesus, he imitates their hostility. He obeys the same mimetic force, ultimately, as Pilate and Herod. Even the thieves crucified with Jesus obey that force and feel compelled to join the crowd. And yet, I think, the Gospels do not seek to stigmatize Peter, or the thieves, or the crowd as a whole, or the Jews as a people, but to reveal the enormous power of mimetic contagion—a revelation valid for the entire chain of murders stretching from the Passion back to “the foundation of the world.” The Gospels have an immensely powerful reason for their constant reference to these murders, and it concerns two essential and yet strangely neglected words, skandalon and Satan.

The traditional English translation of stumbling block is far superior to timid recent translations, for the Greek skandalon designates an unavoidable obstacle that somehow becomes more attractive (as well as repulsive) each time we stumble against it. The first time Jesus predicts his violent death (Matthew 16:21-23), his resignation appalls Peter, who tries to instill some worldly ambition in his master: Instead of imitating Jesus, Peter wants Jesus to imitate him. If two friends imitate each other's desire, they both desire the same object. And if they cannot share this object, they will compete for it, each becoming simultaneously a model and an obstacle to the other. The competing desires intensify as model and obstacle reinforce each other, and an escalation of mimetic rivalry follows; admiration gives way to indignation, jealousy, envy, hatred, and, at last, violence and vengeance. Had Jesus imitated Peter's ambition, the two thereby would have begun competing for the leadership of some politicized “Jesus movement.” Sensing the danger, Jesus vehemently interrupts Peter: “Get behind me, Satan, you are a skandalon to me.”

The more our models impede our desires, the more fascinating they become as models. Scandals can be sexual, no doubt, but they are not primarily a matter of sex any more than of worldly ambition. They must be defined in terms not of their objects but of their obstacle/model escalation—their mimetic rivalry that is the sinful dynamics of human conflict and its psychic misery. If the problem of mimetic rivalry escapes us, we may mistake Jesus' prescriptions for some social utopia. The truth is rather that scandals are such a threat that nothing should be spared to avoid them. At the first hint, we should abandon the disputed object to our rivals and accede even to their most outrageous demands; we should “turn the other cheek.”

If we choose Jesus as our model, we simultaneously choose his own model, God the Father. Having no appropriative desire, Jesus proclaims the possibility of freedom from scandal. But if we choose possessive models we find ourselves in endless scandals, for our real model is Satan. A seductive tempter who suggests to us the desires most likely to generate rivalries, Satan prevents us from reaching whatever he simultaneously incites us to desire. He turns into a diabolos (another word that designates the obstacle/model of mimetic rivalry). Satan is skandalon personified, as Jesus makes explicit in his rebuke of Peter.

Since most human beings do not follow Jesus, scandals must happen (Matthew 18:7), proliferating in ways that ought to endanger the collective survival of the human race—for once we understand the terrifying power of escalating mimetic desire, no society seems capable of standing against it. And yet, though many societies perish, new societies manage to be born, and quite a few established societies manage to find ways to survive or regenerate. Some counterforce must be at work, not powerful enough to terminate scandals once and for all, and yet sufficient to moderate their impact and keep them under some control.

This counterforce is, I believe, the mythological scapegoat—the sacrificial victim of myth. When scandals proliferate, human beings become so obsessed with their rivals that they lose sight of the objects for which they compete and begin to focus angrily on one another. As the borrowing of the model's object shifts to the borrowing of the rival's hatred, acquisitive mimesis turns into a mimesis of antagonists. More and more individuals polarize against fewer and fewer enemies until, in the end, only one is left. Because everyone believes in the guilt of the last victim, they all turn against him—and since that victim is now isolated and helpless, they can do so with no danger of retaliation. As a result, no enemy remains for anybody in the community. Scandals evaporate and peace returns—for a while.

Society's preservation against the unlimited violence of scandals lies in the mimetic coalition against the single victim and its ensuing limited violence. The violent death of Jesus is, humanly speaking, an example of this strange process. Before it begins, Jesus warns his disciples (and especially Peter) that they will be “scandalized” by him (Mark 14:27). This use of skandalizein suggests that the mimetic force at work in the all-against-one violence is the same force at work in mimetic rivalries between individuals. In preventing a riot and dispersing a crowd, the Crucifixion is an example of cathartic victimization. A fascinating detail in the gospel makes clear the cathartic effects of the mimetic murder—and allows us to distinguish them from the Crucifixion's Christian effects.

At the end of his Passion account, Luke writes, “And Herod and Pilate became friends with each other that very day, for before this they had been at enmity with each other” (23:12). This reconciliation outwardly resembles Christian communion—since it originates in Jesus' death—and yet it has nothing to do with it. It is a cathartic effect rooted in the mimetic contagion.

Jesus' persecutors do not realize that they influence one another mimetically. Their ignorance does not cancel their responsibility, but it does lessen it: “Father, forgive them,” Jesus cries, “for they know not what they do” (Luke 23:34). A parallel statement in Acts 3:17 shows that this must be interpreted literally. Peter ascribes to ignorance the behavior of the crowd and its leaders. His personal experience of the mimetic compulsion that possesses crowds prevents him from regarding himself as immune to the violent contagion of victimization.

The role of Satan, the personification of scandals, helps us to understand the mimetic conception of the Gospels. To the question “How can Satan cast out Satan?” (Mark 3:23), the answer is unanimous victimization.

On the one hand, Satan is the instigator of scandal, the force that disintegrates communities; on the other hand, he is the resolution of scandal in unanimous victimization. This trick of last resort enables the prince of this world to rescue his possessions in extremis, when they are too badly threatened by his own disorder. Being both a principle of disorder and a principle of order, Satan is truly divided against himself.

The famous portrayal of the mimetic murder of John the Baptist occurs—in both Mark and Matthew—as a curious flashback. By beginning with an account of Herod eagerly seizing on the rumor of John's resurrection, and only then going back in time to narrate John's death, Mark and Matthew reveal the origin of Herod's compulsive belief in his own decisive participation in the murder. The evangelists give a fleeting but precious example of mythic genesis—of the ordering power of violence, of its ability to found culture. Herod's belief is vestigial, to be sure, but the fact that two Gospels mention it confirms, I think, the evangelical authenticity of the doctrine that grounds mythology in mimetic victimization.

Modern Christians are often made uncomfortable by this false resurrection that seems to resemble the true one, but Mark and Matthew obviously do not share their embarrassment. Far from downplaying the similarities, they attract our attention to them, much as Luke attracts our attention to the resemblance between Christian communion and the unholy reconciliation of Herod and Pilate as a result of Jesus' death. The evangelists see something very simple and fundamental that we ourselves should see. As soon as we become reconciled to the similarities between violence in the Bible and myths, we can understand how the Bible is not mythical—how the reaction to violence recorded in the Bible radically differs from the reaction recorded in myth.

Beginning with the story of Cain and Abel, the Bible proclaims the innocence of mythical victims and the guilt of their victimizers. Living after the widespread promulgation of the gospel, we find this natural and never pause to think that in classical myths the opposite is true: the persecutors always seem to have a valid cause to persecute their victims. The Dionysiac myths regard even the most horrible lynchings as legitimate. Pentheus in the Bacchae is legitimately slain by his mother and sisters, for his contempt of the god Dionysus is a fault serious enough to warrant his death. Oedipus, too, deserves his fate. According to the myth, he has truly killed his father and married his mother, and is thus truly responsible for the plague that ravages Thebes. To cast him out is not merely a permissible action, but a religious duty.

Even if they are not accused of any crime, mythical victims are still supposed to die for a good cause, and their innocence makes their deaths no less legitimate. In the Vedic myth of Purusha, for instance, no wrongdoing is mentioned—but the tearing apart of the victim is nonetheless a holy deed. The pieces of Purusha's body are needed to create the three great castes, the mainstay of Indian society. In myth, violent death is always justified.

If the violence of myths is purely mimetic—if it is like the Passion, as Jesus says—all these justifications are false. And yet, since they systematically reverse the true distribution of innocence and guilt, such myths cannot be purely fictional. They are lies, certainly, but the specific kind of lie called for by mimetic contagion—the false accusation that spreads mimetically throughout a disturbed human community at the climax when scandals polarize against the single scapegoat whose death reunites the community. The myth-making machine is the mimetic contagion that disappears behind the myth it generates.

There is nothing secret about the justifications espoused by myths; the stereotypical accusations of mob violence are always available when the search for scapegoats is on. In the Gospels, however, the scapegoating machinery is fully visible because it encounters opposition and no longer operates efficiently. The resistance to the mimetic contagion prevents the myth from taking shape. The conclusion in the light of the Gospels is inescapable: myths are the voice of communities that unanimously surrender to the mimetic contagion of victimization.

This interpretation is reinforced by the optimistic endings of myths. The conjunction of the guilty victim and the reconciled community is too frequent to be fortuitous. The only possible explanation is the distorted representation of unanimous victimization. The violent process is not effective unless it fools all witnesses, and the proof that it does, in the case of myths, is the harmonious and cathartic conclusion, rooted in a perfectly unanimous murder.

We hear nowadays that, behind every text and every event, there are an infinite number of interpretations, all more or less equivalent. Mimetic victimization makes the absurdity of this view manifest. Only two possible reactions to the mimetic contagion exist, and they make an enormous difference. Either we surrender and join the persecuting crowd, or we resist and stand alone. The first way is the unanimous self-deception we call mythology.

The second way is the road to the truth followed by the Bible.

Instead of blaming victimization on the victims, the Gospels blame it on the victimizers. What the myths systematically hide, the Bible reveals.

This difference is not merely “moralistic” (as Nietzsche believed) or a matter of subjective choice; it is a question of truth. When the Bible and the Gospels say that the victims should have been spared, they do not merely “take pity” on them. They puncture the illusion of the unanimous victimization that foundational myths use as a crisis-solving and reordering device of human communities.

When we examine myths in the light of the Gospels, even their most enigmatic features become intelligible. Consider, for example, the disabilities and abnormalities that seem always to plague mythical heroes. Oedipus limps, as do quite a few of his fellow heroes and divinities. Others have only one leg, or one arm, or one eye, or are blind, hunchbacked, etc. Others still are unusually tall or unusually short. Some have a disgusting skin disease, or a body odor so strong that it plagues their neighbors. In a crowd, even minor disabilities and singularities will arouse discomfort and, should trouble erupt, their possessors are likely to be selected as victims. The preponderance of cripples and freaks among mythical heroes must be a statistical consequence of the type of victimization that generates mythology. So too the preponderance of “strangers”: in all isolated groups, outsiders arouse a curiosity that may quickly turn to hostility during a panic. Mimetic violence is essentially disoriented; deprived of valid causes, it selects its victims according to minuscule signs and pseudo-causes that we may identify as preferential signs of victimization.

In the Bible, the false or insignificant causes of mythical violence are effectively dismissed in the simple and sweeping statement, “They hated me without a cause” (John 15:25), in which Jesus quotes and virtually summarizes Psalm 35—one of the “scapegoat psalms” that literally turns the mob's mythical justifications inside out. Instead of the mob speaking to justify violence with causes that it perceives as legitimate, the victim speaks to denounce the causes as nonexistent.

To explicate archaic myths, we need only follow the method Jesus recommends and substitute this “without cause” for the false mythical causes.

In the Byzantine Empire, I understand, the Oedipus tragedy was read as an analogue of the Christian Passion. If true, those early interpreters were approaching the right problem from the wrong end: their reduction of the Gospels to an ordinary myth snuffed out the evangelical light with mythology.

In order to succeed, one must illuminate the obscurity of myth with the intelligence of the Gospels.

If unanimous victimization reconciles and reorders societies in direct proportion to its concealment, then it must lose its effectiveness in direct proportion to its revelation. When the mythical lie is publicly denounced, the polarization of scandals is no longer unanimous and the social catharsis weakens and disappears. Instead of reconciling the community, the victimization must intensify divisions and dissensions.

These disruptive consequences should be felt in the Gospels and, indeed, they are. In the Gospel of John, for instance, everything Jesus does and says has a divisive effect. Far from downplaying this fact, the author repeatedly draws our attention to it. Similarly, in Matthew 10:34, Jesus says, “I have not come to bring peace, but a sword.” If the only peace humanity has ever enjoyed depends on unconscious victimization, the consciousness that the Gospels bring into the world can only destroy it.

The image of Satan, “a liar and the father of lies” (John 8:44), also expresses this opposition between the mythical obscuring and the evangelical revealing of victimization. The Crucifixion as a defeat for Satan, together with Jesus' prediction that Satan “is coming to an end” (Mark 3:26), implies less an orderly world than one in which Satan is on the loose. Instead of concluding with the reassuring harmony of myths, the New Testament opens up apocalyptic perspectives, in the synoptic Gospels as much as in the Book of Revelation. To reach “the peace that surpasseth all understanding,” humanity must give up its old, partial peace founded on victimization, and a great deal of turmoil can be expected. The apocalyptic dimension is not an alien element that should be purged from the New Testament in order to “improve” Christianity; it is an integral part of revelation.

Satan tries to silence Jesus through the very process that Jesus subverts. He has good reasons to believe that his old mimetic trick should still produce, with Jesus as victim, what it has always produced in the past: one more myth of the usual type, a closed system of mythical lies. He has good reasons to believe that the mimetic contagion against Jesus will prove irresistible once again and that the revelation will be squelched.

Satan's expectations are disappointed. The Gospels do everything that the Bible had done before, rehabilitating a victimized prophet, a wrongly accused victim. But they also universalize this rehabilitation. They show that, since the foundation of the world, the victims of all Passion-like murders have been victims of the same mimetic contagion as Jesus. The Gospels make the revelation complete. They give to the biblical denunciation of idolatry a concrete demonstration of how false gods and their violent cultural systems are generated. This is the truth missing from mythology, the truth that subverts the violent system of this world. If the Gospels were mythical themselves, they could not provide the knowledge that demythologizes mythology.

Christianity, however, is not reducible to a logical scheme. The revelation of unanimous victimization cannot involve an entire community—else there would be no one to reveal it. It can only be the achievement of a dissenting minority bold enough to challenge the official truth, and yet too small to prevent a near-unanimous episode of victimization from occurring. Such a minority, however, is extremely vulnerable and ought normally to be swallowed up in the mimetic contagion. Humanly speaking, the revelation is an impossibility.

In most biblical texts, the dissenting minority remains invisible, but in the Gospels it coincides with the group of the first Christians. The Gospels dramatize the human impossibility by insisting on the disciples' inability to resist the crowd during the Passion (especially Peter, who denies Jesus three times in the High Priest's courtyard). And yet, after the Crucifixion, which should have made matters worse than ever, this pathetic handful of weaklings suddenly succeeds in doing what they had been unable to do when Jesus was still there to help them: they boldly proclaim the innocence of the victim in open defiance of the victimizers and become the fearless apostles and missionaries of the early Church.

The Resurrection is responsible for this change, of course, but even this most amazing miracle would not have sufficed to transform these men so completely if it had been an isolated wonder rather than the first manifestation of the redemptive power of the Cross. An anthropological analysis enables us to say that, just as the revelation of the Christian victim differs from mythical revelations because it is not rooted in the illusion of the guilty scapegoat, so the Christian Resurrection differs from mythical ones because its witnesses are the people who ultimately overcome the contagion of victimization (such as Peter and Paul), and not the people who surrender to it (such as Herod and Pilate). The Christian Resurrection is indispensable to the purely anthropological revelation of unanimous victimization and to the demythologizing of mythical resurrections.

Jesus' death is a source of grace not because the Father is “avenged” by it, but because Jesus lived and died in the manner that, if adopted by all, would do away with scandals and the victimization that follows from scandals. Jesus lived as all men should live in order to be united with a God Whose true nature he reveals.

Obeying perfectly the anti-mimetic prescriptions he recommends, Jesus has not the slightest tendency toward mimetic rivalry and victimization. And he dies, paradoxically, because of this perfect innocence. He becomes a victim of the process from which he will liberate mankind. When one man alone follows the prescriptions of the kingdom of God, it seems an intolerable provocation to all those who do not, and this man automatically designates himself as the victim of all men. This paradox fully reveals “the sin of the world,” the inability of man to free himself from his violent ways.

During Jesus' life, the dissenting minority of those who resist the mimetic contagion is really limited to one man, Jesus himself—who is simultaneously the most arbitrary victim (because he deserves his violent death less than anyone else) and the least arbitrary victim (because his perfection is an unforgivable insult to the violent world). He is the scapegoat of choice, the lamb of God whom we all choose unconsciously even when not aware of choosing any victim.

When Jesus dies alone, abandoned by his apostles, the persecutors are unanimous once again. Were the Gospels trying to tell a myth, the truth Jesus had tried to reveal would then be buried once and for all, and the stage would be set for the triumphal revelation of the mythological victim as the divine source of the reordering of society through the “good” scapegoating violence that puts an end to the bad mimetic violence that had threatened the society.

If such a death-and-resurrection myth is not what happens this time—if Satan in the end is foiled—the immediate cause is a sudden burst of courage in the disciples. But the strength for that did not come from themselves. It visibly flows from the innocent death of Jesus. Divine grace makes the disciples more like Jesus, who had announced before his death that they would be helped by the Holy Spirit of truth. This is one reason, I believe, the Gospel of John calls the Spirit of God the Paraclete, a Greek word that simply means the lawyer for the defense, the defender of the accused before a tribunal. The Paraclete is, among other things, the counterpart of the Accuser: the Spirit of Truth who gives the definitive refutation of the satanic lie. That is why Paul writes, in 1 Corinthians 2:7-8: “We impart a secret and hidden wisdom of God. . . . None of the rulers of this age understood this; for if they had, they would not have crucified the Lord of glory.”

The true Resurrection is based not on the mythical lie of the guilty victim who deserves to die, but on the rectification of that lie, which comes from the true God and which reopens channels of communication mankind itself had closed through self-imprisonment in its own violent cultures. Divine grace alone can explain why, after the Resurrection, the disciples could become a dissenting minority in an ocean of victimization—could understand then what they had misunderstood earlier: the innocence not of Jesus alone but of all victims of all Passion-like murders since the foundation of the world.

René Girard is the Andrew B. Hammond Professor Emeritus of French Language, Literature, and Civilization at Stanford University. His many books include Violence and the Sacred and Things Hidden Since the Foundation of the World.