*
Why does our society view the medical establishment as the supreme authority on what our priorities should be and how we should live our lives? Hope for the cure of all illness, based on medicine’s clear success in dealing with injuries and infectious disease, is understandable. But there is far more to the public’s faith in it than that. It’s more than mere wishful thinking, too, although that certainly contributes.
I suspect one factor is that we’re conditioned from infancy to rely on authority figures for our well-being—conditioned by the fact that we’re a species with a long childhood, during which care from adults is essential to survival. In former centuries, experience with medicine was less apt than it is today to inspire trust in doctors as substitute all-knowing parents. But physicians, who were once the social equals of their patients, assumed an authoritative role with the advent of hospital medicine.
There is, to be sure, a present trend toward rejection of doctors’ authority. Many observers have remarked on this as a distinct change from the attitudes that prevailed in the first half of the twentieth century. However, it applies mainly to the authority of individual doctors, in the sense that they’re no longer placed on pedestals and are often suspected of incompetence or financial greed. People are now quick to seek advice from a sequence of doctors and/or alternative practitioners when dissatisfied with the help they get. Yet the belief that medical providers of one sort or another can help, that they’re the ultimate source of advice to which humans have access, seems stronger than ever.
Personally, I believe that neither medicine’s past success nor conditioned dependence on its authority is enough to account for the fanatic devotion our society has to the concept of medical care. It has always seemed to me that such devotion is comparable to a religion. Some years ago, when I began to investigate these issues, I imagined I might write an essay setting forth that hypothesis—though I knew it would be regarded as heresy. But I soon found that it’s far from an original idea. Many writers have argued that medicine has become a religion; the more one examines the analogy, the more obvious it appears to be.
As faith in traditional religions declined, and people stopped turning to the clergy with their problems, they turned to doctors and psychotherapists; this much is common knowledge. Less widely recognized is the fact that instead of the secularization of society that’s generally assumed, we got medicalization. Our culture is not without a creed. Its creed is a list of questionable assumptions about the scientific validity of modern medicine, plus some others that are not related to science.
Chief among these other assumptions is the moral authority accorded to medicine. In our society, it’s not just thought wise to defer to medical judgment—it’s considered virtuous. Dissent on the part of a patient, or worse, the refusal to become a patient, is an offense against common mores; the admonishment “You ought to see a doctor” implies reproach, and rebellion is seen as scandalous. Failure to follow medically-endorsed health recommendations is viewed as sin; it has reached the point where ordinary foods once found on everyone’s table are referred to in recipe columns as “sinful.” The primary emotion now felt by people in connection with their health is not satisfaction or dissatisfaction, but guilt. A person who values quality of life over hypothetical length of life is looked upon not merely as foolish, but as irresponsible. Acceptance of unpleasant treatments or lifestyles, whether or not these can rationally be expected to impart benefit, is considered commendable, while rejection of them is regarded as error—as if, perhaps, doing penance for ill health might exorcise it, or at least ward society against its spread.
It is, of course, an ancient belief that sickness is punishment for sin. But today, that belief has been explicitly rejected, to be replaced by a feeling that poor health is in itself a moral failing. We strive not to be good, but to be well. Physician Arthur Barsky, in Worried Sick, writes, “As people in the past sought to lead a ‘religious life,’ we now seek the healthy lifestyle. Attaining a sense of wellness is like attaining a state of grace. In a sense, we’ve replaced the religious quest for the salvation of our souls with the secular quest for the salvation of our bodies.” Quoting Marshall Becker, he continues, “Health promotion . . . is a new religion, in which we worship ourselves, attribute good health to our devoutness, and view illness as just punishment for those who have not yet seen the way.”
There is irony in the fact that some criticize our society for abandoning the concept of moral responsibility and replacing it with the idea that offenders are “sick,” for in fact, sickness has indeed become the equivalent of sinfulness—and treatment the equivalent of repentance—not merely in the case of criminals, but with regard to all who display physical or behavioral deviance from an ideal “norm.” The sick person is required to submit to the authority of the establishment, just as sinners once were. His or her private judgments are not accorded any more respect than they were by the most dogmatic of churches. To be sure, treatment, except for certain “mental illnesses,” is not quite compulsory (yet) unless a crime is committed; we don’t subject patients to force (though we do sometimes induce them to undergo torture for the sake of alleged salvation). But the source of pressure to conform is identical: preserve society’s faith at all costs! We may say our society is faithless, that it has lost its sense of common purpose; but that’s not true. We worship Perfect Health, and the medical establishment is its priesthood.
None of this is meant to suggest insincerity on the part of doctors. Most doctors are motivated by genuine desire to help their fellow human beings. But so were most priests, even within intolerant religious establishments. They sincerely believed that sinners would be saved by deference to ecclesiastical authority, and were sure that nonconformists would end up in hell. Similarly, doctors believe that people’s ultimate welfare depends on adherence to the dictates of health dogma and that disregard of these will result in suffering or early death. They trust the medical science they have been taught, as clergy trusted the theological hierarchy; they’re convinced of the efficacy of its rituals. Significantly, this conviction comes from faith—fewer such rituals than we suppose are backed by objective science.
It’s not really surprising that this faith has developed. Early death was once common, so common that people didn’t question it—but they had the hope of heaven. Most of them paid a great deal of attention to doing what they were told would assure them of heaven. But belief in a literal heaven has faded. To today’s majority, religious rituals, if meaningful at all, have symbolic rather than practical value. Is it any wonder that in an era where medicine can defy death, the goal of maximum lifespan—pursued by often-slavish devotion to ritualistic health practices—has taken heaven’s place?
People have always sought direction in their search for salvation and have been willing to give up a lot for it. Thus they’ve welcomed the medicalization of our society, assuming the role of patients even when they don’t feel sick and going for checkups as they once would have gone to confession. Craving absolution, they now subject themselves to medical authority not just during illness, but at birth, death, and throughout all phases of their lives.
Nor is that the only way in which medicine assumes the role of religion. To many, it is the prime source of mystery and awe. Once, people sought cures at shrines and temples, as some do even today; to look to an arcane source for healing is a deep-seated human impulse. Anthropologist/physician Melvin Konner writes that in many cultures doctors still keep secrets from their patients, as they used to in our own, and that only in theory does modern medicine’s focus on science dispel, rather than cultivate, mystery. “In reality the mystery remains, but it has a different location: it lives on the frontiers of technology . . . doctors and patients stand together, grateful and humbled, before what almost seem to be technological gods.” In their book The Healing Brain, Robert Ornstein and David Sobel quote a description by psychiatrist Jerome Frank of “how a great teaching hospital might look to an anthropologist from Mars who is studying healing shrines in America.” This beautifully illustrates the medical/religious analogy.
I suspect that the idolization of medicine and its technology is a trend that won’t change in the twenty-first century, if it ever does, though many assumptions of today’s medical establishment will be altered. Personally, I tend to get emotional about it, as heretics often do, and my dystopian science fiction novel Stewards of the Flame deals with a society in which this trend has been carried to its logical conclusion, in some respects to reductio ad absurdum lengths. Yet I don’t mean to say that it’s bad in all ways. As Ornstein and Sobel point out, it may serve to mobilize “the faith that heals” in those who don’t share my skepticism.
My point in raising the issue is simply to explain how we got into a situation where public perceptions conflict with an objective approach to the findings of science. There has been, and still is, a lot of resistance to those findings, especially in the case of evidence that medicine can’t magically perfect us. In my opinion, the religious character of our medical outlook is a principal reason for it.
*
Just as political and financial factors once shaped the doctrine of authoritative legally-established religions, they have profound effects on the dogma promulgated by health authorities today. Our medical-industrial establishment (a term more accurate than mere “medical establishment,” if more clumsy) is exceeded in size only by our military-industrial establishment, and it is no more efficient or high-minded. It encompasses powerful commercial interests and a huge government bureaucracy. And although, as a general rule, I believe the profit motive leads to benefits for the public, I do not think this is true with medicine. In the case of other technologies, we can judge as individuals how much we’ll gain from them; we can buy or not buy a product, as we choose. Not so with medical technology. We are told what we need not just by advertising we can tune out, but by authorities we’re told to trust. We’re in no position to evaluate their pronouncements, let alone associated media hype. A good deal is now said about how commercialization of health care has led to corners being cut in provision of services, but the opposite problem is older and more pervasive.
Medical research is expensive. A lot of it is paid for, directly or indirectly, by pharmaceutical firms; the rest is funded by government grants and/or by organizations that appeal directly to the public, and thus have an interest in public perceptions. The results of such research aren’t of mere scientific interest; they impact the economy—suppliers, laboratories, and even the food industry, in addition to medical providers and the drug companies. They influence the prestige and power structure of institutions. In some cases, research projects affect local interests to the extent that politicians lobby for them. The insurance industry, despite reluctance to pay for specific services, nevertheless benefits from a situation in which health insurance is a necessity of life. Furthermore, the amount spent yearly on medical testing now exceeds that spent on pharmaceuticals, and much of it is paid on behalf of people who are not even sick. It’s hardly surprising if from all this, a policy of maximum consumption of “care” emerges, and medical interventionism prevails over any suggestion that we need not all be constant consumers of it. People want “only the best” when it comes to health care, and are led to assume, not necessarily wisely, that “best” means “most.”
The influence of pharmaceutical companies on health care standards is too well known, and too widely criticized, for me to go into here—there are countless books directed to the public that present the details. Briefly, constant lobbying results in legislation that keeps drug prices artificially high; the results of tests for drug safety and efficacy are not always as valid as they appear; scandal involving government regulation of medical drugs is by no means rare; new expensive drugs are aggressively marketed as “improved” replacements for older ones that work just as well if not better; and worst of all, drugs are widely promoted for conditions that don’t require medication and in many cases would not even be considered “illness” if no such promotion had been done. To be sure, some of the immense profit derived from selling such drugs funds the development of those needed by people who are really sick. But at what cost to society? Since drugs are not profitable unless taken by large numbers of people, the result has been that a great many—and their doctors—have been led to believe that they need ongoing medication, to prevent future illness if for no other reason. They have also been conditioned from childhood onward to think that any form of discomfort should be dealt with by ingesting chemicals and that doing so is virtuous; is it any wonder that so many abuse drugs, or turn to illegal ones for relief from mere unhappiness or boredom?
I suspect, too, that there are additional reasons for government promotion of so-called preventive medicine. If it leads to only a minor statistical decrease in “risk factor” diseases—or can be made to seem as if it will—such propaganda is politically advantageous, even if it harms more individuals than it helps (which is more often the case than is generally known). The media attention to “prevention” makes it look as if the government is Doing Something about diseases for which cures haven’t appeared. Moreover, the concept sounds good; it even sounds like the cost-saver it’s asserted to be (although the General Accounting Office determined in 1987 that “preventive” medicine costs more in the long run than treating only people who get sick, something insurance companies have always known). Perhaps the bureaucrats have been taken in by the way it sounds. On the other hand . . .
Government involvement in medical care has been on the rise ever since public funds were first devoted to it, and has increased exponentially since the establishment of Medicare and Medicaid. Further intrusion on personal health autonomy can be expected if coverage of medical care costs is broadened—what the government pays for, it controls, and to bureaucrats, any opportunity to control is welcome. Surprisingly few warnings about the consequences are coming from people who ordinarily are deeply concerned about protection of individual liberty; on the contrary, virtually any extension of government authority is viewed with approval if it is deemed justifiable as a health measure. I think the situation I portrayed in Stewards of the Flame is not too far-fetched.
I suppose all this makes me sound like a cynic—which, in other areas of inquiry, I usually am not. I ask myself, sometimes, why I have never shared society’s trust in medical authority, why, even as a child, I reacted strongly against well-meant attempts to interfere with people’s bodies. It struck me as an unwarranted invasion of privacy. The purpose of medicine, I felt, was to relieve suffering, not to concern itself with “norms.” Perhaps it was simply that I value individuality over conformity. Or perhaps it was that by nature I’m an intellectual rebel, inclined to doubt anything I’m told I should believe. Many people react that way to religion, but I was never urged to accept a formal religion; perhaps for me too, in a negative sense, the medical establishment took its place.
*
Although commercial interests have a powerful role in determining the perceived status of an individual’s health, the influence of social factors is even stronger. Like any other religion, medicine defines the need for reformation in terms of deviance from what is currently seen as normal.
In The Limits of Medicine, Edward Golub points out that “our perception of when we are well or ill is defined by time and culture.” He suggests that the gnarled hands of arthritis were once viewed not as disease, but as the natural condition of those lucky enough to live to old age. In general, however, atypical conditions are not seen in a positive way. Rather, variations from idealized “norms” are arbitrarily declared to be signs of sickness. Furthermore, as was observed by a medical news publication, “Politicians discovered long ago that a good way to pass the buck is to get a problem labelled as a disease.” The medical establishment, as the agent of society, is given sole power not only to cure illness, but to define it—and the definitions, upheld by its authority, both reflect and reinforce society’s prejudices. Consider these examples:
1. In the 1850s the desire of slaves to escape from their masters was labeled a mental disease, at least in the South, and was given the scientific-sounding name of “drapetomania.”
2. Throughout the second half of the nineteenth century, the normal emotional problems of women were blamed on their reproductive organs and were frequently treated by gynecological surgery.
3. In the early twentieth century, identification of “undesirable” traits as disease appropriate for extirpation by physicians was not limited to Nazis. The eugenics movement in America resulted in 20,000 involuntary sterilizations, and was lauded by respected authorities as an advance in medical science. Though focused on the “feeble-minded,” it also targeted “drug fiends” and “drunkards”; even “shiftlessness” and “pauperism” were assumed to be medical conditions of genetic origin.
4. Frontal lobotomy was performed on thousands of “disturbed” patients, even children, in the 1940s despite growing criticism of its devastating effects. So great was its popularity that its inventor shared the 1949 Nobel Prize in Medicine.
5. From the 1920s through the 1950s, intact tonsils were viewed as unhealthy and tonsillectomy for children was virtually routine among families who could afford it; it was said that this would prevent colds. I myself was an unwilling victim of this practice in 1940, although a controlled study as early as the 1930s showed that children without tonsils have no fewer colds than others. The fact that colds aren’t serious enough to justify surgery, which seemed apparent to me at the age of six, was not recognized during that era.
6. Until the late 1950s, enlarged thymus glands found in infants were called “status thymicolymphaticus” and treated with irradiation. Eventually, such thymus glands were recognized as normal—but children so treated had an increased risk of thyroid cancer and the girls had an increased risk of breast cancer as well. This is only one example of instances in which an unwarranted “disease” label has resulted in unalloyed physical harm.
7. Homosexuality was listed as a disorder by the American Psychiatric Association until 1973; only the gay rights movement brought about a change in its designation.
8. During the second half of the twentieth century the natural event of menopause was medicalized to the extent that some spoke of older women’s loss of estrogen as a “deficiency disease”; doctors advocated hormone replacement therapy even while conceding that the “deficiency” is universal. Then in 2002 a major study revealed that hormone replacement leads to an increased risk of cancer and heart disease. Today it is generally advised only for women with unusually severe menopausal symptoms.
9. Obesity is currently defined as a disease and is often aggressively treated, despite lack of any proof that fatness per se is harmful. There are statistical associations between fatness and some diseases, but in most cases no evidence that it is a causal factor. Furthermore, there are associations between fatness and decreased risk for other diseases.
10. There is considerable concern among critics of gene therapy about the possibility that other ways in which individuals naturally differ, or variations leading to minor impairments and/or mere risks, will be erroneously perceived as health defects once genes associated with them can be identified. This has already happened in the case of some carriers of recessive genes.
11. Biochemical treatment of children who are not sick, but merely atypical, is already common. For example, growth hormone produced through genetic engineering is given to those who are short, and amphetamines are routinely prescribed for the “overactive.” Both these practices have drawn heavy criticism, but they’re on the increase and further biotechnical developments will lead to additional ones.
12. Conditions identified as alleged risk factors for disease are increasingly viewed as diseases in themselves and are believed to require treatment, often at the cost of quality of life, despite the fact that most individuals with these conditions don’t die at a significantly younger age than average. The mere potential for illness (which exists, in one form or another, in all of us) is thus no longer distinguished from real illness. It remains to be seen whether the routine prescription of statins to lower cholesterol will prove to be one of medicine’s tragic mistakes; giving powerful drugs to a large percentage of the population without knowledge of their long-term effects strikes some people as a prescription for disaster.
13. Today, normal emotions such as anxiety and depression are considered illnesses even in people who are not incapacitated by them, and are given ominous-sounding diagnostic designations. Use of drugs to suppress them is not only condoned, but promoted as virtuous; people who refuse to “get help” are looked down upon. In general, unhappy or withdrawn people are viewed as less healthy than “well-adjusted” ones, whether or not they display physical symptoms.
14. Addictions are now generally considered diseases; in fact, to question this label is to be thought moralistic and unenlightened—although the most effective ways of dealing with them aren’t of medical origin. Increasingly, co-dependency is also looked upon as sickness, in part because, in one observer’s words, it creates “a whole new class of billable patients.”
15. Antisocial and/or deviant behavior has always been branded “mental illness”; the extent to which this is a valid concept is highly controversial. In the former Soviet Union even dissident opinion was viewed as sickness, while at the other extreme, psychiatrist Thomas Szasz argues that there is no such thing as a real mental illness and that schizophrenics are merely victims of political oppression—a view strongly endorsed by the “Mad Rights” movement among mental patients who feel that as long as they aren’t harming anyone they should not be considered ill. The position now prevailing among psychiatrists is that atypical mental functioning is of biochemical origin; increasingly, it is treated with drugs that some experts call “chemical lobotomy.”
16. It is presently unacceptable to list “old age” as the cause of death on a legal death certificate; doctors are required to pinpoint a specific disease. Some feel this is unrealistic, since organs normally fail in old age, often at about the same time. The result is first, that “cures” for such “disease” are expected, and second, that even a person who has lived a long and healthy life can’t avoid “succumbing to disease” at its end.
What all these examples have in common is the fact that they involve people who would never consider themselves sick and in need of medical care if the medical establishment, and/or society, did not so decree. I do not mean to suggest that people in those categories don’t ever need advice and support (though to call helpful counseling “medical” is at best a means of getting medical insurance to pay for it—a benefit that may or may not outweigh the psychological cost of being viewed as diseased). My point is that all such “sickness” labels are value judgments, and depend on attitudes toward human diversity. Medicine has been assumed to know what’s pathological as distinguished from what’s merely variable, but that isn’t something science can determine.
We might say that the ability of a person to function adequately would be a reasonable criterion. But even that depends on culture; is near-sightedness pathological when, in our era, many of us get along fine with glasses? Increasingly, the disabled have been protesting against the idea that they are defective, and are demanding cultural accommodations that will facilitate their functioning as full members of society. Thus the problem, perhaps, lies not in the definition of “defective,” but in the very concept of a defect; and this problem will grow as more and more is learned about the human genome.
There is, to be sure, a common-sense definition of illness on which we can all agree as a minimum: it’s the state where a person is truly suffering—not just assumed to be suffering—and/or is unable to engage in his or her regular activities (and where the problem couldn’t be solved by elimination of social prejudice). If this were the criterion for medicine’s involvement, there would be little difficulty in determining its purview. But, in our medicalized age, medicine’s aspirations go far beyond the relief of suffering and incapacity. As indicated above, they extend to the enforcement of social conformity.
Although intolerance toward minorities has supposedly become taboo in our society, in the case of health-related issues the opposite trend is accelerating. Mike Fitzpatrick, in a 2004 article in The British Journal of General Practice, points out, “Resistance to medicalisation was a common theme of the anti-psychiatry, gay liberation and feminist movements of the 1970s. But [now] . . . radical campaigners are more likely to demand medical recognition of conditions such as ME or Gulf War Syndrome than oppose it as pathologising and stigmatising. Whereas, in the past, the pressure for medicalisation came from the medical profession or the government, now it also comes from below, from society itself.”
Moreover, the larger aim of medicine—strongly supported by the public—is now to prevent illness; in pursuit of this aim, it brands individuals believed to be “at risk” as already ill, often subjecting them to far more suffering, both physical and mental, than they would otherwise experience, not to mention the ongoing financial burden and inconvenience. “Early detection” is encouraged, though it often uncovers conditions that might never have developed unpleasant symptoms. Is a symptomless illness really an illness? Do factors that may or may not contribute to one’s eventual death qualify as disease? It is society, not science, that says so; after all, we will all die of something, sometime, and all physical predispositions may eventually be detectable.
Given the fact that ideas about health are indeed social rather than derivable from objective science, we may ask if there is any such thing as “perfect health.” Obviously, there is not. Nor is there any such thing as “fitness,” another concept widely abused today—too seldom do we ask, “fitness for what?” Fitness depends on environment. It’s meaningless to say that most of us aren’t “fit enough” to survive in the environment our ancestors did, since that’s not the environment we live in. Then too, as the term is used in evolutionary theory, fitness means reproductive fitness: not ability to survive to old age, but the ability to get genes into succeeding generations. In our culture, this hasn’t much correlation with illness or absence thereof in adulthood. We may associate general fitness with health, but it’s circular to define them in terms of each other. Nor does it make sense to define health as the physical condition characterizing humans of prehistoric times, when not only were lifestyles unlike ours, but hardly anyone lived past thirty.
Furthermore, many of the factors that we assume constitute health are self-contradictory. As is explained in Randolph Nesse and George Williams’ book Why We Get Sick, evolution leads to tradeoffs. Some apparent illnesses are actually the body’s defenses against worse illnesses. Some are the result of genes that also have beneficial effects. In the psychological realm, anxiety is an indispensable part of the ability to avoid danger. We can no more say that there is “something wrong” with people for being sick or unhappy sometimes than we can say there’s something wrong with members of racial minorities.
So if health isn’t an objective concept—if there is no standard by which some people can be called normal and others abnormal, and if sickness is inevitable and, in some situations, even adaptive—why do we ask medicine to make us perfect? This is an impossible goal, one leading to intolerance as well as a continuous feeling of guilt on the part of everyone who accepts medicine’s authority. No matter how advanced our science becomes, it cannot be attained. Thus, there are limits to what health care can accomplish, and they are not determined by technological skill, let alone compliance with medically-advised behavior.
Yet as scientific knowledge increases, leading to the availability of tests that detect more and more about individual biochemistry, the tendency to blame people for their physical condition will grow. It will be said, as it often is already, that they should “take responsibility for their own health,” meaning not that they should be willing to abide by the consequences of their personal choices, but that they should adhere to the pronouncements of the medical establishment—or worse, the government—about how they ought to behave. Once illness was viewed as misfortune, and before that, as punishment for sin; now the pendulum is swinging back and it’s being attributed to sin again. As always, dogmatic authorities stand ready to absolve those who repent and accept the penance prescribed. Whether or not they do, if their genetic constitution happens to be such that statistically-based advice would be damaging to them, they will be out of luck. Nonconformists, whatever their reason for resistance, will be persecuted and, quite possibly, taxed.
But change may be on the horizon. Ironically in view of the historic trend toward medicalization, criticisms of current practices are beginning to be voiced—not by the public, but by medical professionals. At the time my novel Stewards of the Flame was written, the belief that many standard medical tests and treatments are unnecessary or even harmful was rare enough to be called heresy, and very little was published expressing that point of view. Since then, increased attention has been paid to the harm caused by medical overtreatment and a number of books and articles about it have appeared, many of them by doctors.
Moreover, it has been pointed out that a great deal of unnecessary testing and treatment is done because patients demand it, sometimes against their physicians’ best judgment. Although it is still common for doctors and hospitals to urge tests, procedures, and drugs merely because they are conventionally, if erroneously, viewed as beneficial—and in some cases, because they are financially profitable—the fundamental problem is the outlook of society as a whole rather than that of the medical establishment. Hopefully, the escalating cost of medical care will wake people up if nothing else does. Society will never be able to afford treatment for all those who need it as long as so much is spent on unnecessary and sometimes-harmful care for those who do not.
Copyright 1995 by Sylvia Engdahl
All rights reserved.
This essay is included in my ebook The Future of Being Human and Other Essays and in my book Selected Essays on Enchantress from the Stars and More.