About one in five American adults (18.6%) has a mental illness in any given year, according to recent statistics from the National Institute of Mental Health. This statistic has been widely reported with alarm and concern. It’s been used to back up demands for more mental health screening in schools, more legislation to forcibly treat the unwilling, more workplace psychiatric interventions, and more funding for the mental health system. And of course, personally, whenever we or someone we know is having an emotional or psychological problem, we now wonder: is it a mental illness requiring treatment? If one in five of us has one….
But what NIMH quietly made disappear from its website is the fact that this number actually represents a dramatic drop. “An estimated 26.2 percent of Americans ages 18 and older — about one in four adults — suffer from a diagnosable mental disorder in a given year,” the NIMH website can still be seen saying in Archive.org’s Wayback Machine. Way back, that is, in 2013.
A reduction in the prevalence of an illness by nearly eight percentage points of America’s population (some 25 million fewer victims in one year) is extremely significant. So isn’t that the real story? And isn’t it also important that India recently reported that mental illnesses affect 6.5% of its population, roughly one-third the US rate?
And that would be the real story, if any of these statistics were even remotely scientifically accurate or valid. But they aren’t. They’re nothing more than manipulative political propaganda.
Pharmaceutical companies fund the tests
First, that 18.6% comprises a smaller group who have “serious” mental illness and are functionally impaired (4.1%), and a much larger group (the remaining 14.5%) who are “mildly to moderately” mentally ill and not functionally impaired by it. Already, we have to wonder how significant many of these “mental illnesses” are, if they don’t impair someone’s functioning at all.
NIMH also doesn’t say how long these illnesses last. We only know that, sometime in the year, 18.6% of us met criteria for a mental illness of some duration. But if some depressions or anxieties last only a week or a month, then it’s possible that at any given time as few as 1-2% of the population are mentally ill. That’s a much less eye-popping number, and one that critics such as Australian psychiatrist Jon Jureidini argue is more accurate.
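That back-of-envelope reasoning is easy to make explicit. The sketch below assumes, purely for illustration, that episodes are spread evenly across the year, so the fraction of people ill at any given moment is the annual rate scaled by average episode length:

```python
# Back-of-envelope conversion of annual prevalence to point-in-time
# prevalence, assuming (purely for illustration) that episodes are
# spread evenly across the year.

def point_prevalence(annual_prevalence, episode_weeks):
    """Approximate fraction of the population ill at any given moment."""
    return annual_prevalence * (episode_weeks / 52)

# If 18.6% meet criteria for an illness sometime during the year:
for weeks in (4, 8, 26):
    p = point_prevalence(0.186, weeks)
    print(f"average episode of {weeks} weeks -> {p:.1%} ill at any moment")
```

Under these assumptions, episodes averaging a month put the point-in-time rate around 1.4% — consistent with the 1-2% figure above. Only if the average illness lasted most of the year would the point-in-time rate approach 18.6%.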
But even that number may be overblown. That’s because these national-level statistics come from surveys of the general population using mental health screening questionnaires that produce extremely high “false positive” rates.
Virtually all of the screening tools have been designed by people, institutions or companies that profit from providing mental health treatments. The Kutcher Adolescent Depression Scale, for example, will “find” mental illnesses wrongly about seven times as often as it finds them correctly. The screening tool’s author, psychiatrist Stan Kutcher, has taken money from over a dozen pharmaceutical companies. He also co-authored the massively influential study that promoted the antidepressant Paxil as safe and effective for depression in children – a study which, according to a $3 billion US Justice Department settlement with GlaxoSmithKline, had actually secretly found that Paxil was ineffective and unsafe for children. Similarly, the widely used PHQ-9 and GAD-7 adult mental health questionnaires were created by the pharmaceutical company Pfizer.
This year’s NIMH numbers came from population surveys conducted by the Substance Abuse and Mental Health Services Administration (SAMHSA), principally the National Survey on Drug Use and Health, which included the Kessler-6 screening tool as a central component — the author of which, Ronald C. Kessler, has received funding from numerous pharmaceutical companies. How misleading is the Kessler-6? It has just six questions. “During the past 30 days, about how often did you feel: 1) nervous, 2) worthless, 3) hopeless, 4) restless or fidgety, 5) that nothing could cheer you up, or 6) that everything was an effort?” For each, responses range from “none of the time” to “all of the time.” If you answer that for “some of the time” over the past month you felt five of those six emotions, then that’s typically enough for a score of 15 and a diagnosis of mild to moderate mental illness. That may sound like the Kessler-6 is a fast way to diagnose as “mentally ill” a lot of ordinary people who are really just occasionally restless, nervous, despairing about the state of the world, and somewhat loose in how they define “some of the time” in a phone survey.
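The mechanics of a questionnaire like this can be sketched in a few lines. The 1-to-5 response coding below is an assumption for illustration — published versions of the K6 also code responses 0 through 4, so exact totals and cut-offs vary by survey:

```python
# Minimal sketch of Kessler-6-style scoring. The 1-to-5 response
# coding is an assumption for illustration; published K6 versions
# also use a 0-to-4 coding, so exact totals vary by survey.

RESPONSES = {
    "none of the time": 1,
    "a little of the time": 2,
    "some of the time": 3,
    "most of the time": 4,
    "all of the time": 5,
}

ITEMS = ["nervous", "worthless", "hopeless", "restless or fidgety",
         "nothing could cheer you up", "everything was an effort"]

def k6_score(answers):
    """Sum the coded responses to the six items."""
    assert len(answers) == len(ITEMS)
    return sum(RESPONSES[a] for a in answers)

# A respondent who felt five of the six "some of the time":
answers = ["some of the time"] * 5 + ["none of the time"]
print(k6_score(answers))  # 16 under this coding
```

Whatever the exact coding convention, the structural point stands: a handful of “some of the time” answers is enough to push an ordinary respondent across a mild-to-moderate threshold.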
And indeed, that’s exactly what it is.
How 80% accuracy leads to 20 times as much mental illness
Under optimal conditions, the best mental health screening tools like the Kessler-6 have sometimes been rated at a sensitivity of 90% and specificity of 80%. Sensitivity is the rate at which people who have a disease are correctly identified as ill. Specificity is the rate at which people who don’t have a disease are correctly identified as disease-free. Many people assume 90% sensitivity and 80% specificity mean that a test will be wrong around 10-20% of the time. But the accuracy depends on the prevalence of the illness being screened for. So for example if you’re trying to find a few needles in a big haystack, and you can distinguish needles from hay with 90% accuracy, how many stalks of hay will you wrongly identify as needles?
The answer is: a lot of hay. Assume a 10% prevalence rate of mental illness among 1,000 people. Simple arithmetic shows that of the 100 who are mentally ill, we will identify 90 of them. Not too bad. However, at 80% specificity, of the 900 who are well, 180 will be wrongly identified as mentally ill. Ultimately, then, our test will determine that 270 people out of 1,000 are mentally ill, nearly tripling the 10% rate we started with to 27%. And the rarer the illness, the worse the test performs mathematically: when only 10 in 1,000 are mentally ill, our test will flag over twenty times that many.
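The arithmetic above is easy to verify. Here is a minimal sketch using the same figures (90% sensitivity, 80% specificity, a population of 1,000):

```python
# The screening arithmetic: with 90% sensitivity and 80% specificity,
# the apparent prevalence far exceeds the true prevalence whenever
# the condition is rare.

def screen(population, true_prevalence, sensitivity, specificity):
    """Return (true positives, false positives, total flagged)."""
    ill = population * true_prevalence
    well = population - ill
    tp = ill * sensitivity          # ill people correctly flagged
    fp = well * (1 - specificity)   # well people wrongly flagged
    return tp, fp, tp + fp

# 10% true prevalence: 90 + 180 = 270 flagged out of 1,000 (27%)
tp, fp, total = screen(1000, 0.10, 0.90, 0.80)
print(round(tp), round(fp), round(total))  # 90 180 270

# 1% true prevalence: 9 + 198 = 207 flagged, versus only 10
# people who are actually ill -- over twenty times too many
tp, fp, total = screen(1000, 0.01, 0.90, 0.80)
print(round(tp), round(fp), round(total))  # 9 198 207
```

Note that nothing about the test changed between the two runs; only the base rate did. That is why headline accuracy figures say little about what a screening program will report for a whole population.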
Mental illness diagnosing is a scientific bottomless pit
This is a common problem with most medical screening tests. They are typically calibrated to miss as few ill people as possible, but consequently they also scoop up a lot of healthy people, who then get subjected to increasingly invasive follow-up tests or unnecessary, dangerous treatments, often becoming anxious or depressed in the process. That’s why even comparatively far more reliable tests like mammography, cholesterol measuring, annual “physicals,” and many other screening programs are coming under increasing criticism.
The designers of mental health screening tools acknowledge all this in the scientific literature, if not often openly to the general public. As explained deep in their report, SAMHSA tried to compensate for the Kessler-6’s false positive rates; however, the main method they used was to give a sub-sample of their participants a Structured Clinical Interview for DSM Disorders (SCID).
SCID is the “gold standard” for diagnosing mental illnesses in accordance with the Diagnostic and Statistical Manual of Mental Disorders, SAMHSA stated. In fact, SCID simply employs a much larger number of highly subjective questions designed to divide people into more specific diagnoses. For example, the SCID asks if there’s ever been “anything that you have been afraid to do or felt uncomfortable doing in front of other people, like speaking, eating, or writing.” Answering “yes” puts you on a fast path to having anxiety disorder with social phobia. Have you ever felt “like checking something several times to make sure that you’d done it right?” You’re on your way to an obsessive compulsive disorder diagnosis.
That’s why SCID actually isn’t any more reliable than the Kessler-6, according to Ronald Kessler. He should know; Harvard University’s Kessler is author of the Kessler-6 and co-author of the World Health Organization’s popular screening survey, the World Mental Health Composite International Diagnostic Interview (WMH-CIDI). In their scientific report on the development of the WMH-CIDI, Kessler’s team explained that they simply abandoned the whole idea of trying to create a mental health screening tool that was “valid” or “accurate.”
The underlying problem, they wrote, is that, unlike with cancer, there’s no scientific way to definitively determine the absence of any mental illnesses and thereby verify the accuracy of a screening tool. “As no clinical gold standard assessment is available,” Kessler et al wrote, “we adopted the goal of calibration rather than validation; that is, we asked whether WMH-CIDI diagnoses are ‘consistent’ with diagnoses based on a state-of-the-art clinical research diagnostic interview [the SCID], rather than whether they are ‘correct’.” Essentially, creating an impression of scientific consensus between common screening and diagnostic tools was considered to be more important than achieving scientific accuracy with any one of them.
And where that “consensus” lies has shifted over time. Until the 1950s, it wasn’t uncommon to see studies finding that up to 80% of Americans were mentally ill. Throughout the ’90s, NIMH routinely assessed that 10% of Americans were mildly to seriously mentally ill. In 2000, the US Surgeon General’s report declared that the number was 20%, and the NIMH that year doubled its reported prevalence rates, too. In recent years, NIMH was steadily pushing its rate up to a high of 26.2%, but changed it several months ago to 18.6% to match the latest SAMHSA rate.
Suicide, mental illness, and other influential sham statistics
Yet as a society we don’t seem to care that there’s a scientific bottomless pit at the heart of all mental illness statistics and diagnosing. One example that highlights how ridiculously overblown and yet influential such epidemiological statistics have become is the claim that “over 90% of people who commit suicide are mentally ill.” This number is frequently promoted by the National Alliance on Mental Illness, American Foundation for Suicide Prevention, American Psychiatric Association, and the National Institute of Mental Health, and it has dominated public policy discussions about suicide prevention for years.
The statistic comes from “psychological autopsy” studies. Psychological autopsies involve getting friends or relatives of people who committed suicide to complete common mental health screening questionnaires on behalf of the dead people.
As researchers in the journal Death Studies in 2012 exhaustively detailed, psychological autopsies are even less reliable than mental health screening tests administered under normal conditions. Researchers doing psychological autopsies typically don’t factor in false positive rates. They don’t account for the fact that the questions about someone’s feelings and thoughts in the weeks leading up to suicide couldn’t possibly be reliably answered by someone else, and they ignore the extreme biases that would certainly exist in such answers coming from grieving friends and family. Finally, the studies often include suicidal thinking as itself a heavily weighted sign of mental illness—making these studies’ conclusions rarely more than tautology: “Suicidal thinking is a strong sign of mental illness, therefore people who committed suicide have a strong likelihood of having been mentally ill.”
Unfortunately, there is immense political significance to framing suicidal feelings and other psychological challenges this way, if not any substantive scientific significance. These alleged high rates of mental illness are becoming increasingly influential when we discuss policy questions with respect to issues as diverse as prison populations, troubled kids, pregnant and postpartum women, the homeless, gun violence, and the supposed vast numbers of untreated mentally ill. They draw attention, funding and resources into mental health services and treatments at the expense of many other, arguably more important factors in people’s overall psychological wellness that we could be working on, such as poverty, social services, fragmented communities, and declining opportunities for involvement with nature, the arts, or self-actualizing work. At the individual level, we all become more inclined to suspect we might need a therapist or pill for our troubles, where before we might have organized with others for political change.
And that reveals what the real purpose behind many of these statistics is: to change our attitudes and political positions. They are public relations efforts coming from extremely biased sources.
The politics of “mental illness”
Why is 18.6% the going rate of mental illnesses in America? SAMHSA’s report takes many pages to explain all the adjustments they made to arrive at the numbers they did. However, it’s easy to imagine why they’d avoid going much higher or lower. If SAMHSA scored 90% of us as mentally ill, how seriously would we take them? Conversely, imagine if they went with a cut-off score that determined only 0.3% were mentally ill, while the rest of us were just sometimes really, really upset. How would that affect public narratives on America’s mental health “crisis” and debates about the importance of expanding mental health programs?
However well-meaning, the professional mental health sector develops such statistics to create public concern and support for their positions, to steer people towards their services, and to coax money out of public coffers. These statistics are bluffs in a national game of political poker. The major players are always pushing the rates as high as possible, while being careful not to push them so high that others skeptically demand to see the cards they’re holding. This year, 18.6% is the bet.