More Is More: Measuring the Prevalence of Clostridium difficile-Associated Disease in Hospitals: An Expert Interview With William R. Jarvis, MD
03/2009
Clostridium difficile infection (CDI) is the most frequent cause of healthcare-associated infectious diarrhea in industrialized countries; it affects more than 300,000 hospitalized patients yearly in the United States alone. C difficile-associated disease (CDAD) can range from uncomplicated diarrhea to sepsis and even death. C difficile transmission occurs primarily in healthcare facilities via the fecal-oral route following transient contamination of the hands of healthcare workers and patients and contamination of the patient-care environment. The rate and severity of CDAD are increasing. This trend may be the result of changes in the epidemiology of C difficile (which may arise from changes in antimicrobial use, other drug-prescribing practices, or infection-control practices) and/or a new strain of C difficile that appears to produce greater quantities of toxins A and B, is more resistant to fluoroquinolones, and is associated with higher rates of morbidity and mortality. To date, all of the large epidemiologic studies of CDAD are incidence studies. Susan L. Smith, MN, PhD, Scientific Director of Medscape Infectious Diseases, interviewed William R. Jarvis, MD, about the first national prevalence study of CDAD in US hospitals. This study will be published in the American Journal of Infection Control in March 2009. Dr. Jarvis is an emeritus professor at Emory University School of Medicine in Atlanta, Georgia.
Medscape: Please describe the published studies to date examining the incidence or prevalence of CDI.
Dr. Jarvis: Several studies have looked at the incidence of CDAD. A frequently referenced study is one by Lennox Archibald and coworkers, published in the Journal of Infectious Diseases in 2004,[3] when I was at the Centers for Disease Control and Prevention (CDC). We analyzed data from the CDC's National Nosocomial Infections Surveillance (NNIS) system from 1987 to 2001. At that time, the NNIS system was no longer collecting hospital-wide data, so we focused on the intensive care unit (ICU) component. We found that in hospitals with 500 or more beds, the rate of CDAD increased from about 2.8-3.0 episodes per 10,000 patient-days in 1987 to about 5.5 episodes per 10,000 patient-days in 2001. So there was a significant increase in the rate of CDAD during that period, but again, the data were limited to episodes in ICU patients.
There are a number of limitations of the NNIS system (now the National Healthcare Safety Network). First, NNIS includes data from only a small number of US hospitals. At the time of our analysis, only about 350 hospitals were participating, and only a fraction of those (about 211) were reporting ICU data; in other words, roughly 211 of the more than 5000 hospitals in the United States were contributing ICU data. Second, the NNIS sample is a nonrandom (convenience) sample and is biased toward large teaching hospitals. Third, as I mentioned before, hospital-wide component reporting was discontinued a number of years ago, so there are really no hospital-wide incidence data. Fourth, there are no data from non-acute-care facilities or from acute-care facilities with 100 beds or fewer. And finally, as with any surveillance system, there is considerable variability in surveillance intensity, in compliance with applying the definitions, and even in the microbiologic methods used to identify CDAD patients. However, these were the best and only prospective healthcare-associated infection incidence data available at the time for such analyses.
Another study, also from the CDC, conducted by Dr. Clifford McDonald and coworkers, was published in Emerging Infectious Diseases in 2006. They took a different approach: rather than looking at NNIS data, they analyzed National Hospital Discharge Survey (NHDS) data for CDAD as either a first-listed diagnosis or any diagnosis for the period 1996 through 2003. The rate for any diagnosis was about 30 cases per 100,000 population in 1996 and about 60 cases per 100,000 population by 2003. It's important to realize that Archibald and coworkers used hospital patient-days as the denominator, whereas this study used the general US population as the denominator. Again, there are some limitations to using hospital discharge data. First, the NHDS database includes only about 475 acute-care hospitals; there are no data from Veterans Administration hospitals, military hospitals, or non-acute-care hospitals. Second, it has been well documented that healthcare-associated infections (HAIs) are grossly underreported in hospital discharge data. Third, this administrative database is totally dependent on medical-record reviewers and coders accurately capturing CDAD with International Classification of Diseases, Ninth Revision (ICD-9) codes.
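[Editor's note: To make the denominator difference concrete, the two rate constructions described above can be sketched as follows. These formulas restate the interview's description; they are not quoted from either study.]

```latex
% Illustrative restatement of the two denominators discussed above
\[
\text{Archibald et al.: } \frac{\text{CDAD episodes in ICU patients}}{\text{patient-days}} \times 10{,}000
\qquad
\text{McDonald et al.: } \frac{\text{hospital discharges with a CDAD diagnosis}}{\text{US population}} \times 100{,}000
\]
```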
Several studies have looked at the validity of using administrative data, such as ICD-9 codes, to capture either HAIs in general or CDAD in particular. A study done in a Veterans Administration hospital found that less than 50% of CDAD cases were captured with ICD-9 coding. Another study, from Washington University (St. Louis, Missouri), found that administrative data were comparable to active prospective surveillance data, but it also identified 2 types of errors: CDAD patients who were not reported, and patients reported as having CDAD who did not have it.
With ICD-9 codes there is no standardization of definitions for infections. In contrast to the NNIS system, for which there are specific definitions (and the infection-control personnel are very familiar with these), there is a lot of variability in how medical-record reviewers apply ICD-9 codes, particularly for HAIs. It has been shown that infection preventionists, who conduct active surveillance, detect HAIs much better than medical-records personnel do retrospectively.
Subsequent to the study by McDonald and coworkers, the Agency for Healthcare Research and Quality conducted a similar analysis of the same database for the period 1995 to 2005. They found that the rate of CDAD had increased from about 35 cases per 10,000 discharges (note that they used cases per 10,000 discharges rather than cases per 100,000 population) to 78.6 cases per 10,000 discharges by 2006. So the case rate more than doubled over that period.
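[Editor's note: A quick check of the "more than doubled" statement, using the figures quoted above.]

```latex
% Ratio of the later rate to the earlier rate cited above
\[
\frac{78.6 \text{ cases per } 10{,}000 \text{ discharges}}{\approx 35 \text{ cases per } 10{,}000 \text{ discharges}} \approx 2.2
\]
```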
Subsequently, a study by Zilberberg and coworkers[8] was published in Emerging Infectious Diseases. They used another established database, the Nationwide Inpatient Sample (NIS), which is similar to the NHDS but includes data from a larger number of hospitals and a larger number of states. They looked at data from 2000 to 2005 and found that, for all patients, the rate had not increased as dramatically as some of these other studies indicated, but that the CDAD rate in patients 65-84 years of age and in patients 85 years and older had increased dramatically. They found about 134,000 cases of CDAD in 2000 and about 291,000 cases by 2005.
Some additional points are important in terms of the NHDS and the NIS. First, the limitations of the NHDS that I mentioned earlier, in particular the total dependence on ICD-9 coding, also apply to the NIS. The NHDS, as I also mentioned, includes about 475 acute-care facilities; the NIS includes a larger number of facilities from a larger number of states, and during the period 2000-2005 the number of participating states increased, so there is the issue of hospitals coming into and dropping out of the sample. Second, the NHDS captures only the first 7 discharge diagnoses, so if CDAD is listed beyond that (ie, as the 10th or 12th diagnosis), the database will not capture it. And although the NIS was expanded to capture about 15 discharge diagnoses, the average number of diagnoses actually captured during this study period was only 5; the capture rate varies from state to state, with some states capturing more discharge diagnoses than others.
Medscape: Would you tell us about a CDAD prevalence survey that you recently conducted?
Dr. Jarvis: Because the NNIS system data were largely limited to ICU patients, and because the different surveys I mentioned were based on incidence data and limited in the ways I described, we felt that it was important to conduct a different type of survey to get a different picture of CDAD. Rather than doing an incidence study, we decided to do a prevalence study. With that in mind, we went to the Association for Professionals in Infection Control and Epidemiology, the largest infection-control organization, with approximately 10,000 infection preventionists or infection-control personnel at hospitals in the United States. We asked them to choose 1 day during a 15-week period that was convenient for them and to tell us how many inpatients were in their facility and how many of those inpatients had CDAD, again on that 1 day only. We did not want them to do any additional testing; we only wanted them to review their microbiology, infection-control, or other records (such as antimicrobial use) to identify current inpatients with CDAD.
Medscape: Why did you do a prevalence study vs another incidence study?
Dr. Jarvis: There are several reasons. Incidence studies capture data over a longer period of time, usually measured in years. The CDC's NNIS system collects a limited amount of information over a long period of time, as do the NHDS and the NIS. In other words, people are asked to collect data, usually for years, and the trade-off is that you are very limited in the amount of data that you can capture. For example, within the NNIS, NHDS, or NIS, there is relatively little, if any, information about the hospitals (other than size and location), the treatment of the patient, patient risk factors for CDAD, or outcomes other than possibly death.
We wanted to collect a greater amount of data over a shorter period of time. One of the benefits of a prevalence survey is that you can ask much more detailed questions; in fact, our survey questionnaire was 6 or 7 pages long. We asked for information about the hospitals, their CDAD infection-control practices, what they used for environmental cleaning, the kind of isolation protocols they used for CDAD, the kinds of ICUs they had, and what HAI surveillance they performed. We asked about antimicrobial stewardship programs and the different elements of those programs. Then we asked questions to capture information about CDAD. For the patients identified, we collected extensive information about their exposures, the treatment they received, and their outcomes. If you tried to capture all of those data in the CDC NNIS system, the NHDS, or the NIS, it would be so overwhelmingly burdensome that no one would be willing to do it. So it's a very different type of study: one captures a narrower amount of information over a long period of time, and the other captures a more extensive amount of information over a short period of time.
Medscape: A total of 648 hospitals responded to your survey. How does this compare with the distribution of US acute-care facilities?
Dr. Jarvis: This is an important question because, as I mentioned, the NNIS system is a convenience sample of a relatively small number of hospitals and is biased toward large academic centers. There are fewer data available on who is reporting to the NHDS or the NIS. We looked at the hospitals that participated in our survey and then went to the American Hospital Association (AHA) database, which unfortunately lags several years behind. However, the number and geographic distribution of US hospitals do not change very much from year to year. We looked at 2 things: hospital size, as measured by the number of beds, and the geographic distribution of the survey respondents.
As I mentioned earlier, the NNIS system (before it became the CDC National Healthcare Safety Network) did not include hospitals with fewer than 100 beds. Because the average US hospital has fewer than 100 beds, we wanted to see how well we were capturing information from those hospitals. When we examined respondent hospitals by their number of beds, we found that the AHA database had a larger proportion of facilities with 6-24 beds (7.3% of the AHA sample vs 1.8% of our sample). For 25-49 beds, the figures were 19.8% of the AHA sample vs 6.5% of ours, and for 50-99 beds, 21.8% of the AHA sample vs 10.2% of ours. So, as you can see, hospitals with fewer than 100 beds account for almost 50% of US hospitals, whereas in our survey, less than 20% of the hospitals were in this category. If you look at hospitals with 100-199, 200-299, 300-399, 400-499, or 500 or more beds, we had larger percentages, but often they were not that different from the AHA percentages; for example, 22.9% of the AHA sample had 100-199 beds vs 26.8% of our sample. In general, our sample included significantly fewer hospitals with fewer than 100 beds than the AHA sample did (18.5% vs 48.9%, respectively) and, in contrast, a larger proportion of hospitals with 100 beds or more (81.5% vs 51.1%). Overall, however, we captured data from hospitals in all size categories.
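[Editor's note: As a simple check, the under-100-bed totals cited above are the sums of the three smallest bed-size categories.]

```latex
% Under-100-bed share, summed from the three smallest bed-size categories
\[
\text{AHA database: } 7.3\% + 19.8\% + 21.8\% = 48.9\%
\qquad
\text{Survey sample: } 1.8\% + 6.5\% + 10.2\% = 18.5\%
\]
```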
Next, we looked at the geographic distribution of our survey respondents and of the AHA hospitals. The AHA divides the country into 9 census divisions: New England, Mid-Atlantic, South Atlantic, East North Central, East South Central, West North Central, West South Central, Mountain, and Pacific. We found that our respondents represented a smaller percentage of hospitals in the West North Central, Mountain, and Pacific divisions than the AHA did, but that we had the same distribution as the AHA for the South Atlantic, East South Central, and West South Central divisions. We had a slightly higher percentage of respondents in the New England, Mid-Atlantic, and East North Central divisions compared with the AHA. The lowest percentage we had in any division was 4.9% (Mountain), compared with the AHA's lowest of 4.2% (New England); the highest percentage we had in any division was 19.1% (East North Central), compared with the AHA's highest of 15.1% (South Atlantic and East North Central). What we found by doing this type of comparison with the AHA -- which has not been done with the NNIS, NHDS, or NIS participants -- is that we have a very good cross-section of all types of hospitals by bed size and geographic region.
Medscape: Can you tell us more about the types of facilities that responded to your survey?
Dr. Jarvis: In terms of hospital characteristics, we found that medical school-affiliated hospitals accounted for 26.5% of our respondents and 24.4% were reported as being tertiary-care centers. The median number of licensed beds was 224 and the mean number of inpatients on the day that each survey was conducted was 171. The hospitals had a median of 16 ICU beds. We also asked them about room design because that can have an influence on infection-control practices. The median number of private rooms was 50 and the median number of semiprivate rooms was 15. We also asked what percentage of their rooms are designed for 3 patients or more, and the median was zero. So the majority of facilities have a room design, whether it be private or semiprivate, that should facilitate the correct procedures for isolation of a patient infected with C difficile.
Medscape: Were C difficile rates increasing or decreasing at the respondent hospitals in your survey?
Dr. Jarvis: That is a very important question, because the CDC's multidrug-resistant organism (MDRO) guideline, published in 2006, divides infection-control recommendations into 2 tiers. The first tier addresses basic infection-control practices for acute-care facilities. The second tier is for facilities where the rate of an MDRO (C difficile, methicillin-resistant Staphylococcus aureus, multidrug-resistant Acinetobacter species, etc.) is not decreasing; if that is the case, hospitals should advance from tier 1 to tier 2, which means applying more aggressive infection-control measures. We asked the hospitals to look at their C difficile incidence rates for the last 3 years and tell us whether they were increasing, stable, or decreasing. Forty-one percent reported that their C difficile infection rates were increasing and 41% said that they were remaining stable. In other words, 82% of the hospitals reported that their C difficile infection rates were not decreasing and therefore would have to move to tier 2 for more effective control of CDAD. In contrast, 18% of hospitals said that their C difficile-associated infection rate was decreasing. We think this is very important; it shows that the majority of US hospitals fall into tier 2 of the MDRO guidance and need to apply more aggressive infection-control measures if they're going to control CDAD.
Medscape: What prevalence rate for C difficile did you find?
Dr. Jarvis: The 648 hospitals that participated reported 1443 C difficile-positive patients. On the days the point-prevalence surveys were conducted, those hospitals had a total of 110,550 inpatients, which, based on AHA data, is about 20% of the US inpatient population. That gives a point-prevalence rate of 13.1 per 1000 inpatients. We also found a fair amount of geographic variability when we categorized the rates per 1000 inpatients as follows: 20 or more cases, 15 to < 20 cases, 12.5 to < 15 cases, 10 to < 12.5 cases, 7 to < 10 cases, and 0 to < 7 cases. Among the states with 20 or more cases per 1000 inpatients, the highest rate was in Rhode Island (28.9); another New England state, Maine, had a rate of 23.8, and the rates for Michigan, Arkansas, and Kentucky were 22.7, 26.7, and 21.8, respectively. So 5 states had CDAD rates of greater than 20 cases per 1000 inpatients. In contrast, the lowest rates tended to be in states that generally have small populations, such as Idaho; other states with lower CDAD rates were Nebraska, Iowa, Louisiana, Mississippi, and North Carolina. We did not see geographic clustering of very high CDAD rates but rather variation across the country.
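[Editor's note: A minimal sketch of the arithmetic behind the 13.1 per 1000 figure, using the counts quoted above; the variable names are illustrative and do not come from the study.]

```python
# Point-prevalence arithmetic using the counts quoted in the interview.
# Variable names are illustrative; they are not taken from the study.
cdad_patients = 1443        # C difficile-positive inpatients reported
total_inpatients = 110_550  # inpatients in the 648 hospitals on the survey days

prevalence_per_1000 = cdad_patients / total_inpatients * 1000
print(f"Point prevalence: {prevalence_per_1000:.1f} per 1000 inpatients")
# Prints: Point prevalence: 13.1 per 1000 inpatients
```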
Medscape: What infection-control precautions were your respondent hospitals using for patients with C difficile?
Dr. Jarvis: Again, an important question, because CDAD is controlled through a variety of measures, including rapid detection, placing the patient in contact isolation, improving antimicrobial use, and making sure that you have very good environmental cleaning. We asked about the infection-control measures used in 2 different settings: ICU and non-ICU. We were happy to see that in 91.3% of ICUs and 92.2% of non-ICU settings, contact isolation was used for CDAD patients; in 8.3% of ICUs and 7.1% of non-ICU settings, contact isolation-plus was used. This means that almost 99% of the hospitals were using contact isolation or more. Less than 1% reported using "standard precautions," which are not recommended for CDAD, and none reported using no isolation at all.
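[Editor's note: The roughly 99% figure is the sum of the two contact-isolation categories reported for each setting.]

```latex
% Contact isolation plus "contact isolation-plus," by setting
\[
\text{ICU: } 91.3\% + 8.3\% = 99.6\%
\qquad
\text{Non-ICU: } 92.2\% + 7.1\% = 99.3\%
\]
```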
So, in terms of which isolation precautions hospitals report using for patients with CDAD, contact isolation is being applied correctly. But a few questions come up that we really don't know the answers to. When are CDAD patients being placed in contact isolation? Are hospitals waiting until a patient develops diarrhea and is tested, and only then, hours or days later when the test result is returned, moving that patient into contact isolation? And when are patients being moved out of isolation? Several studies have shown that CDAD patients continue to contaminate the environment for 2-3 days after symptoms resolve. How frequently are CDAD patients being removed from isolation as soon as their symptoms resolve? We also do not know how well environmental cleaning is being done and, very important, how compliant healthcare workers are with the contact precautions. You can place the patient in a private room, or cohort the patient, and require that a gown and gloves be worn and that hand hygiene be practiced by all healthcare workers entering that room and touching the patient or the contaminated environment; but healthcare-worker compliance determines whether it works or not. Although we did not capture that type of data, we were happy to see that only a very small proportion of the respondents were not using the appropriate contact-isolation precautions.