
Published: 13 February 2014

Vancomycin-resistant enterococci in hospitals

John Ferguson

Hunter New England Health
Tel: +61 2 4921 4444
Email: john.ferguson@hnehealth.nsw.gov.au

Key messages
  1. Surveillance is not designed to establish causality. It is, however, an appropriate design for flagging the magnitude of healthcare-associated infections (HAIs), provided the data are analysed correctly.

  2. The distributions of rare events such as HAIs are often overdispersed and difficult to predict. Reports should present monthly and quarterly HAI counts in charts only, and report HAIs as rates only annually. Use funnel plots to compare annual performance between facilities of different sizes. Establish thresholds from Poisson or negative binomial regression models.

  3. An alternative to HAI surveillance is HAI-related process surveillance such as surveillance of the important elements of infection prevention bundles.

  4. There should be an Internet data repository for online data analysis with public access to HAI data.

  5. The public should have access to a single source for national and state reports.

This paper provides an overview of the history of state and national epidemiological activities in Australia to monitor healthcare-associated infections (HAIs); examines the pitfalls of surveillance as an epidemiological design for establishing the causes of HAIs, and the attempts at correcting them; assesses the ease of web access to information about statewide programs and reports; and looks to the future of HAI epidemiology.

History of the epidemiology of HAI in Australia

I never guess. It is a shocking habit, destructive to the logical faculty. Sherlock Holmes, The Sign of Four.

The epidemiology of causal organisms for HAI in Australia was essentially unknown until Phyllis Rountree’s first analysis of the distribution of Staphylococcus aureus susceptibility patterns in a joint publication in 1947 [1]. In 2000, laboratories commenced providing annual samples of Staphylococcus aureus isolates, from inpatients and outpatients, to the Australian Group on Antimicrobial Resistance (AGAR) to establish the proportions that were methicillin-sensitive Staphylococcus aureus (MSSA) and methicillin-resistant Staphylococcus aureus (MRSA) and their antibiograms [2]. The epidemiology of Clostridium difficile infection (CDI) in Australian hospital patients was first described in 1983 [3], and surveillance and reporting are now mandatory for public hospitals.

The first national epidemiological study of the magnitude of all types of HAI, in 28,643 patients from 269 public hospitals, was undertaken in 1984 using internationally standardised definitions [4]. Thereafter, and for more than a decade, our understanding of the epidemiology of HAIs in Australia remained a tightly guarded secret within individual hospitals. The New South Wales (NSW) Department of Health commissioned the first attempt at a statewide surveillance program using internationally standardised definitions in 1998, and pilot testing continued until 2000 [5]. Over the next few years individual statewide programs were established: Queensland (QLD) [6] in 2000; Victoria (VIC) [7] in 2002; and Western Australia (WA) [8], with voluntary surveillance in 2005 and mandatory surveillance in 2007. The Northern Territory (NT) has been conducting healthcare-associated Staphylococcus aureus bloodstream infection (SABSI) and selective surgical site infection (SSI) surveillance informally at the hospital level for the past 20 years and, recently, data have been provided to NT Health (personal communication, Tain Gardiner, NT Health). Only major hospitals in South Australia (SA) have been reporting healthcare-associated bloodstream infections (BSI) and clinically significant MRSA isolates since 1997, and activities have recently been expanded statewide with the addition of selected SSI and central line-associated bacteraemia (CLABSI) surveillance [9]. National surveillance commenced in 2009 under the Australian Commission on Safety and Quality in Health Care (ACSQHC), which mandated that all public hospitals report the incidence of SABSI and, from 2011, required data collection to include CDI and CLABSI [10].

The current state-based surveillance programs (Table 1) illustrate the commonalities across individually focused programs that address their specific patient risk groups. Definitions that use different times to onset for deep/organ space SSI, and questions about auditors’ ability to apply the current CLABSI definition [11] consistently, may compromise the validity of between-hospital comparisons nationally. The pitfall of the CLABSI definition lies in one of the criteria for interpreting a positive blood culture as significant, namely: ‘a common skin contaminant is cultured from at least 1 blood sample, and the physician institutes appropriate antimicrobial therapy’; this places undue weight on the clinician’s diagnosis in the presence of an ambiguous blood culture result. I suspect that if reimbursement of CLABSI-associated costs to public hospitals is withdrawn, as it has been in the United States, and national antibiotic stewardship is implemented, the influence of this criterion will diminish. Standardisation of case definitions and case detection across Australia is not insurmountable and, even without improvements to HAI definitions, national surveillance could commence immediately if a moratorium on state comparisons was introduced during the initial phase of data sharing.

Table 1. Components of healthcare associated infection surveillance in public healthcare facilities by State and Territories.

However, more important hurdles for a meaningful national surveillance program remain: failure to collect the important extrinsic and intrinsic determinants that may act as confounders, unrepresentative patient sampling, and inadequate analysis. Until surveillance improves in these respects, the data should not be used to point to differences between hospitals, because surveillance has inherent pitfalls.

Pitfalls of HAI surveillance

Information is not knowledge. Albert Einstein, Physicist.

In the words of the Commission, surveillance is undertaken with a certainty that, with mandatory and ‘authoritative’ analysis, one can make comparisons between hospitals of a similar level and provide ‘an evidence base’ with which to direct ‘public health action for better health outcomes’ [10]. Yet epidemiologists understand that surveillance may not provide information sufficiently robust to enable sound comparisons between quarterly surveillance reports, even within a single hospital, let alone evidence of causation. Surveillance is a basic study design that can only flag the possibility of change; an apparent change may be due to random fluctuation rather than a true increase or decrease in the epidemiology of HAI. To understand the limitations of the results we will examine the epidemiological pitfalls of surveillance.

Historically, epidemiology was developed for and by public health professionals to study the distribution and causes of common diseases in the population. Surveillance is a lesser epidemiological design, used to measure changes in the trends of a disease in the total population or in sentinel groups [12]. Healthcare facilities may passively survey their patient populations for multiresistant organisms (MROs), such as MRSA or vancomycin-resistant enterococci (VRE), or for key HAIs, such as SABSI, CDI and many others, via positive laboratory results reported by pathology. In the main, healthcare facilities undertake sentinel surveillance, where the same surgical procedures or at-risk groups are followed to establish trends in rates of specific HAIs, such as SSI associated with total knee or hip replacement or coronary artery bypass graft (CABG) surgery. Surveillance for HAI can be undertaken periodically, but continuous surveillance is preferable because it improves the level of evidence about trends. A pitfall of sentinel surveillance in healthcare facilities is that different patients are followed over different time periods to establish changes in rates between reporting quarters, without collecting the data needed to establish whether a change in rates is associated with a change in the intrinsic (patient) or extrinsic (hospital) determinants (risk factors) of infection. This lack of concurrent measurement of intrinsic determinants is a distinguishing and limiting feature of the current approach to surveillance of HAI.

Nonetheless, the strength of the surveillance design is its simplicity and its use of: (i) a standardised method for data collection; (ii) reliable and standardised definitions (often using diagnostic test results that increase the validity of a case); and (iii) the collection of few variables (the variable of interest and one or two risk factors). The ranking of study designs by their ability to provide high-level evidence for causation has been painstakingly evaluated for the effect of the common methodological shortfalls associated with each epidemiological study design [13]. Surveillance does not rank as a study design for testing associations between potential determinants of infection and the outcome of infection. Surveillance is an observational study design and should principally be used to measure changes in the magnitude of infection, not its cause; it indicates potential trends in the transmission patterns of HAI when undertaken in similar patient groups who experience similar exposures to intrinsic and extrinsic determinants of infection over the surveillance periods being compared.

Study design for causality

When examining the complex association of causality for HAI, higher-level study designs (experimental designs, including randomised controlled trials and pseudo-randomised controlled trials) will produce estimates of association with a higher degree of certainty than estimates produced from lesser designs (observational designs, including non-randomised controlled trials, cohort, case-control and interrupted time series with a control group). The higher-level designs attempt to control a priori for the effect of confounding (a distortion of the estimates of HAI by other causal or proxy causal factors) during the enrolment and randomisation of patients. The lesser designs, especially surveillance and time series, are mostly reliant on a posteriori testing for the presence of confounders and attempt to adjust for confounding during analysis. Consider a select patient group under routine surveillance, for example, patients undergoing a CABG procedure, of whom a proportion may have diabetes. There are three rules of confounding (Fig. 1), and confounding (from the effect of diabetes) will distort the risk estimates of HAI if: (i) patients with uncontrolled diabetes are unequally distributed across the extrinsic determinant groups (e.g. surgeons or surveillance periods); (ii) uncontrolled diabetes has a direct causal link with HAI or is a proxy risk factor for HAI; and (iii) uncontrolled diabetes is not a determinant of interest. The effect of uncontrolled diabetes on the rate of HAI after CABG procedures cannot be controlled by a posteriori stratification if uncontrolled diabetes status is not collected.

Figure 1. The three rules for confounding for the outcome of interest.
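The distorting effect described above can be illustrated with a small, entirely hypothetical dataset (the surgeon labels, counts and rates below are invented for illustration): the crude SSI rates suggest one surgeon performs worse, yet within each diabetes stratum the rates are identical, so the apparent difference is confounding from the unequal distribution of diabetic patients.

```python
# Hypothetical SSI data illustrating confounding by (uncontrolled) diabetes.
# (infections, procedures) for each surgeon within each diabetes stratum.
data = {
    "diabetic":     {"Surgeon A": (10, 100), "Surgeon B": (5, 50)},
    "non-diabetic": {"Surgeon A": (2, 50),   "Surgeon B": (4, 100)},
}

def crude_rate(surgeon):
    """SSI rate ignoring the diabetes stratum."""
    infections = sum(data[s][surgeon][0] for s in data)
    procedures = sum(data[s][surgeon][1] for s in data)
    return infections / procedures

def stratum_rate(stratum, surgeon):
    """SSI rate within one diabetes stratum."""
    infections, procedures = data[stratum][surgeon]
    return infections / procedures

# Crude rates differ (8% vs 6%) only because diabetic patients are unequally
# distributed between the surgeons; within each stratum the rates are equal
# (10% vs 10% in diabetics, 4% vs 4% in non-diabetics).
```

Note that the stratified comparison is only possible because diabetes status was recorded alongside the outcome; if it were not collected, the crude difference could not be corrected.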

How is confounding controlled and is this approach appropriate for Australia?

Not everything that can be counted counts, and not everything that counts can be counted. Albert Einstein, Physicist.

The potential for uncontrolled confounding is a fundamental design flaw of all observational designs, including surveillance. A priori control of confounding occurs during randomisation and is superior to a posteriori attempts to control confounding during analysis. A posteriori control through analysis is possible only for those potential confounders collected along with the primary risk factors, for example, type of procedure. Stratifying HAI rates by National Nosocomial Infections Surveillance (NNIS) Risk Index score [14] was an a posteriori attempt to adjust for the effects of American Society of Anesthesiologists (ASA) score, duration of procedure and degree of contamination.

Controlling for the three NNIS Risk Index factors [14] in SSI rates has had varying success in Australia [5,15–17]. Extended duration of procedure, a high ASA score and contamination of the surgical site must be common enough across surgical patients to enable successful discrimination of different HAI rates between the three NNIS risk levels. During the first pilot testing of the standardised definitions in NSW in the late 1990s, the different NNIS risk indices lacked discrimination for many procedures and it was determined that the burden of collecting these data was not warranted [5]. The inability of duration of procedure beyond the 75th percentile and of ASA scores to discriminate risk was due to the homogeneity of these factors across many procedures [5,15–17]. Durations of procedure were longer in NSW than the NNIS cut-points for some procedures (e.g. CABG), and shorter for others (e.g. knee replacement). Colorectal surgery was the only procedure for which the SSI rate was significantly influenced by the risk factors that contribute to the NNIS Risk Index [5]. The variable planned versus emergency/unplanned also did not discriminate risk differences. The NNIS index did not discriminate risk well for commonly surveyed procedures collected for the QLD surveillance program, such as partial hip, revision total hip, total knee and revision total knee replacements, femoro-popliteal bypass graft and CABG [17]. An unknown proportion of HAI will always be due to unaccounted-for confounding. One reason for the variable success in controlling confounding is that Australian patients are served by small to medium-sized healthcare facilities; the resulting samples of surgical procedures are small, preventing successful control of confounders.

In 2009 the National Healthcare Safety Network (NHSN) surveillance system moved from stratifying HAI data by specific risk wards and by the NNIS index to providing standardised infection ratios (SIRs) for deep/organ space SSI and CLABSI [18]. SSI SIRs are calculated for each procedure using multiple logistic regression models, dividing the observed number of HAIs by the number predicted across all facilities for the reporting period while adjusting for multiple key risk factors specific to the procedure. An SSI SIR >1 indicates that more infections were observed than expected. Control of confounding using multiple regression analysis was applied as far back as 1984, but that was a one-off project applied to a large dataset [4]. The SSI SIR approach extends the NNIS risk factors to procedure-specific variables: age, gender, trauma, body mass index (BMI), anaesthesia, ASA score, duration of procedure, endoscope use, medical school affiliation, hospital bed size, wound class and emergency status. Procedures excluded from the SSI SIR are those where duration is <5 minutes or extensively prolonged (75th percentile in minutes + [five times the interquartile range in minutes]), suggesting that either intrinsic or extrinsic risks are uncontrollable during the procedure and HAI may not be preventable. The benefit of collecting a similarly extensive number of potential confounders may not be warranted in Australia.
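The SIR calculation can be sketched as follows. This is a minimal illustration, not the NHSN's actual model: the per-procedure infection probabilities are assumed to come from an already-fitted logistic regression risk model, and Byar's approximation supplies a rough 95% confidence interval for the observed Poisson count.

```python
import math

# Assumed output of a fitted logistic regression risk model: each value is
# the predicted probability of deep/organ-space SSI for one procedure.
# (120 lower-risk and 30 higher-risk patients; all numbers are hypothetical.)
predicted = [0.02] * 120 + [0.05] * 30
observed = 5                # SSIs actually detected in the same patients

expected = sum(predicted)   # risk-adjusted expected count (here 3.9)
sir = observed / expected   # SIR > 1: more infections than predicted

# Byar's approximation for a 95% CI on a Poisson count, scaled by expected.
z = 1.96
lo = observed * (1 - 1/(9*observed) - z/(3*math.sqrt(observed)))**3 / expected
hi = (observed + 1) * (1 - 1/(9*(observed + 1)) + z/(3*math.sqrt(observed + 1)))**3 / expected
# The SIR exceeds 1, but the CI spans 1, so the excess is not conclusive.
```

The point of the sketch is the division of observed by model-predicted counts; with small local samples the interval around the SIR will usually straddle 1, echoing the small-sample caution below.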

When is confounding not controlled?

Controlling by modelling, or stratifying infection rates by specific risk determinants for each procedure, will not provide rates that discriminate between different patient groups if the procedure is performed infrequently and the expected probability of infection is low. Adjusting for multiple confounders, as in the NNIS Risk Index or NHSN approach, is not appropriate for individual hospitals because a local dataset for a quarterly reporting period will never be large enough to yield meaningful rates. At the local hospital level HAIs are count data whose frequency varies greatly within a single month (referred to as overdispersion). Individual healthcare facilities collecting data for infrequently performed surgical procedures with low SSI rates cannot reliably produce a rate; if the 95% confidence interval (CI) around the SSI rate includes estimates that span below and above the threshold, the rate should be considered unreliable. For example, an SSI rate of 0.97% (95% CI 0.38–2.5%) established for 400 procedures over 3 months is unreliable against the SSI threshold of <1% for cardiovascular procedures: the margin of error reflected in the 95% CI spans below and above the threshold. Consequently, rates from small samples of procedures with low SSI rates should not be estimated more frequently than annually at the individual hospital level.
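This reliability check is easy to automate. The sketch below uses a Wilson score interval, one of several reasonable methods (the article does not state which method produced its figures), and assumes a count of 4 SSIs in 400 procedures to approximate the quoted example:

```python
import math

def wilson_ci(x, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion x/n."""
    p = x / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - margin) / denom, (centre + margin) / denom

# Assumed counts: 4 SSIs in 400 cardiovascular procedures over a quarter.
lo, hi = wilson_ci(4, 400)           # roughly 0.4% to 2.5%
threshold = 0.01                     # the <1% SSI threshold
unreliable = lo < threshold < hi     # CI spans the threshold -> unreliable
```

When `unreliable` is true, the quarterly rate should not be reported; the counts should instead be accumulated until an annual rate can be calculated.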

The aim of surveillance data is primarily to indicate the possibility of a problem; however, the current method of aggregating all catheter-days to establish a single CLABSI rate has a major methodological pitfall [19]. The majority of CLABSIs develop in patients with dwell times >9 days, whereas the majority of patients are admitted to NSW intensive care units for <9 days [19]. The CLABSI rate as currently estimated therefore reflects neither the epidemiology in the majority of low-risk patients nor the magnitude of infection in the small proportion of high-risk patients whose dwell time is >9 days and who are the major contributors to the CLABSI rate. To develop appropriate infection prevention strategies for high-risk patients, rates for prolonged dwell times must be separated from the close-to-zero rate associated with shorter dwell times [20].
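Separating the strata is straightforward once line-days are recorded per dwell-time band. The counts below are invented for illustration; the point is that the aggregated rate sits between two very different stratum-specific rates and describes neither group:

```python
# Hypothetical CLABSI counts and catheter-days, stratified by dwell time.
strata = {
    "dwell <= 9 days": {"clabsi": 1, "line_days": 4200},
    "dwell > 9 days":  {"clabsi": 9, "line_days": 1800},
}

# Stratum-specific rates per 1,000 catheter-days.
rates = {name: s["clabsi"] / s["line_days"] * 1000 for name, s in strata.items()}

# The aggregated rate hides the difference between the strata: it overstates
# the risk for short dwell times and understates it for long dwell times.
total_clabsi = sum(s["clabsi"] for s in strata.values())
total_days = sum(s["line_days"] for s in strata.values())
aggregated = total_clabsi / total_days * 1000
```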

Analyses for count data

The goal is to turn data into information, and information into insight. Carly Fiorina, Former CEO of Hewlett-Packard.

The problem of unreliable information on HAI obtained from small samples may be compounded by applying parametric statistics to overdispersed data, rendering comparisons between a quarterly report and an annual rate within the facility useless. Small but real changes in the HAI rate over a quarter may be missed, or apparent increases may be due to random fluctuation. When count data are overdispersed, as for catheter-associated urinary tract infection (CAUTI), CLABSI, ventilator-associated pneumonia (VAP) or SABSI, and comparisons are made using small samples against a national threshold, the appropriate approaches may be zero-inflated models (for datasets with excessive zeros, because most patients do not acquire a HAI) or negative binomial or Poisson regression models, using patient- or device-days or the total number of procedures as the offset or denominator variable [21]. Monthly and quarterly counts of HAI should be plotted using methods that smooth out excessive fluctuation, such as cumulative sum (CUSUM) control charts for SSI, and Shewhart control charts or the exponentially weighted moving average (EWMA) for bloodstream infections and MROs [22]. When comparing rates between healthcare facilities of different sizes, use analyses that are less reactive to random fluctuation, such as funnel plots or Bayesian analysis [23].

Reporting of infrequently performed procedures and overdispersed counts of any HAI should: (i) include 95% CIs to illustrate whether the margin of error includes estimates below and above the threshold and, when this occurs, (ii) calculate rates only annually; (iii) attempt annual comparisons of HAI against other institutions using funnel plots; (iv) for quarterly analysis, plot the counts of SSI, bloodstream infections and MROs on charts against a pre-determined threshold established from a negative binomial or Poisson model, or a chosen hospital target; and (v) for longitudinal comparisons, use Poisson or negative binomial regression models.

Quarterly charts and annual rates with 95% CIs should be used only to indicate, or flag, the possibility of changes in incidence to hospital boards. Visual displays (charts) of rare events against a realistic threshold will provide better insight than an unreliable rate.
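As one concrete example of the charting approach, an EWMA chart for monthly counts can be sketched as below. The counts, smoothing weight and in-control mean are all assumed values; with a Poisson in-control mean the variance equals the mean, which gives a simple asymptotic control limit:

```python
import math

# Monthly SABSI counts (hypothetical) smoothed with an EWMA, against an
# upper control limit derived from an assumed Poisson in-control mean.
counts = [2, 0, 1, 3, 0, 2, 1, 0, 4, 2, 6, 5]
lam = 0.3               # EWMA weight; 0.2-0.3 is a common choice
mu = 1.5                # assumed in-control mean monthly count
sigma = math.sqrt(mu)   # Poisson: variance equals the mean

# Asymptotic 3-sigma upper control limit for the EWMA statistic.
ucl = mu + 3 * sigma * math.sqrt(lam / (2 - lam))

ewma, flagged = [mu], []
for month, x in enumerate(counts, start=1):
    ewma.append(lam * x + (1 - lam) * ewma[-1])
    if ewma[-1] > ucl:
        flagged.append(month)   # smoothed count breaches the limit
# Here the last two months breach the limit, flagging a possible increase.
```

The smoothing damps single-month spikes (month 9's count of 4 does not trigger a signal), while a sustained run of high counts does, which is exactly the flagging role the text assigns to these charts.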

Process surveillance

For facilities where surgical procedures are performed infrequently, device utilisation rates are low or collecting data on potential confounders is difficult, process surveillance is appropriate (personal experience as World Health Organization Advisor to China and Malaysia). Process surveillance – for example, of pre-surgical prophylaxis, urinary catheter use and the early removal of intravascular devices – provides information that allows immediate correction, with an immediate and direct impact on HAI. Given the success of bundled infection control strategies [24,25] for rare HAIs, process surveillance of the implementation of the individual elements of a bundle could be an alternative to measuring a rare HAI. Surveillance of process indicators also becomes more attractive than HAI surveillance as length of stay decreases, with a concomitant increase in HAIs developing post-discharge rather than during hospitalisation. Currently, hand hygiene compliance rates (www.Myhospital.gov.au) are viewed as having a direct causal link with the level of SABSI, but they should be regarded as a patient safety process indicator until the reliability of both SABSI and hand hygiene data improves enough for a strong causal link to be proven [26].

Access to HAI rates

Support for public access to publicly funded data collection is not disputed in Australia [27]. Yet, without knowing the exact web address, accessing the correct website for state surveillance programs (Table 1) and reports was difficult for all programs except Victoria, which has a dedicated website and impressive, readily accessible reports. Queensland’s surveillance program was readily located but its reports, with the exception of SABSI, were not. The current websites for surveillance programs and reports are:

  1. Victoria: http://www.vicniss.org.au/

  2. Queensland: http://www.health.qld.gov.au/chrisp/

  3. NSW: http://www.health.nsw.gov.au/professionals/hai/Pages/default.aspx

  4. Western Australia: https://mail.unsw.edu.au/owa/...nfection_unit.pm

  5. South Australia: http://www.health.sa.gov.au/INFECTIONCONTROL/Default.aspx?tabid=147

  6. Tasmania: http://www.dhhs.tas.gov.au/peh/...ention_and_control_unit

  7. Northern Territory Department of Health: does not currently have a dedicated HAI surveillance website.

Data should be readily accessible to researchers, academics and the public. The MyHospital website provides SABSI and hand hygiene compliance data for individual healthcare facilities. However, the data are aggregated and not available as numerator and denominator data for calculating a margin of error for each rate (i.e. to estimate reliability). Interested parties must query each hospital individually, and the website does not provide state or area health service level data.

An epidemiological wish list for the future

When we have all data online it will be great for humanity. It is a prerequisite to solving many problems that humankind faces. Robert Cailliau, Belgian informatics engineer and computer scientist and co-developer of the World Wide Web.

Electronic banking is here and has been, in the main, safe. Encrypted internet databases would bring the future of data sharing between hospitals closer. HAI data should be deposited monthly via encrypted internet transfer to a central database for standardised analysis and rapid feedback. Automated online real-time charts and intra- and inter-hospital comparisons using analysis for overdispersed count data would remove the need for hospital epidemiologist statisticians.

Public hospital HAI data have been collected with public funding. The cost of data collection within private hospitals is passed on to the consumer. Therefore, ethics dictate that there must be a central repository providing quarterly and annual data (in the form of numerators and denominators) available at any time to the public who pay for the data collection. The future challenge for meaningful HAI epidemiology will be its adaptation to shorter lengths of stay, hospital in the home, over-representation of the elderly and the comorbidities associated with living longer. Geospatial mapping, used in the study of human movement, defence, environmental science, parasitology and many other applications, could one day assist infection prevention staff to visualise and locate hotspots of HAI. Future HAI surveillance will benefit from greater input from a variety of professions, bringing together new methods of data collection and analysis.


The author thanks Dr Rod Givney, Dr Denis Spelman and Ms Sandra Berenger for reading the manuscript and providing useful feedback.


John Ferguson is a Microbiologist and Infectious Diseases Physician with Hunter New England Health and a Conjoint Associate Professor with the University of Newcastle and University of New England. His interests include healthcare-associated infection and antibiotic resistance and stewardship. He was on the Writing Group for the National Antibiotic Guidelines for 12 years and is now Chair of the Healthcare-associated Infection Advisory Committee at the Australian Commission on Safety and Quality in Healthcare. He is currently Director, Infection Prevention and Control for Hunter New England Health. He provides support for undergraduate and postgraduate teaching for the University of Papua New Guinea and the National Academy of Medical Sciences in Nepal.
