The PEWS Score: Can an Algorithm Predict Worsening Illness in a Hospitalized Child?

By Jeff Russ, MD, PhD – Pediatric/Child Neurology Resident, UCSF


A major task of any pediatric ward provider is to regularly assess a patient’s appearance, vital signs, labs, and risk factors, and integrate these data into a cohesive clinical picture to determine the patient’s acuity and potential need for intervention. This can be especially challenging on busy services or night shifts, where, for example, nurses may divide their time among up to four patients, and a single physician may care for 10–20 patients. Particularly with children, a lot can change between sporadic assessments, making it difficult to triage acuity.

One potentially helpful metric gaining popularity is the Pediatric Early Warning System (PEWS) Score. Based largely on the Brighton scoring system published in 2005 [1], the PEWS and related tools comprise a class of real-time, data-aggregating, predictive systems that aim to integrate objective patient data (heart rate, oxygen saturation, creatinine, etc.) with subjective nursing assessments (e.g. patient behavior or capillary refill time) to calculate a score that identifies acutely ill patients at risk for deterioration. Intuitively, this tool seems like a welcome decision-support measure, especially on busy services.
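The core mechanic is simple threshold-based aggregation: each observation is translated into a sub-score, and the sub-scores are summed. The toy function below combines two objective vitals with one subjective nursing assessment; every cutoff and point weight is invented for illustration and does not come from any validated PEWS table.

```python
def toy_pews(heart_rate, spo2, behavior):
    """Toy aggregation in the spirit of a PEWS score.
    All thresholds and weights are made-up placeholders,
    not a validated (or age-banded) scoring table."""
    score = 0
    # Objective vital signs, scored against placeholder bands
    if heart_rate > 140 or heart_rate < 60:
        score += 2
    if spo2 < 92:
        score += 2
    elif spo2 < 95:
        score += 1
    # Subjective nursing assessment folded in alongside the vitals
    score += {"playing": 0, "irritable": 1, "lethargic": 2}[behavior]
    return score

print(toy_pews(150, 91, "lethargic"))  # → 6
```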

However, the evidence behind PEWS Scores is still limited, and their development, standardization, and validation are still in their infancy.

First, there is no universally accepted version of the PEWS Score. Many variations exist, often developed and implemented at individual hospitals, and as a result almost no scoring system has received multicenter validation. A systematic review from 2016 uncovered 33 different versions of pediatric predictive algorithms, aggregating anywhere from three to 19 physiologic variables [2]. The authors note that studies of PEWS effectiveness were frequently confounded by implementation as part of a bundle of interventions, and that comparison across tools was difficult because each study examined different outcomes: mortality, arrest events, unplanned transfer to intensive care, and code blue activations, among others [2].

Moreover, because death and cardiac/respiratory arrest are (fortunately) rare, many PEWS studies struggled to achieve statistical significance for these outcomes. Among the statistically significant findings from studies that implemented PEWS as an isolated intervention, there appears to be a decreased intubation rate and a mixed effect on the rate of ICU transfers [2].

To validate pediatric predictive systems in a more standardized way, the same authors conducted a retrospective case-control study, published earlier this year. They compared the performance of 18 different PEW systems applied to the same dataset, determining their ability to predict “deterioration events” that included death, cardiac/respiratory arrest, and unplanned transfer to intensive care [3]. For each tool, they calculated the area under the receiver operating characteristic (AUROC) curve, a combined metric of sensitivity and specificity that seeks to maximize each. Three PEWS scores ultimately outperformed the others, with significantly higher AUROC values: the Cardiff and Vale PEWS, the Bedside PEWS, and the Modified PEWS III. Intriguingly, these systems did not include the greatest number of physiologic parameters or achieve the highest sensitivities, but they did tend to be the most specific. The authors speculate that by maximizing true positives and minimizing false positives, a highly specific score may serve to enhance provider confidence and limit alarm fatigue [3].
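As a concrete illustration of the comparison metric: the AUROC equals the probability that a randomly chosen patient who deteriorated received a higher score than a randomly chosen control, which can be computed directly by rank comparison. The scores below are invented for the sketch and are not data from the study.

```python
def auroc(event_scores, control_scores):
    """Rank-based AUROC (equivalent to a normalized Mann-Whitney U):
    the fraction of event/control pairs in which the event patient
    scored higher, counting ties as half."""
    wins = 0.0
    for e in event_scores:
        for c in control_scores:
            if e > c:
                wins += 1.0
            elif e == c:
                wins += 0.5
    return wins / (len(event_scores) * len(control_scores))

# Hypothetical maximum scores for illustration only (not study data):
events = [7, 9, 6, 8, 10]      # patients with a deterioration event
controls = [2, 4, 3, 6, 1, 5]  # matched controls
print(round(auroc(events, controls), 3))  # → 0.983
```

A score with no discriminative value would land near 0.5; a perfect one at 1.0, which is why the study could rank 18 systems on a single dataset with this one number.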

Of these three systems, the Bedside PEWS is the only score to be validated by a multicenter study [4]. The Bedside PEWS integrates seven variables: heart rate, systolic blood pressure, capillary refill time, respiratory rate, respiratory effort, transcutaneous oxygen saturation, and need for supplemental oxygen. The multicenter trial demonstrated that patients who experienced an unscheduled ICU admission or code blue had significantly higher maximum PEWS scores than controls, and PEWS scores even increased in a predictive fashion over the preceding 24 hours [4].
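To make the structure of such a score concrete, here is a rough sketch of how the seven Bedside PEWS inputs could be aggregated. The variable list matches the published score, but every cutoff and point weight below is an invented placeholder, not the validated, age-banded scoring table from Parshuram et al.

```python
def bedside_pews_sketch(hr, sbp, cap_refill_s, rr, resp_effort,
                        spo2, on_oxygen, norms):
    """Illustrative sketch over the seven Bedside PEWS variables.
    All sub-scores and cutoffs are placeholders, NOT the published
    age-banded weights. `norms` holds per-age-band reference ranges."""
    score = 0
    lo_hr, hi_hr = norms["hr"]
    lo_rr, hi_rr = norms["rr"]
    if hr < lo_hr or hr > hi_hr:
        score += 2                      # placeholder weight
    if sbp < norms["sbp_min"]:
        score += 2
    if cap_refill_s >= 3:
        score += 1
    if rr < lo_rr or rr > hi_rr:
        score += 2
    score += {"normal": 0, "increased": 1, "severe": 2}[resp_effort]
    if spo2 < 94:
        score += 2
    if on_oxygen:
        score += 1
    return score

# Hypothetical school-age reference ranges (illustrative only):
norms = {"hr": (70, 120), "rr": (18, 30), "sbp_min": 90}
print(bedside_pews_sketch(135, 85, 4, 34, "increased", 91, True, norms))  # → 11
```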

The Bedside PEWS is now being investigated in an ongoing multicenter randomized controlled trial [5], which will hopefully further reinforce the evidence supporting this score. The concept that a smart algorithm, given the correct combination of physiologic parameters, could longitudinally predict patient acuity more effectively than a busy clinician sounds far-fetched. Yet early versions of such algorithms are already widely implemented and continually advancing (for example, some automatically interface with the electronic medical record to dynamically calculate deviations from patients’ prior baselines).
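The deviation-from-baseline idea can be sketched simply: keep a rolling window of a patient’s own recent values and flag new readings that stray far from that personal baseline. The window size, warm-up length, and two-standard-deviation threshold below are arbitrary illustrative choices, not any vendor’s actual algorithm.

```python
from collections import deque

class BaselineTracker:
    """Toy sketch of deviation-from-baseline monitoring for one vital sign.
    Flags a new reading that falls more than k standard deviations from
    the mean of the patient's own recent readings."""
    def __init__(self, window=24, k=2.0):
        self.readings = deque(maxlen=window)
        self.k = k

    def update(self, value):
        """Record `value`; return True if it deviates from the baseline."""
        flagged = False
        if len(self.readings) >= 4:     # require a minimal baseline first
            mean = sum(self.readings) / len(self.readings)
            var = sum((x - mean) ** 2 for x in self.readings) / len(self.readings)
            sd = var ** 0.5
            flagged = sd > 0 and abs(value - mean) > self.k * sd
        self.readings.append(value)
        return flagged

tracker = BaselineTracker()
for hr in [100, 102, 98, 101, 101]:     # stable personal baseline
    tracker.update(hr)
print(tracker.update(140))  # → True (abrupt departure from this patient's norm)
```

The appeal of this framing is that a heart rate of 140 may be unremarkable in the abstract but alarming relative to a particular child’s own trend, which is exactly what static cutoff tables miss.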

While promising, few scores have undergone multicenter validation, and no standard set of physiologic parameters or outcomes exists to compare different scores’ performance. Thus, in their current form, PEWS scores by no means replace clinical judgment. Instead, they are best considered early-prototype decision-support tools, alerting providers to shifts in patient status that may reflect critical changes in acuity.


  1.   Monaghan A. Detecting and managing deterioration in children. Paediatr Nurs. 2005 Feb;17(1):32-5.
  2.   Chapman SM, Wray J, Oulton K, Peters MJ. Systematic review of paediatric track and trigger systems for hospitalised children. Resuscitation. 2016 Dec;109:87-109.
  3.   Chapman SM, Wray J, Oulton K, Pagel C, Ray S, Peters MJ. ‘The Score Matters’: wide variations in predictive performance of 18 paediatric track and trigger systems. Arch Dis Child. 2017 Jun;102(6):487-495.
  4.   Parshuram CS, et al. Multicentre validation of the bedside paediatric early warning system score: a severity of illness score to detect evolving critical illness in hospitalised children. Crit Care. 2011 Aug 3;15(4):R184.
  5.   Parshuram CS, et al. Canadian Critical Care Trials Group. Evaluating processes of care and outcomes of children in hospital (EPOCH): study protocol for a randomized controlled trial. Trials. 2015 Jun 2;16:245.