Cut-off points and summary statistics
For population-based assessment, there are two ways of expressing child growth survey results using Z-scores. One is the commonly used cut-off-based prevalence; the other is the set of summary statistics of the Z-scores themselves: mean, standard deviation, standard error, and frequency distribution.
For consistency with clinical screening, prevalence-based data are commonly reported using a cut-off value, often <-2 and >+2 Z-scores. The rationale for this is the statistical definition of the central 95% of a distribution as the "normal" range, which is not necessarily based on the optimal point for predicting functional outcomes.
The WHO Global Database on Child Growth and Malnutrition uses a Z-score cut-off point of <-2 SD to classify low weight-for-age, low height-for-age and low weight-for-height as moderate and severe undernutrition, and <-3 SD to define severe undernutrition. The cut-off point of >+2 SD classifies high weight-for-height as overweight in children.
The use of -2 Z-scores as a cut-off implies that 2.3% of the reference population will be classified as malnourished even if they are truly "healthy" individuals with no growth impairment. Hence, 2.3% can be regarded as the baseline or expected prevalence. Strictly speaking, this baseline value should be subtracted from the prevalence reported in a survey in order to obtain the prevalence in excess of normal. It is important to note, however, that the 2.3% figure is customarily not subtracted from the observed value. In reporting underweight and stunting rates this is not a serious problem, because prevalences in deprived populations are usually much higher than 2.3%. For wasting, however, with its much lower prevalence levels, failure to subtract this baseline undoubtedly affects the interpretation of findings.
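The arithmetic behind the 2.3% baseline can be sketched as follows. This is an illustrative calculation only (the function and variable names are my own, not from the source); it derives the expected share of a normal reference distribution falling below -2 SD, and shows the subtraction of that baseline from an observed prevalence:

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Share of a healthy reference population expected below -2 Z-scores.
baseline = normal_cdf(-2.0)  # ~0.0228, i.e. about 2.3%

def excess_prevalence(observed: float) -> float:
    """Observed prevalence minus the ~2.3% expected baseline (floored at 0)."""
    return max(observed - baseline, 0.0)

print(round(baseline * 100, 1))                  # 2.3
print(round(excess_prevalence(0.10) * 100, 1))   # 7.7 (a 10% observed rate)
```

For an observed wasting prevalence of, say, 10%, the excess over the expected baseline is roughly 7.7 percentage points, which illustrates why the correction matters most for low-prevalence indicators such as wasting.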
Summary statistics of the Z-scores
A major advantage of the Z-score system is that a group of Z-scores can be subjected to summary statistics such as the mean and standard deviation. The mean Z-score, though less commonly used, has the advantage of describing the nutritional status of the entire population directly without resorting to a subset of individuals below a set cut-off. A mean Z-score significantly lower than zero—the expected value for the reference distribution—usually means that the entire distribution has shifted downward, suggesting that most, if not all, individuals have been affected. Using the mean Z-score as an index of severity for health and nutrition problems results in increased awareness that, if a condition is severe, an intervention is required for the entire community, not just those who are classified as "malnourished" by the cut-off criteria (15).
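To make the contrast concrete, the two summary measures can be computed side by side. The sample below is hypothetical (invented for illustration, not survey data from the source); it shows how a strongly negative mean Z-score signals a downward shift of the whole distribution, beyond what the cut-off-based prevalence alone conveys:

```python
import statistics

# Hypothetical weight-for-age Z-scores from a small survey sample.
z_scores = [-2.6, -1.9, -2.2, -0.8, -1.5, -2.9, -1.1, -2.4, -0.4, -1.8]

# Mean Z-score: expected to be ~0 in a well-nourished population.
mean_z = statistics.mean(z_scores)

# Cut-off-based prevalence: share of children below -2 SD.
below_cutoff = sum(z < -2 for z in z_scores) / len(z_scores)

print(f"mean Z-score: {mean_z:.2f}")            # mean Z-score: -1.76
print(f"prevalence below -2 SD: {below_cutoff:.0%}")  # 40%
```

Here only 40% of children fall below the cut-off, yet the mean of -1.76 shows the entire sample sits far below the reference mean of zero, supporting the point that the whole community, not just the "malnourished" subset, may need intervention.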
The observed SD value of the Z-score distribution is very useful for assessing data quality. With accurate age assessment and anthropometric measurements, the SDs of the observed height-for-age, weight-for-age, and weight-for-height Z-score distributions should be relatively constant and close to the expected value of 1.0 for the reference distribution. An SD that is significantly lower than 0.9 describes a distribution that is more homogenous, or one that has a narrower spread, compared to the distribution of the reference population. If the surveyed standard deviation of the Z-score ranges between 1.1 and 1.2, the distribution of the sample has a wider spread than the reference. Any standard deviation of the Z-scores above 1.3 suggests inaccurate data due to measurement error or incorrect age reporting. The expected ranges of standard deviations of the Z-score distributions for the three anthropometric indicators are as follows (5):
- height-for-age Z-score: 1.10 to 1.30
- weight-for-age Z-score: 1.00 to 1.20
- weight-for-height Z-score: 0.85 to 1.10
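A simple data-quality screen can be built directly from the expected ranges listed above. This is a sketch under the assumptions of the text (the function name and return messages are my own); it computes the SD of a survey's Z-scores and flags it against the expected range for the indicator:

```python
import statistics

# Expected SD ranges of Z-score distributions per indicator (from the text).
EXPECTED_SD = {
    "height-for-age":    (1.10, 1.30),
    "weight-for-age":    (1.00, 1.20),
    "weight-for-height": (0.85, 1.10),
}

def check_sd(indicator: str, z_scores: list) -> str:
    """Flag a survey's Z-score SD against the expected range for its indicator."""
    sd = statistics.pstdev(z_scores)  # population SD of the observed Z-scores
    lo, hi = EXPECTED_SD[indicator]
    if sd > 1.3:
        return f"SD={sd:.2f}: suspect data (measurement or age-reporting error)"
    if lo <= sd <= hi:
        return f"SD={sd:.2f}: within expected range"
    return f"SD={sd:.2f}: outside expected range ({lo}-{hi})"
```

For example, `check_sd("weight-for-age", [-1.0, 1.0, -1.0, 1.0])` yields an SD of 1.00, within the expected range, while an SD above 1.3 for any indicator would be flagged as suspect.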
Available means and SDs of Z-scores from survey data are being included in the Global Database. However, because these summary statistics have so far been available for only a limited number of surveys, they do not yet appear on the website. Given the importance of the mean and SD of Z-scores, it is hoped that an increasing number of survey reports will include them in the future.
'Trigger-levels' as a basis of public health decisions
Experience with surveillance has underlined the usefulness of prevalence ranges for assessing the severity of a situation as a basis for public health decisions. For example, when 10% of a population is below the -2 SD cut-off for weight-for-height, is that too much, too little, or average? The so-called 'trigger-levels' are intended to help answer this question by providing a guideline for establishing the public health importance of a situation. Such classifications are very helpful for summarizing prevalence data and can be used for targeting purposes when establishing intervention priorities.
The prevalence ranges shown in Table 1 are those currently used by WHO to classify levels of stunting, underweight, and wasting. It should be borne in mind, however, that this classification is largely arbitrary and simply reflects a convenient statistical grouping of prevalence levels worldwide. Moreover, the designations of a prevalence as "low" or "medium" should be interpreted cautiously and not be taken as grounds for complacency. Since only 2.3% of the children in a well-nourished population would be expected to fall below the cut-off, the "low" weight-for-age group, for example, includes communities with up to four times that expected prevalence, and the "medium" group communities with up to an eightfold excess.
Table 1. Classification for assessing severity of malnutrition by prevalence ranges among children under 5 years of age
| Indicator | Severity of malnutrition by prevalence ranges (%) |
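The trigger-level logic amounts to mapping a prevalence onto a severity band. Because the body of Table 1 is truncated in this excerpt, the cut-offs in the sketch below are illustrative placeholders only, not the actual WHO values; the function and labels are my own:

```python
import bisect

SEVERITY_LABELS = ["low", "medium", "high", "very high"]

def classify(prevalence_pct: float, cutoffs: list) -> str:
    """Map a prevalence (%) onto a severity label, given ascending cut-offs.

    `cutoffs` are the lower bounds of the medium, high, and very high bands;
    the values used below are placeholders, not those of Table 1.
    """
    return SEVERITY_LABELS[bisect.bisect_right(cutoffs, prevalence_pct)]

print(classify(3.0, [5, 10, 15]))    # low
print(classify(12.0, [5, 10, 15]))   # high
```

Once the actual Table 1 ranges for stunting, underweight, and wasting are supplied as `cutoffs`, the same function classifies any survey prevalence into its severity band.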