- Research article
- Open Access
- Open Peer Review
Implementing the WHO integrated tool to assess quality of care for mothers, newborns and children: results and lessons learnt from five districts in Malawi
BMC Pregnancy and Childbirth, volume 17, Article number: 271 (2017)
In 2014 the World Health Organization (WHO) developed a new tool to be used to assess the quality of care for mothers, newborns and children provided at healthcare facility level. This paper reports on the feasibility of using the tool, its limitations and strengths.
Across five districts in Malawi, 35 healthcare facilities were assessed. The WHO tool includes checklists, interviews and observation of case management by which care is assessed against agreed standards using a Likert scale (1 lowest: not meeting standard, 5 highest: compliant with standard). Descriptive statistics were used to provide summary scores for each standard. A ‘dashboard’ system was developed to display the results.
For maternal care three areas met standards: 1) supportive care for admitted patients (71% of healthcare facilities scored 4 or 5); 2) prevention and management of infections during pregnancy (71% scored 4 or 5); and 3) management of unsatisfactory progress of labour (84% scored 4 or 5). Availability of essential equipment and supplies was noted to be a critical barrier to achieving satisfactory standards of neonatal care (mean score 2.9; SD 0.95) and paediatric care (2.7; SD 1.1). Infection control is inadequate across all districts for maternal, newborn and paediatric care. Quality of care varies across districts, with a mean score for all standards combined of 3 (SD 0.19) for the worst performing district and 4 (SD 0.27) for the best. Hospitals had good scores for overall infrastructure, essential drugs, organisation of care and management of preterm labour. However, health centres were better at case management of HIV/AIDS patients and follow-up of sick children.
There is a need to develop an expanded framework of standards which is inclusive of all areas of care. In addition, it is important to ensure structure, process and outcomes of health care are reflected.
It is important that care for mothers, newborns and children is both available and of good quality. This is reflected in the latest global initiatives such as the strategies towards ending preventable maternal mortality (EPMM), the Every Newborn Action Plan (ENAP), and the new Global Strategy for Women’s, Children’s and Adolescents’ Health (2016–2030) for the post-2015 Sustainable Development Goal era [1,2,3]. Leaders of global health agencies have agreed an agenda for better measurement of the quality of healthcare and aim to align the various efforts, reduce the burden of data collection and reporting for countries and improve linkage of results with decision-making. New tools for the assessment of quality have been developed and international consensus reached on indicators for quality of care in maternal, newborn and child health [4, 5].
In 2014 the World Health Organization (WHO) developed a new integrated tool to assess the quality of care, designed to help the Ministry of Health (MoH), key stakeholders and partners in maternal, newborn and child health (MNCH) to carry out comprehensive assessments at facility level. The objectives, structure and methods differ from other global facility assessment tools currently in use in that it allows for an assessment of the quality of care provided, not just the quantity or availability. For example, the World Bank Service Delivery Indicators (SDI) initiative collects evidence on the quantity of health services to help decision makers track progress and to benchmark countries, and the WHO Service Availability and Readiness Assessment (SARA) monitors tracer indicators of service availability and readiness, with a focus on provision of interventions across the continuum of care, in order to support health system strengthening. However, despite their related focus, neither of these tools assesses the quality of care. The new WHO integrated tool, in principle, enables an assessment of the quality of care provided against national standards (accepted and established in context) to produce an overall diagnosis of quality of care and to identify obstacles to it. It is designed to be both a management and an evaluation tool. It is proposed that the tool is used country-wide as a component of a quality improvement strategy, or in a representative sample of healthcare facilities. It can potentially also be used in a single health facility to track progress in quality of care and inform quality improvement activities.
The new WHO tool for assessment of Quality of Care incorporates existing survey modules and instruments, namely the Health Facility Survey to evaluate quality of care for sick children, and the quality assessment and improvement tool for hospital care for mothers and newborn babies. The new fully integrated tool was used for the first time in Malawi in 2015. This study reports on the practical feasibility of using the tool, its limitations and strengths. We present the key findings of the assessment as well as recommendations for adaptation and implementation of the tool at scale.
Adaptation and familiarization with the tool
In Malawi, WHO and MoH staff reviewed and adapted the WHO tool for assessment of quality of care to the epidemiology and health system context of the country. A team of national and international assessors comprising senior clinicians, nurses, representatives from medical and nursing training institutions, professional associations, committees in charge of national treatment guidelines, and practicing doctors and nurses reviewed the tool and agreed the standards to be included. Standards were derived from the Malawi Standard Treatment Guidelines, the National Integrated Maternal and Newborn Health Management guidelines and the WHO Pocket Book of Hospital Care for Children. The tool was then pilot-tested in two hospitals (not included in the assessment) for the assessors to become familiar with the instrument and to assess the time needed to collect data.
Structure of the tool
The WHO tool for assessment of quality of care comprises four modules related to A) infrastructure, B) maternal, C) newborn and D) paediatric care. In Malawi two versions of the tool were used; one for district hospitals or tertiary care centres designated to provide Comprehensive Emergency Obstetric and Neonatal Care (CEmOC) and one for use at health centre level or health facilities designated to provide Basic Emergency Obstetric and Neonatal Care (BEmOC). The only difference between the two tools is that the CEmOC tool has three additional areas that are assessed: i) case management of Caesarean section, ii) paediatric inpatient care and iii) paediatric surgery and rehabilitation. Data were collected via: pre-formatted questionnaires; checklists for availability of services, equipment, drugs and supplies; structured forms for scoring of case management observations against standards of care; and exit interviews for health service providers, caretakers or mothers. All assessors were trained and familiarised with national standards of care across the four modules (three days). Practical sessions were held at a health facility to practice the use of the tool and scoring method.
Module A gathers basic information about infrastructure, ward layout and organisation of care including staffing. Modules B, C and D assess quality of maternal, neonatal and child health care respectively, and each includes sections on: emergency care, inpatient care, infection control and supportive care, essential drugs, equipment and supplies, case management, and monitoring and follow-up. In total the CEmOC tool assesses 46 variables (10 variables relating to infrastructure, 14 for maternal care, 9 for neonatal and 13 for paediatric care) and the BEmOC tool assesses 43 variables (10 infrastructure, 13 maternal care, 9 neonatal care, and 11 paediatric care).
Scoring of each component of care
Across all four modules each aspect of care is observed and then scored 1–5 (5: good practice complying with standard of care; 4: little need for improvement to reach standard of care; 3: some need for improvement to reach standard of care; 2: considerable need for improvement; and 1: services are not provided, there is totally inadequate care or potentially life-threatening practice). For each area of care assessed, several standards and components of care are scored. For example, for emergency obstetric care standards relating to patient flow and layout and structure of emergency care are individually scored. The section ends with a summary table which includes a summary score (an average of all the standards assessed) in that section and space to document main strengths and weaknesses (in writing).
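The summary-score calculation described above can be illustrated with a short sketch. This is not part of the WHO tool (the assessment used paper forms); the function name and example scores are hypothetical:

```python
# Illustrative sketch (hypothetical data): each standard in a section is
# scored on the 1-5 Likert scale described above, and the section summary
# score is the average of the component scores.

def summary_score(component_scores):
    """Average of the 1-5 Likert scores for one section of the tool."""
    if not component_scores:
        raise ValueError("no components scored")
    if any(s not in (1, 2, 3, 4, 5) for s in component_scores):
        raise ValueError("scores must be integers from 1 to 5")
    return sum(component_scores) / len(component_scores)

# e.g. an emergency obstetric care section with three standards scored
print(summary_score([4, 3, 5]))  # -> 4.0
```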
The selection of the five districts for inclusion in this assessment was opportunistic; they are districts where the Centre for Maternal and Newborn Health (CMNH) has an ongoing programme of work to improve availability and quality of Emergency Obstetric Care and early Newborn Care. Nevertheless, they were considered representative of the 28 districts and three regions of Malawi; two target districts are in Central region, two in Southern region, and one in Northern region. Within each district each healthcare facility providing CEmOC (i.e. the district hospital) and the five largest (according to number of births per year) healthcare facilities providing BEmOC were included. Thirty-three healthcare facilities are under the direct control of the Malawi MoH, and two healthcare facilities in district 5 are members of the Christian Health Association of Malawi. Healthcare facilities were assessed using either the CEmOC or BEmOC assessment tool depending on the level of care provided; in each district, at least one CEmOC and five BEmOC facilities were assessed. In total six CEmOC and 29 BEmOC assessments were carried out.
The assessment process
Assessments took between one and three days depending on the type and size of healthcare facility. WHO and the MoH trained district level teams to carry out facility visits; teams were multidisciplinary and included district health officers, midwives, nurses, obstetricians, paediatricians and general physicians with enough experience to make valid observations of the care provided. Most healthcare facilities were assessed by a wholly ‘external’ team, except for district hospitals, where teams included one ‘internal’ staff member; typically, teams included three or four people. All facility assessments were conducted between July and August 2015.
The MoH and WHO developed a preliminary analysis plan, which CMNH adapted and used for aggregating data and reporting across the five districts. A Likert scale (1–5) was used to assess degree of compliance with standards of care. Basic descriptive statistics were used to analyse summary scores for each standard (proportion of healthcare facilities where standards were met or not) and measures of central tendency (mean scores, standard deviation (SD)). Data were entered manually and analysis was done using Excel 2013. To help districts and healthcare facilities readily identify areas in need of improvement, we developed a ‘dashboard’ to display summary scores for each standard, by facility (Fig. 1). The figure displays summary scores only; these represent the average score across all components assessed for each area of care.
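As a sketch of this aggregation (the actual analysis was done in Excel 2013, and the facility scores below are hypothetical), the per-standard summaries shown on the dashboard can be computed as:

```python
# Sketch of the per-standard aggregation behind the dashboard
# (hypothetical facility scores): mean, SD, and the proportion of
# facilities scoring 4 or 5 (standard met or little improvement needed).
from statistics import mean, stdev

facility_scores = {          # standard -> summary score per facility
    "Referral":          [4, 5, 3, 4, 2, 4],
    "Infection control": [3, 2, 3, 4, 3, 3],
}

for standard, scores in facility_scores.items():
    met = sum(1 for s in scores if s >= 4) / len(scores)
    print(f"{standard}: mean {mean(scores):.1f}, "
          f"SD {stdev(scores):.2f}, {met:.0%} scored 4 or 5")
```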
Data were obtained from 31 of the 35 healthcare facilities assessed; assessments from two facilities in each of districts 4 and 5 (four in total) were not received by CMNH and the paper files were reported as missing. Data for each module are summarised in Table 1 as the proportion of healthcare facilities where standards are met in each district. The mean score and standard deviation for each standard assessed are presented across all five districts, as well as for the different levels of care (CEmOC and BEmOC).
Across all five districts, referral is the only aspect of facility infrastructure/organisation of care where more than half of all healthcare facilities (51.6%) meet the standard or require little improvement (mean score: 3.7; SD 1.13). Other aspects of infrastructure and organisation that scored well across the five districts were infrastructure (3.8; SD 0.83), availability of essential drugs and blood products (3.8; SD 0.73), and health information systems (3.8; SD 0.79). Healthcare facilities scored less well in relation to standards for laboratory systems (3.3; SD 1.06) and standards relating to availability of guidelines and conducting audit or review (2.6; SD 1.11).
Three areas of maternal care met standards or required little improvement: supportive care (71% of healthcare facilities scored 4 or 5); prevention and management of infections during pregnancy (71% of healthcare facilities scored 4 or 5); and management of unsatisfactory progress of labour (84% of healthcare facilities scored 4 or 5). Average scores were lowest for infection control (3; SD 0.74), maternity wards (3.2; SD 0.77) and prevention and management of preterm labour (3; SD 1.33).
Fewer than half of all healthcare facilities comply with standards or require little improvement for neonatal care. Fifteen healthcare facilities (48% of the total) score 4 or 5 for routine neonatal care. For all other standards assessed the average score was 3.5 or less with problems highlighted for essential equipment and supplies (2.9; SD 0.95), nursery facilities (3.2; SD 1.19), and management of the sick newborn (3.2; SD 0.93).
Across the five districts, healthcare facilities generally scored well in relation to supportive care for sick children (4.2; SD 0.66); management of HIV/AIDS cases (4.3; SD 0.66) and management of malnutrition (4; SD 0.59). However, availability of essential paediatric equipment and supplies (2.7; SD 1.1), emergency paediatric care (3.2; SD 0.69), and infection control (3.2; SD 0.60) are areas in need of improvement.
Results by level of care
Table 1 presents mean scores for the standards disaggregated by level of care (CEmOC and BEmOC). Overall mean scores for all aspects of facility infrastructure and organisation of care are higher for CEmOC facilities. CEmOC and BEmOC facilities have similar average scores for most aspects of maternal care, although for availability of essential drugs, and management of preterm labour, the mean score is higher at CEmOC facilities. For neonatal care average scores are similar across the levels of care for most standards, except availability of essential drugs, equipment and supplies where CEmOC facilities have a higher mean score. For the paediatric care standards, there are no differences in average scores between CEmOC and BEmOC facilities, except for case management of HIV/AIDS patients and monitoring and follow-up of admitted children where BEmOC facilities have higher average scores.
Key emerging issues
Healthcare facilities in district 1 presented the most gaps and challenges, with an average score across all assessed standards of 3 (SD 0.19) across all healthcare facilities in the district. The healthcare facilities assessed in district 2 emerged as the best performing, with an average score across all assessed standards of 4 (SD 0.27). The availability of essential equipment and supplies remains a critical barrier to achieving satisfactory standards of care. In particular, the availability of equipment and supplies for both neonatal care (2.9; SD 0.95) and child care (2.7; SD 1.1) was deemed insufficient. Infection control also emerged as a cross-cutting barrier to quality of care along the continuum of care in all districts, with an average score of 3 (SD 0.74) for maternal care, 3.3 (SD 0.93) for newborn care and 3.2 (SD 0.60) for paediatric care.
Case management along the continuum of care generally scored above 3.5 across all districts and healthcare facilities; however, some critical areas emerged. For maternal care, standards for maternity wards (3.2; SD 0.77) and prevention and management of preterm labour (3; SD 1.33) required improvement. For newborn care, management of the sick newborn (3.2; SD 0.93) and nursery facilities (3.2; SD 1.19) require further improvement, as does emergency paediatric care (3.2; SD 0.69).
This is the first time the WHO integrated tool has been used to assess quality of maternal, newborn and child health care at country level. Application of the tool is feasible and has provided valuable information highlighting areas of good quality care, as well as deficiencies in the quality of care at healthcare facility and district levels in Malawi. Using a “dashboard” to display the assessment findings makes it possible to easily identify priority areas of care that require immediate action. This study also highlights the need for modification and further standardisation of the new WHO Quality of Care tool. In particular, we recommend reducing the overall number of standards assessed, revising the current set of standards so that all aspects of the continuum of care are included, and reformulating standards so that they reflect all aspects of quality (inputs, process and outcome), are measurable, and are adaptable to context (healthcare facility level).
Strengths and limitations of the new WHO tool
While the integrated tool is designed to assess quality across the continuum of care, the standards currently included are not fully representative of all the areas of care that need to be assessed. Antenatal care is not assessed at all, and postnatal care only in a very limited way. These are typically neglected areas of care which are often not included in quality improvement activities, in part because national standards for antenatal and postnatal care are often not in place. Developing such standards and including them in comprehensive quality of care assessments is a priority. In addition, the tool would be enhanced by including indicators for routine intrapartum care practices, for example the choice of a companion at the time of birth, freedom in position and movement throughout labour, non-supine position in labour and careful monitoring of progress with the partograph. These aspects of care, together with others relating to women’s experience of care (e.g. effective communication, care with respect and dignity), are essential and inter-linked dimensions of quality, yet are difficult to assess and monitor well. Methods such as structured observation and properly conducted exit interviews with women would be appropriate to measure these aspects of care and could easily be incorporated into a revised version of the tool.
There are some important points to highlight in relation to how well the tool was able to provide complete and accurate data. It proved difficult to report on the size and capacity of the healthcare facilities assessed, as data on basic hospital statistics and outcome measures were often not available and were not collected consistently. In addition, for the neonatal and paediatric modules, data collection was frequently incomplete. In its current format, the tool is very long and detailed.
Some standards are easier to assess (e.g. ward infrastructure) than others (e.g. satisfactory progress in labour), and some are better defined (e.g. the criteria for the standard on referral) than others (e.g. availability of essential drugs, equipment and supplies). This makes it more likely that some aspects of care are scored more highly than others simply because the relevant “standard” is easier to measure and more accurately defined. For example, it would be more accurate and informative to collect data on stock-outs or non-availability of specific essential drugs.
Completion of exit interviews with women, caretakers and providers is a mandatory component and, while some healthcare facilities did complete these, we did not have access to the complete data set and so have not reported these findings. Where case observation is used, it is not clear how many cases assessors should observe before judging whether standards are met. There are also challenges to relying on observation, especially the potential for assessor bias and the likely impact on provider behaviour of having an external assessor present. Peer or self-assessment at healthcare facilities are alternative approaches. In addition, if there are no cases available to observe at the time of the assessment, this part of the assessment cannot be completed. In these circumstances, assessors are advised to use staff interviews and data from registers to gather relevant information where possible.
Data collection could be made more efficient via use of technology including tablets or machine readable forms, and this is something to consider for future iterations of the tool. Table 2 summarises our key recommendations for improvement of the integrated tool.
We have not reported on the resource requirements for implementing the WHO integrated quality of care assessment tool at national or sub-national level. These data could be generated reliably in future through careful implementation research conducted alongside country level assessments. The burden of collecting a large amount of (additional) data on quality of care and performance at scale is a factor that other pilot assessments have encountered, and for this reason we would recommend that the tool is shortened wherever possible and/or that selected components or modules are used as needed.
The debriefing and action plan provided in the assessment tool annex was not completed for any healthcare facility, and the reasons for non-completion of this critical step in the process need to be understood, perhaps through dialogue with assessors.
Implications for policy and practice
Until recently, the emphasis has been on coverage and availability of care rather than quality. A new tool to measure quality is, in principle, useful for providing baseline information and highlighting specific areas for quality improvement. The new WHO tool has this potential. A key bottleneck in quality improvement efforts at healthcare facility level in Malawi and other low- and middle-income countries is the translation of assessment data into action. The dashboard approach highlights, in a very visual and accessible way, where the key quality of care problems exist at both healthcare facility and district level. The findings were presented at a national workshop in Lilongwe to share lessons learnt on maternal, newborn and child health quality of care, where the Minister for Health in Malawi recommended that a dashboard similar to the one developed for this analysis be adopted to help map quality of care at district level. Subsequently, the assessment data were disseminated at district level and action plans were developed. A similar standards-based, action-oriented healthcare facility assessment approach has been implemented in other lower-middle income countries [16, 17], and is the core component of clinical or standards-based audit.
At a global level, the shift towards improving quality of maternal and newborn health services demands a coordinated approach. Yet measurement of quality of care is often not done consistently, and the many different tools, indicators and methods in use make it difficult to compare between and within countries. There is a need to clarify where and how the new integrated WHO tool fits with other facility-based assessment tools, such as SARA and the World Bank Service Delivery Indicators (SDI) survey. The new WHO Quality of Care tool is unique in its ability to judge quality, not just quantity, of services, but it assesses relatively more structure and process characteristics; ideally, a healthcare facility assessment tool should assess quality in relation to structure, process and outcome. There are plans to extend the SARA assessment to include structured observations of consultations between providers and women, as well as vignettes to determine providers’ usual case management practices. It is essential that partners prioritise alignment of quality of care assessment tools. The recent development and testing by WHO and partners of a core set of harmonised maternal, newborn and child health indicators is a step in the right direction, but the final indicators need to be rapidly integrated into existing tools. In addition, as with any new tool developed by international agencies, it is imperative that the standards on which the tools are based are accepted by healthcare workers and established in the local context as realistic. A recent assessment of quality of care in a low-resource referral hospital in Zanzibar used a participatory approach with skilled birth attendants and midwifery and obstetrics specialists to agree realistic criteria for quality of care that reflected local realities.
In this ‘bottom-up’ approach, fetal heart rate assessment every 30 min was maintained as ‘optimal’ practice, but the team agreed that assessment within intervals of 90 min was an acceptable audit criterion.
Facility-based assessment methods are often time consuming and expensive to use at scale. The new WHO integrated quality of care assessment tool in its current format contains a number of separate modules, even though not all areas of care are yet represented. The data produced require interpretation and discussion to distil practical advice on improvements to care. This assessment identified important lessons for future development of the tool, including shortening it and/or using selected modules one at a time, streamlining data collection methods and data sources, and revising standards to ensure the three components of quality (inputs, process and outcomes) are included. With modification, the tool could be used in other countries for baseline and subsequent periodic assessments of the quality of care.
BEmOC: Basic Emergency Obstetric and Neonatal Care
CEmOC: Comprehensive Emergency Obstetric and Neonatal Care
CMNH: Centre for Maternal and Newborn Health
ENAP: Every Newborn Action Plan
EPMM: Ending preventable maternal mortality
MNCH: Maternal, newborn and child health
MoH: Ministry of Health
SARA: Service Availability and Readiness Assessment
SDI: Service Delivery Indicators
WHO: World Health Organization
World Health Organization. Every newborn: an action plan to end preventable deaths. Geneva: World Health Organization; 2014. https://www.everynewborn.org/every-newborn-action-plan/
World Health Organization. The global strategy for women’s, children’s and adolescents’ health (2016–2030). Geneva: World Health Organization; 2015. http://globalstrategy.everywomaneverychild.org/
United Nations. Every woman, every child. New York: United Nations; 2016. http://www.everywomaneverychild.org/
Madaj B, Smith H, Mathai M, et al. Developing global indicators for quality of maternal and newborn care: a feasibility assessment using health facility data from ten countries. Bull World Health Organ. 2017;95(6):445–52.
World Health Organization and Partnership for Maternal, Newborn and Child Health. Consultation on improving the measurement of the quality of maternal, newborn and child care in facilities. Geneva: World Health Organization; 2014. http://apps.who.int/iris/bitstream/10665/128206/1/9789241507417_eng.pdf
World Bank. Service delivery indicators: methodology. Washington D.C.: World Bank; 2013. http://www.sdindicators.org/methodology
World Health Organization. Service Availability and Readiness Assessment (SARA): an annual monitoring system for service delivery: Reference manual. Geneva: World Health Organization; 2015.
World Health Organization. Health Facility Survey: tool to evaluate the quality of care delivered to sick children attending outpatients facilities. Geneva: World Health Organization; 2003.
World Health Organization Regional Office for Europe. Hospital care for mothers and newborn babies: quality assessment and improvement tool (second edition). Denmark: WHO Regional Office for Europe; 2014.
Ministry of Health. Malawi Standard Treatment Guidelines. 5th ed. Lilongwe: Government of Malawi; 2015.
World Health Organization. Pocket book of hospital care for children: Guidelines for the management of common illnesses with limited resources. Geneva: World Health Organization; 2005. http://www.who.int/maternal_child_adolescent/documents/9241546700/en/
Hulton L, Matthews Z, Stones RW. A framework for the evaluation of quality of care in maternity services. Southampton: University of Southampton; 2000.
Simmons R, Elias C. The study of client-provider interactions: a review of methodological issues. Stud Fam Plan. 1999;25:1–17.
Souza JP, Gulmezoglu AM, Vogel J, et al. Moving beyond essential interventions for reduction of maternal mortality (the WHO Multi-country Survey on Maternal and Newborn Health): a cross-sectional study. Lancet. 2013;381:747–55.
van den Broek N. Content and quality – integrated, holistic, one-stop antenatal care is needed for all. BJOG. 2016;123:558.
Tamburlini G, Yadgarova K, Kamilov A, et al., the maternal and neonatal care quality improvement working group. Improving the quality of maternal and neonatal care: the role of standard based participatory assessments. PLoS ONE. 2013;8:e78282. doi:10.1371/journal.pone.0078282.
Duke T, Keshishiyan E, Kuttumuratova A, et al. Quality of hospital care for children in Kazakhstan, Republic of Moldova, and Russia: systematic observational assessment. Lancet. 2006;367:919–25. doi:10.1016/S0140-6736(06)68382-7.
Raven J, Hofman J, Adegoke A, van den Broek N. Methodology and tools for quality improvement in maternal and newborn health care. Int J Gynecol Obstet. 2011;114:4–9.
Donabedian A. The quality of care: how can it be assessed? JAMA. 1988;260:1743–8.
STAR-E, USAID, MSH, and John Snow Inc. Uganda Health Facility Assessment report. 2013; http://www.starelqas.ug/wp-content/uploads/Bugiri-District-HFA-.pdf
Maaloe N, Housseine N, Bygbjerg C, Meguid T, Khamis RS, Mohamed AG, et al. Stillbirths and quality of care during labour at the low resource referral hospital of Zanzibar: a case control study. BMC Pregnancy Childbirth. 2016;16:351.
We would like to thank the national and international Assessors who conducted the facility visits and all staff at the participating healthcare facilities who contributed time and data to this assessment.
We gratefully acknowledge UNICEF Malawi and the World Health Organization for funding the assessment; neither were involved in the design of the study or the analysis of data. Two authors (AGA and KMA) are employees of UNICEF and provided technical oversight of data collection and contributed to writing the manuscript. The content represents the authors’ views and not the official views of UNICEF or WHO.
Availability of data and materials
The data generated and/or analysed for this study are available from the corresponding author on reasonable request.
Ethics approval and consent to participate
These findings are the results of a Ministry of Health-led assessment of government healthcare facilities in Malawi; all relevant permissions were granted at national and subnational level to conduct facility assessments using Ministry of Health-trained district level teams. No ethical approval was sought for the work.
Competing interests
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.