Bulletin of the World Health Organization

Evidence briefs and deliberative dialogues: perceptions and intentions to act on what was learnt

Kaelan A Moat a, John N Lavis b, Sarah J Clancy c, Fadi El-Jardali d, Tomas Pantoja e & for the Knowledge Translation Platform Evaluation study team

a. Health Policy PhD Program, McMaster University, Hamilton, Canada.
b. Department of Clinical Epidemiology and Biostatistics, McMaster University, 1280 Main Street West, CRL 209, Hamilton, Ontario, L8S 4K1, Canada.
c. McMaster Health Forum, McMaster University, Hamilton, Canada.
d. Department of Health Management and Policy, American University of Beirut, Beirut, Lebanon.
e. Departamento Medicina Familiar, Pontificia Universidad Católica de Chile, Santiago, Chile.

Correspondence to John N Lavis (e-mail: lavisj@mcmaster.ca).

(Submitted: 21 December 2012 – Revised version received: 23 July 2013 – Accepted: 06 August 2013 – Published online: 11 October 2013.)

Bulletin of the World Health Organization 2014;92:20-28. doi: http://dx.doi.org/10.2471/BLT.12.116806


Over the last decade there has been growing interest in identifying methods to ensure that policy decisions that are aimed at strengthening health systems in low- and middle-income countries are guided by the best available research evidence.1–4 As a result, several “knowledge translation” platforms, such as the Evidence-informed Policy Networks supported by the World Health Organization, have been established in countries across Africa, the Americas, Asia and the eastern Mediterranean.5–8 Currently, nearly all of these platforms are focusing their efforts – at least in part – on two distinct but interrelated strategies: the preparation of “evidence briefs for policy”8 and the convening of “deliberative dialogues” that use such briefs as their primary inputs.5

Evidence briefs are a relatively new form of research synthesis. Each starts with the identification of a priority policy issue within a particular health system. The best available global research evidence – such as systematic reviews – and relevant local data and studies are then synthesized to clarify the problem or problems associated with the issue, describe what is known about the options available for addressing the problem or problems, and identify the key considerations in the implementation of each of these options. Research evidence generally needs to be made available in a timely way if it is to stand a good chance of being used as an input in policy-making.9,10 Evidence briefs can generally be prepared in a few weeks or months and – unlike most summaries of single reviews or studies – can place the relevant data in the context of what they mean for a particular health system.

Evidence briefs are used as primary inputs for the deliberative dialogues that facilitate interactions between researchers, policy-makers and stakeholders – the latter defined in this study as administrators in health districts, institutions and nongovernmental organizations, members of professional associations and leaders from civil society. Such interactions are known to increase the likelihood that research evidence will be used in policy-making.9,10 Deliberative dialogues also provide an opportunity to consider the best available global and local research evidence alongside the tacit knowledge of the key health-system “actors” who are involved in the issue being considered or likely to be affected by a decision related to it. At the same time, allowance can be made for other country- or region-specific influences on the policy process, such as institutional constraints, pressure from interest groups and economic crises.

Taken together, briefs and dialogues address the majority of the barriers that hinder the use of research evidence – such as the common perception that the research evidence that is available is not particularly valuable, relevant or easy to use – while building on the factors found to increase the likelihood that such evidence will be used to guide policy-making.5,9–13 The results of formative evaluations of both strategies in general – as well as some of their common features – have been encouraging.14 However, there have been no systematic attempts to determine how design and content affect the usefulness of evidence briefs and deliberative dialogues in supporting the use of research evidence by policy-makers and stakeholders.15–18 There have also been few attempts to develop a method for evaluating such briefs and dialogues that can be applied across a range of countries, health system issues and groups and that includes an appropriate and tractable outcome measure.

To address this gap, we developed and administered two questionnaire-based surveys – one for evidence briefs and one for deliberative dialogues – across a range of issues and low- and middle-income countries. The main aim was to determine whether health system policy-makers, stakeholders and researchers in low- and middle-income countries viewed such knowledge translation strategies as helpful. Drawing on the “theory of planned behaviour”, we also sought to determine the respondents’ intentions to act on the research evidence contained in the evidence briefs and discussed during the deliberative dialogues and their assessment of the factors that might influence whether and how they would act on that evidence.19,20 The theory of planned behaviour was originally developed in the context of individual behaviour. However, this theory has been used successfully in the context of professional behaviour21,22 and has already shown some promise in the study of the behaviour of those involved in policy-making.23


Methods

Study participants

We conducted surveys as part of a 5-year project – the Knowledge Translation Platform Evaluation study – that is evaluating the activities, outputs and outcomes of knowledge translation platforms in 44 low- and middle-income countries. We used all data that had been collected from the start of the project in 2009 to the initiation of this analysis.5 For the present investigation, this included data collected from surveys of policy-makers, stakeholders and researchers who were invited to attend deliberative dialogues in Burkina Faso, Cameroon, Ethiopia, Nigeria, Uganda and Zambia after being sent evidence briefs that had been prepared – by local knowledge translation platforms – as inputs for the dialogues.24 In each study country in which an evidence brief was prepared, potential dialogue participants were identified – via a “stakeholder-mapping” exercise – by the team responsible for the local knowledge translation platform. The aim of this exercise was to identify all those policy-makers, stakeholders and researchers who were likely to be involved in or affected by decisions made during the policy process surrounding the issue on which the evidence brief was focused. Samples of the policy-makers, stakeholders and researchers identified in this manner were then sent the relevant evidence brief and invited to the corresponding dialogue.

Questionnaire development and administration

Two types of questionnaires were used to collect information from policy-makers, stakeholders and researchers: an “evidence brief” questionnaire and a “dialogue” questionnaire. Each type of questionnaire was divided into three or four sections. The first section investigated how helpful the respondent found each key feature of the brief or dialogue and the second section investigated how well the respondent felt that the brief or dialogue achieved its intended purpose. The dialogue questionnaire included a third section that contained 15 items based on “theory of planned behaviour” constructs.19 Questions about the respondent’s professional experiences formed the final section of both types of questionnaire.

The design of each questionnaire was based on the results of a pilot study, a review of the relevant literature, and feedback from a three-day workshop attended by members of the teams running knowledge translation platforms in eastern Africa, Kyrgyzstan and Viet Nam. The evidence brief questionnaire was also refined using feedback from a workshop that brought together representatives of all of the knowledge translation platforms in Africa.24 In addition, the portion of the dialogue questionnaire that related to the theory of planned behaviour was subjected to a reliability assessment.25 Both types of questionnaires were translated into French for use in countries in which English was not widely spoken. Details of the survey instruments and their development can be accessed online.26

All dialogue invitees from the six countries included in this study who were identified during the stakeholder mapping exercise were sent a package containing a letter of invitation to participate in the dialogue, a copy of the evidence brief, information about the study, a copy of the evidence brief questionnaire and a pre-stamped envelope addressed to the country team running the local knowledge translation platform.5 Participants were asked to return the completed evidence brief questionnaire in the pre-stamped envelope before arriving at the dialogue session. Invitees who did not do this but who presented at the registration desk to participate in a dialogue were asked to complete an evidence brief questionnaire before the dialogue had commenced. Each dialogue participant was handed a copy of the dialogue questionnaire at the end of the dialogue and asked to complete and return it immediately – before his or her departure. Completed questionnaires were collected by country teams and sent to the Knowledge Translation Platform Evaluation study team at McMaster University (Hamilton, Canada). All of the data from the questionnaires were then transferred into an Excel (Microsoft, Redmond, United States of America) database so that they could be compiled, compared and analysed.


Data analysis

Two investigators independently coded the key features of each evidence brief and dialogue, which are listed in Table 1 and Table 2, and reconciled their coding. Although this coding was largely based on reviews of electronic copies of the briefs, dialogue summaries and reports to funders that described the dialogue process, it was finalized for each knowledge translation platform in discussions with the core members of the country team responsible for the platform. We used Excel to calculate detailed descriptive statistics for the respondents’ assessments of the evidence briefs in general, the deliberative dialogues in general and each of the key features of the briefs and dialogues that we investigated. The assessments of the various types of respondents were compared. We conducted ordinary least-squares regressions – in version 19 of the SPSS software package (SPSS Inc., Chicago, USA) – to explore associations between the respondents’ professional characteristics and their overall assessments of the briefs and dialogues as well as their assessments of how helpful they found each key feature.
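The grouping of assessments by respondent type can be illustrated with a minimal sketch. The study's own descriptive statistics were calculated in Excel; the data, the rating scale and the `summarize_by_role` helper below are hypothetical and serve only to make the grouping logic concrete.

```python
from collections import defaultdict
from statistics import mean

def summarize_by_role(responses):
    """Group helpfulness ratings by respondent role and report the
    count and mean rating for each group (hypothetical illustration)."""
    by_role = defaultdict(list)
    for role, rating in responses:
        by_role[role].append(rating)
    return {role: {"n": len(ratings), "mean": round(mean(ratings), 2)}
            for role, ratings in by_role.items()}

# Hypothetical ratings for one key feature of an evidence brief.
responses = [
    ("policy-maker", 6), ("policy-maker", 7), ("policy-maker", 5),
    ("stakeholder", 6), ("stakeholder", 5),
    ("researcher", 7), ("other", 4),
]
print(summarize_by_role(responses))
```

Comparing the per-group means produced this way corresponds to the descriptive comparisons reported for each key feature of the briefs and dialogues.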

Respondents were asked to identify their own professional roles. Since many respondents claimed to have multiple roles, for the regression models it was necessary to categorize each respondent’s role as a policy-maker, stakeholder, researcher or “other”. Respondents were coded as policy-makers if they chose “policy-maker” for at least one of their current roles and as stakeholders if they reported “stakeholder” but not “policy-maker” as one of their current roles. Those who identified themselves as “researchers” but not “policy-makers” or “stakeholders” were coded as researchers. Respondents who did not identify themselves as a policy-maker, a stakeholder or a researcher and who marked “other” as their role were considered to have “other” roles that could not be further defined. In the regression models, “number of years in current role” was entered as a continuous variable, while “experience or training in other roles” was entered as a binary variable – with values of 1 and 0 indicating such experience or training and no such experience or training, respectively. Respondents with missing data were omitted from the corresponding regression. We used simple t-tests to compare group values for variables that could not be included in our regression analyses because of multicollinearity.
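The precedence rules used to collapse multiple self-reported roles into a single category can be sketched as follows. The `code_role` helper is hypothetical – the study's actual coding was done in Excel and SPSS – but it implements the same precedence described above (policy-maker over stakeholder over researcher over "other").

```python
def code_role(reported_roles):
    """Collapse a respondent's self-reported roles into one category,
    giving "policy-maker" precedence over "stakeholder", "stakeholder"
    over "researcher", and "researcher" over "other"."""
    roles = set(reported_roles)
    if "policy-maker" in roles:
        return "policy-maker"
    if "stakeholder" in roles:
        return "stakeholder"
    if "researcher" in roles:
        return "researcher"
    if "other" in roles:
        return "other"
    return None  # no role reported; treated as missing data

# A respondent reporting both roles is coded as a policy-maker.
print(code_role(["researcher", "policy-maker"]))  # prints "policy-maker"
```

Respondents for whom `code_role` returns None correspond to those who did not provide a role category and were omitted from the regressions with missing data.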


Results

In total, 530 individuals from six African countries were sent questionnaires on the evidence briefs, which addressed 17 priority issues (Table 3). Of these 530 subjects, 304 (57%) and 303 (57%) completed the questionnaires about the briefs and deliberative dialogues, respectively. Cameroon had the largest number of respondents for the evidence brief surveys (n = 99), followed by Uganda (n = 66) and Zambia (n = 46). Cameroon also had the largest number of respondents for the dialogue surveys (n = 77), followed by Uganda (n = 69) and Nigeria (n = 48). In all six study countries, the category of professional role that was most frequently self-reported in the evidence brief survey was policy-maker (49%), followed by stakeholder (24%), researcher (8%) and “other” (5%). In this survey, 45 (15%) of the respondents did not provide a role category. The category of professional role most frequently self-reported in the dialogue survey was also policy-maker (49%), followed by stakeholder (23%), researcher (10%) and “other” (4%). In this survey, 43 (14%) of the respondents did not provide a role category. Full details of the data collected on professional roles are available in Appendix A (available at: http://www.testserver5.org/moat_et_al._2013_BWHO_Appendix-A.pdf).

All the briefs included in this study contained a description of the context for the issue being addressed, a description of the various features of the problem and a description of the options for addressing the problem. All the briefs also employed a “graded-entry” format – such as one comprising a list of key messages as well as a full report – and included a reference list for those who wanted to read more about the issue involved. However, only 52% of the evidence briefs investigated either explicitly took quality considerations into account when discussing the research evidence or were subjected to a merit review, and only 62% explicitly took local applicability into account when discussing the research evidence.

All but two of the key features listed in Table 2 were included in all of the convened dialogues that we investigated. The exceptions were “providing an opportunity to discuss who might do what differently” and “not aiming for a consensus”, which were features of 50% and 95% of the dialogues investigated, respectively (Appendix A).

Every key feature of the evidence briefs that we investigated was viewed very favourably by all – or almost all – of the respondents (Table 1). Compared with the other key features of the evidence briefs, “not concluding with recommendations” was judged less favourably by the respondents categorized as policy-makers, stakeholders, researchers or “other”.

Similarly, all of the key features of the deliberative dialogues were generally viewed favourably by all groups of respondents (Table 2). However, “not aiming for consensus” was viewed less favourably than any other key feature, particularly by policy-makers.

Respondents in the “other” category often rated key features of the briefs and dialogues less favourably than the respondents who could be assigned to a more specific role. In general, respondents reported strong intentions to use research evidence of the type that was discussed at the deliberative dialogues; positive attitudes towards research evidence of the type discussed at the dialogues; and subjective norms in their professional life that were conducive to using research evidence of the type that was discussed at the dialogues (Table 4). Compared with the other respondents, those who did not provide a role category considered themselves to have relatively limited behavioural control and so to be less likely to act on what they had learnt from the briefs and dialogues.

Although we initially attempted to include all of the respondent characteristics that we investigated in our regression models, we had to exclude “previous experience or training” because of multicollinearity. The data analyses revealed only two statistically significant differences between groups of respondents. In the regression models for the evidence briefs – in comparison with researchers, the reference category – a self-reported professional role in the “other” category was found to be a significant predictor of giving “not concluding with recommendations” a lower score for helpfulness (P = 0.028; Table 5). In the analysis of the data for the deliberative dialogues, t-tests revealed that respondents without past experience as a researcher gave “not aiming for consensus” significantly lower scores for helpfulness than respondents with such experience (P = 0.015).


Discussion

Our evaluation has shown that evidence briefs and deliberative dialogues – two novel approaches to supporting the use of research evidence in policy-making – are very well received, regardless of the countries in which they are used, the health system issues that they address or the group of “actors” that is investigated. Respondents tended to view the evidence briefs and deliberative dialogues in general – as well as each of their key features – very favourably. These observations support previous recommendations that have been made about the use of these strategies in the research literature.15–17,28–31 “Not concluding with recommendations” emerged as the least helpful feature of evidence briefs from the perspective of all of the respondents taken together, whereas “not aiming for consensus” emerged as the least helpful feature of deliberative dialogues from the perspective of policy-makers. It is not clear whether these observations reflect a problem in how those running the knowledge translation platforms in the study explain the rationale for not concluding evidence briefs with recommendations and not aiming for a consensus during deliberative dialogues, or whether they represent true variations in preferences. The rationale for not concluding evidence briefs with recommendations is that any such recommendations would have to be based on the views and values of the authors of the brief – even though it is the views and values of the participants in the subsequent deliberative dialogue that are assumed to be much more important. The rationale for not aiming for consensus in the dialogues is that most dialogue participants cannot commit their organizations to a course of action without first building support within their organizations.

The policy-makers, stakeholders and researchers who had read an evidence brief as an input into a deliberative dialogue all reported strong intentions to act on what they had learnt from this process. However, those who did not report a role category were relatively unlikely to report that they intended to act on the same information. It is possible that these respondents were aware of factors beyond their control – such as the political context in which they worked – that would hamper their ability to use research evidence.

The present study is an early attempt to develop a better understanding of how two novel strategies to support the use of research evidence in policy-making – evidence briefs and deliberative dialogues – are viewed by their target audiences in low- and middle-income countries. It was also an attempt to see whether the same strategies encourage their target audiences to act – or, at least, to want to act – on research evidence. Our evaluation covered several countries, issues and categories of profession and was designed to measure an appropriate and tractable outcome: intention to act. This approach could easily be applied across more countries and issues in the future. We tried to make our study sample as representative as possible by attempting to include data from every individual who had read an evidence brief and attended a deliberative dialogue.

Our study has two weaknesses that should be acknowledged. First, we only used a first wave of data and so our regression models were often constrained by small sample sizes; response rates were less than optimal; and data for specific questions were sometimes missing. Second, we focused on the characteristics of the respondents because we lacked high-quality data about the characteristics of the context – which can vary in terms of the institutions, interests and ideas that might influence the policy process. Despite these limitations, our observations provide useful insights for those seeking to inform policy-making or to evaluate evidence briefs, deliberative dialogues and similar strategies in the future.


Several members of the Knowledge Translation Platform Evaluation study team contributed to this paper but are not listed as authors: Gbangou Adjima and Salimata Ki (Burkina Faso); Jean Serge Ndongo and Pierre Ongolo-Zogo (Cameroon); Mamuye Hadis and Adugna Woyessa (Ethiopia); Abel Ezeoha and Jesse Uneke (Nigeria); Harriet Nabudere and Nelson Sewankambo (Uganda); and Joseph Kasonde and Lonia Mwape (Zambia). JNL has dual appointments with the McMaster Health Forum, McMaster University’s Centre for Health Economics and Policy Analysis and Department of Political Science, and the Department of Global Health and Population at the Harvard School of Public Health. FE has a dual appointment with McMaster University’s Department of Clinical Epidemiology and Biostatistics.


We thank the European Commission FP7 programme (which funded the Supporting the Use of Research Evidence in African Health Systems project), the Alliance for Health Policy and Systems Research, the International Development Research Centre (IDRC) International Research Chair in Evidence-Informed Health Policies, and the Canadian Institutes of Health Research for their financial support.

Competing interests:

None declared.