
Expanding the pragmatic lens in implementation science: why stakeholder perspectives matter

Abstract

Background

Pragmatism is important in implementation science to ensure that implementation methods reflect the practical concerns of the stakeholders and services involved in change. To evaluate the usability of these methods, pragmatic measures have been developed using psychometrics. However, existing approaches have predominantly inherited a definition of pragmatism from the evidence-based healthcare movement. These metrics may therefore not reflect the concerns that public stakeholders (defined as those with expertise by experience of healthcare systems) have about pragmatism in implementation science.

Aims

Consequently, our aim was to carry out participatory research to explore stakeholder views of pragmatic measures in implementation science theory.

Methods

We convened a working group of eight stakeholders. To facilitate discussion, we created educational materials, including a video and flyer. The working group conducted three meetings, engaging in abductive analysis to investigate the presented issues.

Results

Stakeholders expressed concerns about the restricted definition of pragmatism, the potential for biases in measurement, and the necessity for a holistic, pluralistic approach that incorporates diverse perspectives when developing and evaluating implementation theory and metrics. These findings underscore the risk of distorting the development of implementation science methods without the input and scrutiny of stakeholders. Neglecting the wider application of pragmatic philosophy in implementation science could limit stakeholder involvement in the design of implementation methods and service transformation.

Conclusions

This study, guided by experts with lived experience of healthcare services, opens the door to considering pragmatic philosophy in the evolution of pragmatic implementation measures and metrics, offering numerous promising directions for further exploration.


Background

Recent developments in implementation science have underscored the significance of pragmatism, emphasising the need for research to align with ‘real-world’ practicalities [1]. This trend reflects a broader shift in evidence-based healthcare towards practical, usable measures rooted in practice [2].

Implementation science has primarily focused on two areas: first, the development of methods or frameworks for embedding change in practice, such as pragmatic trials or RE-AIM [3, 4] (see Note 1); and second, the creation of pragmatic implementation measures, which aim to evaluate the pragmatic qualities of implementation measures, exemplified by tools like the Psychometric and Pragmatic Evidence Rating Scale (PAPERS) [5]. Where the former is a straightforward task of relating programs or treatments to practice, the latter task of evaluating pragmatism poses more of a challenge: first, there is an added level of abstraction in measuring the measures that are used in practice; and second, in the words of Glasgow et al., “there is no [accepted] way of universally evaluating pragmatism” [2].

Our team conducted a recent scoping review on defining pragmatism in implementation science, which revealed a lack of coherence in the field's use of the term [6]. Only nine papers discussed pragmatism, with limited stakeholder involvement in developing pragmatic measures like the PAPERS rating scale. Typically, assessments of pragmatism relied on expert panels and psychometrics [7,8,9] (see Note 2), neglecting input from diverse stakeholders, including patients and service users, resulting in a narrow focus and potential oversight in methodology.

Peirce’s original maxim of pragmatism states: “Consider the practical effects of the objects of your conception. Then, your conception of those effects is the whole meaning of the conception” [10]. For this study, it can be taken to mean that the challenge of evaluating pragmatism lies in the constantly changing social dynamic between real-world scenarios and pragmatic measures. Creating measures may detach ideal principles from practical realities and stakeholder concerns [11]. Any use of a scale is an attempt to place a theory onto a more complex reality [12]. Psychometric scales therefore remain open to the possibility that their measurements are inaccurate [13]. Additionally, methodological biases may favour certain measurement methods and forms of expertise, neglecting diverse perspectives and exceptional cases [14].

Addressing these issues requires expanding conceptions of pragmatism and incorporating diverse voices and perspectives into the measurement process. Broader discussions may explore the relevance and inclusivity of implementation science methodologies and prompt considerations on balancing reflexivity, fidelity, and adaptivity [15,16,17].

Therefore, questioning the conceptualisation of pragmatism directly engages the flexibility and inclusivity of implementation measurement design, and the extent to which measures are guided by both professional expertise and lived experience. Considerations of pragmatism may demonstrate why channels of participation, reflection, and interpretation are an important accompaniment to any evaluation of a measurement’s pragmatic qualities.

Aims

This study aims to explore definitions of pragmatism in implementation science with public stakeholders, focusing on how pragmatism is evaluated by pragmatic measures. Specific objectives include:

  • Exploring how to measure pragmatism from the point of view of stakeholders.

  • Exploring the theoretical implications of bringing a wider understanding of pragmatism into pragmatic implementation research.

Methods

We formed a diverse working group comprising stakeholders (defined as those with expertise by experience of healthcare systems; see Note 3) to engage in discussions regarding pragmatism within implementation science. Our approach aimed to frame the problem effectively and foster meaningful debate among stakeholders, ensuring their active involvement in the research process.

Framing the problem

Initially, our strategy involved attempting to validate the pragmatic constructs of existing psychometric scales in the field by seeking input from patients and the public (see Note 4). However, feedback from a Patient and Public Involvement and Engagement panel revealed potential limitations with this approach (see Note 5). It was noted that such an approach could inadvertently steer discussions towards merely validating existing measures rather than engaging participants in a deeper exploration of the research process itself [18,19,20]. This insight prompted us to develop user-friendly informational resources to address these concerns and provide a foundation for informed discussions among participants.

To ensure accessibility and comprehensibility, we designed a set of non-technical informational resources, including a short AV presentation, a concise flyer, and a worksheet of consideration points (appendix 1–3) [21,22,23]. These materials aimed to introduce key concepts such as implementation science, pragmatism, pragmatic measures, and PAPERS in a clear and understandable manner, while also highlighting the importance of diversity and wider representation in research endeavours. Our multimodal approach aimed to cater to diverse learning styles and personal perspectives, thereby facilitating more inclusive and engaging discussions within the working group [24,25,26].

Working group debate

The informational resources were shared with a public research panel for feedback and further revisions to enhance inclusivity. Subsequently, we employed a targeted recruitment strategy to assemble a diverse group of participants. This involved advertising the project through various public research networks, including King’s Improvement Science, the National Institute for Health Research (NIHR) Applied Research Collaboration (ARC) South London, and Shaping Our Lives (see Note 6), to attract individuals with a range of perspectives and firsthand experiences of healthcare systems [27, 28], and to include viewpoints that may not previously have had the chance to reflect directly on pragmatic measures. Potential participants were invited to complete a selection form to ensure as much demographic and experiential diversity as possible within the working group (Table 1, appendix 4) [29].

Table 1 Demographics of the PPI group members

Meetings were conducted using Microsoft Teams to maximise accessibility and accommodate the diverse schedules of participants. We organised three one-hour discussions over three weeks, limiting the group size to eight members to facilitate in-depth exchanges and meaningful contributions from all participants [30, 31]. Members received compensation for their time to ensure equitable participation and acknowledge the value of their input [32].

Throughout the working group meetings, the research team created a supportive and inclusive environment conducive to open dialogue and meaningful engagement [33]. Discussions were structured (around themes taken from PAPERS) to encourage participants to delve deeply into the subject matter, interact with each other's perspectives, and build upon shared insights over time (appendix 2). Topic guides were provided in advance to facilitate focused discussions around key themes and questions related to the concept of pragmatism in implementation science [34].

Analysis

Debates were recorded, transcribed, and analysed in NVivo using abductive analysis [35, 36]. Codes were created by RB using abductive reasoning in three stages: (1) iterative movement between a close reading of the data and theoretical concepts from pragmatic philosophy to create a code book; (2) abductive data reduction through coding equations to refine and structure the codes; and (3) in-depth abductive qualitative analysis to explore relationships between coded data and pragmatic philosophy. At each stage the code book was shared, discussed, and verified in research team meetings to ensure the accuracy, understandability, and relevance of the themes as they emerged (appendix 5). The final paper was shared with participants, who were asked whether they would like to be included as co-authors. The GRIPP2 checklist was used to ensure the quality of the report (appendix 6) [37]. The findings section below summarises the agreed themes.
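For readers less familiar with staged qualitative coding, the following minimal sketch illustrates the shape of stage (2), the data-reduction step: excerpts tagged with initial codes are mapped onto higher-order themes so the structure of the code book can be reviewed. This is an illustrative simplification in Python, not the NVivo procedure used in the study; the codes, themes, and excerpts shown are hypothetical.

```python
from collections import defaultdict

# Stage 1 output (hypothetical): excerpts tagged with initial codes
# during close reading of the transcripts.
coded_excerpts = [
    ("scales cannot capture everything about a person", "reduction_of_experience"),
    ("who decided these categories in the first place?", "hidden_assumptions"),
    ("it depends on the individual case", "case_by_case"),
    ("numbers hide the story behind them", "reduction_of_experience"),
]

# Stage 2 (hypothetical mapping): initial codes are reduced onto
# higher-order themes drawn from pragmatic philosophy.
code_to_theme = {
    "reduction_of_experience": "Holism",
    "hidden_assumptions": "Bias",
    "case_by_case": "Plurality",
}

themes = defaultdict(list)
for excerpt, code in coded_excerpts:
    themes[code_to_theme[code]].append(excerpt)

# The reduced structure is then reviewed with the research team
# before stage 3 (in-depth abductive qualitative analysis).
for theme, excerpts in sorted(themes.items()):
    print(f"{theme}: {len(excerpts)} excerpt(s)")
```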

Findings

Stakeholder discussions highlighted six themes. The participant quotes that informed the themes are compiled in Table 2.

Table 2 Coding table

Complexity of the subject matter

Participants universally acknowledged the complexity and intellectual challenge inherent in the subject matter of psychometrics, pragmatic measures, and pragmatic philosophy. Participants had different levels of familiarity with the methods and techniques introduced. Difficulty in understanding the PAPERS scale, particularly its abstraction, was a common sentiment. Participants expressed a preference for discussing tangible outcomes rather than abstract measures and constructs.

Weighting pragmatism

While recognising the importance of psychometric pragmatic measurement constructs, participants hesitated to judge the relative importance or propose alternatives to constructs in the PAPERS scale. The feasibility of ranking or rating outcome measures was questioned due to potential bias, limitations, and subjectivity. Discussions highlighted the need for a balanced approach, incorporating qualitative components when quantitative measures and scales are used.

Bias

Participants raised concerns about potential bias that may creep into fixed scales or measures over time. They emphasised how pragmatism requires dynamic thinking to remain representative and the need to maintain interpretation in measures to mitigate bias. Some participants shared experiences from projects in diverse communities which underscored the importance of inclusivity and acknowledging different viewpoints.

Holism

Participants highlighted the holistic nature of human experiences, emphasising that the multidimensional aspects of being human cannot be fully reduced to clinical symptoms and measurement scales. Discussions highlighted the need to consider social, relational, and quality-of-life factors in measure design. Participants acknowledged the limitations of formulating a pragmatic scale that accurately captures human complexity.

Plurality

Participants stressed the importance of incorporating diverse perspectives into the evaluation of measures, emphasising the value of considering pragmatism on a case-by-case basis. A participant shared a further example from a past project that highlighted the pitfalls of overlooking diverse perspectives: one specific person’s perspective in one moment may speak more universally for us all at larger scales and over a longer span of time.

Perspectivism

Discussions reflected on the complexities of reconciling differing perspectives, particularly in culturally diverse contexts. Participants cautioned against adopting a one-size-fits-all approach to pragmatism, recognising the inherent subjectivity in value judgments. Some considered measurement scales to be utilitarian in their approach to ethics, which may result in decision-making processes where the perspectives of the many are prioritised over those of the few.

Combined, the findings underscored the need for a nuanced and multifaceted approach to measuring pragmatism in implementation science. The need for inclusive, participatory, and adaptable approaches to measurement emerged as a key theme, reflecting the complex interplay of dynamic factors influencing what it means to be pragmatic.

Discussion

The themes arising from the discussions revealed public perspectives on both pragmatism and how to measure pragmatism in implementation science. Despite their varying levels of familiarity with expert debates in evidence-based methodologies or implementation science, participants raised pertinent issues that highlight the partiality inherent in the concept of pragmatism employed in implementation science.

The themes echo broader discussions in scholarly literature, particularly within American Pragmatism, which has informed the development of research methodologies distinguishing between qualitative and quantitative approaches [38]. While some methodological strands, such as mixed-methods, middle-ranged theory, and science and technology studies, have incorporated pragmatic philosophy, implementation science methodologies have yet to consistently integrate these perspectives [39].

Inconsistencies in pragmatism

While some sources (such as [1]) acknowledge the importance of stakeholder involvement in defining pragmatic measures, discussions of pragmatism remain confined to quantitative methodologies. Implementation science's commitment to evidence-based methods, prioritising objective realities over subjective ones, further exacerbates these inconsistencies [40].

The reliance on evidence-based methods to validate pragmatism poses challenges. Evidence-based methods prioritise the ‘evidence’ of certain voices and interpretations over others, potentially leading to discrimination [41]. Pragmatic measures prioritise actionable research evidence rather than a fully accurate exploration of reality, creating a potential mismatch between the idealised pragmatism of psychometric constructs and the pragmatic realities encountered during implementation, which come to bear on diverse stakeholders [42].

This raises questions about whether implementation science methodologies should exclusively adhere to evidence-based principles. Public stakeholder concerns may be difficult to assimilate into evidence-based hierarchies without losing effectiveness, yet they offer valuable insights into practical implementation challenges and priorities [43]. Embracing pragmatism may necessitate a re-evaluation of evidence-based methods in favour of more practical approaches, such as participatory action research or co-production [44, 45].

Scales and pragmatism

While scales like PAPERS remain valid and useful tools, relying solely on psychometrics may overlook subjective considerations crucial to the implementation process [46, 47] (see Note 7). Other qualitative approaches should be considered where possible when evaluating implementation measures. Stakeholder perspectives underscore the importance of including diverse voices in discussions on measurement methodologies, highlighting issues that should be addressed in formative discussions on methodology shaping.

The full implications of pragmatism cannot be encapsulated in a formula or rating scale. Decision-making must continuously evaluate diverse concerns and perspectives within specific situations. Involving public stakeholders in measurement evaluation and method design requires ongoing, co-produced techniques that acknowledge the multifaceted nature of pragmatic decision-making [48].

Wider implications for implementation

Although this study has focused on pragmatic measures, there is a wider implication to the discussion and application of the methods. Stakeholders should be better considered not only in the implementation of interventions, but in the formulation of our methods, measures, strategies, and as partners in research more broadly.

The design of many implementation science resources limits exploration of pragmatism, often treating wider interpretation as an obstacle to arriving at concise research evidence. There is a need for a broader examination of research methodologies and their representation in the field to better integrate implementation science aims with partners' needs [49].

As implementation science frameworks continue to expand, there is a growing need to emphasise openness to interpretation, accommodation of conflicting perspectives, and promotion of reflective practice [50]. This study contributes to discussions on the representativeness and usability of implementation science theory by highlighting the misalignment between wider stakeholder perspectives and the current use of pragmatism in the field.

Strengths and limitations

Engaging stakeholders on complex topics like pragmatic measures and evidence-based methodologies is a significant challenge, and simple questionnaires risk tokenistic engagement. We chose to educate and involve fewer participants more deeply rather than assemble a larger, less engaged group. The diverse 8-member panel aimed to represent marginalised viewpoints but cannot encompass all possible perspectives. While the article format reflects group discussions, it provides only a glimpse into public concerns about pragmatic measures. Future research could use the identified themes to explore nuances in implementation science methods and theory in relation to pragmatism.

Conclusion

Different approaches and different methods may account for different definitions of pragmatism. No single method can account entirely for a given reality in every circumstance. The pragmatic, practical measures sought to validate evidence-based methods may not correspond to the pragmatic ‘real world’ observed when employing methods to increase participation and inclusion in the implementation process. As practice and the practical are a key component of any instance of implementation, implementation science should conduct a more thorough and inclusive appraisal of what pragmatism means in the process of creating measures and scales: if psychometric measures are used to evaluate scales, how and where do further participant voices come into the future application of those scales?

When contemplating more broadly what pragmatism means, some things to consider are how implementation methods can be made more direct and use inclusive language and procedures. Any methods used should attempt to engage public stakeholders in reflections from the ground up and not just from the top down. Methods should minimise obfuscatory language or research processes that divert localised discussion, dialogue, and decision making in measuring outcomes. Where complex clinical measures are warranted, they should be combined with other methods to ensure meaningful engagement with ‘real world’ considerations not biased to validating any one methodological representation.

Wider pluralistic exploration of pragmatism in implementation method design may uncover service realities not reducible to (and obscured by) a focus on evidence.

Data availability

The datasets generated and/or analysed during the current study are not publicly available due to participant confidentiality but are available from the corresponding author on reasonable request.

Notes

  1. See also PRECIS.

  2. PAPERS takes a quantitative approach, using psychometrics (a technique of mathematical modelling) to give implementation measures a rating (or score) on a Likert scale for their pragmatic qualities; an illustrative sketch of this kind of scoring follows these notes.

  3. For clarity and consistency with existing literature, the term ‘stakeholder’ is used in this paper to refer to efforts to include patient and public perspectives in the research process, ensuring diverse knowledge and expertise, whilst acknowledging that exactly who counts as a ‘stakeholder’ cannot be definitively defined.

  4. We initially explored a pragmatic way to rate scales in the Implementation Outcome Repository https://implementationoutcomerepository.org/ or COSMIN https://www.cosmin.nl/.

  5. See ARC South London Public Research Panel: https://arc-sl.nihr.ac.uk/about-us/nihr-arc-south-london-public-research-panel

  6. See King’s Improvement Science https://kingsimprovementscience.org/involving-the-public ARC South London https://arc-sl.nihr.ac.uk/involving-patients-public and Shaping Our Lives https://shapingourlives.org.uk/about/

  7. E.g. additional ways of rating scales in the Implementation Outcome Repository may involve facilitating collaborative fora and qualitative appraisals of scales.
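To make the scoring logic described in Note 2 concrete, here is a minimal sketch of Likert-style pragmatic rating. It is illustrative only: the criterion names and the 1–5 anchors below are assumptions made for the example, not the published PAPERS criteria or anchors (see Stanick et al. [5] for those).

```python
# Illustrative Likert-style rating of an implementation measure's
# pragmatic qualities. The criteria and the 1-5 scale here are
# hypothetical; the real PAPERS criteria and anchors are defined in [5].
pragmatic_ratings = {
    "brevity": 4,          # 1 = poor ... 5 = excellent
    "cost_of_use": 5,
    "readability": 3,
    "ease_of_scoring": 2,
}

total = sum(pragmatic_ratings.values())
maximum = 5 * len(pragmatic_ratings)
print(f"Overall pragmatic rating: {total}/{maximum}")
for criterion, score in pragmatic_ratings.items():
    print(f"  {criterion}: {score}/5")
```

A fixed aggregation of this kind is precisely what participants questioned in the findings: the choice of criteria and their weighting builds one perspective into the resulting score.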

References

  1. Glasgow RE. What does it mean to be pragmatic? Pragmatic methods, measures, and models to facilitate research translation. Health Educ Behav. 2013;40(3):257–65.

  2. Glasgow RE, Riley WT. Pragmatic measures: what they are and why we need them. Am J Prev Med. 2013;45(2):237–43.

  3. Gaglio B, Phillips SM, Heurtin-Roberts S, Sanchez MA, Glasgow RE. How pragmatic is it? Lessons learned using PRECIS and RE-AIM for determining pragmatic characteristics of research. Implement Sci. 2014;9:96.

  4. Ford I, Norrie J. Pragmatic trials. N Engl J Med. 2016;375(5):454–63.

  5. Stanick CF, Halko HM, Nolen EA, Powell BJ, Dorsey CN, Mettert KD, et al. Pragmatic measures for implementation research: development of the Psychometric and Pragmatic Evidence Rating Scale (PAPERS). Transl Behav Med. 2021;11(1):11–20.

  6. Hull L, Boulton R, Jones F, Boaz A, Sevdalis N. Defining, conceptualizing and evaluating pragmatic qualities of quantitative instruments measuring implementation determinants and outcomes: a scoping and critical review of the literature and recommendations for future research. Transl Behav Med. 2022;12(11):1049–64.

  7. Hunsley J, Mash EJ. Developing criteria for evidence-based assessment: an introduction to assessments that work. In: Hunsley J, Mash EJ, editors. A guide to assessments that work. Oxford: Oxford University Press; 2008. Available from: https://doi.org/10.1093/med:psych/9780195310641.003.0001. Cited 2022 Sep 16.

  8. Mokkink LB, Terwee CB, Patrick DL, Alonso J, Stratford PW, Knol DL, et al. The COSMIN checklist for assessing the methodological quality of studies on measurement properties of health status measurement instruments: an international Delphi study. Qual Life Res. 2010;19(4):539–49.

  9. Weiner BJ, Lewis CC, Stanick C, Powell BJ, Dorsey CN, Clary AS, et al. Psychometric assessment of three newly developed implementation outcome measures. Implement Sci. 2017;12(1):1–12.

  10. Peirce CS. Illustrations of the logic of science. Pop Sci Mon. 1878;12. Available from: https://en.wikisource.org/wiki/Popular_Science_Monthly/Volume_12/January_1878/Illustrations_of_the_Logic_of_Science_II. Cited 2025 Feb 14.

  11. Talisse RB, Aikin SF, editors. The pragmatism reader. Princeton: Princeton University Press; 2011. Available from: https://press.princeton.edu/books/paperback/9780691137063/the-pragmatism-reader. Cited 2022 Sep 16.

  12. Sloman S, Lagnado D. The problem of induction. In: Holyoak KJ, Morrison RG, editors. The Cambridge handbook of thinking and reasoning. Cambridge: Cambridge University Press; 2005. p. 847.

  13. Jones G, Perry C. Popper, induction and falsification. Erkenntnis. 1982;18(1):97–104.

  14. Greenhalgh T, Snow R, Ryan S, Rees S, Salisbury H. Six ‘biases’ against patients and carers in evidence-based medicine. BMC Med. 2015;13(1):200.

  15. Jorm C, Iedema R, Piper D, Goodwin N, Searles A. ‘Slow science’ for 21st century healthcare: reinventing health service research that serves fast-paced, high-complexity care organisations. J Health Organ Manag. 2021;35(6):701–16.

  16. Snell-Rood C, Jaramillo ET, Hamilton AB, Raskin SE, Nicosia FM, Willging C. Advancing health equity through a theoretically critical implementation science. Transl Behav Med. 2021;11(8):1617–25.

  17. Eboreime EA, Banke-Thomas A. Beyond the science: advancing the ‘art and craft’ of implementation in the training and practice of global health. Int J Health Policy Manag. 2022;11(3):252–6.

  18. Hahn DL, Hoffmann AE, Felzien M, LeMaster JW, Xu J, Fagnan LJ. Tokenism in patient engagement. Fam Pract. 2017;34(3):290–5.

  19. Majid U. The dimensions of tokenism in patient and family engagement: a concept analysis of the literature. J Patient Exp. 2020;7(6):1610–20.

  20. Ocloo J, Matthews R. From tokenism to empowerment: progressing patient and public involvement in healthcare improvement. BMJ Qual Saf. 2016;25(8):626–32.

  21. Banner D, Bains M, Carroll S, Kandola DK, Rolfe DE, Wong C, et al. Patient and public engagement in integrated knowledge translation research: are we there yet? Res Involv Engagem. 2019;5(1):8.

  22. Meyer M. Three challenges for risk-based (research) regulations. In: Cohen IG, Lynch HF, editors. Human subjects research regulation: perspectives on the future. Cambridge, MA: MIT Press; 2014.

  23. Sleigh J, Vayena E. Public engagement with health data governance: the role of visuality. Humanit Soc Sci Commun. 2021;8(1):1–12.

  24. Anderson CR, McLachlan SM. Transformative research as knowledge mobilization: transmedia, bridges, and layers. Action Res. 2016;14(3):295–317.

  25. Mas FD, Garcia-Perez A, Sousa MJ, da Costa RL, Cobianchi L. Knowledge translation in the healthcare sector: a structured literature review. Electron J Knowl Manag. 2020;18(3):198–211.

  26. Scolari C. Transmedia storytelling: new ways of communicating in the digital age. In: AC/E digital culture annual report 2014: focus 2014: the use of new technologies in the performing arts. Madrid: Dosdoce; 2014.

  27. Mallery C, Ganachari D, Smeeding L, Fernandez J, Lavallee D, Siegel J, et al. PHP5 Innovative methods for stakeholder engagement: an environmental scan. Value Health. 2012;15(4):A14.

  28. Hoddinott P, Pollock A, O’Cathain A, Boyer I, Taylor J, MacDonald C, et al. How to incorporate patient and public perspectives into the design and conduct of research. F1000Research. 2018;7:752.

  29. Edwards HA, Huang J, Jansky L, Mullins CD. What works when: mapping patient and stakeholder engagement methods along the ten-step continuum framework. J Comp Eff Res. 2021;10(12):999–1017.

  30. Khodyakov D, Savitsky TD, Dalal S. Collaborative learning framework for online stakeholder engagement. Health Expect. 2016;19(4):868–82.

  31. Montesanti SR, Abelson J, Lavis JN, Dunn JR. Enabling the participation of marginalized populations: case studies from a health service organization in Ontario, Canada. Health Promot Int. 2017;32(4):636–49.

  32. Boaz A, Hanney S, Borst R, O’Shea A, Kok M. How to engage stakeholders in research: design principles to support improvement. Health Res Policy Syst. 2018;16(1):60.

  33. Albers B, Metz A, Burke K, Bührmann L, Bartley L, Driessen P, et al. The mechanisms of implementation support - findings from a systematic integrative review. Res Soc Work Pract. 2022;32(3):259–80.

  34. Rolfe DE, Ramsden VR, Banner D, Graham ID. Using qualitative health research methods to improve patient and public involvement and engagement in research. Res Involv Engagem. 2018;4(1):49.

  35. Tavory I, Timmermans S. Abductive analysis: theorizing qualitative research. Chicago: University of Chicago Press; 2014. p. 179.

  36. Timmermans S, Tavory I. Theory construction in qualitative research: from grounded theory to abductive analysis. Sociol Theory. 2012;30(3):167–86.

  37. Staniszewska S, Brett J, Simera I, Seers K, Mockford C, Goodlad S, et al. GRIPP2 reporting checklists: tools to improve reporting of patient and public involvement in research. BMJ. 2017;358:j3453.

  38. Pihlström S. Research methods and problems. In: Pihlström S, editor. The continuum companion to pragmatism. London: Bloomsbury Publishing; 2011.

  39. Boulton R, Sandall J, Sevdalis N. The cultural politics of ‘implementation science’. J Med Humanit. 2020;41(3):379–94.

  40. Berwick DM. Broadening the view of evidence-based medicine. BMJ Qual Saf. 2005;14(5):315–6.

  41. Nutley SM, Powell A, Davies H. What counts as good evidence? Provocation paper for the Alliance for Useful Evidence. London: Alliance for Useful Evidence; 2013.

  42. Pawson R. Pragmatic trials and implementation science: grounds for divorce? BMC Med Res Methodol. 2019;19(1):176.

  43. Kahneman D. Thinking, fast and slow. London: Penguin UK; 2012. p. 752.

  44. Larsen LT. Not merely the absence of disease: a genealogy of the WHO’s positive health definition. Hist Hum Sci. 2022;35(1):111–31.

  45. Allemang B, Sitter K, Dimitropoulos G. Pragmatism as a paradigm for patient-oriented research. Health Expect. 2022;25(1):38–47.

  46. Schoenherr JR, Hamstra SJ. Psychometrics and its discontents: an historical perspective on the discourse of the measurement tradition. Adv Health Sci Educ. 2016;21(3):719–29.

  47. Wijsen LD, Borsboom D, Alexandrova A. Values in psychometrics. Perspect Psychol Sci. 2022;17(3):788–804.

  48. Arnstein SR. A ladder of citizen participation. J Am Inst Plann. 1969;35(4):216–24. Available from: https://www.tandfonline.com/doi/abs/10.1080/01944366908977225. Cited 2023 Mar 16.

  49. Beidas RS, Dorsey S, Lewis CC, Lyon AR, Powell BJ, Purtle J, et al. Promises and pitfalls in implementation science from the perspective of US-based researchers: learning from a pre-mortem. Implement Sci. 2022;17(1):55.

  50. Rapport F, Smith J, Hutchinson K, Clay-Williams R, Churruca K, Bierbaum M, et al. Too much theory and not enough practice? The challenge of implementation science application in healthcare practice. J Eval Clin Pract. 2022;28(6):991–1002.


Acknowledgements

The authors would like to thank Louise Hull from the Centre for Implementation Science at King’s College London for her involvement in the development of the project and comments on the paper, and Annette Boaz from the London School of Hygiene and Tropical Medicine for her involvement in the development of the project.

Funding

RB, AS and FJ’s research is supported by the National Institute for Health Research (NIHR) Applied Research Collaboration (ARC) South London at King’s College Hospital NHS Foundation Trust. NS’s research is funded by the National University of Singapore Yong Loo Lin School of Medicine, via the Centre for Behavioural and Implementation Science Interventions (BISI), Singapore.

Author information

Authors and Affiliations

Authors

Contributions

All participants provided written informed consent prior to enrolment in the study. RB, NS and FJ designed the study; RB and AS collected data; RB and AS analysed data. All authors were involved in writing and reviewing the manuscript. All authors reviewed and approved the final manuscript.

Corresponding author

Correspondence to Richard Boulton.

Ethics declarations

Ethics approval and consent to participate

Our study is registered with the ethics board of King’s College London (registration no. MRA-22/23-34271).

Consent for publication

Not applicable.

Competing interests

Nick Sevdalis is the director of London Safety and Training Solutions Ltd, which offers training, improvement, and implementation solutions to healthcare organisations and the pharmaceutical industry. The other authors have no conflicts of interest to declare.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.



Cite this article

Boulton, R., Semkina, A., Jones, F. et al. Expanding the pragmatic lens in implementation science: why stakeholder perspectives matter. Implement Sci Commun 6, 48 (2025). https://doi.org/10.1186/s43058-025-00730-z

