Development and testing of an interactive evaluation tool: the Evaluating QUality and ImPlementation (EQUIP) Tool
Implementation Science Communications volume 6, Article number: 32 (2025)
Abstract
Background
Evaluating implementation outcomes is gaining momentum in health service delivery organizations. Teams are increasingly recognizing the importance of capturing and learning from their implementation efforts, and Implementation Scientists have published extensively on implementation outcomes. However, Quality Improvement approaches and tools are more widely recognized and routinely used in healthcare to improve processes and outcomes. This article describes the development of an interactive online tool designed to help researchers and practitioners effectively design and develop appropriate evaluation plans that support the understanding of successful implementation.
Methods
There were two main development phases. Phase 1, from January to October 2020, involved several design sessions with a small group of professionals leading implementation initiatives within the provincial health delivery system. This resulted in a testable prototype. Phase 2, from November 2020 to June 2021, focused on usability testing and interviews with a broader group of researchers and professionals leading implementation initiatives across the province.
Results
The result is the EQUIP (Evaluating QUality and ImPlementation) Tool, an interactive online tool that integrates quality measures from the Alberta Quality Matrix for Health and implementation measures from widely used outcomes frameworks, such as the one developed by Proctor and colleagues and the RE-AIM planning and evaluation framework. The tool encourages users to explore implementation outcomes and quality dimensions from different perspectives and select questions and indicators relevant to their project.
Conclusion
The EQUIP tool was designed and refined in collaboration with end users to create an accessible and practical online tool. This work addresses the call for greater integration of Quality Improvement and Implementation Science by combining approaches from both fields to strengthen evaluation processes within the health system.
Background
The issue
Evaluating implementation is gaining momentum in health service delivery organizations [1, 2]. Teams are recognizing the importance of capturing and learning from their implementation efforts [3,4,5]. This is especially important when an innovation (i.e., a new way of doing things) is successful: those responsible for implementing the innovation need to understand all the factors, formal and informal, seen and unseen, that influence the outcome of an implementation process, so that the success can be replicated elsewhere. As Proctor and others have described, when health-system innovations fail, and they often do, it is essential to know whether the failure occurred because the innovation was ineffective (innovation failure) or because a good innovation was poorly implemented (implementation failure) [6, 7].
Health service delivery organizations have largely adopted Quality Improvement approaches as a way to improve processes and outcomes [8, 9]. This is demonstrated by the extensive availability and uptake of Quality Improvement infrastructure, supports, and tools available across different care contexts [8, 10, 11]. An example of this is the Alberta Quality Matrix for Health (referred to as “the matrix”)—a single Quality Improvement framework for health planning and evaluation that guides the design and development of most healthcare-related evaluations taking place in the province of Alberta, Canada [10]. The matrix is designed to assess patient outcomes and quality in a standardized way across the complex healthcare system. It includes six dimensions of health service quality: Acceptability, Accessibility, Appropriateness, Effectiveness, Efficiency, and Safety. However, the matrix is missing measures of implementation, which are essential to produce robust evaluations of health system initiatives [2, 6, 12].
The emerging field of Implementation Science, dedicated to understanding methods and strategies to move research evidence into practice and policy [13], is increasingly recognized as a source of guidance that can strengthen Quality Improvement approaches [14]. Implementation Science offers researchers and health system staff a range of frameworks for planning, executing, and evaluating implementation initiatives with an evidence-informed lens. Individuals and teams who want to bring implementation into focus can pull from published validated measures associated with recommended implementation outcomes and structured implementation evaluation frameworks. By using these tools, implementation processes become a source of learning rather than being hidden in a black box.
The opportunity
The tools generated by Implementation Science are often difficult to find and apply in practice [15]. This is especially true for people implementing change initiatives in complex settings, like a provincial acute health delivery system. Finding, navigating, and incorporating Implementation Science tools under tight timelines and with limited resources is a recurring challenge. As Implementation Support Practitioners with the Alberta Strategy for Patient-Oriented Research SUPPORT Unit (AbSPORU), three authors of this paper (LM, CR, and GZ) are tasked with helping health researchers and health system improvement teams incorporate Implementation Science principles (i.e., tools) into their implementation initiatives. Our advanced training in Implementation Science and our unique mandate of providing evidence-based support to health system change initiatives enable us to devote time to developing resources customized to the Alberta health system.
Our team receives numerous requests from across the provincial health system to help build evaluations incorporating dimensions of health quality outlined in the matrix, along with evidence-based implementation outcomes. Thus, the authors of this paper undertook a research project to collaboratively design a solution that helps users integrate quality and implementation outcomes, thereby strengthening existing evaluation processes in the provincial health system.
An outcomes framework commonly used by our team is the one developed by Proctor and colleagues [6]. They describe eight implementation outcomes (Table 1), which precede but are interrelated with service and client outcomes [6]. The service outcomes, which are based on the six Quality Improvement Aims from the Institute of Medicine [16], complement the quality dimensions of the Alberta Quality Matrix for Health. Thus, integrating the matrix’s quality outcomes with Proctor et al.’s implementation outcomes naturally supports the development of an evaluation tool that combines both Quality Improvement and Implementation Science principles.
Aims
We are writing this article to share the product collaboratively developed in response to this opportunity—the Evaluating QUality and ImPlementation (EQUIP) Tool—an online evaluation tool that integrates quality and implementation outcomes. This tool was designed with a team of content experts and health system partners and then tested and refined with a broader group of potential end users (i.e., people who would use the tool) to create the final product.
The questions used to guide the research project were:
1. How might we co-design an evaluation tool that brings together the Alberta Quality Matrix for Health with Proctor et al.’s taxonomy of implementation outcomes?
2. How might we test and refine the tool so that it meets the needs of intended users?
3. How might we ensure the tool is accessible and valuable to users and the work they do?
By answering these questions, we aimed to support researchers, funders, and practitioners working in health service delivery organizations who want to, directly or indirectly, strengthen implementation evaluation capacity and establish routine evaluation of implementation outcomes in health research studies and practice change initiatives.
Context
This project took place in Alberta, Canada, within the context of Alberta Health Services, which at the time of writing was the single provincial health authority responsible for providing programs and services at more than 900 facilities throughout the province, including hospitals, clinics, continuing care facilities, cancer centres, mental health facilities, and community health sites [17].
Methods
The goal of this research was to create a tool that is suited to the implementation and evaluation needs of people working in the provincial health system. Therefore, we used established methods for co-design and usability testing.
There were two main development phases. Phase 1 took place over ten months, from January to October 2020. It consisted of several design sessions with a small group of professionals leading implementation initiatives in the provincial health delivery system. This resulted in a testable prototype. Phase 2 occurred over the next eight months, from November 2020 to June 2021. It consisted of usability testing and interviews with a broader array of researchers and professionals leading implementation initiatives across the province. This extensive process of iteration and feedback was essential to ensure the tool was practical and applicable.
The development of the EQUIP tool is reported in accordance with the 'Guidance for reporting intervention development studies in health research' (GUIDED) checklist [18]. See Additional File 1 for the completed checklist.
Phase 1
Design team
During an initial brainstorming session, AbSPORU staff identified potential design team members based on pre-existing relationships with health system practitioners from the provincial health delivery system—Alberta Health Services. The aim was to have at least three members from different parts of the health system and three members from AbSPORU. The six members of the team brought practical and academic expertise in evaluation, Implementation Science, and/or design; all were based in Alberta, Canada.
Design approach
The tool was developed using a co-design approach to incorporate end-user perspectives in the product design. A combination of the Successive Approximation Model and Design Thinking was used to guide the process [19,20,21]. The Successive Approximation Model, from the field of Instructional Design, outlines an iterative, participatory design and development process that focuses on end users’ experiences, engagement, and motivation [19]. Design Thinking is a human-centered, solution-based approach to design, which encourages teams to focus on the users and their contexts. It accomplishes this by drawing on methods targeted at gaining a deep understanding of users’ needs, experiences, and desires from their perspective to develop solutions that are effective and accessible [20, 21].
Two rapid, iterative, virtual design sessions were held over a video conferencing platform, Zoom (Zoom Video Communications, Inc.). At the first design session held in April 2020, attendees discussed potential users and their contexts, formulated a “how might we” question to guide design and development, brainstormed possible solutions, and began prototyping. A voting exercise was held at the end of the session to select one prototype for further development. Notes from the session were captured using graphic recording by the team’s graphic designer (CR) (See Additional File 2). A summary of the notes is provided in Fig. 1.
A second design session was held in May 2020 to revisit and expand upon discussions held during the first session and confirm the selection of the prototype. Over the following months (May to October 2020), the design team created and refined the content for the tool through ongoing email discussions, as it was not possible to meet in person due to COVID-19 restrictions. During this same period, the team’s graphic designer (CR) further developed the prototype for the online interactive tool.
Phase 2
Usability
Usability testing was conducted from November 2020 to April 2021 using two different online questionnaires, administered using Google Forms: one on the functionality of the tool and website and the second on the design and applicability of the tool. See Additional File 3 for the complete questionnaires. The first questionnaire consisted of several tasks for participants to work through to ensure they could access, navigate, and interact with the tool. Upon completing the first questionnaire, participants were directed to a link to access the second questionnaire. The second questionnaire asked for feedback on the ease of use, the relevance of the tool to their work, and the likelihood of recommending the tool, along with targeted questions on what to improve (and how) and what users liked about the tool. Data were collected from yes/no responses, multiple choice responses, Likert scales, and open-ended answers (Table 2).

A group of participants representing potential end users was recruited through existing relationships with design team members to complete the usability testing. Twenty-one people (including three design team members) completed both questionnaires over seven months. Most users came from the healthcare system (57%) or academia (35%). Two participants were evaluation specialists for community and health policy organizations. All had experience in Quality Improvement, evaluation, or Implementation Science. The majority of participants had knowledge of the Alberta Quality Matrix for Health.
Usability feedback was collected and analyzed using descriptive statistics in Microsoft Excel. Adjustments were made to the tool in an ongoing manner. Regular, live meetings with two lead team members (LM and CR) were held over Zoom every six to eight weeks to discuss results, identify improvements, and update the tool (and accompanying website)—increasing to every two weeks near the end of testing.
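To make the analysis step concrete, the sketch below shows how descriptive statistics of this kind can be produced programmatically. It is illustrative only: the study’s analysis was carried out in Microsoft Excel, and the column names and example responses here are hypothetical, not the study data.

```python
# Illustrative only: the study analyzed responses in Microsoft Excel.
# Hypothetical column names and example data, not the actual study data.
import statistics
from collections import Counter

# One record per usability-testing participant (hypothetical)
responses = [
    {"ease_of_use": 5, "would_recommend": "Extremely likely", "helps_work": "Yes"},
    {"ease_of_use": 4, "would_recommend": "Likely",           "helps_work": "Yes"},
    {"ease_of_use": 4, "would_recommend": "Extremely likely", "helps_work": "Yes"},
]

# Likert item (1 = very hard ... 5 = extremely easy): report the mean
ease_scores = [r["ease_of_use"] for r in responses]
print(f"Ease of use: mean = {statistics.mean(ease_scores):.2f} (n = {len(ease_scores)})")

# Categorical items: report counts and percentages
for item in ("would_recommend", "helps_work"):
    counts = Counter(r[item] for r in responses)
    for answer, n in counts.items():
        print(f"{item}: {answer} = {n} ({100 * n / len(responses):.0f}%)")
```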
Interviews were conducted with a subset of usability testing respondents (n = 8) to discuss their usability testing experience, review resulting updates to the tool, and better understand its relevance to their work. Participants self-identified by responding to a question at the end of the feedback questionnaire. Of the eight, three worked with Implementation Science networks in the provincial health delivery system, three worked with other groups in the provincial health system, one was a health policy researcher, and one worked in the field of Health Quality. Interviews were carried out over Zoom during a six-month period (December 2020 to June 2021) and lasted 30 to 45 minutes. All respondents were asked the same set of three questions during the interviews:
1. What do you think of the updates to the website, especially the overall design and function?
2. How might the tool help your work or the work of others on your team?
3. What improvements could we make to the tool and website to support this?
As with the questionnaire responses, key points from the interviews were compiled by two lead team members (LM and CR) and used to refine the tool and accompanying website.
Results
The phases of developing and testing the prototype incorporated viewpoints from people working in different levels of the provincial health system. Their feedback informed the ongoing adaptation of the prototype to better fit the local context and also contributed insight into how the tool could build users’ capacity in implementation and evaluation.
Phase 1
This phase covered the development of the initial prototype of the tool. By the end of the second design session, the prototype comprised a table that included the six Dimensions of Health Service Quality outlined in the matrix and Proctor et al.’s Taxonomy of Implementation Outcomes. Both were listed across the top of the table, while several key perspectives were listed down the side (Fig. 2a). The prototype was designed to show users that two of the six quality dimensions in the matrix overlap, in name only, with two of the outcomes from Proctor et al.’s recommended taxonomy: Acceptability and Appropriateness. This overlap between the two frameworks was deemed helpful because the importance of assessing Acceptability and Appropriateness is already well established in practice. However, Proctor et al.’s outcomes focus primarily on the perspectives of healthcare providers and the healthcare setting, while the Alberta Quality Matrix for Health focuses on the patient experience.
The iterative development process allowed the team to identify limitations and consider other tools and frameworks to adapt the prototype to better fit the local context (Table 3). For example, the design team felt it was important to consider several perspectives when deciding on relevant outcomes for an evaluation and suggested that sample questions would be helpful to include. A frequent comment during the design sessions was that ‘people don’t know what they don’t know.’ This started discussions around how this tool could prompt thinking and build the capacity of users.
While the table prototype demonstrated the integration of the quality dimensions in the matrix with implementation outcomes, further prototype development was put on hold. Coding a digital tool that reflected the design team’s original prototype was unworkable, given the team’s resource constraints. Accordingly, the team’s graphic designer created an online slide-box format of the prototype (Fig. 2b). The design team reviewed and approved this version to advance to usability testing. They also asked that a question be included in usability testing to see if participants preferred one version. An image similar to Fig. 2 was included to allow participants to compare the two layouts and select which one they preferred.
Phase 2
Usability testing
All usability testing participants (n = 21) completed the two questionnaires, including the tasks intended to confirm that the slide-box layout and accompanying website functioned as intended. When asked which version they preferred, thirteen usability testing participants chose the slide boxes and eight chose the table format. All users thought the tool could help with their work, and 90% (n = 19) were likely (n = 6) or extremely likely (n = 13) to recommend the tool to others. Participants rated the tool as easy to understand (average of 4.33 on a scale from 1 = very hard to 5 = extremely easy). Most respondents (81%, n = 17) agreed that the included perspectives (patients, healthcare providers, support teams, and organizations) were sufficiently thorough. Three respondents thought key perspectives were missing, including those of funders and care partners. The remaining respondent thought too many perspectives were already included.
Overall, participants’ responses were positive and helpful in informing updates to the prototype (Fig. 3). Table 4 summarizes the suggestions received and the changes made as a result. The tool's layout was praised for being intuitive and easy to follow. Figure 4 highlights what respondents felt was done well and should not be changed. A majority of participants (62%) said that working through the tool prompted them to consider including implementation outcomes in future evaluations.
Final testing
Eight interviews were completed with a subset of the respondents who had participated in usability testing to review updates to the tool and describe the relevance of the tool to their work. Interviews were conducted over six months, with improvements made iteratively. Improvements included aesthetic updates to the tool and accompanying website, simplifying the instructions for the tool, and adding links to validated measures directly within the tool. The overall response from participants was positive:
“[It’s] easy to navigate through. Color coding, clicking on the boxes was easy to understand [and] makes site clean - text only when you want to see it. Found it approachable.” (Academic #1)
“[We’ve] struggled with separating out different terms, but you’ve done it nicely. Given [the] challenges, you’ve done a great job on this.” (Health system staff #1)
“Overall, thought tool was great, usability navigation, visuals, examples, easy to use.” (Academic #2)
Some participants described how they were already using the tool to build evaluation capacity in their teams and to inform evaluations of innovations in the health system. They also shared other ways they were using the tool in their work, including collaboratively designing evaluations, building Quality Improvement and Implementation Science capacities, and supporting patients:
“Working with [an] undergraduate student [who found the] matrix confusing. Something like this is consolidated, provides examples, and is easy as a teaching tool. Helps to get trainees on board with the process.” (Academic #1)
“Working with an evaluation team. Everyone has [an evaluation] background. [They’re] familiar with RE-AIM, but not with implementation outcomes… Before [the] EQUIP [tool] was done, it was hard to get it across. In a practical applied world, the tool applies better. The people we consult with don’t need to know the theoretical principles.” (Health system staff #2)
“Important to show resources like this for patients… They can see themselves instead of high-level abstract measures only healthcare professionals [are] interested in.” (Academic #1)
The final tool: the EQUIP (Evaluating QUality and ImPlementation) Tool (theequiptool.com)
The final tool – the Evaluating QUality and ImPlementation (EQUIP) Tool – is housed on a website (theequiptool.com) that includes information about what the tool is, who the intended users are, and why the tool is important. There are interactive visuals that provide definitions of the dimensions of health service quality, the recommended implementation outcomes, and the different perspectives included in the tool. A resource page includes a link to a printable version of the tool, as well as an example evaluation matrix for an implementation initiative taking place in the provincial health system.
The tool itself is interactive, allowing users to explore the implementation outcomes and quality dimensions from different perspectives and to select questions and indicators relevant to their work. After completing their selections, users can save or print a record of them. Each outcome is defined and includes sample questions from a quality perspective, an Implementation Science perspective, or both (for domains represented in both frameworks, such as Acceptability and Appropriateness). Examples of indicators and validated measures are included in the tool. Sample indicators and links to measures were pulled from several different evaluation frameworks used within the provincial health delivery system, as well as systematic reviews and scoping reviews that consider Proctor et al.’s list of implementation outcomes [22,23,24]. The tool is meant to stimulate thinking and discussion; therefore, any of the questions or indicators may need to be further developed by the user to suit their needs.
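As an illustration of the structure just described, the following sketch models an outcome-by-perspective entry (definition, sample questions, sample indicators, and links to validated measures) as a simple data structure. It is a hypothetical sketch for explanatory purposes, not the implementation behind theequiptool.com; the class, field names, and example content are assumptions.

```python
# Hypothetical data model for the kind of content the EQUIP tool presents.
# Not the actual implementation of theequiptool.com; example text is invented.
from dataclasses import dataclass, field

@dataclass
class OutcomeEntry:
    outcome: str                       # e.g., "Acceptability"
    perspective: str                   # e.g., "Patients", "Healthcare providers"
    definition: str
    sample_questions: list[str] = field(default_factory=list)
    sample_indicators: list[str] = field(default_factory=list)
    measure_links: list[str] = field(default_factory=list)  # links to validated measures

# An evaluation plan can then be a user's selection of entries to save or print
example_entry = OutcomeEntry(
    outcome="Acceptability",
    perspective="Healthcare providers",
    definition="Providers' perception that the innovation is agreeable or satisfactory.",
    sample_questions=["Do providers find the new process acceptable in daily practice?"],
    sample_indicators=["Proportion of providers rating the innovation as acceptable"],
)
selected = [example_entry]  # users review, select, and export the entries relevant to them
print(f"Selected {len(selected)} outcome/perspective entries for the evaluation plan")
```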
Discussion
While Implementation Scientists have published extensively on evidence-based implementation outcomes [1, 6, 23,24,25], there is a need to improve how these outcomes are integrated into routine health system improvement evaluations [4]. Through rapid, iterative design sessions and usability testing, the authors of this paper developed a web-based evaluation tool called EQUIP (Evaluating QUality and ImPlementation, available at theequiptool.com) that incorporates quality and implementation outcomes in a user-friendly way.
Ongoing engagement with the design team and potential end-users was an important part of ensuring the tool was relevant and responsive to users’ needs. The tool was designed and developed by and for researchers and practitioners working in the provincial healthcare delivery system, integrating team members’ and participants’ experiences, knowledge, and needs – the cornerstones of Design Thinking and the Successive Approximation Model (from the field of Instructional Design). In usability testing, we found that some participants were already using it to build capacity within their teams (e.g., to better understand quality dimensions and implementation outcomes), to stimulate thinking (e.g., about the different perspectives involved), and to develop evaluations. The EQUIP tool was also included as a resource in the updated Alberta Health Services Innovation Pipeline Primer 2.0, which outlines what evidence is needed to move an innovation into routine healthcare practice in Alberta [26].
An important feature of the EQUIP tool is that it highlights the different perspectives – those of patients, healthcare providers, support teams, and organizations—that should be considered when evaluating implementation efforts. While improving patient outcomes is the ultimate goal of most Quality Improvement and implementation efforts, it is also important to consider the impact of a given implementation on the people and systems involved. In fact, all of these perspectives are needed to understand why implementation efforts succeed or fail and to determine what constitutes success in the first place [6, 27, 28]. An early version of the EQUIP tool included multiple perspectives (Fig. 2a), each of which included all possible quality dimensions and implementation outcomes. However, feedback from the design team and usability testing respondents helped streamline the most important perspectives to include and which dimensions and outcomes were most relevant for each one.
The perspectives included in the EQUIP tool take into account the various roles involved in implementation and the broader context. They are based on the Interactive Systems Framework and Alberta Health Services’ use of the Quadruple Aim (now Quintuple Aim) and were refined by feedback from users [27, 28]. The Interactive Systems Framework centers on the people, support, and organizations that are needed to carry out the activities required for successful implementation [27]. The Quadruple Aim highlights not only the patient experience but also provider satisfaction and how providers are supported in delivering care [28]. The intersection of these two frameworks highlights the considerable influence that healthcare providers hold in the system [29]. They are largely responsible for delivering products and services to patients while also being accountable to the systems within which they work. In fact, all of the implementation outcomes and several quality dimensions are considered from the provider perspective in the EQUIP tool. Without considering the whole context within which an intervention is being implemented, important evaluation questions could be missed.
The ongoing discussions with the design team provided an opportunity to delve deeper into different frameworks and consider other outcomes and definitions in order to better fit the local context. These included the British Columbia Health Quality Matrix and the RE-AIM framework (RE-AIM stands for Reach, Effectiveness, Adoption, Implementation, and Maintenance) [11, 30]. A member of the design team highlighted that the British Columbia Health Quality Matrix includes the dimension ‘Equity’, which had not yet been incorporated into the Alberta Quality Matrix for Health but is increasingly important to consider [11]. The British Columbia Health Quality Matrix also defines the quality dimensions and indicators in a straightforward way, which the team felt was important. The RE-AIM Framework was identified by the design team as being widely used and well understood in the Alberta health system; therefore, some of the language, definitions, and indicators for the implementation outcomes of the EQUIP tool were adapted from the RE-AIM framework (e.g., Reach instead of Penetration) [30]. As Reilly and colleagues point out, there is a great deal of consistency between RE-AIM and Proctor et al.’s implementation outcomes [31]. However, until recently RE-AIM did not include Acceptability, Appropriateness, and Feasibility; Reilly and colleagues include these as antecedents of several implementation outcomes in their proposed expansion of the RE-AIM indicators [31].
As the available frameworks define the various outcomes differently, ongoing discussions with users helped to find the right fit for the range of relevant perspectives. For example, Appropriateness and Acceptability are defined differently in the Alberta Quality Matrix for Health and in Proctor et al.’s implementation outcomes, but the quality definition was felt to suit the patient perspective, while the implementation definition suited providers. There was also some discussion about the difference between Appropriateness and Acceptability, as the two are conceptually similar, which can cause confusion. However, referring to the examples in Proctor’s taxonomy of outcomes provided clarity [6]. The sample questions and indicators in the tool are designed to help users understand the different dimensions and outcomes, and the links to validated measures give users greater confidence in how each outcome can be measured.
The ability to review and select questions, indicators, and validated measures relevant to their project was identified by participants as being particularly helpful when discussing and designing evaluations. The EQUIP tool encourages users to consider important quality and implementation questions from each key perspective. While the level of analysis for implementation outcomes has not been well studied [6, 31], the EQUIP tool provides an opportunity to explore if a given level or perspective is appropriate for a particular implementation outcome and project. Future research in this area can inform updates to the EQUIP tool.
Strengths and limitations
The design process and usability testing were strengths that helped ensure the end product would be relevant to end users. Although several frameworks were discussed, there may be others that were not considered but could have added value; however, the frameworks included were those familiar to end users in Alberta. Depending on the scope of a given evaluation, certain perspectives may be missing and should be considered if relevant to that evaluation. Patients were not consulted when selecting the outcome definitions or indicators for the patient perspective; this appears to be a limitation of most frameworks. However, the Alberta Quality Matrix for Health is currently being refreshed with the involvement of patient and family advisors [personal communication]. Future updates of the EQUIP tool will consider any important changes to outcomes or indicators for patients (by patients).
The EQUIP tool was designed to help users incorporate implementation outcomes into quality evaluations. It can help secure buy-in, build capacity, prompt consideration of different perspectives, provide sample questions and indicators, and connect users to validated measures. The tool was not developed to identify interest holders, assess readiness, build entire evaluation plans, or guide implementation planning.
Conclusions
Our work addresses the call for greater integration of Quality Improvement and Implementation Science by combining approaches from each of these fields to strengthen evaluation processes in the health system. The EQUIP tool, designed and refined with end users, is an example of integrating Quality Improvement and Implementation Science to support the spread, scale, and sustainment of health innovations. It can help researchers and practitioners better design and develop appropriate evaluation plans to support understanding of successful implementation. The EQUIP tool website is receiving steady traffic and is being accessed across North America and beyond. As of January 2025, it had been visited 3,977 times by 2,154 unique visitors, 86% of whom were from North America (59% from Alberta and 27% from elsewhere in North America), suggesting generalizability of the EQUIP Tool. A future evaluation will explore the tool’s generalizability further. We will continue to improve it based on ongoing user feedback.
Data availability
The data generated and analysed during this study are not publicly available due to participant privacy but are available from the corresponding author on reasonable request.
Abbreviations
AbSPORU: Alberta Strategy for Patient-Oriented Research SUPPORT Unit
EQUIP: Evaluating QUality and ImPlementation
RE-AIM: Reach, Effectiveness, Adoption, Implementation, Maintenance
References
1. Glasgow RE, Harden SM, Gaglio B, et al. RE-AIM planning and evaluation framework: adapting to new science and practice with a 20-year review. Front Public Health. 2019;7:64. https://doi.org/10.3389/fpubh.2019.00064.
2. Tabin M, Diacquenod C, Petitpierre G. Evaluating implementation outcomes of a measure of social vulnerability in adults with intellectual disabilities. Res Dev Disabil. 2021;119:104111. https://doi.org/10.1016/j.ridd.2021.104111.
3. Yiu V, Belanger F, Todd K. Alberta’s strategic clinical networks: enabling health system innovation and improvement. Can Med Assoc J. 2019;191(Suppl):S1–3. https://doi.org/10.1503/cmaj.191232.
4. Rapport F, Smith J, Hutchinson K, et al. Too much theory and not enough practice? The challenge of implementation science application in healthcare practice. J Eval Clin Pract. 2021. https://doi.org/10.1111/jep.13600.
5. Moullin JC, Dickson KS, Stadnick NA, et al. Ten recommendations for using implementation frameworks in research and practice. Implement Sci Commun. 2020;1:42. https://doi.org/10.1186/s43058-020-00023-7.
6. Proctor E, Silmere H, Raghavan R, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76. https://doi.org/10.1007/s10488-010-0319-7.
7. Quinn Patton M. Essentials of utilization-focused evaluation. 1st ed. SAGE Publications, Inc; 2011. https://us.sagepub.com/en-us/nam/essentials-of-utilization-focused-evaluation/book233973. Accessed 15 Nov 2021.
8. Endalamaw A, Khatri RB, Mengistu TS, et al. A scoping review of continuous quality improvement in healthcare system: conceptualization, models and tools, barriers and facilitators, and impact. BMC Health Serv Res. 2024;24(1):487. https://doi.org/10.1186/s12913-024-10828-0.
9. Excellent Care for All Act, 2010, S.O. 2010, c. 14. Ontario.ca. https://www.ontario.ca/laws/statute/10e14. Accessed 4 Mar 2025.
10. Health Quality Council of Alberta. Alberta quality matrix for health user guide. 2005. https://hqca.ca/about/how-we-work-health-quality-council-of-alberta/the-alberta-quality-matrix-for-health-1/. Accessed 29 Sep 2021.
11. BC Patient Safety and Quality Council. BC health quality matrix. 2020. https://bcpsqc.ca/wp-content/uploads/2020/02/BC-Health-Quality-Matrix-March-2020.pdf. Accessed 24 Aug 2021.
12. Wensing M, Grol R. Knowledge translation in health: how implementation science could contribute more. BMC Med. 2019;17(1):88. https://doi.org/10.1186/s12916-019-1322-9.
13. Glasgow RE, Eckstein ET, ElZarrad MK. Implementation science perspectives and opportunities for HIV/AIDS research: integrating science, practice, and policy. J Acquir Immune Defic Syndr. 2013;63:S26. https://doi.org/10.1097/QAI.0b013e3182920286.
14. Albers B, Shlonsky A, Mildon R, editors. Implementation science 3.0. Springer International Publishing; 2020. https://doi.org/10.1007/978-3-030-03874-8.
15. Westerlund A, Nilsen P, Sundberg L. Implementation of implementation science knowledge: the research-practice gap paradox. Worldviews Evid Based Nurs. 2019;16(5):332–4. https://doi.org/10.1111/wvn.12403.
16. Wolfe A. Institute of Medicine report: crossing the quality chasm: a new health care system for the 21st century. Policy Polit Nurs Pract. 2001;2(3):233–5. https://doi.org/10.1177/152715440100200312.
17. Alberta Health Services. About AHS. https://www.albertahealthservices.ca/about/about.aspx. Accessed 19 May 2022.
18. Duncan E, O’Cathain A, Rousseau N, et al. Guidance for reporting intervention development studies in health research (GUIDED): an evidence-based consensus study. BMJ Open. 2020;10(4):e033516. https://doi.org/10.1136/bmjopen-2019-033516.
19. Reiser RA, Dempsey JV. Trends and issues in instructional design and technology. 4th ed. Pearson; 2018.
20. Doorley S, Holcomb S, Klebahn P, Segovia K, Utley J. Design thinking bootleg. 2018. https://static1.squarespace.com/static/57c6b79629687fde090a0fdd/t/5b19b2f2aa4a99e99b26b6bb/1528410876119/dschool_bootleg_deck_2018_final_sm+%282%29.pdf. Accessed 25 Aug 2021.
21. Design thinking frequently asked questions (FAQ). IDEO | Design thinking. 2021. https://designthinking.ideo.com/faq/whats-the-difference-between-human-centered-design-and-design-thinking. Accessed 12 Nov 2021.
22. Mettert K, Lewis C, Dorsey C, Halko H, Weiner B. Measuring implementation outcomes: an updated systematic review of measures’ psychometric properties. Implement Res Pract. 2020;1:2633489520936644. https://doi.org/10.1177/2633489520936644.
23. Khadjesari Z, Boufkhed S, Vitoratou S, et al. Implementation outcome instruments for use in physical healthcare settings: a systematic review. Implement Sci. 2020;15(1):66. https://doi.org/10.1186/s13012-020-01027-6.
24. Willmeroth T, Wesselborg B, Kuske S. Implementation outcomes and indicators as a new challenge in health services research: a systematic scoping review. Inq J Health Care Organ Provis Financ. 2019;56:0046958019861257. https://doi.org/10.1177/0046958019861257.
25. Mettert K, Lewis C, Dorsey C, Halko H, Weiner B. Measuring implementation outcomes: an updated systematic review of measures’ psychometric properties. Implement Res Pract. 2020;1:1–29. https://doi.org/10.1177/2633489520936644.
26. Waye A, Hughes B, Mrklas K, Fraser N. Innovation pipeline: intent to scale for impact. 2020. https://www.albertahealthservices.ca/assets/about/scn/ahs-scn-so-innov-pipeline-primer.pdf. Accessed 24 Aug 2021.
27. Wandersman A, Duffy J, Flaspohler P, et al. Bridging the gap between prevention research and practice: the interactive systems framework for dissemination and implementation. Am J Community Psychol. 2008;41(3–4):171–81. https://doi.org/10.1007/s10464-008-9174-z.
28. Alberta Health Services. Enhancing care in the community. Alberta Health Services; 2021. https://www.albertahealthservices.ca/about/Page13457.aspx. Accessed 25 Aug 2021.
29. Pereno A, Eriksson D. A multi-stakeholder perspective on sustainable healthcare: from 2030 onwards. Futures. 2020;122:102605. https://doi.org/10.1016/j.futures.2020.102605.
30. Glasgow RE, Harden SM, Gaglio B, et al. RE-AIM planning and evaluation framework: adapting to new science and practice with a 20-year review. Front Public Health. 2019;7:64. https://doi.org/10.3389/fpubh.2019.00064.
31. Reilly KL, Kennedy S, Porter G, Estabrooks P. Comparing, contrasting, and integrating dissemination and implementation outcomes included in the RE-AIM and implementation outcomes frameworks. Front Public Health. 2020;8. https://doi.org/10.3389/fpubh.2020.00430.
Acknowledgements
Thank you to Anna Noga and Morgan Potter for contributing to early design sessions, and to Cody Alba, Stephanie Brooks, and Denise Thomson for reviewing and revising manuscript drafts. This work was led by AbSPORU.
Funding
This work was led by the Alberta Strategy for Patient-Oriented Research SUPPORT Unit (AbSPORU), which is co-funded by the Strategy for Patient-Oriented Research program of the Canadian Institute for Health Research (CIHR), Alberta Innovates and the University Hospital Foundation. AbSPORU also acknowledges its implementation partners: the University of Alberta, the University of Calgary, the University of Lethbridge, Alberta Health Services, Athabasca University, the Women and Children's Health Research Institute, the Alberta Children's Hospital Research Institute and Alberta Health.
Author information
Contributions
LM and GZ conceptualized the project. LM and CR led the collaboration with the tool development team: LM designed the development process, and CR led the design process. GZ supervised the overall project. LM, CR, EF, NP, EK, and GZ designed and developed the EQUIP tool. LM led data collection and analysis. LM and GZ drafted the manuscript. CR, EF, NP, and EK reviewed and provided input on numerous versions of the manuscript. All authors read and approved the final manuscript.
Ethics declarations
Ethics approval and consent to participate
Ethical approval was obtained from the University of Alberta Research Ethics Board 2 (Pro00130141). All participants provided informed consent.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
Cite this article
McAlpine, L., Ramjohn, C., Faught, E.L. et al. Development and testing of an interactive evaluation tool: the Evaluating QUality and ImPlementation (EQUIP) Tool. Implement Sci Commun 6, 32 (2025). https://doi.org/10.1186/s43058-025-00715-y