Integration and evaluation of implementation strategies to improve guideline-concordant bladder cancer surveillance: a prospective observational study

Abstract

Background

Despite guideline recommendations, our prior work revealed more than half of low-risk bladder cancer patients within the Department of Veterans Affairs (VA) undergo too many surveillance procedures and about a third of high-risk patients do not undergo enough procedures. Thus, we developed and integrated implementation strategies to improve risk-aligned bladder cancer surveillance for the VA.

Methods

Prior work used Implementation Mapping to develop nine implementation strategies: change record systems, educational meetings, champions, tailoring, preparing patients to be active participants, external facilitation, remind clinicians, audit & feedback, and a blueprint. We integrated these strategies as improvement approaches across four VA urology clinics. Primary implementation outcomes were qualitatively measured via coding of semi-structured interviews with clinicians and co-occurrence of codes. Implementation outcomes included: appropriateness, acceptability, and feasibility. Exploratory quantitative outcomes included clinicians’ recommendations for guideline-concordant bladder cancer surveillance intervals and sustainability.

Results

Eleven urologists were interviewed. Co-occurrence analysis of codes across strategies indicated that urologists most commonly reported on the acceptability and appropriateness of changing the record system, preparing patients to be active participants (“surveillance grid”), reminders (i.e., cheat sheet), and educational sessions. We confirmed feasibility of all implementation strategies. Urologists indicated that changing the record system had a high impact, reduced documentation time, and guided resident physicians. Preparing patients to be active participants using the “surveillance grid” was seen as an effective but time-consuming tool. Educational sessions were seen as critical to support implementation. In quantitative analyses, clinicians recommended guideline-concordant surveillance about 65% of the time at baseline for low-risk patients, and this improved to 70% during evaluation. Across all risk levels, the largest improvement was observed at site 2 while site 3 did not improve. All sites sustained use of the changed record system, while sustainability of other strategies was variable.

Conclusions

Based on summative interpretation of results, the most appropriate, acceptable, and feasible strategies include changing record systems via a template and educational meetings focused on guideline-concordant surveillance. Future work should assess the impact of the improvement approaches on clinical care processes, particularly on reducing overuse of surveillance procedures among low-risk patients.

Trial registration

The implementation strategies were not considered a healthcare intervention on human participants by the governing funding agency and IRB. Rather, they were seen as quality improvement interventions. Thus, this study did not meet criteria for a clinical trial and was not registered as such.

Introduction

Bladder cancer is one of the most prevalent cancers in the Department of Veterans Affairs (VA) [1]. Most patients present with non-muscle invasive “early stage” cancer. After resection, these patients undergo regular surveillance cystoscopy procedures. According to current guidelines [2], the frequency of these surveillance cystoscopy procedures should be aligned with each patient’s risk for recurrence and progression. Risk is categorized as low, intermediate, or high, and is based on bladder cancer history and pathologic details [2]. Our prior work indicated that, despite guideline recommendations, more than half of low-risk patients undergo too many procedures and about a third of high-risk patients do not undergo enough procedures [3, 4].

Thus, we set out to develop and integrate implementation strategies to improve risk-aligned bladder cancer surveillance within the VA. Strategies were selected from the 73 strategies clearly defined within the Expert Recommendations for Implementing Change (ERIC) compilation [5]. For this selection, we used a rigorous Implementation Mapping process. In brief, we developed objectives to implement risk-aligned bladder cancer surveillance based on qualitative data organized by Tailored Implementation for Chronic Diseases (TICD) framework determinants [6]. We then used data visualization techniques to select strategies with potentially high impact on risk-aligned surveillance (see details in our prior separate publication) [6]. The selected implementation strategies were then combined into four multifaceted improvement approaches that were subsequently integrated at four VA sites: external facilitation, educational meetings, reminders, and preparing patients to be active participants [6].

Here, we present the process of integrating the implementation strategies in four VA urology clinics, with the aim of assessing the associated implementation and process outcomes, including acceptability, appropriateness, feasibility, urologist satisfaction, fidelity, sustainability, and adoption of risk-aligned bladder cancer surveillance [7].

Methods

Overview

In our prior work, nine strategies were systematically developed to improve risk-aligned bladder cancer surveillance guided by the Tailored Implementation for Chronic Diseases framework [6]. The goal was for four VA urology clinics to integrate these nine strategies as specified by the ERIC compilation. The four clinics were identified based on prior quantitative data indicating room for improvement, defined as sites which performed surveillance not aligned with individual patients’ bladder cancer risk [6, 8]. Risk-aligned bladder cancer surveillance thus was the clinical intervention targeted by the implementation strategies. Work conducted at the four sites occurred in three phases: pre-implementation (4–6 months), integration of the strategies (3–5 months), and evaluation (6 months). Integration commenced when we started the first implementation strategy at each site, which was the first external facilitation meeting. Evaluation commenced when sites indicated they had incorporated all the strategies they feasibly could integrate during the 3-to-5-month integration time frame. Sites were actively supported via external facilitation during both the integration and evaluation periods. Sustainment of strategies was assessed 6 months after the end of the evaluation period.

External facilitation was used to support integration, adaptation, and sustainment of all strategies at all sites. Facilitation consisted of at least monthly meetings between the central research team and the local site investigator and their team. Facilitation activities were based on a Blueprint (see Additional file 1 for Blueprint). During each meeting, sites provided an update: successes and challenges were discussed, and then we worked together to address strategy-specific challenges. The expectation for sites was to participate in at least 5 of 16 pre-specified facilitation activities defined by the VA Quality Enhancement Research Initiative [9]. The Consolidated Criteria for Reporting Qualitative Research (COREQ) checklist [10] and the Standards for Reporting Implementation Studies (STaRI) [11] were used for reporting (see Additional files 2 and 3).

Implementation strategies and implementation processes

We integrated the following nine implementation strategies (labelled according to the ERIC compilation): facilitation, audit and feedback, tailor strategies, conduct educational meetings, identify and prepare champions, remind clinicians, change record systems, prepare patients to be active participants, and an implementation blueprint. Strategies were combined into four multifaceted improvement approaches: external facilitation (including facilitation, audit and provide feedback, and tailor strategies), educational meetings (including conduct of educational meetings, and identification and preparation of a champion), reminders (including changing record systems and reminding clinicians with cheat sheets or posters), and prepare patients to be active participants (the only patient-facing improvement approach). Note that in the remainder of this manuscript the term “reminders” refers to the multifaceted improvement approach while “remind clinicians” is a component of that approach, i.e., cheat sheets or posters. Preparing patients to be active participants was operationalized as providing a bladder cancer risk-specific hand-out to patients with a “surveillance grid” that outlined where they were in the bladder cancer surveillance journey (see Additional file 1 for examples of the surveillance grid). The implementation strategies and improvement approaches were specified as recommended by Proctor, including actors, actions, targets of actions, temporality, dose, implementation outcomes likely to be affected, and theoretical justification [6, 12]. They were documented in the implementation blueprint, which was distributed to all participating sites to communicate the components of the improvement approaches. The blueprint also included sections to be filled out by the local site investigator or research coordinator to track progress on strategy integration within each site [6]. For details on strategies see Blueprint in Additional file 1.

Initially, we had planned to implement these strategies at each site over the course of one month. However, while working with the first site we realized that more time was needed, and implementation of the strategies was thus expanded over a 3-month period. All sites initially received facilitation based on the blueprint, which outlined facilitation activities as well as the other seven strategies. The blueprint was shared with the local team at each site at the beginning of the implementation phase and sites were encouraged to implement all strategies. We then supported sites during the integration of the strategies via regular facilitation meetings. During these meetings, the local team was encouraged to follow the steps outlined in the blueprint [6], and the central research team provided advice and support. Sites were free to determine the sequence of integration of the additional seven strategies. Later, topics during the external facilitation meetings transitioned to providing ongoing support and facilitating adaptation.

To characterize the process of integrating the implementation strategies, we categorized strategies into whether they were integrated with a similar timeline at all sites, with variation in the timeline across sites, or not at all sites. A strategy was categorized as successfully integrated when sites reported that they completed the minimum criteria outlined in the blueprint. We cross-checked site-reported data against the central study team’s activity tracking. The timeline was constructed using the notes documented during each facilitation meeting: we determined when each strategy was first discussed and first used at each site. For each strategy, we then plotted the timeline by site. We normalized each timeline to the date of first use to display start-up time needed to launch the strategy.
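The timeline normalization described above can be sketched as follows. This is a minimal illustration only: the dates, site labels, and resulting day counts are hypothetical, not study data.

```python
from datetime import date

# Hypothetical facilitation-note dates for one strategy at two sites:
# when the strategy was first discussed and when it was first used.
first_discussed = {"Site 1": date(2021, 3, 1), "Site 2": date(2021, 3, 15)}
first_used = {"Site 1": date(2021, 9, 10), "Site 2": date(2021, 6, 1)}

# Normalize each timeline to the date of first use: the start-up time
# is the number of days between first discussion and first use.
startup_days = {
    site: (first_used[site] - first_discussed[site]).days
    for site in first_discussed
}

print(startup_days)  # e.g. {"Site 1": 193, "Site 2": 78}
```

Plotting these normalized offsets by site, rather than raw calendar dates, makes start-up times directly comparable across sites that began integration at different times.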

Implementation outcome measurement—overview

The goal was to measure implementation outcomes as specified by Proctor et al. [7]. At the design stage, the investigators assessed the most feasible way to measure each outcome. Given the anticipated low number of evaluable clinicians, many outcomes were measured qualitatively. However, quantitative data could be obtained for some outcomes, such as patient satisfaction, fidelity, and the process outcomes.

Primary qualitative implementation outcomes

Outcomes

The following implementation outcomes were measured qualitatively: appropriateness, acceptability, feasibility, and urologist satisfaction. We also collected qualitative data on urologists’ suggestions for improvement and on time spent integrating the strategies. See Appendix Table (in Additional file 1) for further details on outcome measures, definitions, and data sources [7].

Participants

We aimed to recruit three urologists from each site, for a total of twelve participants. All those approached agreed to participate, although one never scheduled the interview.

Interview procedures

A semi-structured interview guide, informed by Proctor [7], Powell [5], and Weiner [13], was developed by three co-authors (FRS, AAO, LZ) and refined collaboratively with additional co-authors (KB, EK, SZ; see Additional file 1 for interview guide). Three authors (LZ, AAO, FRS) created a priori codes based on key outcomes of interest, including acceptability, appropriateness, usability, feasibility, time spent, and suggestions for improvement. The goal of the interviews was to ascertain which strategies were most and least impactful. We aimed to interview the majority of clinicians who were closely involved in the project and chose to refrain from collecting demographic information as doing so risked participant re-identification.

Interviewees were recruited by local site study coordinators using purposive sampling. A team of qualitative researchers from the Salt Lake City VA Medical Center, external to the core study team, conducted interviews and initial coding. Prior to conducting interviews, the interviewers (KB: female, MS, Salt Lake City VA Medical Center, research analyst, not related to the central research team; EK: MS, University of Utah, research analyst, not related to the central research team) were provided with study details, the interview guide, the preliminary codebook, and samples of implementation materials (e.g., reminders, blueprint, electronic health record (EHR) template). Interview participants were provided an information sheet and gave verbal consent to be interviewed and recorded prior to the interview. In addition, the central research team kept notes after each facilitation meeting, documenting the activities the meeting focused on as well as any information provided by each site’s team during these meetings.

The median time of interviews was 21 min (range 16 min – 52 min). Repeat interviews were not conducted because they would not have yielded additional information about implementation outcomes related to the strategies that were already integrated at the time of the interview. Situational factors were not collected. All interviews were one-on-one, audio only, and recorded. Recordings were transcribed verbatim.

Qualitative analyses

Anonymized transcripts were transferred to Atlas.ti v23 and analyzed iteratively [14]. We deductively coded segments along the two study dimensions: implementation strategy and outcome (see Additional file 1 for codebook). In the first cycle of coding, we used a priori codes (KB, EK) to categorize segments relating to implementation strategies and outcomes [15, 16]. In the second cycle of coding, we sorted segments by code to identify patterns. Commonalities, similarities, and recurring patterns were grouped and summarized (LJ) as themes for each strategy [16,17,18,19]. Coding was reviewed collectively, discussed, and agreed upon (LZ, LJ, FRS) throughout the analyses. Following finalization of the coding process, a co-occurrence analysis was conducted to provide a frequency count reflecting the number of times participants discussed each implementation strategy and outcome.
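The co-occurrence counting step can be illustrated with a minimal sketch. The coded segments and code labels below are hypothetical stand-ins, not the study's actual codebook or data.

```python
from collections import Counter
from itertools import product

# Hypothetical coded transcript segments: each segment carries the strategy
# codes and outcome codes applied to it during the two coding cycles.
segments = [
    {"strategies": {"change record systems"},
     "outcomes": {"acceptability", "appropriateness"}},
    {"strategies": {"remind clinicians"},
     "outcomes": {"acceptability"}},
    {"strategies": {"change record systems"},
     "outcomes": {"feasibility"}},
]

# Co-occurrence matrix: how often a strategy code and an outcome code
# were applied to the same segment (cf. the frequency counts in Fig. 2).
cooccurrence = Counter()
for seg in segments:
    for pair in product(sorted(seg["strategies"]), sorted(seg["outcomes"])):
        cooccurrence[pair] += 1

print(cooccurrence[("change record systems", "acceptability")])  # 1
```

Qualitative software such as Atlas.ti produces an equivalent strategy-by-outcome table directly; the sketch only shows what the cell counts represent.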

Quantitative implementation outcomes

Outcomes

Quantitatively measured outcomes included adoption of risk-aligned surveillance, fidelity, patient experience and patient acceptability, and sustainability.

Participants

Participants included site investigators, champions, and patients undergoing bladder cancer surveillance at each site.

Data collection procedures

Quantitative data were collected via (1) ongoing tracking of implementation activities by the central research team in a facilitation tracking sheet, (2) site report to the central research team based on the final submitted blueprint, (3) clinician self-report, and (4) chart abstraction. For the chart abstraction, we used national VA Corporate Data Warehouse data to identify patients who recently had surveillance cystoscopy procedures at each site. Trained research assistants then reviewed the electronic charts to abstract each patient’s bladder cancer history, whether bladder cancer risk was documented in the chart, whether a site-specific template was used for documentation, and whether the surveillance recommendation documented in the note was in line with guideline recommendations (for further details see our prior publication) [20].

Patients who presented for surveillance cystoscopy procedures with a history of non-muscle invasive bladder cancer were asked to fill out a one-page pen and paper anonymous survey focused on their experience and on assessing how acceptable the presentation of bladder cancer-related information was during their surveillance visit. This survey was modified based on a prior published survey (see Additional file 1 for survey) [21].

Quantitative analyses

We used descriptive statistics for quantitative analyses. For the patient survey, we assessed the overall patient experience based on a single-item response. We calculated an acceptability score based on the four acceptability questions. We converted the mean of the answers to the four questions to a 1- to 7-point acceptability summary scale. We categorized a scale score of 6 or higher as indication for acceptability from the patient’s perspective.
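The acceptability scoring described above reduces to a simple mean-and-threshold rule. A minimal sketch, with hypothetical survey responses:

```python
def acceptability_score(responses):
    """Mean of the four acceptability items, each answered on a 1-7 scale."""
    assert len(responses) == 4, "expects exactly four acceptability items"
    return sum(responses) / len(responses)

def is_acceptable(responses, threshold=6.0):
    """A summary score of 6 or higher indicates acceptability."""
    return acceptability_score(responses) >= threshold

# Hypothetical patient responses to the four acceptability questions.
print(acceptability_score([7, 6, 6, 5]))  # 6.0
print(is_acceptable([7, 6, 6, 5]))        # True
print(is_acceptable([5, 5, 5, 5]))        # False
```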

Exploratory quantitative process outcomes

We measured adoption of risk-aligned bladder cancer surveillance for each bladder cancer surveillance encounter in two ways: (1) whether the clinician accurately assessed bladder cancer risk and (2) whether the clinician recommended a guideline-concordant surveillance interval. Accurate assessment of bladder cancer risk was defined as an encounter note that documented a bladder cancer risk that was in line with the gold standard based on abstraction of pathologic details and prior bladder cancer history [20]. A guideline-concordant surveillance interval was documentation of a recommended follow-up interval within the encounter note that was in line with risk-specific guideline recommendations [20]. In exploratory analyses, we estimated the proportion of encounters with accurate risk assessment and with a guideline-concordant surveillance interval recommendation by study phase (pre-implementation, integration, evaluation). We calculated these proportions overall and stratified by cancer risk status and site. The study was approved by the VA Central Institutional Review Board (CIRB) (No.19–01).
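The proportion estimates described above can be sketched as follows. The encounter records and field names are illustrative assumptions, not the study's actual data dictionary.

```python
from collections import defaultdict

# Hypothetical abstracted encounter records: each notes the study phase,
# risk group, whether risk was accurately documented, and whether the
# recommended surveillance interval was guideline-concordant.
encounters = [
    {"phase": "pre-implementation", "risk": "low",
     "accurate_risk": True, "concordant_interval": False},
    {"phase": "evaluation", "risk": "low",
     "accurate_risk": True, "concordant_interval": True},
    {"phase": "evaluation", "risk": "high",
     "accurate_risk": False, "concordant_interval": True},
]

def proportion(records, outcome):
    """Share of encounters meeting a binary outcome, e.g. 'accurate_risk'."""
    return sum(r[outcome] for r in records) / len(records)

# Stratify by study phase, as in the tabulated results.
by_phase = defaultdict(list)
for enc in encounters:
    by_phase[enc["phase"]].append(enc)

for phase, recs in sorted(by_phase.items()):
    print(phase, proportion(recs, "accurate_risk"))
```

The same `proportion` helper applies unchanged when stratifying by cancer risk status or site instead of phase.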

Results

We supported the integration of nine strategies as part of four improvement approaches across four urology clinic sites within the VA. Sites were located in the eastern, southeastern, and midwestern United States (referred to as Sites 1 through 4). Qualitative results are based on interview data from eleven urology clinicians (three female, eight male). Patient survey data were collected from 221 patients. Quantitative process outcome data were abstracted from encounters for 168 low-risk, 245 intermediate-risk, and 342 high-risk patients.

Process of integrating the implementation strategies

All nine strategies were integrated at some or all sites (Fig. 1). Three of the nine strategies were integrated across all sites within a similar timeline. Another three were also integrated across all sites, but with substantial variation in the time needed for integration. For changing record systems, Sites 1 and 3 took more than twice as long (> 6 months) as Sites 2 and 4 (about 3 months). At Site 1, the local team wanted to tailor the draft template to the local context before going live with the change in record systems, and at Site 3 it was challenging to collaborate with the electronic health records team. Time to integrate educational meetings also varied substantially, mostly due to the time needed to fit the educational meeting into existing meeting schedules. There was also variation across sites in the time needed to start tracking integration of the strategies within the implementation blueprint. Appendix Fig. 1 (in Additional file 1) summarizes variation in the integration of these strategies.

Fig. 1 Variation in the timeline of integrating the implementation strategies across four sites. Implementation strategies are labelled according to the Expert Recommendations for Implementing Change [6]

Three strategies were integrated at only three of the four sites, including champion, audit and feedback, and remind clinicians. Site 1 did not integrate audit and feedback. Site 3 did not integrate a champion and did not integrate all originally planned reminders for clinicians. Challenges with integrating the champion included training, champion engagement with the clinical team at the local site, and a staffing model which limited the champion’s availability to the research team. A challenge with audit and feedback was that one site reported not receiving it, although audit data was provided to them at least once during a facilitation meeting. Lack of wall space was a challenge with integrating reminders via the use of a poster.

Qualitative implementation outcomes

Co-occurrence analysis of codes across strategies indicated that interview participants most commonly reported on the acceptability and appropriateness of changing the record system, preparing patients to be active participants (“surveillance grid”), reminders in the form of a cheat sheet, and educational sessions (Fig. 2). Qualitative data indicated feasibility of all implementation strategies, except for facilitation and the implementation blueprint. For those two strategies, we confirmed feasibility using the research team’s tracking of implementation activities. Participants frequently commented on being satisfied with changing the record system, preparing patients to be active participants (“surveillance grid”), reminders, and champion support. Participants frequently reported challenges related to the “surveillance grid” (Fig. 2).

Fig. 2 Co-occurrence analysis. Each cell represents the frequency count reflecting the number of times participants discussed each implementation strategy and outcome

Table 1 presents a summary of qualitative results and themes with exemplary quotes. Participants indicated that changing the record system had a high impact, reduced documentation time, and guided resident physicians. Preparing patients to be active participants by using the “surveillance grid” was seen as an effective tool which supported communication with patients. However, challenges related to the “surveillance grid” included that it was time intensive for clinicians to fill it out and that patients did not bring it back to their clinic visits as originally intended. Reminders in the form of cheat sheets and posters aided in risk stratification and provided visual cues. Educational sessions were seen as critical to support implementation and to overcome implementation barriers. Facilitation was seen as an important component to support implementation.

Table 1 Implementation outcomes—summary of qualitative results

The blueprint was discussed only with the champions, because they were responsible for integrating the strategies. The number of these interviews was limited (N = 3), and coding identified confusion among both interviewees and interviewers about the role of the blueprint. Thus, no reliable information could be gleaned from these interviews. However, the central research team’s tracking of activities showed that the blueprint was used as intended to track activities related to the integration of the improvement approaches, suggesting appropriateness and feasibility. Regarding audit and feedback, the central research team distributed information on baseline rates of risk-aligned surveillance at each site to champions, but this information was not reliably shared with local team members at any of the sites. More detailed data on clinicians’ perceptions of the implementation strategies are summarized in Table 1.

Quantitative implementation outcomes

Table 2 presents quantitative implementation outcomes, including fidelity and sustainability. Across all sites, fidelity ranged from 65% for preparing patients to be active participants to 100% for tailoring. For some strategies, there was variation in fidelity across sites. With regards to sustainability, only the template within the EHR was still in use at all sites 6 months after completion of external facilitation. Sustainability of the other strategies was variable across sites (Table 2).

Table 2 Summary of quantitative implementation outcomes assessed during pilot testing. Numbers refer to the numerator and denominator as described in the measure column. For binary outcomes in the sustainability section such as “use of template”, 1 indicates use and 0 indicates non-use. Note: the blueprint is a strategy to present and track the other eight implementation strategies. N/a = not applicable

In the evaluation phase, we collected 221 surveys from patients presenting for surveillance cystoscopy. Of them, 191 (86%) indicated that the amount of information provided during their bladder cancer surveillance encounter was “just right”. Also, 161 patients (73%) indicated acceptability based on an acceptability scale score of 6 or higher.

Exploratory quantitative process outcomes

Table 3 presents encounter-level data on accurate documentation of risk assessment and clinicians’ recommended surveillance intervals. Accurate documentation of bladder cancer risk improved from 58% during pre-implementation to 75% during evaluation. During pre-implementation, accurate documentation of risk assessment varied by bladder cancer risk group, ranging from 32% for low-risk encounters to 73% for high-risk encounters. When stratified by site, there was substantial improvement in accurate documentation of risk assessment at Site 2, increasing from 27 to 80% (Table 4).

Table 3 Encounter-level data on the explorative process outcomes including accurate documentation of risk assessment and clinicians’ recommended surveillance intervals by risk group. N refers to the number of encounters evaluated in each risk group and phase
Table 4 Encounter-level data on the explorative process outcomes including documentation of accurate risk assessment and clinicians’ recommended surveillance intervals by site. N refers to the number of encounters evaluated at each site and phase

Recommendations for guideline-concordant surveillance intervals were already present in more than 85% of baseline encounters and did not change overall (Table 3). When stratified by site, recommendations for guideline-concordant surveillance intervals improved most at site 2, increasing from 80 to 91% of all encounters. At site 3, no improvements in recommendations for guideline-concordant surveillance intervals were observed (Table 4). When stratified by risk level, clinicians recommended guideline-concordant surveillance intervals for low-risk encounters only about 65% of the time during pre-implementation, and this improved to 70% during evaluation (Table 3).

Discussion

We report on the integration of nine implementation strategies packaged into four improvement approaches for risk-aligned bladder cancer surveillance. Facilitation, tailoring of strategies, and surveillance grids to prepare patients to be active participants were readily integrated at all four sites. Use of an implementation blueprint, the conduct of educational meetings, and changing record systems via templates in the EHR took more than 6 months at some sites. Not all sites were able to integrate a champion, audit and feedback, and all intended reminders. Overall, the implementation strategies were characterized as appropriate, acceptable, and feasible by local clinicians. Clinicians perceived changing record systems with an EHR template and educational meetings focused on guideline-concordant surveillance as impactful. Lack of a fully engaged champion at one site made the integration of strategies and measurement of implementation outcomes challenging. Most participants did not receive audit and feedback data relating to risk-aligned surveillance, indicating that our approach of disseminating baseline data to local teams via the champion was not feasible.

We found that educational meetings and changing record systems are appropriate and acceptable approaches, which is consistent with prior literature. Although clinician education is a common strategy used to change behavior, we recognize it can be difficult to measure its outcomes, leading to mixed findings on its effectiveness [22]. Use of a blueprint to present guideline recommendations and implementation approaches was appropriate and feasible, which is consistent with literature showing its applicability to support implementation efforts [23, 24]. Further, a systematic review by Grimshaw et al. found that 73% of the included studies reported use of multi-component strategies and that reminders, educational materials, and audit and feedback were the most evaluated single strategies [25]. Although these strategies changed clinician behavior, the impact on patient outcomes was less clear [26]. Lastly, changing record systems has been shown to facilitate the provision of guideline-concordant care. In a hybrid type I effectiveness-implementation study, Matulewicz et al. used electronic medical record-based clinical decision support as a strategy to facilitate the use and documentation of evidence-based tobacco screening in VA urology practices, resulting in increased screening at visits [27].

With respect to cancer care, a systematic review by Tomasone et al. found that education, audit and feedback, and reminders for clinicians were the most used strategies. As single interventions, reminders and audit and feedback resulted in improved health care professional behavior (e.g., compliance with the clinical practice guideline, antecedents such as knowledge or attitudes about the guidelines) and patient outcomes (e.g., screening rate, test completion, symptom management, detection of cancer, quality of life) in a cancer care context. When used together as a multi-component strategy as done in this work, group education, reminders, and audit and feedback yielded positive significant outcomes [28].

In our experience, audit and feedback was less successful than frequently reported in the literature [29]. This may be related to several project-specific issues. First, the lack of structured data on both bladder cancer risk and recommended surveillance intervals made data abstraction very labor intensive, which precluded timely delivery and feedback of data. Second, once patients are stratified by cancer risk, month, and site, numbers in each stratum are low which makes it difficult to provide reliable proportions within shorter timeframes. Third, given concerns about confidentiality voiced by participating sites, we refrained from collecting or assessing clinician-level data, which may have made the data that was shared less impactful and relevant for clinicians.

Our data on process outcomes revealed that documentation of accurate risk assessment improved substantially, likely driven by use of the template in the EHR. The largest documentation improvement was seen at Site 2, likely because the lack of a template during pre-implementation resulted in accurate documentation of risk only 27% of the time. However, this did not translate into more guideline-concordant surveillance recommendations, except for a small to moderate improvement among low-risk patients. At first glance, this finding seems to conflict with recent findings from our group, where we demonstrated an association between accurate documentation and guideline-concordant surveillance recommendations. However, further informal discussion with the site that had most of the guideline-discordant surveillance recommendations among low-risk patients revealed a misinterpretation of the recommendations included in the template for low-risk patients. This issue contributed to a decrease in guideline-concordant surveillance recommendations among low-risk patients after integration of the template. This problem could easily be addressed with targeted education and by adapting the template to decrease the risk of misinterpretation. The template was also highly valued by the participants in our interviews. Thus, integration of a template into the EHR seems to be one of the most promising implementation strategies for guideline-concordant bladder cancer surveillance, but it should be combined with appropriate educational meetings. This tool may be of particular help to increase surveillance for high-risk patients and to reduce or de-implement unnecessary services for low-risk patients.

Limitations

The data on integration and implementation outcomes rely on self-reporting by the sites. However, when we compared these data to tracking done by the central research team, we found no pertinent differences.

Although we conducted interviews with clinicians from all sites, we had a limited sample and therefore cannot state that we reached saturation. Our study was not powered to assess for statistically significant differences in surveillance recommendations or process outcomes, given the anticipated low number of patients per risk category and site. Thus, no definitive conclusions can be drawn on whether the implementation strategies improved guideline-concordant surveillance recommendations among low-risk patients or whether they would contribute to less overuse of cystoscopy procedures in this population. In addition, the baseline rate of approximately 85% guideline-concordant surveillance recommendations across all risk categories combined (Table 2) was higher than suggested by our prior preliminary data [3, 4], likely due to secular trends that occurred while the earlier phases of this multi-year project were completed. As such, there was little room for improvement, especially among high-risk patients. Future work should focus on improving guideline-concordant surveillance recommendations for low-risk patients, as about a third of them were issued recommendations for too many surveillance cystoscopy procedures (Table 3). Finally, we may not know the impact of each individual implementation strategy because we asked participants only about the most and least impactful strategies, thereby potentially missing important thoughts on strategies that fall in the middle.

Implications for Practice

Despite these limitations, our findings have implications relevant not only for surveillance after bladder cancer treatment, but also for surveillance after treatment for other cancers. For example, the frequency and type of surveillance for patients who underwent treatment for prostate, lung, or colorectal cancer also depends on factors influencing their risk for recurrence and progression [30,31,32]. These factors may include stage, grade, and type of treatment received. Thus, clinicians need to assess cancer risk for these patients and provide guideline-concordant recommendations for surveillance intervals after treatment. The implementation strategies evaluated in our current work might also be applicable and further tailored to clinicians who manage these and other cancers.

Conclusions

Based on a summative interpretation of our results, the most appropriate, acceptable, and feasible strategies include changing record systems via an EHR template and educational meetings focused on guideline-concordant surveillance. Identifying and preparing a champion at each site was critical for integration of the strategies and for the collection of implementation outcomes. Providing surveillance grid handouts to patients was time-consuming; however, surveillance grids and external facilitation may enhance the effectiveness of the other strategies. Tailoring of strategies should be allowed, provided the core components of each strategy are maintained. Further research should assess the extent to which a broader integration of these strategies improves guideline-concordant surveillance for low-risk early-stage bladder cancer patients and the extent to which adding patient-facing surveillance grids and external facilitation enhances the strategies' effectiveness.

Data availability

The data sets generated and analyzed during the current study are not publicly available because they contain potentially identifying and sensitive information but are available from the Principal Investigator on reasonable request. Upon request from a qualified investigator, a limited data set will be created for that investigator’s use and shared pursuant to a Data Use Agreement (DUA) appropriately limiting use of the data set and prohibiting the recipient from identifying or reidentifying (or taking steps to identify or reidentify) any individual whose data are included in the data set. Investigators who request to use the data will be required to obtain institutional review board approval and sign the DUA before release of the data. Interested investigators are encouraged to directly contact the Principal Investigator, Florian R. Schroeck.

Abbreviations

CIRB:

Central Institutional Review Board

Cysto:

Cystoscopy

EHR:

Electronic health record

ERIC:

Expert Recommendations for Implementing Change

QUERI:

Quality Enhancement Research Initiative

TICD:

Tailored Implementation for Chronic Diseases

VA:

Department of Veterans Affairs

References

  1. Moye J, Schuster JL, Latini DM, Naik AD. The Future of Cancer Survivorship Care for Veterans. Fed Pract. 2010;27(3):36–43.

  2. Chang S, Boorjian S, Chou R, Clark P, Daneshmand S, Konety B, et al. Non-muscle invasive bladder cancer: American Urological Association / SUO guideline; 2016. Available from: https://www.auanet.org/education/guidelines/non-muscle-invasive-bladder-cancer.cfm. Accessed 1 Mar 2024.

  3. Han DS, Lynch KE, Chang JW, Sirovich B, Robertson DJ, Swanton AR, et al. Overuse of Cystoscopic Surveillance Among Patients With Low-risk Non-Muscle-invasive Bladder Cancer - A National Study of Patient, Provider, and Facility Factors. Urology. 2019;131:112–9.

  4. Schroeck FR, Lynch KE, Chang JW, MacKenzie TA, Seigne JD, Robertson DJ, et al. Extent of Risk-Aligned Surveillance for Cancer Recurrence Among Patients With Early-Stage Bladder Cancer. JAMA Netw Open. 2018;1(5):e183442.

  5. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21.

  6. Schroeck FR, Ould Ismail AA, Haggstrom DA, Sanchez SL, Walker DR, Zubkoff L. Data-driven approach to implementation mapping for the selection of implementation strategies: a case example for risk-aligned bladder cancer surveillance. Implement Sci. 2022;17(1):58.

  7. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76.

  8. Schroeck FR, Ould Ismail AA, Perry GN, Haggstrom DA, Sanchez SL, Walker DR, et al. Determinants of Risk-Aligned Bladder Cancer Surveillance-Mixed-Methods Evaluation Using the Tailored Implementation for Chronic Diseases Framework. JCO Oncol Pract. 2022;18(1):e152–62.

  9. Ritchie MJ, Dollar KM, Miller CJ, Smith JL, Oliver KA, Kim B, Connolly SL, Woodward E, Ochoa-Olmos T, Day S, Lindsay JA, Kirchner JE. Using Implementation Facilitation to Improve Healthcare (Version 3). Behavioral Health Quality Enhancement Research Initiative (QUERI); 2020. Available from: https://www.queri.research.va.gov/tools/Facilitation-Manual.pdf. Accessed 1 Mar 2024.

  10. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

  11. Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, et al. Standards for Reporting Implementation Studies (StaRI): explanation and elaboration document. BMJ Open. 2017;7(4):e013318.

  12. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139.

  13. Weiner BJ, Lewis CC, Stanick C, Powell BJ, Dorsey CN, Clary AS, et al. Psychometric assessment of three newly developed implementation outcome measures. Implement Sci. 2017;12(1):108.

  14. Morse JM. Essentials of Qualitatively-Driven Mixed-Methods Designs. New York: Routledge; 2016.

  15. Crabtree BF, Miller WL. A template approach to text analysis: Developing and using codebooks. Sage Publications; 1992. p. 93–109.

  16. Bingham AJ, Witkowsky P. Deductive and Inductive Approaches to Qualitative Data Analysis. Analyzing and Interpreting Qualitative Data: After the Interview. Thousand Oaks: SAGE Publications, Inc.; 2022. p. 133–46.

  17. Saldana J. The Coding Manual for Qualitative Researchers. 3rd ed. Thousand Oaks: Sage Publications, Inc.; 2016.

  18. Fereday J, Muir-Cochrane E. Demonstrating Rigor Using Thematic Analysis: A Hybrid Approach of Inductive and Deductive Coding and Theme Development. Int J Qual Methods. 2006;5(1):80–92.

  19. Vanover C, Mihas P, Saldana J. Analyzing and Interpreting Qualitative Research. Thousand Oaks: SAGE Publications, Inc.; 2021.

  20. Lyall V, Ould Ismail AA, Haggstrom DA, Issa MM, Siddiqui MM, Tosoian J, et al. Accurate Documentation Contributes to Guideline-concordant Surveillance of Nonmuscle Invasive Bladder Cancer: A Multisite Department of Veterans Affairs Study. Urology. 2023;181:92–7.

  21. Weymiller AJ, Montori VM, Jones LA, Gafni A, Guyatt GH, Bryant SC, et al. Helping patients with type 2 diabetes mellitus make treatment decisions: statin choice randomized trial. Arch Intern Med. 2007;167(10):1076–82.

  22. Forsetlund L, Bjorndal A, Rashidian A, Jamtvedt G, O’Brien MA, Wolf F, et al. Continuing education meetings and workshops: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2009;2009(2):CD003030.

  23. Huynh AK, Hamilton AB, Farmer MM, Bean-Mayberry B, Stirman SW, Moin T, et al. A Pragmatic Approach to Guide Implementation Evaluation Research: Strategy Mapping for Complex Interventions. Front Public Health. 2018;6:134.

  24. Lewis CC, Scott K, Marriott BR. A methodology for generating a tailored implementation blueprint: an exemplar from a youth residential setting. Implement Sci. 2018;13(1):68.

  25. Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004;8(6):iii–iv, 1–72.

  26. Powell BJ, Fernandez ME, Williams NJ, Aarons GA, Beidas RS, Lewis CC, et al. Enhancing the Impact of Implementation Strategies in Healthcare: A Research Agenda. Front Public Health. 2019;7:3.

  27. Matulewicz RS, Bassett JC, Kwan L, Sherman SE, McCarthy WJ, Saigal CS, et al. Using a multilevel implementation strategy to facilitate the screening and treatment of tobacco use in the outpatient urology clinic: A prospective hybrid type I study. Cancer. 2022;128(6):1184–93.

  28. Tomasone JR, Kauffeldt KD, Chaudhary R, Brouwers MC. Effectiveness of guideline dissemination and implementation strategies on health care professionals’ behaviour and patient outcomes in the cancer care context: a systematic review. Implement Sci. 2020;15(1):41.

  29. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;6:CD000259.

  30. NCCN Clinical Practice Guidelines in Oncology (NCCN Guidelines®). Non-Small Cell Lung Cancer, Version 2.2024. Pennsylvania: National Comprehensive Cancer Network; 2024. Available from: https://www.nccn.org/professionals/physician_gls/pdf/nscl.pdf. Accessed 1 Mar 2024.

  31. NCCN Clinical Practice Guidelines in Oncology (NCCN Guidelines®). Prostate Cancer, Version 1.2024. Pennsylvania: National Comprehensive Cancer Network; 2024. Available from: https://www.nccn.org/professionals/physician_gls/pdf/prostate.pdf. Accessed 1 Mar 2024.

  32. Lieberman DA, Rex DK, Winawer SJ, Giardiello FM, Johnson DA, Levin TR. Guidelines for colonoscopy surveillance after screening and polypectomy: a consensus update by the US Multi-Society Task Force on Colorectal Cancer. Gastroenterology. 2012;143(3):844–57.

Acknowledgements

This study was supported using resources and facilities at the Birmingham Department of Veterans Affairs (VA) Healthcare System, the White River Junction VA Healthcare System, the Roudebush Veterans Affairs Medical Center, the Atlanta VA Medical Center, the Nashville VA Medical Center, the Baltimore VA Medical Center, the Salt Lake City VA Medical Center, and the VA Informatics and Computing Infrastructure (VINCI), VA HSR RES 13-457.

Disclaimer

Opinions expressed in this manuscript are those of the authors and do not constitute official positions of the U.S. Federal Government or the Department of Veterans Affairs.

Funding

This work is supported by a grant from the Department of Veterans Affairs Health Services Research & Development (IIR 18–215, I01HX002780-01). The funding organizations had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Author information

Authors and Affiliations

Authors

Contributions

Conception and design (LZ, FRS), acquisition of data (AAOI, LJ, KB, EK, FRS), analysis (LZ, AAOI, LJ, SK, KB, EK, FRS) and interpretation (all authors) of data, drafting of the manuscript (LZ, FRS), critical revision of the manuscript for important intellectual content (all authors), obtaining funding (FRS), administrative, technical, or material support (DAH, MI, JT, MMS, SZ, FRS), supervision (LZ, SZ, FRS).

Corresponding author

Correspondence to Lisa Zubkoff.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the Department of Veterans Affairs Central Institutional Review Board (Study #19–01). For surveys, an information sheet was provided, and we had a waiver of informed consent. For interviews, an information sheet was provided, a script was read at the beginning of the interviews, and verbal consent was obtained.

Consent for publication

Not applicable.

Competing interests

FRS reports research funding from Pacific Edge Ltd., Cepheid, and Nucleix.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article

Zubkoff, L., Ould Ismail, A.A., Jensen, L. et al. Integration and evaluation of implementation strategies to improve guideline-concordant bladder cancer surveillance: a prospective observational study. Implement Sci Commun 6, 37 (2025). https://doi.org/10.1186/s43058-025-00721-0
