Request for Proposals
In light of the current COVID-19 pandemic, the closing date of this Request for Proposals has been extended to Tuesday, June 30th, 2020. The required in-person meeting for semi-finalists, by invitation only, will now take place in Palo Alto, CA on Monday, August 17th, 2020. Please see below for updated information.
Request for Proposals for the Development of Clinical Quality Measures to Improve Diagnosis
1. Project Description
The purpose of this funding opportunity is to provide assistance in the form of grants for the development of innovative clinical quality measures (defined in Appendix B) that promote excellence in diagnosis of three categories of disease – acute vascular events, infections and cancer.
a) The Diagnostic Excellence Initiative
In November 2018, the Moore Foundation announced its Diagnostic Excellence Initiative with a focus on diagnostic performance improvement. The initiative aims to reduce harm from erroneous or delayed diagnoses, reduce costs and redundancy in the diagnostic process, improve health outcomes, and save lives. The initiative’s first area of focus is to develop and validate new measures for diagnostic performance. Examples of our first cohort of grants for diagnostic clinical quality measures are now available for review. Starting with measure development is important – currently, U.S. health care systems are unable to systematically measure diagnostic performance in real time, which limits the ability to quantify performance and guide improvements.
b) Defining the problem and gap analysis
Diagnosis is at the heart of the clinical practice of medicine; indeed, almost every action or intervention flows from the diagnosis. A wrong, delayed, or missed diagnosis allows illness or injury to persist or progress, often with potentially preventable harm. Twelve million Americans experience a diagnostic error each year,1,2 and diagnostic errors play a role in an estimated 40,000-80,000 deaths annually in the U.S. alone.3 Diagnostic errors occur in both inpatient and outpatient settings, in both adult and pediatric populations, and across the health care system. Harm from diagnostic errors accounts for the highest proportion of malpractice cases and the largest settlements, suggesting they are a leading contributor to preventable injury or death.4 In fact, the true incidence of diagnostic failure is not known, because studies are limited and health care centers do not routinely track such data.
There are many reasons why diagnostic error is common. First, diagnosis is difficult. There is inherent variability in disease presentation. Furthermore, diagnostic tests are less than perfect and clinical encounters inevitably have some lingering and irreducible uncertainty. Additionally, many health care systems are not optimally designed to support efficient and reliable diagnostic processes. There are systematic barriers for optimal diagnosis, including misaligned financial incentives and fragmented delivery systems. From a patient experience perspective, the diagnosis is often not adequately communicated or well understood.5
There is an urgent need to improve diagnosis. However, without an awareness of baseline performance, and standards against which to compare performance, there is no way to measure improvement or to gauge the results of interventions. Despite a lengthy and growing list of clinical quality measures in health care, few existing measures address diagnostic performance specifically.6 The challenge of finding meaningful clinical measures for diagnosis reflects the complexity of the diagnostic process. Much of the work of diagnosis is invisible to the outside reviewer, and many diagnostic pathways involve a variable trajectory of thoughts and actions that can be difficult to capture or record. The work is made even more difficult by the fact that there are few specific guidelines for what constitutes diagnostic excellence. Just how precise must a diagnosis be to be considered correct? What is timely? For stroke, every minute counts, but for cancer, a few days to weeks might be considered acceptable. Reasonable standards for one setting may be unrealistic for another. And finally, it can be difficult to find reliable sources of data on diagnosis. Large databases often lack sufficient granularity or fail to capture a patient’s full diagnostic journey. Available data from the electronic health record are typically optimized for billing and may not accurately capture patient symptoms, diagnostic reasoning, differential diagnoses, or diagnostic uncertainty.
With a growing awareness of diagnostic errors, the health care environment is ready for change. The National Academy of Medicine report in 2015 (Improving Diagnosis in Healthcare) helped galvanize action by declaring the need to improve diagnosis a “moral, professional, and public health imperative”.7 Recently, the Agency for Healthcare Research and Quality (AHRQ) listed diagnostic errors as one of the leading urgent priorities in health care for 2019 (AHRQ's Road Ahead: Seizing Opportunities in Three Essential Areas to Improve Patient Care) and announced funding for diagnostic safety research (With Increased Funding, AHRQ To Explore Scope and Causes of Diagnostic Errors). The movement to improve diagnosis is gaining traction, evidenced by the commitment of more than 50 medical societies and leading health care organizations in the Coalition to Improve Diagnosis. Additionally, the National Quality Forum has convened a committee to issue recommendations for measure development for diagnostic quality (Reducing Diagnostic Error: Measurement Considerations). The need is evident and increasingly acknowledged, but the difficulty rests with determining how to tackle this multi-faceted problem.
The characteristics of diagnostic quality described by the National Academy of Medicine (‘diagnosis should be accurate, timely, and communicated’) may present competing aims, and they omit safety and cost efficiency.5 Achieving higher accuracy at breakneck speed may drive over-testing, generate unnecessary and confounding data, exhaust diagnostic resources, and even directly harm patients through unnecessary procedures.
The funded measurement work will focus on enabling clinicians, and the systems around them, to find an optimal balance among these competing aims. We begin with developing measures because measurement is integral to improvement.
c) Project scope
To align with the foundation’s principles of supporting work that is important, measurable, and impactful, we have identified the three categories of disease that account for the most common and most harmful diagnostic errors: acute vascular events (such as stroke and myocardial infarction), infections (such as sepsis and pneumonia), and cancer (such as lung and colorectal cancer).8 Proposals must relate to one or more of these three broad categories.
d) Requirements and expected outcomes of grant
For this grant opportunity, there must be, at minimum, a proposed measure of diagnostic performance based on obtainable evidence in one or more of the three priority categories listed above. The expected work requires two interlinked activities: 1) development of the rationale for a measure and 2) operationalizing the measure into an algorithm that can undergo pilot (or proof-of-concept) testing as detailed in Table 1 (grant deliverables). Prior work in measure development is useful but not required.
The Moore Foundation is seeking measures that can eventually be developed into fully specified performance indicators that:
- address a performance improvement opportunity and fill a measurement gap;
- align with evidence (e.g., from the medical literature, clinical practice guidelines, or expert consensus);
- focus on outcomes (although process measures may be considered if they are particularly innovative and link to patient outcomes);
- are likely to be feasible—that is, the information can be easily and reliably retrieved from, or designed into, commonly available data sources (such as electronic health records or administrative claims) without imposing excessive burden on clinicians or patients;
- are likely to be high-value, that is, the challenges associated with developing or implementing the measure are outweighed by the potential benefits once implemented; and,
- rely on a data source (or sources) appropriate for pilot testing and accessible to the grantee for this purpose. Grantees need not be constrained to an existing data source if they have alternative methods or ideas for generating data, although their method must eventually be usable by others.
To optimize the likelihood of measure success, grantees are expected to seek input from multiple perspectives, working alongside technical experts as they develop and implement their measures. This requirement can be satisfied by forming an “advisory panel,” a series of panels, or ad hoc groups focused on operationalizing a measure that meets its intended goals. Its purpose is to ensure that the measure as imagined aligns with the measure as developed and implemented, and to assess the benefits and risks of implementation from multiple perspectives: the patient, clinician, health care team, risk management, hospital organization, the broader health care delivery system, and the technical team of informatics experts, data analysts, and others as needed. We will refer to this group (or groups) as the advisory panel, and reports from this activity will be referred to as “expert input reports.”
Grantees are expected to work with their advisory panel(s) to:
- iteratively refine their measure to generate a high-value measure, and
- iteratively operationalize the high-value measure into an algorithm (i.e., a set of steps that might involve collecting data, applying logic, and making calculations) to be pilot tested with a data source(s), and
- implement the measure in real-time clinical settings, and
- assess the success of their measure and revise as necessary.
Rapid-cycle evaluation and revision is typically required for successful measure development. We favor teams that are agile enough to test and refine, recognize failure early, and revise their project in rapid cycles. Learning what doesn’t work and demonstrating flexibility are desired features of this work, and information about failed approaches is considered an important output to be understood and shared.
Grantees will receive assistance in the form of technical experts and resources from the foundation to align their work with the specifications detailed in Table 1 and Appendix A. This funding opportunity prioritizes ideas for measures that are likely to be impactful over the applicant’s experience in measure development. Creative and novel approaches are strongly encouraged.
Participants will be invited to engage with other grantees in virtual or in-person meetings organized and funded by the foundation to inform their work and mutually benefit from lessons learned across the cohort of grantees.
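To make the idea of operationalizing a measure into an algorithm concrete, the sketch below computes a hypothetical rate-based measure (timely CT imaging for suspected stroke) from a few synthetic encounter records. Every field name, threshold, and record here is invented for illustration; it is not a measure prescribed by this RFP.

```python
from datetime import datetime

# Synthetic encounter records; all fields and values are hypothetical.
encounters = [
    {"id": 1, "suspected_stroke": True,
     "arrival": datetime(2020, 3, 1, 8, 0), "ct_scan": datetime(2020, 3, 1, 8, 20)},
    {"id": 2, "suspected_stroke": True,
     "arrival": datetime(2020, 3, 2, 9, 0), "ct_scan": datetime(2020, 3, 2, 11, 0)},
    {"id": 3, "suspected_stroke": False,
     "arrival": datetime(2020, 3, 3, 7, 0), "ct_scan": None},
]

def timely_imaging_rate(records, max_minutes=25):
    """Proportion of suspected-stroke encounters with a CT scan
    completed within max_minutes of arrival (hypothetical threshold)."""
    # Denominator: the eligible population.
    denominator = [r for r in records if r["suspected_stroke"]]
    # Numerator: eligible encounters meeting the timeliness criterion.
    numerator = [r for r in denominator
                 if r["ct_scan"] is not None
                 and (r["ct_scan"] - r["arrival"]).total_seconds() / 60 <= max_minutes]
    return len(numerator) / len(denominator) if denominator else None

print(timely_imaging_rate(encounters))  # 0.5 for the records above
```

The same collect-data, apply-logic, calculate pattern applies regardless of the data source; only the record extraction step changes.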
Table 1. Grant deliverables

Stage 1. Development of measure rationale

1. Summary of environmental scan, to include literature search and review of relevant clinical guidelines
Assess the quality of evidence in support of the measure and summarize the argument for the measure.
Resources: CMS Blueprint for Measures Management System (pages 242-244); NQF Submitting Standards (see "Measure Evaluation Criteria": pages 8 and 11-18)

2. Document detailing the search for similar existing measures
List search criteria, sources, and results to determine if related measures exist. (Is this new measure needed?)

3. Expert input report for development and refinement of measure
Report to include a description of the advisory panel(s), including a list of participants and their role/perspective, how they were selected, the panel process, minutes of meetings, areas of controversy, and recommendations.

Stage 2. Pilot testing of measure

4. Description of measure
Provide details of measure specifications for all alternatives tested. See Appendix A.
Resources: CMS Blueprint for Measures Management System (pages 45-48, 117-130)

5. Pilot testing report
Summarize testing activities and findings, describing both successes and failures.

6. Report of implementation risk assessment
Detail the risks of implementation and mitigation strategies.

7. Final expert input report
Summarize the final conclusions from the advisory panel after pilot testing. (Does the final measure meet the original intent, and will it be useful?)
2. Award Information
a) Award amount
Up to six projects will be awarded, with up to $500,000 per project for work performed over 18 months.
The complexity of the measure proposed in the application will contribute to variations in the amount of the award. The measure type (outcome, process, patient-reported outcome), scope (single setting, cross-setting, across specialties), and data source (e.g., electronic health records, registries, claims, multiple and/or linked data sources, novel approaches, etc.) will all factor into the assessment of a measure’s complexity.
b) Anticipated award dates
Project start date is approximately January 1, 2021.
c) Period of performance
This project is the first phase of a larger plan for measure development. Promising work may qualify for additional funding for successive phases of work. This may include partnership with external measure developers for further testing of scientific acceptability, validity, and reliability, and potential submission for National Quality Forum (NQF) endorsement.
3. Eligibility Information
a) Eligible applicants
Applicants should have an affiliation with an institution or sponsoring organization, including but not limited to academic institutions, health care delivery systems, medical and clinical specialty societies, patients and patient advocacy groups, medical liability and risk management organizations, independent research organizations, electronic health record vendors, and others with interest and/or expertise relevant to diagnosis measure development.
Successful applications will describe teams and partnerships that include a multidisciplinary group of experts, including clinicians with content expertise, individuals with appropriate analytic expertise (data science, statistics, measure development) and persons with experience using relevant data sources. An individual may satisfy more than one of these areas of expertise. Measure development expertise is helpful but not a requirement for funding.
b) Eligibility criteria
Applicants must be familiar with the U.S. health care system, and proposed grant outputs must be feasible for implementation in the U.S.
Suitable measure concepts must be based on existing scientific evidence and/or clinical guidelines, not new or as yet untested diagnostic tests.
Our scope of funding does not include support for the development or evaluation of new diagnostic tests or products, such as novel biomarkers.
Examples of previously funded projects can be viewed on our website at: New projects aim to develop clinical quality measures to improve diagnosis.
4. Application Information
a) Content and form of applications
All applications must be submitted through our online system. Each principal investigator is limited to three applications.
b) Submission Dates
This funding opportunity requires a multi-phased competitive application process. A summary of key dates and deadlines is shown below.
February 18, 2020: Online application opens
June 30, 2020: Deadline for receipt of applications
July 20, 2020: Semi-finalists announced and invited to the second stage of the application process
August 17, 2020: Required in-person meeting in Palo Alto for semi-finalists, by invitation only
September 1, 2020: Finalists notified and invited to provide supplemental application materials
September 21, 2020: Deadline for receipt of all application materials
January 1, 2021: Approximate start of projects
5. Application Review Information
a) Evaluation criteria
Criteria used to assess each application are detailed in Table 2.
Table 2. General Criteria for Evaluation of Proposals
General Quality of Measure
1. Identify at least one of the three priority categories.
2. Describe the diagnosis you want to improve and the problem you are trying to solve.
3. Succinctly describe your preliminary plan to assess the measure, including the specific diagnosis, target population, unit of analysis (individual, group, hospitals, health care groups, geographic area) and the anticipated data source(s).
4. Describe the method used to determine that the proposed measure is new or differs from existing measures.
Importance and Potential Impact of Measure
5. Address a condition or problem where diagnostic failure is common, of significant consequence, and preventable; support with epidemiological data when available.
6. Summarize evidence that the proposed measure closes a performance gap.
7. Demonstrate, with evidence, that the measure will likely improve care and patient outcomes.
Quality of Team
8. Assemble a team with adequate expertise and relevant experience.
9. Engage an advisory panel throughout the project.
10. Define an available data source or sources.
11. Describe and characterize the data source(s).
12. Explain the rationale for selection of data source, its appropriateness for the proposed measure, and your expertise in using it.
13. Describe your planned analytic approach for proof-of-concept pilot testing, including quantitative and/or qualitative methods.
14. Demonstrate a grasp of the technical aspects of measure development, and/or have experience with measure development.
15. Develop outcome measures, or patient-reported outcome measures, and apply risk adjustment. (Preferred)
Risks and Mitigation
16. Articulate an understanding of potential risks of measure implementation.
17. Understand project risk and consider steps to mitigate.
Exceptional Proposal or Innovative Strategy
18. Describe an exceptional concept, team, or strategy, or have a particularly novel or creative idea.
*Technical assistance will be provided to facilitate a rigorous approach to measure development.
b) Review and selection process
This funding announcement initiates a multi-phased competitive application process soliciting ideas and strategies for diagnostic measures.
In the initial application process, we request a short description of the proposed clinical measure, its potential to improve patient outcomes, the intended data source that will be used to pilot test the idea, and a brief explanation of the methods planned to prepare the measure for implementation (beginning with feasibility and acceptability). Access to, and the ability to use, a specific data source is strongly preferred; however, alternative and novel methods for testing will be considered. The application cycle will open on February 18, 2020 and close on June 30, 2020.
Semi-finalists will be announced by July 20, 2020. At that time, additional information will be solicited to advance to the second stage of the application process, including a preliminary budget, confirmation that applicants have notified their Office of Sponsored Research (or similar office), and an attestation that they agree to participate in our cohort of grantees with our technical assistance partner in measure development. Additionally, applicants should review the Moore Foundation policies on data sharing, HIPAA, and intellectual property.*
At least one member of the semi-finalist team is required to participate in a one-day, in-person meeting at the Gordon and Betty Moore Foundation offices in Palo Alto, California on August 17, 2020. Travel expenses for up to two team members will be paid by the foundation. Semi-finalists will present their proposals before a technical review board composed of experts in clinical measure development. Proposals will be assessed for the quality of their concept and likelihood of success.
The most promising proposals will be selected as Finalists and will be eligible for funding. Individual grants will be developed in collaboration with Moore Foundation staff. Finalists will be notified by September 1, 2020; full proposals will be due before September 21, 2020.
6. Additional Information
a) Questions can be directed to email@example.com.
b) Applicants are encouraged to refer to the Resource List provided below.
*“The grantee may copyright any work that is subject to copyright and was developed, or for which ownership was acquired, under this award. The Moore Foundation reserves a royalty-free, nonexclusive, transferrable and irrevocable right to reproduce, publish, or otherwise use the work for charitable purposes.”
Appendix A: Measure specifications
Measure type
Structure, process, or outcome.
Resources: CMS Blueprint for Measures Management System (pages 45-48, 117-130, 253-258)
Target population
The group for whom quality of care is being assessed, with specifications such as age, gender, comorbidities, etc.
Type of score
Such as count, rate/proportion, ratio, categorical, or other. Define the numerator, denominator, and exclusions (as appropriate).
Data source
Source from which data are obtained for measurement, such as administrative claims data, electronic health records, patient registries, or other novel data sources.
Unit of measurement/analysis
The accountable entity whose performance is being measured, such as an individual clinician, group practice, health plan, or geographic region.
Care setting
Such as hospital, ambulatory, emergency department, etc.
Measurement period
Time period in which data are aggregated to calculate the measure result.
Risk adjustment (for outcome measures)
Account for differences in patient case mix.
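The specification fields above can be captured as a simple structured record. The sketch below is a hypothetical illustration of one way to organize and sanity-check a specification; none of the example values are prescribed by this RFP.

```python
# Hypothetical measure specification mirroring the Appendix A fields.
# All values below are illustrative placeholders, not recommended content.
measure_spec = {
    "measure_type": "outcome",                     # structure, process, or outcome
    "target_population": "adults >= 18 with suspected sepsis",
    "score_type": "rate/proportion",
    "numerator": "patients with antibiotics started within 3 hours of recognition",
    "denominator": "patients meeting suspected-sepsis criteria",
    "exclusions": ["comfort-care-only patients"],
    "data_source": "electronic health records",
    "unit_of_analysis": "hospital",
    "care_setting": "emergency department",
    "measurement_period": "rolling 12 months",
    "risk_adjustment": "adjust for patient case mix (age, comorbidities)",
}

# Completeness check: every core Appendix A field should be specified.
required = {"measure_type", "target_population", "score_type", "numerator",
            "denominator", "data_source", "unit_of_analysis",
            "care_setting", "measurement_period"}
missing = required - measure_spec.keys()
print(sorted(missing))  # [] when the specification is complete
```

Writing the specification down in a machine-checkable form like this can help a team confirm, before pilot testing, that no field of the specification has been left undefined.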
Appendix B. Defined terms as used in this RFP
Clinical Quality Measure
A tool, method, or mechanism for assessing the care of patients. Measures are commonly expressed as a proportion or rate, although novel expressions and techniques are evolving (and appreciated).
Diagnostic Quality Measure
A measurement or expression that describes the accuracy, timeliness, efficiency, effectiveness, and/or safety of diagnosis (either as a process or an outcome).
Diagnosis
As a noun: An explanation of a patient’s condition based on clinical criteria and scientific evidence, often rooted in an understanding of disease at the chemical, cellular, organ, or system level. In some cases, a diagnosis may exist only as a syndrome, i.e., a description of a set of signs or symptoms absent a full understanding of the underlying pathophysiology.
As a verb: The steps, or series of steps, used to rule in or rule out disease to arrive at an explanation for a symptom or set of symptoms of a patient’s medical condition.
Measure Development
A multi-phased process for developing a clinical quality measure that includes measure conceptualization, specification, testing (for feasibility, reliability, and validity), and implementation, guided by input from advisory groups and stakeholders.
Recommended Resources For Applicants
National Academies of Sciences, Engineering, and Medicine. 2015. Improving Diagnosis in Health Care. Washington, DC: The National Academies Press.
National Quality Forum. Improving Diagnostic Quality and Safety. Sept 19, 2017.
McGlynn EA, McDonald KM, Cassel CK. Measurement is essential for improving diagnosis and reducing diagnostic error. A Report from the Institute of Medicine. JAMA. 2015;314(23):2501-2.
1. Singh H, Meyer AN, Thomas EJ. The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations. BMJ Qual Saf. 2014 Sep;23(9):727-31.
2. Singh H, Sittig DF. Advancing the science of measurement of diagnostic error in healthcare: the Safer Dx framework. BMJ Qual Saf. 2015;24(2):103-10.
3. Leape LL, Berwick DM, Bates DW. Counting deaths due to medical errors. In Reply. JAMA. 2002;288(19):2405.
4. Saber Tehrani AS, Lee H, Mathews SC, et al. 25-Year summary of US malpractice claims for diagnostic errors 1986-2010: an analysis from the National Practitioner Data Bank. BMJ Qual Saf. 2013 Aug;22(8):672-80.
5. McDonald KM, Bryce CL, Graber ML. The patient is in: patient involvement strategies for diagnostic error mitigation. BMJ Qual Saf. 2013 Oct; 22(Suppl 2):ii33-ii39.
6. National Quality Forum. Improving Diagnostic Quality and Safety. Sept 19, 2017.
7. National Academies of Sciences, Engineering, and Medicine. 2015. Improving Diagnosis in Health Care. Washington, DC: The National Academies Press.
8. Newman-Toker DE, Schaffer AC, Yu-Moe CW, Nassery N, Saber Tehrani AS, Clemens GD, Wang Z, Zhu Y, Fanai M, Siegal D. Serious misdiagnosis-related harms in malpractice claims: the “Big Three” – vascular events, infections, and cancers. Diagnosis. 2019 Aug;6(3):227-40.