Request for Proposals
The 2022 RFP Informational Webinar and Q&A sessions will be held on March 16th and April 20th.
A recording of the March 16th webinar is available here.
A recording of the April 20th webinar is available here.
The slide deck from both webinars is available here.
An FAQ is available here.
Please email email@example.com to RSVP and receive login instructions for the webinars.
Request for Proposals for Clinical Quality Measures to Improve Diagnosis
1. Project Description
The purpose of this funding opportunity is to provide assistance in the form of grants for the development of innovative clinical quality measures (defined in Appendix A) that promote excellence in the diagnosis of three categories of disease: acute vascular events, infections, and cancer. This is our fourth round of funding for work in clinical quality measure development; this round prioritizes measures intended for use in the Merit-based Incentive Payment System (MIPS) and the MIPS Value Pathway (MVP), and applications that utilize Qualified Clinical Data Registries (QCDRs). However, interested applicants working on other types of quality measures and data sources are still encouraged to apply. Final funding decisions will be based on the quality of the proposal and the feasibility of the project.
The Diagnostic Excellence Initiative
In November 2018, the Moore Foundation announced its Diagnostic Excellence Initiative with a focus on diagnostic performance improvement. The initiative aims to reduce harm from erroneous or delayed diagnoses, reduce costs and redundancy in the diagnostic process, improve health outcomes and save lives. The initiative’s first area of focus is to develop and validate new measures for diagnostic performance. Descriptions of our first three cohorts of grants for diagnostic clinical quality measures are now available for review. Starting with measure development is important – currently, U.S. health care systems are unable to systematically measure diagnostic performance in real time, which limits the ability to quantify performance and guide improvements.
Defining the problem and gap analysis
Diagnosis is at the heart of the clinical practice of medicine; indeed, almost every action or intervention flows from the diagnosis. A wrong, delayed, or missed diagnosis allows illness or injury to persist or progress, often with potentially preventable harm. Twelve million Americans experience a diagnostic error each year1,2 and diagnostic errors play a role in an estimated 40,000-80,000 deaths3 annually in the U.S. alone. Diagnostic errors occur in both inpatient and outpatient settings, in both adult and pediatric populations, and across the health care system. Harm from diagnostic errors accounts for the highest proportion of malpractice cases and the largest settlements, suggesting they are a leading contributor to preventable injury or death.4 Yet the true incidence of diagnostic failure is not known, because studies are limited and health care centers do not routinely track such data.
There are many reasons why diagnostic error is common. First, diagnosis is difficult. There is inherent variability in disease presentation. Furthermore, diagnostic tests are less than perfect, and clinical encounters inevitably carry some lingering and irreducible uncertainty. Additionally, many health care systems are not optimally designed to support efficient and reliable diagnostic processes. There are systemic barriers to optimal diagnosis, including misaligned financial incentives and fragmented delivery systems. From the patient's perspective, the diagnosis is often not adequately communicated or well understood.5
There is an urgent need to improve diagnosis. However, without an awareness of baseline performance, and standards against which to compare performance, there is no way to measure improvement or to gauge the results of interventions. Despite a lengthy and growing list of clinical quality measures in health care, few existing measures address diagnostic performance specifically.6 The challenge of finding meaningful clinical measures for diagnosis reflects the complexity of the diagnostic process. Much of the work of diagnosis is invisible to the outside reviewer, and many diagnostic pathways involve a variable trajectory of thoughts and actions that can be difficult to capture or record. This work is made even more difficult by the scarcity of specific guidelines for what would constitute diagnostic excellence. Just how precise must a diagnosis be to be considered correct? What is timely? For stroke, every minute counts, but for cancer, a few days to weeks might be considered acceptable. Reasonable standards for one setting may be unrealistic for another. And finally, it can be difficult to find reliable sources of data on diagnosis. Large databases often lack sufficient granularity or fail to capture a patient's full diagnostic journey. Available data from the electronic health record are typically optimized for billing and may not accurately capture patient symptoms, diagnostic reasoning, differential diagnoses, or diagnostic uncertainty.
With a growing awareness of diagnostic errors, the health care environment is ready for change. The National Academy of Medicine report in 2015 (Improving Diagnosis in Health Care) helped galvanize action by declaring the need to improve diagnosis a “moral, professional, and public health imperative”.7 Recently, the Agency for Healthcare Research and Quality (AHRQ) listed diagnostic errors as one of the leading urgent priorities in health care for 2019 (AHRQ's Road Ahead: Seizing Opportunities in Three Essential Areas to Improve Patient Care) and announced a funding opportunity for work to better understand diagnostic errors (With Increased Funding, AHRQ To Explore Scope and Causes of Diagnostic Errors). The movement to improve diagnosis is gaining traction, evidenced by the commitment of 60 medical societies and leading health care organizations in the Society to Improve Diagnosis in Medicine (SIDM) Coalition to Improve Diagnosis. Additionally, the National Quality Forum has convened a committee to issue recommendations for measure development for diagnostic quality and recently completed its report, Reducing Diagnostic Error: Measurement Considerations. The need is evident and increasingly acknowledged, but the difficulty rests with determining how to tackle this multi-faceted problem.
The characteristics for diagnostic quality described by the National Academy of Medicine (‘diagnosis should be accurate, timely, and communicated’) may present competing aims, and they omit safety and cost efficiency.7 Achieving higher accuracy at breakneck speed may drive over-testing, generate unnecessary and confounding data, exhaust diagnostic resources, and even directly harm patients with unnecessary procedures.
The funded measurement work will focus on enabling clinicians, and the systems around them, to find an optimal balance among these competing aims. We begin with developing measures because measurement is integral to improvement.
c) Project scope
To align with the foundation’s principles of supporting work that is important, measurable, and impactful, we have identified three categories of disease that account for the most common and most harmful diagnostic errors: acute vascular events (such as stroke and myocardial infarction), infections (such as sepsis and pneumonia), and cancer (such as lung cancer and colorectal cancer).8 Proposals must relate to one or more of these three broad categories.
d) Requirements and expected outcomes of grant
For this grant opportunity, there must be, at minimum, a proposed measure of diagnostic performance based on obtainable evidence in one or more of the three priority categories listed above. The expected work requires two interlinked activities: 1) development of the rationale for a measure and 2) operationalizing the measure into an algorithm (see Appendix B). Prior work in measure development is useful but not required.
The Moore Foundation is seeking measures that can eventually be developed into fully specified performance indicators that:
- address a performance improvement opportunity and fill a measurement gap;
- align with evidence (e.g., from the medical literature, clinical practice guidelines, or expert consensus);
- focus on outcomes (although process measures may be considered if they are particularly innovative and link to patient outcomes);
- are likely to be feasible—that is, the information can be easily and reliably retrieved from, or designed into, commonly available data sources (such as electronic health records or administrative claims) without imposing excessive burden on clinicians or patients;
- are likely to be high-value, that is, the challenges associated with developing or implementing the measure are outweighed by the potential benefits once implemented; and,
- rely on a data source (or sources) appropriate for pilot testing and accessible by the grantee for this purpose. The grantee need not be constrained to an existing data source if they have alternative methods or ideas for generating data, although their method must eventually be usable by others.
To optimize the likelihood of measure success, grantees are expected to seek input from multiple perspectives, including patients, and to work alongside technical experts as they develop and implement their measures. This requirement can be satisfied by forming an advisory panel, a series of panels, or ad hoc groups focused on operationalizing a measure so that it meets its intended goals. The purpose of this requirement is to ensure that the measure as imagined aligns with the measure as developed and implemented, and to assess the benefits and risks of implementation from multiple perspectives, including the patient, clinician, health care team, risk management, hospital organization, broader health care delivery system, and the technical team of informatics experts, data analysts, and others as needed. We will refer to this group (or groups) as the technical expert panel (TEP).
Grantees are expected to work with their advisory panel(s) to:
1. iteratively refine their measure to generate a high-value measure, and
2. iteratively operationalize the high-value measure into an algorithm (i.e., a set of steps that might involve collecting data, applying logic, and making calculations) to be pilot tested with a data source(s), and
3. implement the measure in real-time clinical settings, and
4. assess the success of their measure and revise as necessary.
- Rapid-cycle evaluation and revision is typically required for successful measure development. We favor teams that are agile enough to test and refine, recognize failure early, and revise their project in rapid cycles. Learning what does not work and demonstrating flexibility are desired features of this work, and information about failed approaches is considered an important output to be understood and shared.
- Grantees will receive technical assistance from Battelle, our consulting technical expert, to complete the grant deliverables described in Appendix C and to align their measures to specifications detailed in the CMS Measures Blueprint. This funding opportunity prioritizes ideas for measures that are likely to be impactful over the applicant’s experience in measure development. Creative and novel approaches are strongly encouraged.
- Participants will be invited to engage with other grantees in virtual or in-person meetings organized and funded by the Foundation to inform their work and mutually benefit from lessons learned from the cohort of grantees.
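As a purely illustrative sketch (not a required approach or an official template), the kind of algorithm described in step 2 above, collecting data, applying logic, and making calculations, might look like the following toy timeliness measure. Every field name, diagnosis code, and threshold here is a hypothetical assumption for illustration only:

```python
from dataclasses import dataclass

# Hypothetical encounter record; all field names are illustrative assumptions,
# not a real EHR or claims schema.
@dataclass
class Encounter:
    patient_id: str
    final_diagnosis: str       # e.g., an ICD-10 code
    hours_to_diagnosis: float  # time from presentation to diagnosis
    transferred_in: bool       # example exclusion criterion

def timely_diagnosis_rate(encounters, target_dx="I63", threshold_hours=4.0):
    """Toy measure: proportion of eligible encounters with the target
    diagnosis established within the time threshold."""
    # Denominator: encounters with the target diagnosis, minus exclusions.
    denominator = [e for e in encounters
                   if e.final_diagnosis.startswith(target_dx)
                   and not e.transferred_in]
    if not denominator:
        return None  # measure not calculable for this population
    # Numerator: those in the denominator diagnosed within the threshold.
    numerator = [e for e in denominator
                 if e.hours_to_diagnosis <= threshold_hours]
    return len(numerator) / len(denominator)
```

In a real project, each element of such an algorithm (cohort logic, exclusions, threshold) would be iteratively refined with the TEP and pilot tested against the grantee's data source.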
2. Award Information
a) Award amount
Projects will be awarded amounts varying from $250,000 to $500,000 for work done over 18 months.
Complexity of the measure proposed in the application will contribute to variations in the amount of the award. The measure type (outcome, process, patient-reported outcomes), scope (single setting, cross-setting, across specialties), and data source (e.g., electronic health records, registries, claims, multiple and/or linked data sources, novel approaches, etc.) will impact the assessment of the measure’s complexity.
b) Anticipated award dates
Project start date is approximately July 1, 2022.
c) Period of performance
This project is the first phase of a larger plan for measure development. Promising clinical measures may qualify for additional funding for successive phases of work to complete rigorous measure development and determine pathways for implementation.
3. Eligibility Information
a) Eligible applicants
Applicants should have an affiliation with an institution or sponsoring organization, including but not limited to academic institutions, health care delivery systems, medical and clinical specialty societies, patients and patient advocacy groups, medical liability and risk management organizations, independent research organizations, electronic health record vendors, and others with interest and/or expertise relevant to diagnosis measure development.
Successful applications will describe teams and partnerships that include a multidisciplinary group of experts, including clinicians with content expertise, individuals with appropriate analytic expertise (data science, statistics, measure development) and persons with experience using relevant data sources. An individual may satisfy more than one of these areas of expertise. Measure development expertise is helpful but not a requirement for funding.
b) Eligibility criteria
Applicants must be familiar with the U.S. health care system, and grant outputs must be feasible for implementation in the U.S.
Suitable measure concepts must be based on existing scientific evidence and/or clinical guidelines, not new or as yet untested diagnostic tests.
Our scope of funding is not directed at any of the following:
(1) development or evaluation of new diagnostic tests, products, or devices;
(2) development of new clinical guidelines, clinical prediction rules, or clinical decision support; or
(3) clinical investigations designed to test a hypothesis.
Examples of previously funded projects can be viewed on the Moore Foundation website at Moore.org (see Third cohort of patient care grantees develop ideas for improving diagnosis, Second cohort aim to develop novel clinical quality measures to improve diagnosis, and New projects aim to develop clinical quality measures to improve diagnosis).
4. Application Information
a) Content and form of applications
All applications must be submitted through the SurveyMonkey Apply online system. Each principal investigator is limited to three applications.
b) Submission Dates
This funding opportunity requires a multi-phased competitive application process. A summary of key dates and deadlines is shown below.
February 22, 2022: Online application opens
February 22 - May 16, 2022: Applications reviewed and funding decisions made and announced on a rolling basis
March 16, 2022, 11 am Pacific Time: Informational webinar about the RFP*
April 20, 2022, 11 am Eastern Time: Informational webinar about the RFP*
May 16, 2022: Final date on which applications will be received and considered
June 13, 2022: Latest date for funding decisions for later-arriving applications
July 2022: Estimated start of funding
September 2022, date TBA: Formal kick-off for the cohort experience
*Interested applicants should indicate their desire to attend a webinar in an email to diagnosis@Moore.org to receive login instructions.
5. Application Review Information
a) Evaluation criteria
Detailed instructions for application questions and our evaluation criteria are provided in Appendix D.
b) Review and selection process
This funding announcement initiates a multi-phased competitive application process soliciting ideas and strategies for diagnostic clinical quality measures.
In the initial application process, we request a short description of the proposed clinical quality measure, its potential to improve patient outcomes, the intended data source that will be used to test the idea, and a brief explanation of the methods planned to prepare the measure for implementation (beginning with feasibility and acceptability). Access to, and the ability to use, an existing data source is necessary; however, alternative and novel methods for testing will be considered. The application cycle will open on February 22, 2022 and close on May 16, 2022.
Applications will be reviewed and funding decisions made on a rolling basis beginning February 22, 2022. Applicants are encouraged to submit early in the application cycle for optimal consideration. All final awardees will be notified no later than June 13, 2022.
Funded projects are expected to begin in July 2022.
A formal kick-off meeting to launch the cohort will be held for all grantees in September 2022 (date and time TBA). The principal investigator and one other team member are expected to participate.
6. Additional Information
a) Questions can be directed to firstname.lastname@example.org.
b) Applicants are encouraged to refer to the Resource List provided below.
Recommended Resources For Applicants
National Academies of Sciences, Engineering, and Medicine. 2015. Improving Diagnosis in Health Care. Washington, DC: The National Academies Press.
National Quality Forum. Improving Diagnostic Quality and Safety. Sept 19, 2017.
McGlynn EA, McDonald KM, Cassel CK. Measurement is essential for improving diagnosis and reducing diagnostic error. A Report from the Institute of Medicine. JAMA. 2015;314(23):2501-2.
Singh H, Bradford A, Goeschel C. Operational Measurement of Diagnostic Safety: State of the Science. Rockville, MD: Agency for Healthcare Research and Quality; April 2020. AHRQ Publication No. 20-0040-1-EF.
1. Singh H, Meyer AN, Thomas EJ. The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations. BMJ Qual Saf. 2014 Sep;23(9):727-31.
2. Singh H, Sittig DF. Advancing the science of measurement of diagnostic error in healthcare: the Safer Dx framework. BMJ Qual Saf. 2015;24(2):103-10.
3. Leape LL, Berwick DM, Bates DW. Counting deaths due to medical errors. In Reply. JAMA. 2002;288(19):2405.
4. Saber Tehrani AS, Lee H, Mathews SC, et al. 25-Year summary of US malpractice claims for diagnostic errors 1986-2010: an analysis from the National Practitioner Data Bank. BMJ Qual Saf. 2013 Aug;22(8):672-80.
5. McDonald KM, Bryce CL, Graber ML. The patient is in: patient involvement strategies for diagnostic error mitigation. BMJ Qual Saf. 2013 Oct; 22(Suppl 2):ii33-ii39.
6. National Quality Forum. Improving Diagnostic Quality and Safety. Sept 19, 2017.
7. National Academies of Sciences, Engineering, and Medicine. 2015. Improving Diagnosis in Health Care. Washington, DC: The National Academies Press.
8. Newman-Toker DE, Schaffer AC, Yu-Moe CW, Nassery N, Saber Tehrani AS, Clemens GD, Wang Z, Zhu Y, Fanai M, Siegal D. Serious misdiagnosis-related harms in malpractice claims: the “Big Three” – vascular events, infections, & cancers. Diagnosis. 2019 Aug 27; 6(3): 227-40.
Appendix A. Defined terms as used in RFP
Clinical Quality Measure
A tool, method, or mechanism for assessing quality of patient care. Measures are commonly expressed as a proportion or rate, although novel expressions and techniques are evolving (and appreciated).
Diagnostic Clinical Quality Measure
A measurement or expression that describes quality of diagnosis, such as accuracy, timeliness, efficiency and/or safety of diagnosis (either as a process or an outcome).
Diagnosis
An explanation of a patient’s condition based on clinical criteria and scientific evidence, often rooted in an understanding of disease at the chemical, cellular, organ, or system level. In some cases, a diagnosis may exist only as a syndrome, i.e., a description of a set of signs, symptoms, or test results absent a full understanding of underlying pathophysiology.
Diagnostic Process
The steps, or series of steps, used to rule in or rule out disease to arrive at an explanation for a symptom or set of symptoms or a patient’s medical condition.
Measure Development
A multi-phased process for developing a clinical quality measure that includes measure conceptualization, specification, rigorous testing (for validity and reliability), assessment (for scientific acceptability, importance and need for the measure, and measure feasibility and scalability), and implementation that is guided by input from advisory groups that include stakeholders and patients.
Performance Gap
The difference between the intended and actual performance.
Appendix B: Measure specifications. See CMS Measures Management System Blueprint for more details on terms and measure development.
Measure type
Structure, process, outcome.
Target population
The group for whom quality of care is being assessed, with specifications such as age, gender, comorbidities, etc.
Type of score
Such as count, rate/proportion, ratio, categorical, or other.
Calculation
Define numerator, denominator, exclusions (as appropriate).
Data source
Source from which data are obtained for measurement, such as administrative claims data, electronic health records, patient registries, or other novel data sources.
Unit of measure, or unit of analysis
The accountable entity whose performance is being measured, such as an individual clinician, group practice, health plan, or geographic region.
Care setting
The specific site where care is provided and data generated, such as hospital, ambulatory clinic, emergency department, etc.
Measurement period
Time period in which data are aggregated to calculate the measure result.
Risk adjustment for outcome measures
Account for differences in patient case mix.
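To make the specification elements above concrete, here is one hypothetical way a team might record a draft specification as a simple structured object. This is entirely illustrative: the keys mirror the Appendix B elements rather than any official CMS schema, and every value is invented:

```python
# Hypothetical draft measure specification, illustrative only.
# Keys follow the Appendix B elements, not an official CMS Blueprint format.
measure_spec = {
    "measure_type": "outcome",
    "target_population": "adults >= 18 presenting to the ED with acute chest pain",
    "score_type": "rate/proportion",
    "numerator": "eligible patients with myocardial infarction diagnosed within 4 hours",
    "denominator": "eligible patients with a final diagnosis of myocardial infarction",
    "exclusions": "patients transferred in from another facility",
    "data_source": "electronic health records",
    "unit_of_analysis": "individual clinician",
    "care_setting": "emergency department",
    "measurement_period": "rolling 12 months",
    "risk_adjustment": "adjust for age, sex, and comorbidity burden",
}
```

Writing the specification down in a structured form like this early can help the TEP check that every element (numerator, denominator, exclusions, data source) is actually retrievable from the intended data source before testing begins.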
Appendix C. Grant Deliverables and Activities
1. Participation as a member of a cohort of Moore grantees in activities managed by our technical assistant, Battelle.
Attend cohort activities, such as in-person or virtual kick-off and mid-grant meetings.
Attend webinars and structured group activities.
Participate in setting and monitoring milestones to reach objectives of the grant.
2. Technical Expert Panel (TEP) Report
Recruit and convene a TEP throughout measure development.
Produce a summary of the TEP feedback.
3. Evidence that patients were involved in measure development in the form of a report or summary
Engage patients in measure development.
Summarize patient feedback and describe their impact on the final measure concept.
4. Clinical quality measure(s) with technical specifications, tested and iteratively refined
Provide a preliminary measure concept, algorithm, and initial measure specifications.
Develop a testing plan.
Complete formative testing for validity and feasibility.
Complete quantitative testing to establish internal and external validity.
5. Measure evaluation documentation
Document evidence of measure importance.
Submit evidence of measure validity and reliability.
Submit evidence of measure feasibility.
Submit evidence of measure usability.
Document that the measure meets a need.
Provide evidence of scientific acceptability of measure.
6. Final report of measure concept with specifications after testing
Describe the final measure and all testing results. Include a description of next phases of work for final measure development, endorsement, and/or implementation.
Appendix D. Guide to Application (available for download here)
Criteria for Evaluation
Supplementary Instructions for Application Questions
Formatting text for questions:
Responses to the application questions should be prepared in accordance with the AMA Manual of Style. Responses can be entered directly into the application site or copied and pasted from a Word document. Each question has a word limit shown in the supplementary guide (Appendix D). The word limit is also displayed once you begin entering text in each text box.
Define all abbreviations when first mentioned within the application, then use the abbreviation afterwards.
Number references in the order in which they appear in the application. Identify references in parentheses at the end of a sentence, like this: (1).
The reference list should be uploaded where requested near the end of the application. The reference list should follow AMA style and use journal abbreviations as listed in PubMed. List up to six authors or editors; if there are more than six, list the first three followed by “et al”.
Tables and Figures:
Number all tables and figures in the order in which they are cited in the application. Call them out at the most appropriate point in the text, like this: (Table 1), (Figure 1). All tables and figures should be prepared in a single document and uploaded together where prompted.
Address a priority condition or problem
Question 1: What disease category does your measure target? Select one or more of the following:
Question 2: Identify the specific diagnosis you are trying to improve and the problem you are trying to solve. (50-100 words)
Guidance: Priority will be given to proposals that address conditions that are common and that contribute to serious preventable harm when diagnosis is wrong or delayed.
Identify and address a performance gap and assess potential impact of measure
Question 3. Describe the epidemiology of the problem you are addressing. What are the diagnostic challenges and performance gaps that need improvement and which of these will your quality measure address? (250-500 words)
Guidance: A performance gap is the difference between the intended or desired performance and the actual performance achieved. Successful proposals should target areas that have significant and quantifiable variation in performance that are amenable to improvement.
Example response for a hypothetical condition: Disease “X” is a significant cause of death from infection, causing 50,000 deaths each year in the U.S. and contributing to chronic pain and disability in another 250,000 who survive. Diagnosis is difficult because the exam findings are subtle and typical symptoms overlap with other common, benign conditions. Reviews of missed cases often find that documentation of important physical findings is lacking. Improvements in detection of abnormal physical exam findings will improve the accuracy and timeliness of diagnosis and improve outcomes.
Question 4. If the measure is successful, how would it improve care? Describe the expected impact on patient outcome (survival, quality of life) and/or experience (e.g., convenience, cost, efficiency, comfort). Estimate the potential benefit in both quantitative and qualitative terms. Cite evidence to support your argument. (150-250 words)
Guidance: Example response for a hypothetical example of disease “X”: Smith showed that focused attention to two key findings on physical exam improved the early detection of disease “X”.1 Not all cases will be typical, and some patients may present too late in their course to benefit from rapid evaluation. Assuming a modest improvement in diagnostic accuracy of 20%, the improved exam could save 10,000 lives and improve functional recovery in another 50,000.
Measure concept and specifications
Question 5. State your measure concept(s) succinctly. Define the algorithm, including numerator, denominator, and inclusion and exclusion criteria. Include the specific diagnosis, target population, unit of analysis, the anticipated data source, and the care setting.
If your concept cannot be expressed as a ratio, list the terms used to define and calculate the measure.
If you plan to develop more than one measure, please describe each and explain if they are intended to be used independently or contribute to a composite measure or a balanced set of measures. (100-250 words)
Guidance: An acceptable quality measure concept should describe clearly what will be measured and how the measure will be scored. Please refer to Appendix B in the RFP for definition of terms and expected details of the measure concept. Applicants are encouraged to review the CMS Blueprint as they develop their measure concept and complete this application.
Question 6a. Describe and characterize the data source(s)* that will be used to develop and test your measure, including:
Question 6b. Explain your rationale for the data source(s) and defend the appropriateness for your measure. (100-250 words)
Guidance: *Access to this data is a requirement for participation in this grant and should be obtained prior to the start of the project.
Question 7. Describe your plan for testing to assess validity, reliability, feasibility, and usability of your measure concept. (350-500 words)
Engagement of stakeholders, experts, and patients
Question 8. Who are the stakeholders and experts that should be involved in your measure development? Describe your plan to recruit a technical expert panel (TEP) and how you plan to engage them in the design and evaluation of your measure concept. (250-500 words)
Guidance: A TEP is a requirement of this grant activity.
Question 9. How do you plan to get input from patients, their families and caregivers, and/or patient advocates throughout measure development? (100-250 words)
Guidance: Successful measure development should consider the perspective of patients and patient involvement in measure development is a requirement. If needed, our technical assistant can advise you on how to successfully recruit and engage patients.
Generalizable and Acceptable
Question 10. Does the proposed measure assume a standard of care that is supported by evidence and that is generally accepted throughout the medical community? Is the standard widely applicable to a variety of health care settings and geographic areas (e.g., academic and community practice, urban and rural)? Cite the evidence to support your standard. (250-500 words)
Question 11. Is the measure concept unique, or does it add to existing measures in a meaningful way? How do you know? Define your search criteria and sources. Identify, describe, and compare other measures that relate to the targeted condition and justify how your measure addresses an unmet need. (150-500 words)
Question 12. Is there anything about your data source, analytical approach, or project strategy that is new or innovative? If so, describe. (150-250 words)
Guidance: We welcome applicants using traditional methods but are also open to novel ideas and approaches to measuring and improving diagnosis.
Measure Risk and Burden
Question 13a. What are threats to successful development and uptake of your measure? How do you imagine making the measure(s) feasible, generalizable, and usable across different health care settings? (250-500 words)
Question 13b. What is the risk that the potential benefits of the measure may be outweighed by potential harm from unintended consequences of the measure (e.g., undue focus on one area may create neglect of other, equally important areas with competing priorities or force actions that compromise other aspects of care) or added burden generated by the measure itself (e.g., add to clinician load or disrupt workflow)? How do you plan to mitigate these risks? (100-250 words)
Measure uptake and maintenance
Question 14. (optional) Do you plan to develop implementation resources to facilitate uptake and use of the measure? If so, describe. (0-200 words)
Question 15. What is your long-range plan for use of the measure beyond your local institution to improve quality on a regional or national level? What purpose is the measure best suited for? National quality improvement initiatives? CMS accountability program? Merit-based Incentive Payment System (MIPS)? Do you plan to submit for NQF endorsement? If successfully adopted for use by CMS, what is your plan for maintenance and stewardship of the measure? (100-250 words)
Guidance: NQF endorsement attests to rigor in measure development and is a desired long-term outcome of this grant. Our technical assistant will help you develop a scientifically sound measure, provide guidance about NQF endorsement, and help identify pathways for dissemination and use of the measure.
Quality and expertise of team
Question 16. Briefly list and describe your team members. Limit information for each member to their name, area of expertise, and role they will play in the project. (250-500 words)
Guidance: We will review PI and co-investigator CVs for more details. Answers should be restricted to the information requested.
Question 17. What experience does your team have in clinical quality measure development? Describe specific measures your team has developed; provide details about your success with obtaining NQF endorsement and developing measures that have been adopted by CMS, used in MIPS, or implemented in other quality efforts. (100-250 words)
Guidance: While experience with clinical quality measure development is useful and desirable, it is not a prerequisite for funding. This funding opportunity is designed to identify important, interesting, novel, and potentially impactful measure concepts and will pair grantees with a technical assistance partner to optimize their success in measure development.
Project risk and mitigation
Question 18. Every project encounters challenges. Describe anticipated and possible project risks and how you plan to mitigate them (e.g., access to data, IRB approval, timeline concerns, etc.). (100-250 words)
Diversity and Equity
Question 19: Please describe how you will promote equity, inclusion, and diversity through your proposed project and team, and how you have worked to advance equity, inclusion, and diversity in the past. (0-250 words)
Question 20: Does your proposal address diagnostic issues that disproportionally affect certain demographic groups or historically underserved populations? If so, describe. (0-250 words)