What We Measure Matters: Enhanced Performance Metrics for SNAP and Medicaid Would Promote a More Human-Centered Delivery System

The Supplemental Nutrition Assistance Program (SNAP) and Medicaid help tens of millions of people with low incomes put food on the table and obtain needed medical services. But the traditional measures of program performance say little about the human experience of accessing benefits. For example, how easy is it to apply for benefits? How long must someone wait on the phone to get information about their case? How often do paperwork hurdles trip up a person trying to keep their benefits current? These shortcomings matter both during stable times, when the economy is strong, and when challenges like a recession, a pandemic, or increased workload due to policy or system changes deliver a temporary shock to state systems. Policymakers and people seeking assistance need and deserve a clear understanding of how well the system serves people and where it breaks down.

Congress and the federal agencies providing program oversight should require all states to report a uniform, well-defined set of human-centered metrics for SNAP and Medicaid. The Safety Net Scorecard, prepared by Code for America and the Center on Budget and Policy Priorities (CBPP), provides a comprehensive list of such measures.[1] If adopting the full scorecard is not currently feasible, this report presents a few key metrics, or “vital signs,” that federal policymakers should require states to report. This would provide valuable insights at an important time, as states adjust to the end of pandemic-era federal rules in these programs, and would mark a first step toward future improvements in data collection and reporting.

Federal action is critical to provide comparable data across states and localities that can illustrate how differences in policy, process, and technology affect people’s experiences accessing benefits. States can also help lead the way by adopting their own strong performance measures to guide their management and decision-making.

End of COVID Policies Adds to Need for Improved Metrics

Several COVID-related policy provisions have ended in SNAP and Medicaid:

  • In Medicaid, states are “unwinding” the continuous coverage provision, which since March 2020 had required states to maintain Medicaid coverage for most enrollees. This spring, states began resuming Medicaid eligibility redeterminations and ending coverage for those found ineligible. This process, which will last through May 2024, is straining a system already weakened by staffing shortages.
  • In SNAP, the end of the public health emergency means the additional flexibilities Congress gave states to manage their workload are ending. So are two key eligibility expansions, for college students and unemployed adults without children in the home; states will therefore need to devote additional resources to properly screen such individuals for eligibility under regular program rules.[2] In addition, the debt limit deal enacted earlier this year extended SNAP’s work-reporting requirement to older adults and includes new exemptions for people experiencing homelessness, veterans, and youth aging out of foster care; these provisions will add to state workload pressures as well.[3]

These changes highlight the need for timely, meaningful performance metrics to guide policymakers through the many challenges facing federal, state, and county governments. The increased workload resulting from the end of COVID-era flexibilities will be difficult for many state and county agencies to handle. Clients may face barriers reaching eligibility workers due to fewer open offices, long call center wait times, or difficulty accessing online functions. Eligible families and individuals could face delays or lose benefits entirely if overwhelmed agencies fail to process renewal paperwork. State and local agencies, the federal government, and advocates need timely information about how these challenges affect the human experience of getting and maintaining benefits in order to remedy these issues as soon as possible.

Even in the best of times, administrative barriers in accessing benefits can make programs less effective at meeting their core purposes of improving individuals’ health, economic, and social well-being. In addition, systems that are difficult for individuals and families to navigate are likely to be inefficient and more costly for government to administer. A recent Office of Management and Budget analysis of economic and health assistance programs found that these administrative burdens “do not fall equally on all entities and individuals, leading to disproportionate underutilization of critical services and programs, as well as unequal costs of access, often by the people and communities that need them the most.”[4]

Existing Measures Tell Little About How Programs Work for Participants

SNAP and Medicaid programs sometimes differ widely from state to state in terms of participation rates, timeliness of benefit delivery, and other metrics, but states report very little information to federal agencies to assess what practices account for this variation. The current SNAP performance metrics over-emphasize payment errors and other improper payments, while saying little about customer service or the human experience of accessing benefits.[5] And while considerable Medicaid data is publicly available, it primarily focuses on participation and how Medicaid health services are used, such as quality of care and cost measures. There are new, temporarily required Medicaid metrics (detailed below) that shed more light on the customer experience — the federal government should build on these temporary requirements to create strong, permanent performance measures.

The federal agencies that administer SNAP and Medicaid publish three types of program measures reported by states (excluding the temporary data reporting in Medicaid discussed more below):

  • descriptive measures about basic program delivery, such as the number of participants or beneficiaries and the amount or types of benefits they receive;
  • “timeliness,” or the share of applications that the state processes within federal rules; and
  • compliance or program integrity measures, such as the combined payment error rates for SNAP and the Payment Error Rate Measurement for Medicaid.[6]

In addition, researchers (under government contract or independently) periodically publish information about program participation rates among eligible individuals and families based on Census Bureau surveys and other ad hoc analyses of program operations.

While essential for understanding the programs’ reach and basic operations, these measures do not provide enough information for state and federal policymakers or the public to assess the experience of individuals seeking assistance from SNAP and Medicaid. Applicants may ultimately secure benefits, but how long did the process take and how many phone calls and trips to local offices were required? Were recipients able to successfully renew their benefits, or did they lose benefits and have to navigate the full application process again because they or the state or county agency failed to complete some step in the complex process?

Safety Net Scorecard Would Enable More Human-Centered Approach

To jump-start a conversation about the types of measures that states and the federal government could adopt as part of a more human-centered approach to SNAP and Medicaid benefit delivery, Code for America and CBPP designed the Safety Net Scorecard in 2021. It includes a comprehensive list of measures that federal and state governments can use to track how effectively safety net programs provide benefits over time and across states or other jurisdictions.

The scorecard measures cover three categories:

  • Equitable access: Are all access points to the program ― including digital, telephone, and in-person services — accessible to all people? How difficult is it to apply or to get questions answered? Are those who apply satisfied with their experience?
  • Effective delivery: How long does it take to receive benefits? How commonly are cases denied due to procedural reasons, as opposed to reasons related to financial eligibility? Can people who remain eligible successfully maintain eligibility?
  • Compassionate integrity: What share of eligible people participate? Do eligible people receive the benefits for which they qualify? How accurate are eligibility and benefit determinations? How smooth is the appeals process?
[Figure: The three Safety Net Scorecard categories ― Equitable Access, Effective Delivery, Compassionate Integrity]

To meet their core purposes of improving food security and health coverage among low-income households,[7] SNAP and Medicaid must reach all individuals and families who are eligible and do so efficiently, without breaks in coverage when people remain eligible. The types of measures outlined in the Safety Net Scorecard would provide more nuanced information to policymakers about the experience of applying for and maintaining access to benefits.

These measures also would greatly aid states in uncovering inequities in access to benefits. To address such inequities, states need to collect complete and consistent data by race, ethnicity, and language. This has been challenging due to inconsistent reporting by states, applicants’ mistrust and confusion about what’s being asked, and differences in measurement across states. Without the ability to meaningfully disaggregate data, states and federal agencies will struggle to understand where further improvement is needed.

Some states already collect some of the measures we suggest to help manage their operations or make the case to legislators or the public that the programs are well managed. Some measures, such as call center metrics that rely on standardized software, may be more readily available across states than others that can be more challenging to track and measure, such as program “churn.” While individual states can use the measures in the scorecard to assess their own performance in human-centered benefit delivery, ideally key data points would be collected for every state using the same methodologies to permit cross-state comparisons.

In December 2022, Congress took a step in the right direction and passed enhanced Medicaid reporting requirements to help monitor the unwinding of the continuous coverage provision. From April 2023 through June 2024, states must report the following metrics each month for Medicaid:

  • renewal data (i.e., total renewals initiated, ex parte renewals,[8] and the number of individuals terminated);
  • terminations for procedural reasons, such as the state not receiving a completed renewal packet from an enrollee;
  • individuals transferred to the Children’s Health Insurance Program (CHIP);
  • transfers to marketplaces and enrollment in qualified health plans; and
  • call center volume, wait time, and abandonment rate.[9]

These new Medicaid reporting requirements establish a foundation for Congress and the Administration to work toward incorporating human-centered performance metrics in public benefit programs. These new metrics, though temporary, will also allow us to understand some of the challenges (such as inconsistencies in the data or other technical issues) of requiring these kinds of data of all states and of interpreting and using the data.

Adding “Vital Signs” Measures Could Constitute Initial Step

If requiring the full complement of data points in the Safety Net Scorecard is not feasible, at least initially, policymakers should build a few key metrics ― or “vital signs” ― into state program performance measures to provide valuable insights during a period of increased workload and rapid change. This could also lay the groundwork for expansion into other data in the future by informing the types of information and support that states need for successful implementation. The vital signs that can quickly and most effectively begin to track access to health and economic supports are:

  • timeliness of application, renewal, and recertification processing: percent of applications and renewals completed within specified time frames;
  • procedural denials and closures: number and percent of applicants/participants denied, or cases closed, for procedural reasons, including:
    • missed interviews,
    • missing verifications, and
    • unreturned renewal and interim reporting forms;
  • ex parte renewal rates (Medicaid): number and percentage of renewals completed using data sources without requiring action from enrollees;
  • backlogs: number of applications and renewals/recertifications pending beyond the date they were due to be completed;[10] and
  • call center volume, wait times, and answer rates.
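For a state data team, most of these vital signs reduce to simple ratios over monthly case records. The sketch below (in Python, using hypothetical field names rather than any actual state or federal reporting schema) illustrates how three of them — timeliness, procedural denial rates, and backlogs — could be computed:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical case record; real state systems use their own schemas.
@dataclass
class Application:
    received: date
    decided: Optional[date]       # None if still pending
    outcome: Optional[str]        # "approved", "denied", or None
    denial_reason: Optional[str]  # e.g., "missed_interview", "income"

# Illustrative reason codes for procedural (non-financial) denials.
PROCEDURAL_REASONS = {"missed_interview", "missing_verification", "form_not_returned"}

def vital_signs(apps, as_of: date, deadline_days: int = 30):
    """Compute illustrative vital-sign metrics for one month of applications."""
    decided = [a for a in apps if a.decided is not None]
    timely = [a for a in decided if (a.decided - a.received).days <= deadline_days]
    denied = [a for a in decided if a.outcome == "denied"]
    procedural = [a for a in denied if a.denial_reason in PROCEDURAL_REASONS]
    # Backlog: still pending past the processing deadline as of the report date.
    backlog = [a for a in apps
               if a.decided is None and (as_of - a.received).days > deadline_days]
    return {
        "timeliness_pct": 100 * len(timely) / len(decided) if decided else None,
        "procedural_denial_pct": 100 * len(procedural) / len(denied) if denied else None,
        "backlog_count": len(backlog),
    }
```

The same three ratios could be run per county, language, or race/ethnicity group to produce the disaggregated reporting recommended below.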

State agencies should report these metrics monthly, as consistently across states as possible, for both SNAP and Medicaid, and the state data should be made publicly available. Wherever possible, measures should be disaggregated by language, gender, race/ethnicity, and disability status.

Some of these recommended vital sign measures are already reported to federal agencies for Medicaid and/or SNAP, but the data are not made public or reported in a timely manner, or reporting is only temporarily required. (See Appendix B.) This means that in some instances, federal agencies could simply publish the data they already collect from states. If agencies are not publishing currently collected data due to concerns about its quality, they should work through these issues, so the effort states put into submitting data is worthwhile.

The vital sign measures described above would provide a sense of how program operations are faring on the ground and would be relatively feasible for states to implement. Additional measures are worth pursuing over the longer term but may need further testing and discussion with states to determine how best to calculate them and ensure comparability across states. Fortunately, some states have already developed and track these metrics and in some limited cases publish them,[11] and the Centers for Medicare & Medicaid Services (CMS) also recently published historical state-level data on Medicaid churn.[12] This existing data reporting could be a starting point for developing metrics that all states would report. Those metrics include:

  • churn: percent of cases that lose coverage at a renewal or periodic report and re-enroll within the following 90 days;[13]
  • customer satisfaction: how satisfied clients are with their experience;
  • cross-program enrollment: percent of participants in one program who are also enrolled in another program; and
  • participation rates: percent of eligible people the programs are reaching (a monthly basis may not be feasible for this measure).
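Churn is the hardest of these to standardize because states would need a shared definition of the re-enrollment window. A minimal sketch of the 90-day definition used in this report, with illustrative inputs (a mapping from case ID to closure date and to the next application date, not drawn from any state system):

```python
from datetime import date, timedelta

def churn_rate(closures, reapplications, window_days=90):
    """Share of closed cases that reapply within `window_days` of closure.

    `closures` maps case_id -> closure date; `reapplications` maps
    case_id -> date of that case's next application, if any.
    """
    if not closures:
        return None
    churned = sum(
        1 for case_id, closed in closures.items()
        if case_id in reapplications
        and timedelta(0) <= (reapplications[case_id] - closed) <= timedelta(days=window_days)
    )
    return 100 * churned / len(closures)
```

Cross-state comparability would require every state to apply the same window and the same denominator (all cases closed at renewal, not all closures), which is why the metric needs the testing and discussion noted above.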

To implement the metrics outlined above, states and counties would need federal guidance, technical assistance and support, and ideally funding and incentives to expand the amount of data they collect and share on a regular basis. Additionally, the metric development process should engage state and county agency leaders in planning for their full-scale implementation and identifying the types of technical assistance and support states need to collect data that are comparable across states. This would encourage buy-in and a thoughtful implementation approach that could ultimately be successful.

Enhanced Metrics Could Improve Clients’ Experience, Program Administration

The basic outlines of SNAP and Medicaid are similar across the country, yet state policies and processes to administer the programs vary significantly. Federal policy allows for, and in some cases requires, streamlined processes such as lengthening the time between required renewals, using electronic data sources to verify eligibility, and leveraging enrollment in one program to simplify enrollment or renewal in another. The federal government provides substantial financial resources for system modernization and technology improvements. Creating a more efficient process is good government and is crucial to advancing health and racial equity, as the burdens imposed by inefficient processes disproportionately affect people of color. But to understand these access barriers and measure progress in reducing them, we must first have human-centered metrics that better measure the client experience.

The metrics presented here could help states and the federal government shed light on where state processes to deliver benefits succeed or fall short for applicants and beneficiaries. Measuring human-centered program performance for geographic or operational areas (such as administering counties, urban areas, or remote areas) or for demographic groups (such as race/ethnicity, language spoken, disability status, age, gender identity, or sexual orientation) would provide additional information that states and federal agencies could use to ensure that state administrative systems provide benefits to the people who need them and identify localized problems or areas of strong implementation that could provide models to other areas within a state. Using enhanced metrics to compare performance across states could more clearly illustrate which states are top performers and have best practices to share, as well as which states need further attention from state leadership, federal agencies, and state and national advocates.

The metrics presented here could also help states better manage future periods of change and experimentation. During the pandemic, the lack of metrics hamstrung both advocates and state and federal agencies in evaluating the effect of the rapidly changing policy and operational landscape, as the federal government allowed states to suspend some requirements and implement strategies to help individuals retain benefits. Did eliminating the SNAP interview significantly reduce procedural denials? How did the continuous coverage requirement in Medicaid reduce churn and affect call center volume? Our inability to answer these questions made it more difficult for program administrators, policymakers, and advocates to see which strategies worked and which didn’t, limiting the system’s ability to make real-time course corrections.

Coming Opportunities to Advance More Human-Centered Measures

As states “unwind” pandemic provisions such as Medicaid continuous coverage and SNAP workload waivers, we will have access to some public human-centered metrics for Medicaid, such as call center wait times and rates of procedural denials. The temporary Medicaid data collection required as part of unwinding will also allow us to understand the challenges of requiring all states to collect and report these kinds of data and of interpreting and using that data. A stronger, permanent, human-centered data collection system that produces timely, accurate, and actionable data can improve program performance during more normal times and strengthen our ability to respond to urgent events that arise, including natural disasters, economic recessions, or significant changes to a state’s systems or processes.

Progress toward more human-centered measures could occur through federal legislation, federal agency oversight actions, or state innovation. Ultimately, the federal agencies providing program oversight need to require all states to report a uniform, well-defined set of human-centered metrics for both SNAP and Medicaid that the federal government would publish consistently and promptly.

Congress revisits SNAP every five years in the farm bill. As debate progresses over the next farm bill, scheduled for 2023, there may be opportunities to include provisions on performance metrics as part of SNAP modernization and oversight. In addition, federal policymakers should extend the state reporting requirements for key Medicaid data to ensure program integrity and add a similar set of metrics in SNAP as part of the farm bill. Momentum in both programs to move in a similar direction could create the kind of environment needed to encourage both the Agriculture Department and CMS to collaborate — in partnership with the White House — to develop ongoing metrics that would benefit both programs and coordinate across the two programs to ensure that metrics are as aligned as possible. The federal agencies also have opportunities within their own authority to make the reporting they require from states more meaningful and available to the public.

There are also opportunities for state agencies, state advocates, and researchers to demonstrate why such metrics are needed, what the ideal set of metrics should be, and how they should be measured. For example, states could implement measures to meet their own program goals — or respond to a state-specific requirement — and show federal policymakers how these measures could drive program improvement.        
 

Appendix A: National Safety Net Scorecard Metrics

Equitable Access

Online Accessibility – Is the online experience of the service simple and easy to use?

  • % of applications submitted via paper, desktop, mobile, landline phone, in person
  • % of applications with at least one verification document submitted electronically among applications where verification was required
  • Of website visitors who begin the application process, % who are unable to successfully create an account, if applicable
  • Of website visitors who begin the application process, % who are unable to successfully complete remote identity proofing, if applicable

Mobile Accessibility – Does the service work easily on a mobile phone?

  • Of applications submitted online, % of applications submitted from mobile, tablet, and desktop
  • % of recertifications and periodic reports completed via mobile, tablet, and desktop

Call Center Accessibility – Is the call center easily accessible to all users?

  • Average # of calls per day
  • Average call wait times (By language)
  • Call abandonment rate: % of calls dropped before the customer speaks to an agent (By language)
  • First call resolution rate: % of issues resolved with one call (By language)

Local Office Accessibility – Are in-person locations easily accessible to all visitors?

  • # of visitors to each local office relative to staffing ratio
  • Average wait times (By language)

Application Burden – How difficult is the application process for the service?

  • Average # of minutes to complete an application and renewal, via desktop and mobile
  • Application completion rate: # of complete, submitted applications compared with applications started via desktop or mobile (By language)

Customer Satisfaction – How satisfied are clients with the application experience?

Measure customer satisfaction rates via user responses to these recommended questions:

  • How easy or difficult was it to complete the application?
  • How confident were you about which programs to apply for?
  • How confident are you that you answered the questions correctly?
  • How confident are you that you know what the next steps in the process are?

Effective Delivery

Application Outcomes – Who is approved and who is denied for benefits?

  • Total application volume: # of cases per week/month
  • % approvals and denials
    • By application type (online, mobile, in person, paper, landline phone)
    • By race/ethnicity, language preferences, and other key demographics
    • By annual household income (e.g., $0, up to 100% FPL, over 100% FPL)
    • By common denial reasons

Procedural Denials – How many applicants are denied for reasons outside of financial eligibility?

  • % of applicants denied for procedural or administrative reasons
    • By denial reason (e.g., missed interview, missing documents)
    • By race/ethnicity, language preferences, and other key demographics

Timeliness – How long does it take for people to receive benefits?

  • Average # of days from application date to determination, for both approvals and denials (by expedited and regular service)
  • Average # of days from case approval to EBT card activation
  • % of Medicaid determinations completed within 24 hours
  • % of SNAP applications determined within the mandated time frame (30 days for regular, 7 days for expedited service)
  • % of Medicaid applications determined within the mandated time frame (45 days for standard application, 60 days for disabled applicants)

Expedited Service (SNAP) – Are clients who qualify receiving reliable expedited service?

  • % of applications processed as expedited
  • % of expedited applications processed within 7 days
  • % of expedited participants whose benefits continue

Interview Completion – Are clients able to complete the required interview?

  • % of interviews completed over the phone
  • # of cases denied for missed interview (By language and other key demographics)

Notifications – Are notifications effective at reaching clients?

  • % of clients opted in for text message
  • % of clients opted in for email
  • % of email notifications opened
  • % of notices returned as undeliverable (by paper and email)

Verifications – Are clients able to submit accurate verification documentation?

  • % of cases that request verification
  • % of cases denied for missing verification (by race/ethnicity, language preferences, and other key demographics)

Renewals – Are clients able to successfully renew their benefits?

  • % of SNAP periodic reports approved and % denied
    • % denied for form not returned
    • % denied for missing verification
  • % of SNAP recertifications approved and % denied
    • % denied for form not returned
    • % denied for missed interview
    • % denied for missing verification
  • % of SNAP periodic reports and recertifications submitted online
  • % of Medicaid renewals approved and % denied
    • % of renewals completed ex parte (By MAGI / Non-MAGI)
    • % denied for form not returned
    • % denied for failure to submit verification
  • % of cases that lose benefits mid-certification (By reason)

Churn – How often are clients churning off of programs they are eligible for?

  • % of new applicants that received benefits in the past 60 days
  • % of cases with a renewal or report due that then reapply for benefits within the following 30, 60, 90 days (By periodic report vs. recertification)

Compassionate Integrity

Participation Rate – Is the program reaching everyone who’s eligible, with a focus on identifying and prioritizing populations that are often underserved by government?

  • # of SNAP participants compared with the total # of estimated eligible people
  • # of Medicaid participants compared with the total # of estimated eligible people
  • Participation rates by race/ethnicity, language, age group, and other demographic markers
  • Cross-program enrollment rates (e.g., % of Medicaid enrollees who are also enrolled in SNAP, % of SNAP households with a child under 6 who are also enrolled in WIC, etc.)
  • % of applicants who are applying for the first time

Accuracy – How accurate are benefit allotments?

  • Total amount of overpayments and underpayments as a share of issuance
    • % of cases with overpayment
    • % of cases with underpayments
  • Accuracy of determinations
    • % of cases inaccurately denied
    • % of cases inaccurately approved
  • # of overpayments filed against clients
    • % for intentional program violation
    • % for agency error
    • % for household error

Appeals/Hearings – How smooth and responsive is the appeals process?

  • # of appeals made
  • % of appeals where the agency decision is upheld
  • % of hearings where agency decision is reversed

Appendix B: Current Reporting and Availability of Recommended Vital Sign Measures

APPENDIX TABLE 1
For each measure, the entries below indicate whether it is (a) reported to the federal agency but not published, (b) reported to the federal agency and published with a delay, (c) temporarily required to be reported and published for Medicaid unwinding, or (d) not currently reported.

  • SNAP: Timeliness of application and recertification processing
    • Reported to federal agency but not published: recertifications
    • Reported to federal agency and published with a delay: applications
  • Medicaid: Timeliness of application and renewal processing
    • Reported to federal agency and published with a delay: MAGI applications
    • Not currently reported: non-MAGI applications and all renewals
  • SNAP: Procedural denials and closures
    • Not currently reported
  • Medicaid: Procedural denials and terminations
    • Reported to federal agency but not published: procedural denials (applications)
    • Temporarily required and published for Medicaid unwinding: procedural terminations (renewals)*
  • Medicaid: Ex parte renewal rates
    • Temporarily required and published for Medicaid unwinding
  • SNAP: Backlogs
    • Reported to federal agency but not published: states report information about overdue applications and recertifications once they are processed, on FNS Form 366B
    • Not currently reported: no information is reported regarding pending overdue applications and recertifications
  • Medicaid: Backlogs
    • Reported to federal agency but not published: applications
    • Temporarily required and published for Medicaid unwinding: renewals*
  • SNAP: Call center volume, wait times, and answer rates
    • Not currently reported
  • Medicaid: Call center volume, wait times, and answer rates
    • Temporarily required and published for Medicaid unwinding*

*These Medicaid measures are in performance indicator data that states report to CMS but CMS did not previously publish. It is unclear what data CMS will continue to publish after unwinding is completed.

Note: The assessment here is regarding whether all states report to the federal government and whether the federal government publishes the information. Many states do collect the information, and a few make it publicly available.


End Notes

[1] The scorecard and other materials are available at https://codeforamerica.org/programs/social-safety-net/scorecard/. The complete set of metrics is available in Appendix A.

[2] CBPP, “States Are Using Much-Needed Temporary Flexibility in SNAP to Respond to COVID-19 Challenges,” updated January 25, 2023, https://www.cbpp.org/research/food-assistance/states-are-using-much-needed-temporary-flexibility-in-snap-to-respond-to.

[3] Katie Bergh and Dottie Rosenbaum, “Debt Ceiling Agreement’s SNAP Changes Would Increase Hunger and Poverty for Many Older Low-Income People; New Exemptions Would Help Some Others,” CBPP, May 31, 2023, https://www.cbpp.org/research/food-assistance/debt-ceiling-agreements-snap-changes-would-increase-hunger-and-poverty-for.

[4] Office of Management and Budget, “Study to Identify Methods to Assess Equity: Report to the President,” July 20, 2021, https://www.whitehouse.gov/wp-content/uploads/2021/08/OMB-Report-on-E013985-Implementation_508-Compliant-Secure-v1.1.pdf.

[5] One SNAP measure plays an outsized role in state policy decisions and federal oversight: the payment error rates determined through the SNAP Quality Control (QC) system. Payment accuracy is an important measure of SNAP’s performance, but an overemphasis on payment accuracy, without balanced measurement of SNAP’s core goals, risks distorting states’ decision-making and leaving policymakers and the public with inadequate information about how well the program works for households.

[6] U.S. Department of Agriculture (USDA), Food and Nutrition Service, “SNAP Data Tables,” https://www.fns.usda.gov/pd/supplemental-nutrition-assistance-program-snap; USDA, Food and Nutrition Service, “SNAP State Activity Reports,” https://www.fns.usda.gov/pd/snap-state-activity-reports; USDA, Food and Nutrition Service, “FY 2019 Reported Application Processing Timeliness,” May 13, 2022, https://www.fns.usda.gov/snap/fy-2019-reported-application-processing-timeliness; USDA, Food and Nutrition Service, “SNAP Quality Control Annual Reports,” https://www.fns.usda.gov/snap/QC/annual-reports; Centers for Medicare & Medicaid Services, “PERM Error Rate Findings and Reports,” https://www.cms.gov/research-statistics-data-and-systems/monitoring-programs/medicaid-and-chip-compliance/perm/permerrorratefindingsandreport.

[7] Section 2 of the Food and Nutrition Act of 2008 (7 U.S.C. 2011). Specifically, SNAP “permit[s] low-income households to obtain a more nutritious diet through normal channels of trade by increasing food purchasing power for all eligible households who apply for participation.” Section 1901 of the Social Security Act appropriates funds so states can “furnish (1) medical assistance on behalf of families with dependent children and of aged, blind, or disabled individuals, whose income and resources are insufficient to meet the costs of necessary medical services, and (2) rehabilitation and other services to help such families and individuals attain or retain capability for independence or self-care.”

[8] In Medicaid, agencies must first attempt an ex parte renewal, which uses available data sources without requiring an enrollee to take action, before sending a pre-populated renewal form.

[9] The call center abandonment rate is the percentage of calls that disconnect before the call is answered because the state system disconnected the call or the caller hung up. (The answer rate is the percentage of calls answered before either side disconnects.)

[10] It will be important to also collect the number of applications, periodic reports, and recertifications received and processed each month to put the backlog metric into context.

[11] Some state examples that are publicly available include: California Department of Social Services publishes a data dashboard that includes a SNAP (CalFresh) churn measure at https://public.tableau.com/app/profile/california.department.of.social.services/viz/CFdashboard-PUBLIC/Home; the Colorado Department of Human Services publishes data on Medicaid cross enrollment in SNAP by county at https://cdhs.colorado.gov/snap-data; the Massachusetts Department of Transitional Assistance publishes a monthly scorecard that includes a SNAP churn measure at https://www.mass.gov/lists/dta-performance-scorecards.

[12] CMS, “Historic Trends in Medicaid and CHIP Coverage Continuity, Loss, and Churn in 2018,” July 2023, https://www.medicaid.gov/sites/default/files/2023-07/historic-loss-and-churn-07052023.pdf.

[13] Two measures of churn are important to collect; both should be required and collected consistently in order to be meaningful. First, to measure impact on state agencies: percent of new applicants that received benefits in the past 60 days. Second, to measure impact on people churning on and off of benefits: percent of cases with a renewal or report due that then reapply for benefits within the following 30, 60, 90 days (by periodic report versus recertification for SNAP).