
# The Effect of the Hospital Readmission Reduction Program on the Duration of Observation Stays: Using Regression Discontinuity to Estimate Causal Effects

## Abstract

Research Objective: Determine whether hospitals are increasing the duration of observation stays following index admission for heart failure to avoid potential payment penalties from the Hospital Readmission Reduction Program.

Study Design: The Hospital Readmission Reduction Program applies a 30-day cutoff after which readmissions are no longer penalized. Given this seemingly arbitrary cutoff, we use regression discontinuity design, a quasi-experimental research design that can be used to make causal inferences.

Population Studied: The High Value Healthcare Collaborative includes member healthcare systems covering 57% of the nation’s hospital referral regions. We used Medicare claims data including all patients residing within these regions. The study included patients with index admissions for heart failure from January 1, 2012 to June 30, 2015 and a subsequent observation stay within 60 days. We excluded hospitals with fewer than 25 heart failure readmissions in a year or fewer than 5 observation stays in a year and patients with subsequent observation stays at a different hospital.

Principal Findings: Overall, there was no discontinuity at the 30-day cutoff in the duration of observation stays, the percent of observation stays over 12 hours, or the percent of observation stays over 24 hours. In the sub-analysis, the discontinuity was significant for non-penalized hospitals.

Conclusion: The findings reveal evidence that the HRRP has resulted in an increase in the duration of observation stays for some non-penalized hospitals.

How to Cite: Albritton J, Belnap T, Savitz L. The Effect of the Hospital Readmission Reduction Program on the Duration of Observation Stays: Using Regression Discontinuity to Estimate Causal Effects. eGEMs (Generating Evidence & Methods to improve patient outcomes). 2017;5(3):6. DOI: http://doi.org/10.5334/egems.197
Published on 15 Dec 2017
Submitted on 02 Jun 2017 · Accepted on 17 Nov 2017

## Introduction

Due to their ability to eliminate bias and estimate causal treatment effects, randomized controlled trials (RCTs) are considered the gold standard in most published hierarchies or levels of evidence [1, 2]. However, despite offering high quality evidence, RCTs are not without their limitations. First, clinicians may have difficulty applying evidence from carefully controlled RCT environments to complex real-world settings [3]. Second, and perhaps more importantly, many interventions or research questions present one or more challenges that make an RCT approach impractical. Examples include organizational studies, policy analyses, quality improvement, and studies where it would be unethical to withhold a treatment or intervention from patients. RCTs also require a significant resource investment, making them difficult to apply in traditional care settings.

The challenges associated with RCTs make observational studies common. However, in most cases, these studies provide a lower quality of evidence and are rife with potential bias [2, 3]. Another alternative is the regression discontinuity (RD) design, a quasi-experimental research design that can offer the robustness of an RCT [4]. Although the RD design only works in certain situations, RD can be applied using observational data, requiring a fraction of the investment of an RCT. However, the RD design is rarely used in health services research [5]. Given the increasing abundance of data available in electronic health records (EHR) and medical claims, there is a growing opportunity to use RD to generate evidence in health care, including robust estimations of causal effects. As an example, we demonstrate the use of RD to estimate the impact of the Hospital Readmission Reduction Program (HRRP) on the duration of observation stays for heart failure (HF).

Established as part of the Affordable Care Act, the HRRP reduces Medicare prospective payments (by up to 3 percent) for hospitals with more 30-day readmissions than expected [6]. Penalties are based on the excess readmission ratio (ERR), a measure that adjusts for risk and compares hospital readmission rates to the national average. Hospitals with an ERR greater than 1.0 across a 3-year measurement period receive a payment penalty [7]. For example, data from July 1, 2012, to June 30, 2015, are used to determine penalties for fiscal year 2017. The first penalties were levied in 2012 for HF, myocardial infarction, and pneumonia – additional conditions were added later [7].

### Observation Stays

Given the challenges hospitals face in reducing readmissions and the financial pressure of the HRRP, there is concern that hospitals may seek other ways to avoid potential payment reduction penalties. Some have suggested that the HRRP creates a financial incentive to reduce even necessary readmissions, which could result in increased mortality [12]. Alternatively, hospitals could place patients on observation status, a care classification that does not count towards readmission penalties [18, 19]. The latter has been the focus of a growing debate. Although studies have shown that overall rates of readmission for HF and other conditions have been declining for more than a decade, observation stay rates have been rising in this time period as well [20, 21, 22, 23, 24]. Despite these trends, other researchers have shown that the increase in observation stays is too small to account for the decrease in readmissions [20, 25].

Much of the work focused on observation stays uses observational, nonexperimental research designs that limit the ability to draw firm causal links. Additionally, while the decrease in readmissions exceeds the increase in observation stays, the effect on observation stays may not be insignificant. It is also possible that some hospitals work to shift patients to observation status more than others. In fact, hospital characteristics, such as ownership, size, and location, are among the most significant predictors of both the frequency and duration of observation stays among Medicare beneficiaries [26]. Failing to account for heterogeneity could bias results towards null findings. The duration of observation stays has also been increasing over the past several years, despite the fact that observation stays are intended to be no longer than 48 hours [18, 22, 27]. The HRRP could incentivize hospitals to keep patients on observation status longer in an effort to avoid readmissions.

### Heart Failure

We focus on HF for several key reasons. First, HF affects 17 percent of the Medicare population over the age of 65 and the prevalence is increasing [28, 29, 30]. Second, HF is one of the leading causes of 30-day readmissions, as nearly a quarter of all hospital stays for HF result in a readmission within 30 days [21]. HF accounted for 7.3 percent of all 30-day readmissions among Medicare beneficiaries in 2011, more than any other condition [31]. Finally, HF was one of the first three conditions targeted by the HRRP, meaning that hospitals have had years to work on finding ways to reduce readmissions.

### Summary

In this article, we use RD to estimate the impact of the HRRP on the duration of observation stays following hospitalization for HF. We hypothesize that the HRRP increases the average duration of observation stays for HF, particularly near day 30, where the potential to avoid a 30-day readmission is greatest. Also, we expect greater evidence of an impact of the HRRP among non-penalized hospitals (ERR ≤ 1) because they may have a more comprehensive strategy to reduce readmissions. Thus, we conduct a sub-group analysis comparing penalized and non-penalized hospitals.

## Methods

### Data

This work was conducted through a partnership with the High Value Healthcare Collaborative (HVHC), a network of 12 learning health care systems across the United States. We included all Medicare patients in hospital referral regions (HRR) represented by HVHC members, 57 percent of the nation’s HRRs. We limited the analysis to patients with an index admission for heart failure from July 1, 2012 to June 30, 2015. The analysis was also limited to patients who had an observation stay between 6 and 55 days after discharge from the hospital. We excluded hospitals with fewer than 25 readmissions in a calendar year, as they are not affected by the HRRP. We also excluded hospitals with fewer than 5 observation stays in a year because we were interested in hospitals using observation stays frequently enough to potentially impact their ERR scores. Lastly, we excluded observation stays that occurred at a different hospital than the index admission; the new hospital would not necessarily be incentivized to use observation status to avoid readmissions. After exclusions, the sample consisted of 19,815 observation stays across the 3-year period. Overall, the dataset included 419 hospitals.

### Approach

RD can be used to make causal inferences, even when randomization is not possible or practical [5, 32]. Compared to other quasi-experimental or observational research designs, RD requires fewer, simpler assumptions [4, 32]. The primary requirement of RD is an assignment variable with a seemingly arbitrary cutoff used to determine treatment [32]. If there is little or no difference between subjects on either side of the cutoff, then these subjects can be seen as essentially randomized into treatment and control groups. Trends are plotted on either side of the cutoff; the discontinuity in the trends is an unbiased estimate of the treatment effect [4]. The HRRP applies a 30-day cutoff after which readmissions no longer contribute to reimbursement penalties [12]. In reality, patients readmitted on the 30th day following an index admission are likely not very different from patients readmitted on the 31st day. As a result, the HRRP is a prime example of a policy that can be evaluated using RD.

While several statistical programs (e.g., Stata, R) include packages for conducting RD analysis, the analysis can also be conducted using simple regression equations. Equation 1 shows the general linear form of the RD equation. In Equation 1, β1 is the slope of the line before the cutoff, β2 is the slope of the line after the cutoff, and τ is the treatment effect. X is the assignment variable, c is the treatment cutoff value, and Y is the outcome variable. D is equal to 1 when X ≥ c and D is equal to 0 when X < c. In the absence of a cutoff-based treatment there is no discontinuity and τ will equal 0. Control variables can be included in the analysis, but are not typically necessary to achieve unbiased results [4]. Higher-order equations are also possible as shown in Equation 2, the general quadratic form of the RD equation. Although including higher-order terms can reduce estimate precision, improper functional form produces biased results. Thus, when using parametric equations, Lee and Lemieux recommend comparing the fit of the linear model to higher-order models to identify the specification that best fits the underlying data [4]. Non-parametric equations represent an alternative approach utilized by many statistical packages [33].

(Equation 1)

$$Y = a + \tau D + \beta_1(X - c) + \beta_2 D(X - c) + \epsilon$$

(Equation 2)

$$Y = a + \tau D + \beta_1(X - c) + \beta_2(X - c)^2 + \beta_3 D(X - c) + \beta_4 D(X - c)^2 + \epsilon$$
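Because Equation 1 is linear in its parameters, it can be estimated with ordinary least squares. The following sketch uses simulated data (not the study's claims data) with a known discontinuity of 2.0 to show how τ is recovered; all variable names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated assignment variable X (days from discharge, 6-55) and cutoff c.
n = 5000
X = rng.uniform(6, 55, n)
c = 30.5
D = (X >= c).astype(float)          # D = 1 when X >= c, else 0

# Construct Y with a known treatment effect tau = 2.0 (hypothetical).
tau_true = 2.0
Y = 10 + tau_true * D + 0.1 * (X - c) + 0.05 * D * (X - c) + rng.normal(0, 1, n)

# Design matrix for Equation 1: Y = a + tau*D + b1*(X-c) + b2*D*(X-c) + e
Z = np.column_stack([np.ones(n), D, X - c, D * (X - c)])
coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
a_hat, tau_hat, b1_hat, b2_hat = coef
print(round(tau_hat, 2))  # estimated discontinuity, close to 2.0
```

In practice the study used Stata's `sureg` and R's `rdrobust`/`rdd`; this bare-bones OLS version only illustrates the structure of the design matrix.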

In addition to functional form, the bandwidth of data used for the analysis can also impact RD results. Because RD is only valid for subjects with assignment variable scores near the cutoff value, the inclusion of data far from the cutoff value could bias results [34]. On the other hand, using a narrow bandwidth reduces the precision of the estimate of the treatment effect. Thus, the optimal bandwidth should minimize the risk of bias while providing precise estimates. Imbens and Kalyanaraman offer one way to empirically estimate the optimal bandwidth (the IK bandwidth) by minimizing the mean squared error [34]. Still, due to questions about the optimal bandwidth and ideal functional form, most researchers recommend using multiple RD models – results that are stable across multiple plausible model specifications will be seen as more reliable than results that are only significant with a particular set of conditions [4].
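The recommendation to compare multiple bandwidths can be sketched as follows. The data are simulated and `rd_estimate` is a hypothetical helper, not part of the study's code; stable estimates across bandwidths are what the robustness check looks for.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data with a known discontinuity of 1.0 at c = 30.5.
n, c, tau_true = 20_000, 30.5, 1.0
X = rng.uniform(6, 55, n)
D = (X >= c).astype(float)
Y = 10 + tau_true * D + 0.1 * (X - c) + rng.normal(0, 1, n)

def rd_estimate(h):
    """Fit Equation 1 using only observations with |X - c| <= h; return tau-hat."""
    m = np.abs(X - c) <= h
    Z = np.column_stack([np.ones(m.sum()), D[m], X[m] - c, D[m] * (X[m] - c)])
    coef, *_ = np.linalg.lstsq(Z, Y[m], rcond=None)
    return coef[1]

for h in (5, 10, 24.5):  # narrow bandwidth up to the full 6-55 day window
    print(h, round(rd_estimate(h), 2))  # estimates should be stable across h
```

A narrower `h` discards observations far from the cutoff (less bias risk, less precision), which is the trade-off the IK bandwidth formalizes.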

In addition to requiring a seemingly arbitrary cutoff, RD analysis typically involves testing two key assumptions. First, research subjects must be unable to self-sort [4]. Researchers test for self-sorting by analyzing the density of the data across the cutoff. A jump in the density from one side of the cutoff to the other suggests that some research subjects successfully altered their assignment variable value to fall in the treatment or control group. Self-sorting is unlikely in this analysis because the HRRP affects payments to hospitals, but potential penalties are based primarily on patient behavior and outcomes. Patients are unlikely to choose to present at a hospital on day 31 instead of day 30 in order to prevent the hospital from being penalized. We confirmed that there was no evidence of self-sorting by examining the density in the total number of observation stays and readmissions by days from discharge. The second key assumption is that there are no discontinuities in baseline covariates at the cutoff. Indeed, RD’s greatest strength is its ability to balance observed covariates on either side of the cutoff [5]. At the very least, violation of this assumption indicates that research subjects were able to manipulate the assignment variable to some degree. Our data included the average age of subjects; we found no evidence of a discontinuity. Overall, we found no evidence of a violation of these two key assumptions.
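A minimal version of the density check for self-sorting might look like this; the counts are simulated, not the study's data, and the window width is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Compare the density of events just below vs. just above the 30.5-day cutoff.
# A large jump in density would suggest manipulation of the assignment variable.
days = rng.integers(6, 56, 10_000)  # simulated days-from-discharge values
c = 30.5
w = 3  # window width in days on each side of the cutoff
below = np.sum((days >= c - w) & (days < c))
above = np.sum((days > c) & (days <= c + w))
ratio = above / below
print(round(ratio, 2))  # close to 1.0 when there is no sorting across the cutoff
```

Formal implementations (e.g., the McCrary density test in the R `rdd` package) fit smoothed densities on each side rather than raw counts, but the logic is the same.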

### Analyses

We used the duration of observation stays in hours as our primary outcome measure. Secondary outcome measures included the percent of observation stays that were greater than 12 hours and 24 hours in duration. We conducted RD analysis with the three outcome variables using days from discharge as our assignment variable. We set the cutoff value to 30.5 days, corresponding to the HRRP. The analyses were conducted using linear regression analysis in Stata; we used the ‘sureg’ command for seemingly unrelated regressions to improve the efficiency of the standard errors. We conducted nonparametric RD in R using the ‘rdrobust’ and ‘rdd’ packages. We set alpha to 0.05 for hypothesis tests and applied the Benjamini-Hochberg (BH) procedure to correct the p-value limit for multiple tests [35].

Limiting the analysis to patients with an observation stay between 6 and 55 days after the index admission largely eliminated nonlinearities in the data. Likelihood ratio (LR) tests confirmed that linear models fit the data better than higher-order models for each outcome variable [4]. Thus, we used a linear model with the full bandwidth for the analysis. To test the consistency of any significant results, we also estimated three additional models: 1) linear models with the IK bandwidth; 2) nonparametric models with the full bandwidth; and 3) nonparametric models with the IK bandwidth. We used a rectangular kernel for the parametric models because it has a more straightforward interpretation. We used a triangular kernel for nonparametric models—the triangular kernel weights data based on proximity to the cutoff. We repeated the process for the sub-group analysis, separately estimating the effect of the policy for penalized and non-penalized hospitals. This project was approved by the Institutional Review Board at Dartmouth College.

## Results

The results from the main RD analyses are shown in Table 1. We present the magnitude, p-value, and 95 percent confidence interval of the discontinuity (i.e., the treatment effect, τ, from Equation 1) from linear models using the full bandwidth. There were no statistically significant discontinuities in the overall analyses (models 1–3). The sub-group analysis also showed insignificant results for hospitals receiving a penalty due to the HRRP (models 4–6). However, the discontinuities for non-penalized hospitals (models 7–9) were statistically significant, suggesting that the HRRP increases the duration of observation stays at non-penalized hospitals. The results of the sub-analysis are also presented graphically in Figure 1. For non-penalized hospitals, the discontinuities from day 30 to day 31 correspond to a reduction in the average duration of observation stays by 0.996 hours (95 percent CI –1.850 to –0.143, P = 0.022), a reduction in the percentage of observation stays longer than 12 hours of 0.752 percentage points (95 percent CI –1.310 to –0.193, P = 0.008), and a reduction in the percentage of observation stays longer than 24 hours of 0.591 percentage points (95 percent CI –1.087 to –0.095, P = 0.019). As shown in Table 1, the discontinuities in the duration of observation stays and the percent of observation stays greater than 12 hours at non-penalized hospitals remained significant after BH correction for multiple tests (n = 6); the discontinuity in the percent of observation stays greater than 24 hours was just over the adjusted cutoff (P = 0.019 vs. 0.017).

Table 1

Results from regression discontinuity analysis.

| Model | Coef (τ) | p-value | 95% CI (LL, UL) | BH p-value limit |
| --- | --- | --- | --- | --- |
| **Effect of HRRP on all hospitals** | | | | |
| 1 – Average duration of observation stays | 0.465 | 0.149 | (–0.166, 1.095) | 0.067 |
| 2 – Percent of observation stays > 12 hrs | 0.366 | 0.159 | (–0.143, 0.876) | 0.083 |
| 3 – Percent of observation stays > 24 hrs | 0.236 | 0.208 | (–0.131, 0.604) | 0.100 |
| **Effect of HRRP on penalized hospitals (ERR > 1)** | | | | |
| 4 – Average duration of observation stays | –0.189 | 0.724 | (–1.237, 0.860) | 0.033 |
| 5 – Percent of observation stays > 12 hrs | –0.148 | 0.678 | (–0.847, 0.551) | 0.042 |
| 6 – Percent of observation stays > 24 hrs | –0.116 | 0.666 | (–0.643, 0.411) | 0.050 |
| **Effect of HRRP on non-penalized hospitals (ERR ≤ 1)** | | | | |
| 7 – Average duration of observation stays | 0.996 | 0.022* | (0.143, 1.850) | 0.025 |
| 8 – Percent of observation stays > 12 hrs | 0.752 | 0.008* | (0.193, 1.310) | 0.008 |
| 9 – Percent of observation stays > 24 hrs | 0.591 | 0.019 | (0.095, 1.087) | 0.017 |

Notes:

* indicates significance using the BH corrected p-value with α = 0.05.

The BH p-value limit is equal to [α/n * r] where n is the number of tests and r is the p-value significance rank from smallest to largest. For the overall analysis n = 3; for the sub-analysis, n = 6.
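As a quick check, the BH limits reported for the sub-analysis (n = 6) follow directly from this formula with α = 0.05:

```python
# BH limit for the r-th smallest p-value is (r / n) * alpha.
alpha = 0.05
n = 6  # number of tests in the sub-group analysis
limits = [round(alpha * r / n, 3) for r in range(1, n + 1)]
print(limits)  # [0.008, 0.017, 0.025, 0.033, 0.042, 0.05]
```

These values match the BH p-value limits shown for models 4–9 in Table 1.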

Figure 1

Results from Regression Discontinuity Analyses.

Legend

Blue regression lines, “o” plot points: Penalized hospitals (ERR > 1).

Red regression lines, “•” plot points: Non-penalized hospitals (ERR ≤ 1).

1. Results from regression discontinuity analysis using the average duration of observation stays in hours as the outcome variable.
2. Results from the regression discontinuity analysis using the percent of observation stays over 12 hours as the outcome variable.
3. Results from the regression discontinuity analysis using the percent of observation stays over 24 hours as the outcome variable.

## Discussion

These findings show that the HRRP has had an effect on the duration of observation stays for some hospitals. The overall results show that the HRRP has had no impact on the duration of observation stays following index admissions for HF, but the sub-group analysis provides additional granularity. Although there was no effect among penalized hospitals, there was a moderate effect among non-penalized hospitals. Specifically, the results show that the HRRP increases the duration of observation stays at non-penalized hospitals, particularly near the 30-day cutoff.

This study suggests that the use or extension of observation status may be one strategy hospitals adopt to help reduce readmissions for HF. Observation status is certainly appropriate under the right circumstances, but using observation status solely to avoid penalties could adversely affect the cost or quality of patient care. Whereas the cost of inpatient care is capped at the inpatient deductible, hospitals bill observation care as an outpatient service with a 20 percent coinsurance. As a result, longer observation stays are associated with higher costs; observation stays greater than 24 hours cost significantly more than a corresponding inpatient stay [36]. Additionally, evidence shows that the duration of observation stays has increased over the past decade [23]. This has important implications because observation care does not count towards the 3-day minimum inpatient stay required for Medicare to cover care in skilled nursing facilities (SNF) [22]. As a result, patients on observation status could face higher costs for similar care received in a hospital bed and then find themselves responsible for the cost of SNF care. The Two-Midnight Rule would have addressed this concern by reclassifying observation stays that crossed two midnights as inpatient stays, but the policy was eventually eliminated [37]. Given this new evidence, the increasing utilization and duration of observation status, and the potential impact on patients, we believe a policy change may be necessary to ensure better alignment between hospital incentives and what is best for the patient. Research on the HRRP and other policies aimed at influencing hospital behavior should remain a high priority.

This analysis has a few key limitations. First, the results are not significant in alternative nonparametric models and models using a shorter bandwidth of data. Although LR tests show that the linear model fits the data well, robust results that remain significant across alternative model specifications are more reliable and resistant to bias [4]. Second, the sub-group analysis suggests that there may be heterogeneous effects; we explored one moderating factor, but other factors likely influence how hospitals respond to policies and incentives. Third, we can only interpret the results for patients near the 30-day cutoff. The impact of the HRRP could be larger or smaller further from the cutoff. Fourth, this paper only focuses on HF; additional work is needed to explore the effect of the HRRP on other targeted conditions. Finally, the HRRP could also increase the utilization of observation stays, a possibility not addressed here. An increase in the overall utilization of observation status would not necessarily lead to an increase in the duration of observation stays. These limitations represent possibilities for future research.

This paper has described the use of RD, a powerful quasi-experimental method used to evaluate interventions or policies where treatment is assigned based on a variable with a seemingly arbitrary cutoff value. Under these circumstances, RD provides an unbiased estimate of the treatment effect. Additionally, RD offers several advantages for the evaluation of complex interventions [38]. First, RD can be used in situations where randomization is not feasible or ethical. Second, RD analysis can be completed using observational data. As a result, RD is capable of answering research questions using data already routinely collected and available in electronic data warehouses and claims databases. Finally, results from RD are based on real world circumstances. Despite the advantages and utility of RD, the approach is still rarely used in health care [5]. Given the increasing amount of readily available health data, we believe researchers should consider whether RD is appropriate before turning to costlier study designs.

## Conclusion

This article demonstrates the use of RD to evaluate the impact of the HRRP on the duration of observation stays. The results provide evidence that some hospitals may extend observation stays to avoid payment penalties. Given the potential impact on patients, additional work evaluating the impact of the HRRP is warranted. Additionally, we believe the analytical method described here is underutilized in health services research. RD has many advantages over the traditional randomized experiment. We recommend other researchers consider the RD design for the evaluation of complex health interventions.

## Competing Interests

The authors have no competing interests to declare.

## References

1. Evans, D. Hierarchy of evidence: a framework for ranking evidence evaluating healthcare interventions. Journal of Clinical Nursing. 2003; 12(1): 77–84. DOI: https://doi.org/10.1046/j.1365-2702.2003.00662.x

2. Balshem, H, Helfand, M, Schünemann, HJ, Oxman, AD, Kunz, R, Brozek, J, et al. GRADE guidelines: 3. Rating the quality of evidence. Journal of Clinical Epidemiology. 2011; 64(4): 401–6. DOI: https://doi.org/10.1016/j.jclinepi.2010.07.015

3. Silverman, SL. From randomized controlled trials to observational studies. The American Journal of Medicine. 2009; 122(2): 114–20. DOI: https://doi.org/10.1016/j.amjmed.2008.09.030

4. Lee, DS and Lemieux, T. Regression discontinuity designs in economics. Journal of Economic Literature. 2010; 48(2): 281–355. DOI: https://doi.org/10.1257/jel.48.2.281

5. Moscoe, E, Bor, J and Bärnighausen, T. Regression discontinuity designs are underutilized in medicine, epidemiology, and public health: a review of current and best practice. Journal of Clinical Epidemiology. 2015; 68(2): 132–43. DOI: https://doi.org/10.1016/j.jclinepi.2014.06.021

6. James, J. Medicare hospital readmissions reduction program. Health Affairs. 2013; 34(2): 1–5.

7. Department of Health and Human Services. Centers for Medicare & Medicaid Services. Medicare program; hospital inpatient prospective payment systems for acute care hospitals and the long-term care hospital prospective payment system and fiscal year 2013 rates; hospitals’ resident caps for graduate medical education payment purposes; quality reporting requirements for specific providers and for ambulatory surgical centers. 42 CFR Parts 412, 413, 424, and 476.

8. Joynt, K, Figueroa, J, Oray, J and Jha, A. Opinions on the Hospital Readmission Reduction Program: results of a national survey of hospital leaders. The American Journal of Managed Care. 2016; 22(8): e287.

9. DuGoff, E, Bishop, S and Rawal, P. Hospital readmission reduction program reignites debate over risk adjusting quality measures. Health Aff. 2014.

10. Kansagara, D, Englander, H, Salanitro, A, Kagen, D, Theobald, C, Freeman, M, et al. Risk prediction models for hospital readmission: A systematic review. JAMA. 2011; 306(15): 1688–98. DOI: https://doi.org/10.1001/jama.2011.1515

11. Gu, Q, Koenig, L, Faerberg, J, Steinberg, CR, Vaz, C and Wheatley, MP. The Medicare Hospital Readmissions Reduction Program: Potential unintended consequences for hospitals serving vulnerable populations. Health Services Research. 2014; 49(3): 818–37. DOI: https://doi.org/10.1111/1475-6773.12150

12. McIlvennan, CK, Eapen, ZJ and Allen, LA. Hospital readmissions reduction program. Circulation. 2015; 131(20): 1796–803. DOI: https://doi.org/10.1161/CIRCULATIONAHA.114.010270

13. Fischer, C, Steyerberg, EW, Fonarow, GC, Ganiats, TG and Lingsma, HF. A systematic review and meta-analysis on the association between quality of hospital care and readmission rates in patients with heart failure. American Heart Journal. 2015; 170(5): 1005–17. e2.

14. Mellor, J, Daly, M and Smith, M. Does It Pay to Penalize Hospitals for Excess Readmissions? Intended and Unintended Consequences of Medicare’s Hospital Readmissions Reductions Program. Health Economics. 2016.

15. Saito, M, Negishi, K and Marwick, TH. Meta-analysis of risks for short-term readmission in patients with heart failure. The American Journal of Cardiology. 2016; 117(4): 626–32. DOI: https://doi.org/10.1016/j.amjcard.2015.11.048

16. Hummel, SL, Katrapati, P, Gillespie, BW, DeFranco, AC and Koelling, TM, (eds.). Impact of prior admissions on 30-day readmissions in Medicare heart failure inpatients. Mayo Clinic Proceedings; 2014: Elsevier.

17. Chin, DL, Bang, H, Manickam, RN and Romano, PS. Rethinking Thirty-Day Hospital Readmissions: Shorter Intervals Might Be Better Indicators Of Quality Of Care. Health Affairs. 2016; 35(10): 1867–75. DOI: https://doi.org/10.1377/hlthaff.2016.0205

18. Lind, KD, Noel-Miller, CM, Zhao, L and Schur, C. Observation status: Financial implications for Medicare beneficiaries. AARP Public Policy Institute. 2015.

19. Rothberg, MB and Vakharia, N. Readmissions Are Decreasing—Is It Time to Celebrate? Journal of General Internal Medicine. 2014; 29(10): 1322. DOI: https://doi.org/10.1007/s11606-014-2950-6

20. Gerhardt, G, Yemane, A, Apostle, K, Oelschlaeger, A, Rollins, E and Brennan, N. Evaluating whether changes in utilization of hospital outpatient services contributed to lower Medicare readmission rate. Medicare & Medicaid Research Review. 2014; 4(1). DOI: https://doi.org/10.5600/mmrr.004.01.b03

21. Dharmarajan, K, Hsieh, AF, Lin, Z, Bueno, H, Ross, JS, Horwitz, LI, et al. Diagnoses and timing of 30-day readmissions after hospitalization for heart failure, acute myocardial infarction, or pneumonia. JAMA. 2013; 309(4): 355–63. DOI: https://doi.org/10.1001/jama.2012.216476

22. Zhao, L, Schur, C, Kowlessar, N and Lind, KD. Rapid Growth in Medicare Hospital Observation Services: What’s Going On? AARP Public Policy Institute. 2013.

23. Feng, Z, Wright, B and Mor, V. Sharp rise in Medicare enrollees being held in hospitals for observation raises concerns about causes and consequences. Health Affairs. 2012; 31(6): 1251–9. DOI: https://doi.org/10.1377/hlthaff.2012.0129

24. Weaver, C, Mathews, AW and Mcginty, T. Medicare Rules Reshape Hospital Admissions: Return-visit rate drops, but change in billing tactics skews numbers. The Wall Street Journal. December 1, 2015.

25. Zuckerman, RB, Sheingold, SH, Orav, EJ, Ruhter, J and Epstein, AM. Readmissions, observation, and the hospital readmissions reduction program. New England Journal of Medicine. 2016; 374(16): 1543–51. DOI: https://doi.org/10.1056/NEJMsa1513024

26. Wright, B, Jung, HY, Feng, Z and Mor, V. Hospital, patient, and local health system characteristics associated with the prevalence and duration of observation care. Health Services Research. 2014; 49(4): 1088–107. DOI: https://doi.org/10.1111/1475-6773.12166

27. Mason, D. The unintended consequences of the “observation status” policy. JAMA. 2014; 312(19): 1959–60. DOI: https://doi.org/10.1001/jama.2014.14528

28. Roger, VL. Epidemiology of heart failure. Circulation Research. 2013; 113(6): 646–59. DOI: https://doi.org/10.1161/CIRCRESAHA.113.300268

29. Heidenreich, PA, Trogdon, JG, Khavjou, OA, Butler, J, Dracup, K, Ezekowitz, MD, et al. Forecasting the future of cardiovascular disease in the United States a policy statement from the American heart association. Circulation. 2011; 123(8): 933–44. DOI: https://doi.org/10.1161/CIR.0b013e31820a55f5

30. Centers for Medicare and Medicaid Services. Chronic conditions among Medicare Beneficiaries. Chartbook, 2012 edition. Baltimore, MD; 2012.

31. Hines, AL, Barrett, ML, Jiang, HJ and Steiner, CA. Conditions with the largest number of adult hospital readmissions by payer, 2011. HCUP: Statistical Brief #172. April 2014. Agency for Healthcare Research and Quality, Rockville, MD. https://www.hcup-us.ahrq.gov/reports/statbriefs/sb172-Conditions-Readmissions-Payer.pdf.

32. Bor, J, Moscoe, E, Mutevedzi, P, Newell, M-L and Bärnighausen, T. Regression discontinuity designs in epidemiology: Causal inference without randomized trials. Epidemiology. 2014; 25(5): 729–37. DOI: https://doi.org/10.1097/EDE.0000000000000138

33. Imbens, GW and Lemieux, T. Regression discontinuity designs: A guide to practice. Journal of Econometrics. 2008; 142(2): 615–35. DOI: https://doi.org/10.1016/j.jeconom.2007.05.001

34. Imbens, G and Kalyanaraman, K. Optimal bandwidth choice for the regression discontinuity estimator. The Review of economic studies. 2012; 79(3): 933–959. DOI: https://doi.org/10.1093/restud/rdr043

35. Benjamini, Y and Hochberg, Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society. 1995: 289–300. Series B (Methodological).

36. Hockenberry, JM, Mutter, R, Barrett, M, Parlato, J and Ross, MA. Factors associated with prolonged observation services stays and the impact of long stays on patient cost. Health Services Research. 2014; 49(3): 893–909. DOI: https://doi.org/10.1111/1475-6773.12143

37. Health Affairs. Health Policy Brief: The Two-Midnight Rule. Jan 2015.

38. Coly, A and Parry, G. Evaluating Complex Health Interventions: A Guide to Rigorous Research Designs. Academy Health, June 2017.