Access to Healthcare and Voting: The Case of Hospital Closures in Rural America
"Access to Healthcare and Voting: The Case of Hospital Closures in Rural America" (DOI: https://doi.org/10.1017/S0003055424001035), by Christian Cox, Derek A. Epp, and Michael E. Shepherd (2024), was published in the American Political Science Review (vol. 119, no. 3).
Under the Downs model of voting, hardships like losing access to healthcare should depress voter turnout. The authors test for such an effect in the U.S.
The authors identify the geography of hospitals, including which ones closed, using data published by the UNC Cecil G. Sheps Center, which covers all closures between 2016 and 2020.
They then use the L2 voter file to link voter turnout to geographic location. Voters residing in non-rural areas are excluded, as are any whose ZIP code changed between 2016 and 2020. The 2016, 2018, and 2020 elections yield a three-period panel dataset, and individuals without vote data for all three elections are excluded to create a balanced panel. The final analytic sample contains 10.5 million voters.
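Something like the following filtering is implied; a minimal pandas sketch, assuming a hypothetical long-format extract with made-up column names (voter_id, election_year, zip, rural_flag, voted):

```python
import pandas as pd

# Hypothetical long-format extract of the L2 file: one row per voter per
# election, with columns voter_id, election_year, zip, rural_flag, voted.
voters = pd.read_csv("l2_voter_file_extract.csv")

# Keep only voters residing in rural areas.
voters = voters[voters["rural_flag"] == 1]

# Drop anyone whose ZIP code changed at any point in 2016-2020.
stable_zip = voters.groupby("voter_id")["zip"].nunique().eq(1)
voters = voters[voters["voter_id"].isin(stable_zip[stable_zip].index)]

# Balanced panel: keep only voters observed in 2016, 2018, and 2020.
required = {2016, 2018, 2020}
complete = voters.groupby("voter_id")["election_year"].apply(
    lambda years: set(years) == required
)
panel = voters[voters["voter_id"].isin(complete[complete].index)]
```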
Each voter is then matched by geolocation to the hospital closest to their residence, drawn from the set of all hospitals that operated during at least part of 2016-2020. An individual matched to a hospital that closed at the beginning of 2016 is considered treated for all three elections.
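A minimal sketch of how this nearest-hospital matching could be done with great-circle distances; voters_geo, hospitals, and the column names are hypothetical stand-ins, not the authors' actual pipeline:

```python
import numpy as np
from sklearn.neighbors import BallTree

# Hypothetical inputs: voters_geo has one row per voter with lat/lon in degrees;
# hospitals has one row per facility open at any point in 2016-2020, with a
# hospital_id and (for closed facilities) a closure date.
voter_rad = np.radians(voters_geo[["lat", "lon"]].to_numpy())
hosp_rad = np.radians(hospitals[["lat", "lon"]].to_numpy())

# Nearest facility by haversine distance (the metric expects radians).
tree = BallTree(hosp_rad, metric="haversine")
dist, idx = tree.query(voter_rad, k=1)

voters_geo["nearest_hospital_id"] = hospitals["hospital_id"].to_numpy()[idx[:, 0]]
voters_geo["dist_km"] = dist[:, 0] * 6371.0  # scale unit-sphere distance by Earth radius

# Treatment status per election then follows from whether the matched
# hospital's closure date falls on or before that election.
```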
The authors regress turnout (the decision to vote) on hospital closure events, using individual voter-level fixed effects. The estimated effect of hospital closures is negative and statistically significant, but very small.
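A sketch of the kind of fixed-effects specification this implies, not the authors' exact model; the treated indicator, the inclusion of election (time) effects, and the clustering choice are my assumptions:

```python
from linearmodels.panel import PanelOLS

# Panel indexed by (voter, election); 'treated' is a hypothetical indicator
# for the voter's nearest hospital having closed by that election.
fe_panel = panel.set_index(["voter_id", "election_year"])

# Individual (voter) fixed effects absorb time-invariant characteristics;
# time effects absorb election-specific turnout shocks.
mod = PanelOLS.from_formula(
    "voted ~ treated + EntityEffects + TimeEffects", data=fe_panel
)
res = mod.fit(cov_type="clustered", cluster_entity=True)
print(res.params["treated"], res.std_errors["treated"])
```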
They repeat the analysis with the sample split by partisan identity, by age group, by income group, and by interactions of age and income. As expected, the effect is more pronounced among low-income voters over 65. Most notably, though, the negative effect loses significance among Republicans.
The temporal distance between a hospital closure and an election is then calculated and binned in 3-month increments: '9-11 months after the election', '6-8 months after', and so on, up to '10-12 months before the election'. Turnout is then regressed on indicators for these bins (a rough binning sketch follows the list of results below).
- There is no consistent or significant effect for any 'after' bin, which is not only logically expected but also serves as a placebo check validating the model.
- The effect is significant and negative for the '1-3 months before the election' and '4-6 months before' bins: voters are 4% and 6% less likely to turn out in those windows, respectively.
- The effects past '6 months before' are inconsistent and insignificant.
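A rough sketch of the binning step, assuming hypothetical election_date and closure_date columns and a panel restricted to voters whose nearest hospital closed (no missing dates); the authors' exact bin boundaries may differ at the edges:

```python
import numpy as np

def event_bin(days: int) -> str:
    """Map a signed day gap (election date minus closure date) to 3-month bins."""
    if days > 0:                          # closure before the election
        m = int(np.ceil(days / 30.44))    # 1, 2, 3, ... months before
        lo = 3 * ((m - 1) // 3) + 1
        return f"{lo}-{lo + 2} months before"
    else:                                 # closure on/after election day (placebo side)
        m = int(np.floor(-days / 30.44))  # 0, 1, 2, ... months after
        lo = 3 * (m // 3)
        return f"{lo}-{lo + 2} months after"

# Positive gaps mean the closure preceded the election.
days = (panel["election_date"] - panel["closure_date"]).dt.days
panel["event_bin"] = days.map(event_bin)

# Turnout is then regressed on dummies for these bins (plus the fixed effects),
# with the 'after' bins acting as placebo coefficients.
```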
The authors then match affected voters to unaffected voters on the basis of turnout history, age, race, gender, and household income. Difference-of-means tests show that affected voters are less likely to vote.
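A sketch of one way to implement this pairing; the paper's exact matching algorithm isn't captured in my notes, so this uses plain 1:1 nearest-neighbor matching on standardized covariates with hypothetical column names:

```python
from scipy import stats
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Hypothetical cross-section with one row per voter: numeric/encoded matching
# covariates plus a treatment flag and a turnout outcome.
covs = ["turnout_history", "age", "race_code", "gender_code", "hh_income"]
X = StandardScaler().fit_transform(df[covs])
treated_mask = (df["treated"] == 1).to_numpy()

# 1:1 nearest-neighbor match of each affected voter to an unaffected voter.
nn = NearestNeighbors(n_neighbors=1).fit(X[~treated_mask])
_, idx = nn.kneighbors(X[treated_mask])
controls = df[~treated_mask].iloc[idx[:, 0]]

# Difference-of-means test on turnout between affected voters and their matches.
t_stat, p_val = stats.ttest_ind(df.loc[treated_mask, "voted"], controls["voted"])
print(f"t = {t_stat:.2f}, p = {p_val:.3g}")
```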
They also run robustness checks: using logged spatial distance to the next-nearest hospital as a predictor, testing whether hospital closures predict panel attrition (i.e., missingness of vote data), and adding county-year and ZIP code-year averages as predictors.
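A hedged sketch of the logged-distance check, reusing the earlier fixed-effects setup; dist_next_km and the clustering choice are assumptions, not the paper's specification:

```python
import numpy as np
from linearmodels.panel import PanelOLS

# Logged distance to the next-nearest open hospital as a continuous measure of
# lost access ('dist_next_km' is a hypothetical column from the geocoding step).
fe_panel["log_dist_next"] = np.log(fe_panel["dist_next_km"])

res_dist = PanelOLS.from_formula(
    "voted ~ log_dist_next + EntityEffects + TimeEffects", data=fe_panel
).fit(cov_type="clustered", cluster_entity=True)

# The attrition check instead regresses an indicator for dropping out of the
# voter file on the closure flag, using the sample before balancing.
```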
Reading Notes
If the 'after election' bins are to be considered a placebo group, then I expect to see statistical tests of the difference between the 'before' (treatment) and 'after' (placebo) coefficients, but the authors only test each coefficient against the null of zero.
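To illustrate what I mean, something like the following (abstracting entirely from the fixed-effects structure, with hypothetical dummy names) would test a before/after coefficient difference directly:

```python
import statsmodels.formula.api as smf

# With 'before' and 'after' bin dummies in one regression, the placebo claim
# implies testing whether the coefficients differ from each other, not just
# whether each differs from zero.
ols = smf.ols(
    "voted ~ bin_1_3_before + bin_4_6_before + bin_0_2_after + bin_3_5_after",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["zip"]})

# e.g., does the '1-3 months before' effect differ from the '0-2 months after' effect?
print(ols.t_test("bin_1_3_before - bin_0_2_after = 0"))
```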
Given the size of this dataset, I would prefer to see county- or ZIP-code-level fixed effects in addition to the individual voter-level ones; I expect this would be more valuable than including annual averages as predictors at either level. In fact, I am not sure I agree with using annual averages as predictors at all. My intuition is that these should be averages across the entire period, and if demographics change meaningfully over the period, the average change should enter as a predictor instead.
I also have concerns about how inconsistent the effect is across subgroups that I would expect to be correlated. The biggest issue is that the effect is insignificant among Republicans.
