The effect of online interviews on the University of Michigan Survey of Consumer Sentiment

The effect of online interviews on the University of Michigan Survey of Consumer Sentiment was written by Ryan Cummings and Ernie Tedeschi and published online in the Briefing Book in 2024.

This was written in reaction to a methods change in the Survey of Consumers, and specifically in response to the claims made by the survey administrators in the corresponding technical report. The authors find that the administrators insufficiently assessed the method effects.

The authors examined the weighted proportion of each survey mode within age groups. "[O]nline respondents were more likely to be older in those transition months".

The authors design a model like s = β0 + β1·online + ... + ε, where s is an individual respondent's sentiment score, online is a dummy indicator for the survey mode, β1 is the estimated method effect, and the ... stands in for a vector of controls. (The authors' own specification confusingly uses nonstandard letters and mixes scalar and matrix notation, but whatever.) The model is estimated with OLS.
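
As a concrete illustration, here is a minimal sketch of that regression in Python. The file name, column names, choice of controls, and the robust standard errors are my assumptions, not the authors' actual setup.

    # Minimal sketch of the method-effect regression; file and column names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("umich_microdata.csv")  # hypothetical respondent-level records

    # OLS with a dummy for the online mode; the categorical controls stand in
    # for the vector hidden behind the "..." in the specification above.
    model = smf.ols("sentiment ~ online + C(age_group) + C(region) + income", data=df)
    result = model.fit(cov_type="HC1")  # heteroskedasticity-robust SEs (my choice)

    print(result.params["online"], result.bse["online"])  # beta_1 and its SE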

The authors estimate a method effect of -8.86 with an SE of 2.39.

The authors replicate this method on the subindices ("current conditions index" and "expectations index") and find effects (SEs) of -18.67 (3.35) and -2.55 (2.59; not significant) respectively. The mode "primarily affects the final index through the respondent’s assessment of current conditions, not through their expectations."

Drilling into the questions themselves, the authors note a negative trend during the transition period in the estimates for durable goods and personal financial conditions. Furthermore, "NA" responses spiked massively. "These results suggest that online respondents appear to be less engaged, or at a minimum less willing to justify their answer."

The authors propose an adjustment to estimates going forward, simply adding the estimated method effect. "There is precedent for such a level adjustment: since the early 1970s, UMich has applied a constant to the sentiment, current conditions, and expectations indices every month to account for a sample design change that took place in the 1950s".
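
Mechanically, this is just a constant level shift. A minimal sketch, assuming the goal is to put online-era readings back on the historical phone basis (if the history were instead harmonized to the online basis, the sign would flip):

    # Constant level adjustment; the direction of the shift is my assumption.
    beta1_hat = -8.86  # estimated method effect of the online mode

    def adjust(index_value, online_era):
        """Undo the mode-induced level shift for online-era readings."""
        return index_value - beta1_hat if online_era else index_value

    print(adjust(79.0, online_era=True))   # 87.86
    print(adjust(79.0, online_era=False))  # 79.0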

The authors evaluated this adjusted index against Morning Consult's Index of Consumer Sentiment. Plotted together, the two series appear visually similar.

Reading Notes

I am, as always, deeply skeptical of these 'score' methodologies that firms push. That skepticism carries over to analyses of such scores, like this one.

Fundamentally, the 'score' is built from discrete categorical responses rather than continuous scales, so I have reservations about OLS estimation here.

I would be interested to see whether there is any meaningful method effect on binned scores. I would also be interested in seeing whether a multinomial logit model could be fit to the underlying questions.
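
As a sketch of that second idea, a multinomial logit on one of the underlying categorical questions could look like the following; the file, column names, and controls are again hypothetical.

    # Minimal sketch of a multinomial logit on a single underlying question.
    # statsmodels' MNLogit wants an integer-coded outcome.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("umich_microdata.csv")  # hypothetical respondent-level records
    df["durables_code"] = df["durables_response"].astype("category").cat.codes

    model = smf.mnlogit("durables_code ~ online + C(age_group) + income", data=df)
    result = model.fit()
    print(result.summary())  # per-category log-odds shifts associated with the online mode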


CategoryRicottone
