Puzzling Answers to Crosswise Questions: Examining Overall Prevalence Rates, Response Order Effects, and Learning Effects
Keywords: crosswise model, randomized response, social desirability bias, primacy effects, learning effects, panel conditioning, privacy concerns
Abstract
This validation study of the crosswise model (CM) examines five survey experiments implemented in a general population survey. Our first crucial result is that in none of these experiments was the crosswise model able to verifiably reduce social desirability bias. In contrast to most previous CM applications, we use an experimental design that allows us to distinguish a reduction in social desirability bias from heuristic response behaviour, such as random ticking, that produces false positive or false negative answers. In addition, we provide insights into two potential explanatory mechanisms that have not yet received attention in empirical studies: primacy effects and panel conditioning. We find no consistent primacy effects, nor does response quality improve through learning when respondents have encountered crosswise models in previous survey waves. We interpret our results as evidence that the crosswise model does not work in general population surveys and speculate that the question format causes mistrust among participants.
How to Cite
Walzenbach, S., & Hinz, T. (2023). Puzzling Answers to Crosswise Questions: Examining Overall Prevalence Rates, Response Order Effects, and Learning Effects. Survey Research Methods, 17(1), 1–13. https://doi.org/10.18148/srm/2023.v17i1.8010
Copyright (c) 2023 Sandra Walzenbach, Thomas Hinz
This work is licensed under a Creative Commons Attribution 4.0 International License.