Sensitive Questions in Online Surveys: An Experimental Evaluation of Different Implementations of the Randomized Response Technique and the Crosswise Model

Marc Höglinger, Ben Jann, Andreas Diekmann

Abstract


Self-administered online surveys may provide a higher level of privacy protection to respondents than interviewer-administered surveys. Yet studies indicate that asking sensitive questions is problematic in self-administered surveys as well. Because respondents may be unwilling to reveal the truth and instead give answers subject to social desirability bias, the validity of prevalence estimates of sensitive behaviors from online surveys can be challenged. A well-known method to overcome these problems is the Randomized Response Technique (RRT). However, convincing evidence that the RRT provides more valid estimates than direct questioning in online surveys is still lacking. We therefore conducted an experimental study in which different implementations of the RRT, including two implementations of the so-called crosswise model, were tested and compared to direct questioning. Our study is an online survey (N = 6,037) on sensitive behaviors among students, such as cheating in exams and plagiarism. Results vary considerably across implementations, indicating that practical details strongly affect the performance of the RRT. Among all tested implementations, including direct questioning, the unrelated-question crosswise-model RRT yielded the highest estimates of student misconduct.
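For readers unfamiliar with the crosswise model mentioned above: each respondent jointly considers the sensitive question and an unrelated question with a known "yes" probability p, and reports only whether the two answers are the same or different. The standard moment estimator from the RRT literature (not code from this study; the function and parameter names are illustrative) recovers the prevalence of the sensitive attribute as follows:

```python
# Hedged sketch of the unrelated-question crosswise-model estimator.
# This is the textbook formula, not the authors' implementation.

def crosswise_estimate(n_same: int, n: int, p: float) -> float:
    """Estimate the prevalence pi of a sensitive attribute.

    n_same: respondents reporting that their answers to the sensitive
            and the unrelated question are the same (both yes / both no)
    n:      total number of respondents
    p:      known probability of a 'yes' to the unrelated question
            (must differ from 0.5, otherwise pi is not identified)
    """
    if p == 0.5:
        raise ValueError("p must differ from 0.5 for identification")
    lam = n_same / n  # observed proportion of 'same' reports
    # Expected 'same' proportion: lam = pi*p + (1 - pi)*(1 - p)
    # Solving for pi gives: pi = (lam + p - 1) / (2*p - 1)
    return (lam + p - 1) / (2 * p - 1)
```

For example, with p = 0.2 and 620 of 1,000 respondents reporting "same", the estimate is 0.30, i.e., a 30% prevalence of the sensitive behavior.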

Keywords


Sensitive Questions, Online Survey, Randomized Response Technique, Crosswise Model, Plagiarism



DOI: http://dx.doi.org/10.18148/srm/2016.v10i3.6703

Copyright (c) 2016 Marc Höglinger, Ben Jann, Andreas Diekmann

Hosted by the Library of the University of Konstanz