Survey Research Methods
https://ojs.ub.uni-konstanz.de/srm

Survey Research Methods is the official peer-reviewed journal of the European Survey Research Association (ESRA). The journal publishes articles in English that discuss methodological issues related to survey research.

Publisher: European Survey Research Association
Language: en-US
ISSN: 1864-3361
Copyright Notice: https://ojs.ub.uni-konstanz.de/srm/copyright


What Parcel Tax Records Tell Us About Homeownership Measurement in Surveys
https://ojs.ub.uni-konstanz.de/srm/article/view/7904

Goal. This research aims to understand the measurement error in self-reported homeownership data collected by surveys.

Methods. The analysis focuses on Detroit as a case study. We use legal ownership status in administrative records (the City of Detroit parcel tax records) as the benchmark to validate self-reported ownership status collected in a survey (the Detroit Metro Area Communities Study). We compare data from two question formats, which measure ownership at the household level and at the individual level, respectively. We also study the associations between sociodemographic characteristics and measurement error in self-reported ownership.

Results. The results suggest that (1) respondents do not always interpret the ownership questions as intended, (2) the reported ownership status is sensitive to question format, and (3) the risk of measurement error appears to be heterogeneous across the population.

Implications. The results challenge the assumption that homeownership is a standard fact whose reporting is unaffected by how it is measured. The findings are useful for understanding discrepancies across survey results and for advising how to craft homeownership questions in surveys.

Shiyu Zhang, James Wagner, Elisabeth Gerber, Jeffrey Morenoff
Copyright (c) 2022 Shiyu Zhang, James Wagner, Elisabeth Gerber, Jeffrey Morenoff (CC BY-NC 4.0: https://creativecommons.org/licenses/by-nc/4.0)
Published 2022-08-10 | Survey Research Methods 16(2), 133-145 | DOI: 10.18148/srm/2022.v16i2.7904
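The record-check design this abstract describes, matching survey self-reports against an administrative benchmark and tabulating disagreements, can be illustrated with a short sketch. The code below is not the authors' analysis; the column names and data are hypothetical.

```python
# A minimal sketch of a record-check validation: survey self-reports are
# compared with an administrative benchmark and classification errors are
# tabulated. Field names and data are invented for illustration.
import pandas as pd

# Hypothetical merged file: one row per respondent, self-reported ownership
# from the survey and legal ownership from parcel tax records.
df = pd.DataFrame({
    "self_report": [1, 1, 0, 1, 0, 0, 1, 0],   # 1 = reports owning the home
    "tax_record":  [1, 0, 0, 1, 0, 1, 1, 0],   # 1 = listed as legal owner
})

# Cross-tabulate the survey report against the administrative benchmark.
print(pd.crosstab(df["self_report"], df["tax_record"]))

# Gross agreement and the two directions of misclassification.
agreement = (df["self_report"] == df["tax_record"]).mean()
over_report = ((df["self_report"] == 1) & (df["tax_record"] == 0)).mean()
under_report = ((df["self_report"] == 0) & (df["tax_record"] == 1)).mean()
print(f"agreement={agreement:.2f}, over-report={over_report:.2f}, "
      f"under-report={under_report:.2f}")
```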
Observing Interviewer Performance in Slices or by Traces: A Comparison of Methods to Predict Interviewers' Individual Contributions to Interviewer Variance
https://ojs.ub.uni-konstanz.de/srm/article/view/7672

The interviewing practice of survey interviewers has long been recognized as an important contributor to measurement error in survey data. In this article, we compare two approaches that can be used to identify interviewers whose task performance might be inadequate and damaging to data quality. The first approach assesses interviewing behavior through audio-recorded interviews. Behavioral assessments capture actual behavior in an interview but typically rely on only "slices" of observed behavior. The second approach is based on interview time paradata, a type of "trace" data that can easily be aggregated into summary measures, such as average interview speed, at the interviewer level. In the current study, we use data from the Dutch-speaking subsample of interviewers employed in two rounds of the European Social Survey in Belgium to evaluate how well these two approaches predict interviewers' contributions to interviewer variance. The results show that interviewers who deviate from a larger number of standardized interviewing practices in one (early) audio-recorded interview, as well as those who tend to accelerate their interviewing speed over the course of an interview, tend to contribute more to interviewer variance. The two types of performance assessments appear to be independent, additive predictors of interviewers' variance contributions. While statistically significant, the effects are nevertheless modest in size. The implication for practice is that interviewer monitoring would benefit from well-considered combinations of behavioral and paradata-based assessments.

Celine Wuyts, Geert Loosveldt
Copyright (c) 2022 Celine Wuyts, Geert Loosveldt (CC BY-NC 4.0: https://creativecommons.org/licenses/by-nc/4.0)
Published 2022-08-10 | Survey Research Methods 16(2), 147-163 | DOI: 10.18148/srm/2022.v16i2.7672
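The quantity at the heart of this abstract, an interviewer's contribution to interviewer variance, is commonly estimated as the interviewer-level variance component of a multilevel model with respondents nested within interviewers. The sketch below illustrates that general approach on simulated data; it is not the authors' code, and all names and figures are invented.

```python
# A minimal sketch: estimate between-interviewer variance as the random-
# intercept variance in a multilevel model of respondents within interviewers.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_interviewers, n_per = 50, 20

# Simulate interviewer effects (between-interviewer SD = 0.5) plus
# respondent-level noise (within SD = 1.0).
iv_effect = rng.normal(0, 0.5, n_interviewers)
df = pd.DataFrame({
    "interviewer": np.repeat(np.arange(n_interviewers), n_per),
})
df["y"] = iv_effect[df["interviewer"]] + rng.normal(0, 1.0, len(df))

# Random-intercept model: y_ij = mu + u_j + e_ij.
model = smf.mixedlm("y ~ 1", df, groups=df["interviewer"]).fit()
var_between = float(model.cov_re.iloc[0, 0])   # interviewer variance
var_within = model.scale                       # residual variance

# Intraclass correlation: share of total variance attributable to interviewers.
icc = var_between / (var_between + var_within)
print(f"interviewer variance={var_between:.3f}, ICC={icc:.3f}")
```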
Boosting Survey Response Rates by Announcing Undefined Lottery Prizes in Invitation Email Subject Lines: Evidence from a Global Randomized Controlled Trial
https://ojs.ub.uni-konstanz.de/srm/article/view/7651

We test whether stating the possibility of winning an unspecified prize (announcing undefined lottery prizes, henceforth AULP) in the subject line of a survey invitation email increases contact and response rates. As adherence to Muslim cultural norms and Islamic religious values might negatively affect the response to lottery-style prize assignment, we further explore whether the magnitude of these impacts differs for respondents located in Organization of Islamic Cooperation (OIC) member states. We also test for potential impacts of providing translations into a national/official language of the state in which the potential respondent's organization is located (in addition to the default English version), as well as for non-response bias and compromised response quality. To these ends, we conduct a unique randomized controlled trial on a global sample of over 5,000 key staff members of microfinance institutions. Our analysis yields four main results. First, on average, contact and response rates are significantly higher for the AULP treatment group. Second, we find intuitive regional heterogeneity in the impact of the AULP treatment between OIC member states and non-member states. Third, although our findings reveal a positive impact of providing translations into a relevant national/official language, in line with the literature on international survey research, translations do not generally boost AULP treatment effects on average. Last, quality checks provide no evidence of AULP leading to non-response bias or lower response quality, as defined by various standard metrics.

Syedah Ahmad, Robert Lensink, Annika Mueller
Copyright (c) 2022 Syedah Ahmad, Robert Lensink, Annika Mueller (CC BY-NC 4.0: https://creativecommons.org/licenses/by-nc/4.0)
Published 2022-08-10 | Survey Research Methods 16(2), 165-206 | DOI: 10.18148/srm/2022.v16i2.7651


The Role of the Interviewer in Producing Mode Effects: Results From a Mixed Modes Experiment Comparing Face-to-Face, Telephone and Web Administration
https://ojs.ub.uni-konstanz.de/srm/article/view/7771

The presence of an interviewer (face-to-face or via telephone) is hypothesized to motivate respondents to generate an accurate answer and to reduce task difficulty, but also to reduce the privacy of the reporting situation. To study this, we used respondents from an existing face-to-face probability sample of the general population who were randomly assigned to face-to-face, telephone, and web modes of data collection. The prevalence of indicators of satisficing (e.g., non-differentiation, acquiescence, middle category choices, and primacy and recency effects) and of socially desirable responding was studied across modes. Results show differences between the interviewer modes and web in levels of satisficing (non-differentiation, acquiescence, and middle category choices) and in socially desirable responding. There were also unexpected findings of (1) different ways of satisficing by mode and (2) primacy/positivity effects in the telephone mode.

Steven Hope, Pamela Campanelli, Gerry Nicolaas, Peter Lynn, Annette Jäckle
Copyright (c) 2022 Steven Hope, Pamela Campanelli, Gerry Nicolaas, Peter Lynn, Annette Jäckle (CC BY-NC 4.0: https://creativecommons.org/licenses/by-nc/4.0)
Published 2022-08-10 | Survey Research Methods 16(2), 207-226 | DOI: 10.18148/srm/2022.v16i2.7771
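Of the satisficing indicators listed in the preceding abstract, non-differentiation is the most mechanical to compute: a respondent's answers to a battery of grid items are scored by how little they vary. The sketch below shows one common variant, the within-respondent standard deviation; the data and column names are hypothetical, not from the study.

```python
# A minimal sketch, assuming a battery of Likert-type grid items: score each
# respondent by the spread of their answers (lower spread = more straightlining).
import pandas as pd

battery = ["q1", "q2", "q3", "q4", "q5"]
df = pd.DataFrame({
    "q1": [3, 5, 2, 4], "q2": [3, 1, 2, 4], "q3": [3, 4, 2, 5],
    "q4": [3, 2, 2, 4], "q5": [3, 5, 2, 4],
})

# Within-respondent standard deviation across the battery; 0 indicates
# perfect straightlining (the same answer to every item).
df["nondiff_sd"] = df[battery].std(axis=1)

# Flag respondents who gave identical answers to all items.
df["straightliner"] = df["nondiff_sd"].eq(0)
print(df[["nondiff_sd", "straightliner"]])
```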
Answer Refused: Exploring How Item Non-response on Domestic Abuse Questions in a Social Survey Affects Analysis
https://ojs.ub.uni-konstanz.de/srm/article/view/7823

We explore the pattern, potential drivers, and implications of item non-response on survey questions about domestic abuse. We draw on a longitudinal, representative, prospective survey of children and their families in Scotland (N = 3,646) and use multivariate regression models to examine non-response on domestic violence questions among mothers of young children. By triangulating data from multiple survey sweeps, we hypothesise that item non-response may be due to mothers experiencing violence, and we observe that factors which predict experiencing violence also predict item non-response. We compare conservative and generous dependent variables for domestic abuse prevalence and find that both yield similar results in multivariate models, but that the actual social gradient of domestic violence is likely to be steeper than we can see in survey data. We discuss the ethical implications of imputing missing data and argue that sometimes it is unethical to do so.

Valeria Skafida, Fiona Morrison, John Devaney
Copyright (c) 2022 Valeria Skafida, Fiona Morrison, John Devaney (CC BY-NC 4.0: https://creativecommons.org/licenses/by-nc/4.0)
Published 2022-08-10 | Survey Research Methods 16(2), 227-240 | DOI: 10.18148/srm/2022.v16i2.7823


Comparing Probability-Based Surveys and Nonprobability Online Panel Surveys in Australia: A Total Survey Error Perspective
https://ojs.ub.uni-konstanz.de/srm/article/view/7907

In this paper we report the findings of a study undertaken to learn whether the findings of Chang et al. (2009), Yeager et al. (2011), Erens et al. (2014), MacInnis et al. (2018), and Cornesse et al. (2020) would be replicated in Australia. Our Australian Online Panels Benchmarking Study (AOPBS) involved administering the same questionnaire across eight independent national Australian samples, aiming to achieve approximately 600 completed questionnaires/interviews from each sample. The questionnaire was administered by the Social Research Centre (SRC), a subsidiary of the Australian National University (ANU), to three probability samples and to five nonprobability samples drawn from the online panels operated by five independent nonprobability online panel providers. A dual-frame telephone sampling methodology was used for two of the probability surveys, and the third used an address-based sampling (ABS) frame. The target population for the AOPBS was persons aged 18 years and older living in Australia who were fluent in English. The three probability sample surveys likely had little Coverage Error, a known amount of Sampling Error, a nonignorable amount of Nonresponse Error, little Adjustment Error, and a small-to-modest amount of Measurement Error. Overall, the three probability samples as a group were less biased on the substantive measures and deviated less from the benchmark values than the nonprobability surveys. The five nonprobability surveys likely had a nonignorable amount of Coverage Error, an unknowable amount of Sampling Error, a nonignorable amount of Nonresponse Error, unknown Adjustment Error, and a small-to-modest amount of Measurement Error. Overall, the five nonprobability panel surveys as a group were more biased on the substantive measures and deviated more from the benchmark values than the probability surveys. Our AOPBS findings closely replicate those of previous comparison studies conducted in other countries.

Paul John Lavrakas, Darren Pennay, Dina Neiger, Benjamin Phillips
Copyright (c) 2022 Paul John Lavrakas, Darren Pennay, Dina Neiger, Benjamin Phillips (CC BY-NC 4.0: https://creativecommons.org/licenses/by-nc/4.0)
Published 2022-08-10 | Survey Research Methods 16(2), 241-266 | DOI: 10.18148/srm/2022.v16i2.7907
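The benchmark comparison logic in the preceding abstract, measuring how far each sample source's estimates deviate from external benchmark values, reduces to a simple calculation. The sketch below illustrates it with invented figures; the measure names and values are hypothetical, not AOPBS results.

```python
# A minimal sketch of a benchmark comparison: for each sample source, bias is
# the deviation of a survey estimate from an external benchmark, summarized
# as mean absolute bias across measures. All figures are invented.
import pandas as pd

# Hypothetical external benchmark values for three substantive measures.
benchmarks = pd.Series({"smoker": 0.14, "renter": 0.31, "has_licence": 0.82})

# Hypothetical weighted estimates from two sample sources.
estimates = pd.DataFrame({
    "probability_1":   {"smoker": 0.15, "renter": 0.30, "has_licence": 0.80},
    "nonprob_panel_1": {"smoker": 0.21, "renter": 0.36, "has_licence": 0.74},
})

# Signed bias per measure, then mean absolute bias per sample source.
bias = estimates.sub(benchmarks, axis=0)
mean_abs_bias = bias.abs().mean()
print(bias, mean_abs_bias, sep="\n\n")
```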