Survey Research Methods
https://ojs.ub.uni-konstanz.de/srm

Survey Research Methods is the official peer-reviewed journal of the European Survey Research Association (ESRA). The journal publishes articles in English that discuss methodological issues related to survey research.

Copyright for articles published in this journal is retained by the authors, with first publication rights granted to the journal. By virtue of their appearance in this open access journal, users may use, reuse, and build upon the material published in the journal, but only for non-commercial purposes and with proper attribution.

Contact: SurveyResearchMethods@uni-konstanz.de (Andre Pirralha); SurveyResearchMethods@uni-konstanz.de (Publication Officer)

Capturing Multiple Perspectives in a Multi-actor Survey: The Impact of Parental Presence During Child Interviews on Reporting Discrepancies
https://ojs.ub.uni-konstanz.de/srm/article/view/7419

Third-party presence is considered a potential threat to the quality of sensitive information gathered in face-to-face interviews. Problems arising from interference and reduced privacy due to bystander presence appear particularly pressing in child surveys: parental presence is quite common and likely more pervasive than other interviewee-bystander constellations. For surveys designed to capture multiple perspectives on the same issues, a key question is whether child interviews can provide an opinion independent of the parent information when parents are present during the interview. Using longitudinal multi-actor data from the German Family Panel (pairfam), the present study evaluates the impact of parental presence on child-parent discrepancies in survey reports on children’s problem behaviors and difficulties in the parent-child relationship. The longitudinal analysis of child-parent dyads allows selection into parental presence to be accounted for more thoroughly than cross-sectional approaches permit. While descriptive results suggest that parent and child reports are more similar when parents are present, fixed-effects regression analyses find no effects of changes in parental presence on reporting discrepancies within child-parent dyads.

Bettina Müller
Copyright (c) 2019 Bettina Müller
https://ojs.ub.uni-konstanz.de/srm/article/view/7419
Fri, 03 May 2019 00:00:00 +0000

Multivariate Tests for Phase Capacity
https://ojs.ub.uni-konstanz.de/srm/article/view/7370

To combat the potentially detrimental effects of nonresponse, most surveys repeatedly follow up with nonrespondents, often targeting a response rate or a predetermined number of completes. Each additional recruitment attempt generally brings in a new wave of data, but returns gradually diminish over the course of a static data collection protocol. Because each subsequent wave tends to contain fewer and fewer new responses, it produces smaller and smaller changes in point estimates; consequently, point estimates calculated from the accumulating data begin to stabilize. This is the notion of phase capacity, which suggests some form of design change is warranted, such as switching modes, increasing the incentive, or simply discontinuing nonrespondent follow-up. Phase capacity testing methods that have appeared in the literature to date are generally applicable only to a single point estimate, and it is unclear how to proceed if conflicting results are obtained from independent tests on two or more point estimates. This paper introduces two multivariate phase capacity tests, each designed to provide a universal, yes-or-no phase capacity determination for a battery of point estimates. The two competing methods’ performance is compared via simulation and via an application using data from the 2011 Federal Employee Viewpoint Survey.

Taylor H Lewis
Copyright (c) 2019 Taylor H Lewis
https://ojs.ub.uni-konstanz.de/srm/article/view/7370
Sat, 10 Aug 2019 00:00:00 +0000
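By way of illustration, and not the two multivariate tests this paper proposes: a minimal Python sketch of the stabilization idea the abstract describes, accumulating simulated recruitment waves, recomputing a vector of point estimates after each wave, and flagging a toy notion of phase capacity once the largest relative change falls below a threshold. The wave sizes, items, and threshold are all invented.

```python
# Toy illustration of the phase-capacity notion: cumulative point estimates
# stabilize as successive recruitment waves add fewer new responses.
# NOT the paper's proposed tests; simulated data, arbitrary threshold.
import numpy as np

rng = np.random.default_rng(42)

# Six recruitment waves with diminishing sizes; each column is one survey
# item whose cumulative mean serves as a point estimate.
wave_sizes = [500, 250, 120, 60, 30, 15]
waves = [rng.normal(loc=[3.2, 0.6, 52.0], scale=[1.0, 0.5, 10.0], size=(n, 3))
         for n in wave_sizes]

threshold = 0.01  # max tolerated relative change across ALL estimates
prev = None
accumulated = np.empty((0, 3))
for w, wave in enumerate(waves, start=1):
    accumulated = np.vstack([accumulated, wave])
    est = accumulated.mean(axis=0)  # cumulative point estimates
    if prev is None:
        print(f"wave {w}: estimates={np.round(est, 3)}")
    else:
        rel_change = np.abs(est - prev) / np.abs(prev)
        stable = bool((rel_change < threshold).all())
        print(f"wave {w}: estimates={np.round(est, 3)}, "
              f"max rel. change={rel_change.max():.4f}, stable={stable}")
        if stable:
            print("Phase capacity (in this toy sense) reached: "
                  "a design change may be warranted.")
            break
    prev = est
```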
Within-household selection of target-respondents impairs demographic representativeness of probabilistic samples: evidence from seven rounds of the European Social Survey
https://ojs.ub.uni-konstanz.de/srm/article/view/7383

This paper examines the demographic representativeness of different types of probabilistic samples based on the results of seven rounds of the European Social Survey. Focusing on the distinction between personal-register and non-personal-register samples, it demonstrates that the latter exhibit systematically larger gender and age biases. Expanding upon a ‘gold standard’ evaluation based on external criteria derived from Eurostat population statistics, an internal-criteria analysis leads to the conclusion that the inferior quality of surveys involving interviewer-driven within-household selection of target respondents results from interviewer discretion. Such interference results in the selection of individuals with higher levels of readiness and availability, which superficially improves survey outcome rates while yielding samples of actually inferior quality. The internal-criteria approach provides a straightforward and undemanding way of monitoring sample representativeness, and it proves especially useful for large cross-country projects: it requires no data external to the survey results and allows surveys to be compared regardless of possible differences in sampling frames, sampling designs, and fieldwork execution procedures.

Piotr Jabkowski, Piotr Cichocki
Copyright (c) 2019 Piotr Jabkowski, Piotr Cichocki
https://ojs.ub.uni-konstanz.de/srm/article/view/7383
Sat, 10 Aug 2019 00:00:00 +0000
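To make the external-criteria logic concrete, a minimal sketch with invented shares standing in for Eurostat benchmarks and realized sample margins: it summarizes each sample type’s demographic bias as the mean absolute deviation from the population shares, in percentage points.

```python
# Toy external-criteria representativeness check: compare sample gender/age
# shares against population benchmarks. All numbers are invented stand-ins,
# not actual Eurostat or ESS figures.
import pandas as pd

# Hypothetical population shares (stand-ins for Eurostat statistics).
population = pd.Series({
    "female": 0.52, "male": 0.48,
    "15-34": 0.30, "35-54": 0.35, "55+": 0.35,
})

# Hypothetical realized shares for the two sample types compared above.
samples = pd.DataFrame({
    "personal_register":     {"female": 0.53, "male": 0.47,
                              "15-34": 0.28, "35-54": 0.36, "55+": 0.36},
    "non_personal_register": {"female": 0.58, "male": 0.42,
                              "15-34": 0.22, "35-54": 0.34, "55+": 0.44},
})

# Mean absolute deviation from the benchmarks, in percentage points;
# larger values indicate worse demographic representativeness.
bias = (samples.sub(population, axis=0).abs().mean() * 100).round(2)
print(bias)
```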
Does mode of administration impact on quality of data? Comparing a traditional survey versus an online survey via a Voting Advice Application
https://ojs.ub.uni-konstanz.de/srm/article/view/7392

This paper compares two modes of administering an election survey: a traditional, door-to-door survey and an identical online version promoted via a Voting Advice Application. Whereas online political surveys are known to suffer from self-selection bias toward politically interested respondents, traditional surveys are plagued by socially desirable responding and are susceptible to satisficing and other fatigue-related effects. Using a propensity score matching methodology, we examine the extent to which such differences exist between the two modes of administration. While we report mixed findings regarding the structure of respondents’ answer patterns, significant differences emerged in relation to social desirability bias, with the offline group being more ‘affected’ than the online group.

Vasiliki Triga, Vasilis Manavopoulos
Copyright (c) 2019 Vasiliki Triga, Vasilis Manavopoulos
https://ojs.ub.uni-konstanz.de/srm/article/view/7392
Wed, 20 Mar 2019 00:00:00 +0000
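The general shape of propensity score matching, the technique this abstract names, can be sketched as follows. This is a generic one-nearest-neighbor match on the estimated score, not the authors’ exact specification; the covariates, outcome, and data are hypothetical.

```python
# Sketch of propensity score matching: model the probability of being in
# the online group from covariates, match each online respondent to the
# nearest offline respondent on that score, then compare outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 1000
# Hypothetical covariates: age, education (years), political interest (0-10).
X = np.column_stack([rng.normal(45, 15, n), rng.normal(13, 3, n),
                     rng.integers(0, 11, n)])
# Online (VAA) respondents skew younger and more politically interested.
p_online = 1 / (1 + np.exp(-(-0.04 * (X[:, 0] - 45) + 0.3 * (X[:, 2] - 5))))
online = rng.random(n) < p_online
# Hypothetical outcome, e.g., a social-desirability scale score.
y = 0.5 * online + 0.02 * X[:, 0] + rng.normal(0, 1, n)

# Step 1: estimate propensity scores.
ps = LogisticRegression(max_iter=1000).fit(X, online).predict_proba(X)[:, 1]

# Step 2: one-nearest-neighbor matching of online to offline on the score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[~online].reshape(-1, 1))
_, idx = nn.kneighbors(ps[online].reshape(-1, 1))
matched_offline_y = y[~online][idx.ravel()]

# Step 3: compare outcomes on the matched sample.
print("matched mean difference (online - offline):",
      round(float(y[online].mean() - matched_offline_y.mean()), 3))
```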
Doing a Time Use Survey on Smartphones Only: What Factors Predict Nonresponse at Different Stages of the Survey Process?
https://ojs.ub.uni-konstanz.de/srm/article/view/7385

The increasing use of smartphones opens up opportunities for novel ways of survey data collection, but it also poses new challenges. Collecting more and different types of data means that studies can become increasingly intrusive, and we risk over-asking participants, leading to nonresponse. This study documents nonresponse and nonresponse bias in a smartphone-only version of the Dutch Time Use Survey (TUS). Respondents from the Dutch LISS panel were asked to perform five sets of tasks to complete the whole TUS: 1) accept the invitation to participate in the study and install an app, 2) fill out a questionnaire on the web, 3) complete the time use diary on their smartphone, 4) answer pop-up questions, and 5) give permission to record sensor data (GPS locations and call data). Results show that 42.9% of invited panel members responded positively to the invitation to participate in a smartphone survey. However, only 28.9% of these willing panel members completed all stages of the study (roughly 12% of all invited panel members). Predictors of nonresponse differ somewhat at each stage. In addition, respondents who complete all smartphone tasks differ from those who drop out at some stage or do not participate at all. Using data collected in previous waves, we show that nonresponse leads to nonresponse bias in estimates of time use. We conclude by discussing implications for using smartphone apps in survey research.

Anne Elevelt, Peter Lugtig, Vera Toepoel
Copyright (c) 2019 Anne Elevelt, Peter Lugtig, Vera Toepoel
https://ojs.ub.uni-konstanz.de/srm/article/view/7385
Thu, 11 Apr 2019 00:00:00 +0000

Can Nonprobability Samples be Used for Social Science Research? A cautionary tale
https://ojs.ub.uni-konstanz.de/srm/article/view/7262

Survey researchers and social scientists are trying to understand the appropriate use of nonprobability samples as substitutes for probability samples in social science research. While cognizant of the challenges presented by nonprobability samples, scholars increasingly rely on these samples due to their low cost and speed of data collection. This paper contributes to the growing literature on the appropriate use of nonprobability samples by comparing two online nonprobability samples, Amazon’s Mechanical Turk (MTurk) and a Qualtrics Panel, with the General Social Survey (GSS), a gold-standard, nationally representative probability sample. Most research in this area focuses on determining the best techniques for improving point estimates from nonprobability samples, often using gold-standard surveys or census data to gauge the accuracy of those estimates. This paper differs from that line of research in that we examine how probability and nonprobability samples behave when used in multivariate analysis, the research technique employed by many social scientists. Additionally, we examine whether restricting each sample to a population well represented in MTurk (Americans age 45 and under) improves MTurk’s estimates. We find that, while Qualtrics and MTurk differ somewhat from the GSS, Qualtrics outperforms MTurk in both univariate and multivariate analysis. Further, restricting the samples substantially improves MTurk’s estimates, almost closing the gap with Qualtrics. With both Qualtrics and MTurk, we find a risk of false positives. Our findings suggest that these online nonprobability samples may sometimes be ‘fit for purpose,’ but should be used with caution.

Elizabeth S. Zack, John Kennedy, J. Scott Long
Copyright (c) 2019 Elizabeth S. Zack, John Kennedy
https://ojs.ub.uni-konstanz.de/srm/article/view/7262
Wed, 19 Jun 2019 00:00:00 +0000
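A toy demonstration of why restricting the comparison population can help, under the assumption that a convenience sample over-represents younger respondents whose outcomes differ from older respondents’. All numbers are simulated, not taken from the GSS, MTurk, or Qualtrics.

```python
# Simulate a population where the outcome depends on age, draw a
# convenience sample that heavily over-represents the under-45s, and
# compare estimates to the matching (full vs. restricted) benchmark.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population of 100,000 with an age-dependent binary outcome.
pop_age = rng.integers(18, 80, 100_000)
pop_y = rng.random(100_000) < np.where(pop_age <= 45, 0.45, 0.25)

# Convenience sample: under-45s are five times as likely to be recruited.
weights = np.where(pop_age <= 45, 5.0, 1.0)
idx = rng.choice(100_000, size=2000, replace=False, p=weights / weights.sum())
s_age, s_y = pop_age[idx], pop_y[idx]

# Full-population comparison: the sample estimate is badly biased.
print("full:    sample", round(s_y.mean(), 3),
      "vs benchmark", round(pop_y.mean(), 3))
# Restricted comparison (both sides limited to age <= 45): the gap closes.
print("age<=45: sample", round(s_y[s_age <= 45].mean(), 3),
      "vs benchmark", round(pop_y[pop_age <= 45].mean(), 3))
```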