https://ojs.ub.uni-konstanz.de/srm/issue/feed
Survey Research Methods
2019-09-16T11:51:55+00:00
Andre Pirralha SurveyResearchMethods@uni-konstanz.de
Open Journal Systems

Survey Research Methods is the official peer-reviewed journal of the European Survey Research Association (ESRA). The journal publishes articles in English that discuss methodological issues related to survey research.

https://ojs.ub.uni-konstanz.de/srm/article/view/7419
Capturing Multiple Perspectives in a Multi-actor Survey: The Impact of Parental Presence During Child Interviews on Reporting Discrepancies
2019-08-10T16:54:23+00:00
Bettina Müller bettina.mueller@soziologie.uni-muenchen.de

Third-party presence is considered a potential threat to the quality of sensitive information gathered in face-to-face interviews. Issues arising from interference and reduced privacy due to bystander presence appear particularly pressing in child surveys: parental presence is quite common and likely more pervasive than in other interviewee-bystander constellations. Focusing on surveys designed to capture multiple perspectives on the same issues, a key question is whether child interviews – in addition to parent information – can provide an independent opinion if parents are present during the interview. Using longitudinal multi-actor data from the German Family Panel (pairfam), the present study evaluates the impact of parental presence on child-parent discrepancies in survey reports on children’s problem behaviors and difficulties in the parent-child relationship. The longitudinal analysis of child-parent dyads allows for a more extensive consideration of selection into parental presence than cross-sectional approaches do. While descriptive results suggest that parent and child reports are more similar when parents are present, fixed-effects regression analyses find no effects of changes in parental presence on reporting discrepancies within child-parent dyads.

2019-05-03T00:00:00+00:00 Copyright (c) 2019 Bettina Müller

https://ojs.ub.uni-konstanz.de/srm/article/view/7370
Multivariate Tests for Phase Capacity
2019-08-10T16:54:21+00:00
Taylor H Lewis Taylor.Lewis@opm.gov

To combat the potentially detrimental effects of nonresponse, most surveys repeatedly follow up with nonrespondents, often targeting a response rate or a predetermined number of completes. Each additional recruitment attempt generally brings in a new wave of data, but returns gradually diminish over the course of a static data collection protocol. This is because each subsequent wave tends to contain fewer and fewer new responses, thereby producing smaller and smaller changes in point estimates. Consequently, point estimates calculated from the accumulating data begin to stabilize. This is the notion of phase capacity, suggesting that some form of design change is warranted, such as switching modes, increasing the incentive, or simply discontinuing nonrespondent follow-up. Phase capacity testing methods that have appeared in the literature to date are generally applicable only to a single point estimate, and it is unclear how to proceed if conflicting results are obtained from independent tests on two or more point estimates. The purpose of this paper is to introduce two multivariate phase capacity tests, each designed to provide a universal, yes-or-no phase capacity determination for a battery of point estimates. The two competing methods’ performance is compared via simulation and an application using data from the 2011 Federal Employee Viewpoint Survey.

2019-08-10T00:00:00+00:00 Copyright (c) 2019 Taylor H Lewis
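As a rough illustration of the stabilization idea described in the abstract above (not the multivariate tests the paper actually proposes), the following sketch tracks a cumulative point estimate across hypothetical follow-up waves and flags the wave at which its relative change first falls below an arbitrary tolerance; the wave sizes and the 1% threshold are assumptions made purely for illustration.

```python
# Minimal sketch of the phase-capacity intuition: an accumulating point
# estimate changes less and less with each follow-up wave. All data and the
# 1% stability threshold are hypothetical, chosen only for illustration.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical responses arriving in five follow-up waves of diminishing size.
waves = [rng.normal(loc=3.5, scale=1.0, size=n) for n in (400, 200, 100, 50, 25)]

accumulated = np.array([])
previous_estimate = None
for wave_number, wave in enumerate(waves, start=1):
    accumulated = np.concatenate([accumulated, wave])
    estimate = accumulated.mean()
    if previous_estimate is None:
        print(f"wave {wave_number}: estimate = {estimate:.3f}")
    else:
        relative_change = abs(estimate - previous_estimate) / abs(previous_estimate)
        print(f"wave {wave_number}: estimate = {estimate:.3f}, change = {relative_change:.2%}")
        if relative_change < 0.01:  # arbitrary stability tolerance
            print(f"Estimate appears stable after wave {wave_number}; "
                  "a design change or stopping rule could be considered here.")
            break
    previous_estimate = estimate
```

A univariate check of this kind is exactly what becomes ambiguous when several point estimates stabilize at different waves, which is the gap the paper's multivariate tests are meant to close.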
https://ojs.ub.uni-konstanz.de/srm/article/view/7383
Within-household selection of target-respondents impairs demographic representativeness of probabilistic samples: evidence from seven rounds of the European Social Survey
2019-09-16T11:51:55+00:00
Piotr Jabkowski pjabko@amu.edu.pl
Piotr Cichocki pcichoc@amu.edu.pl

This paper examines the demographic representativeness of different types of probabilistic samples based on the results of seven rounds of the European Social Survey. Focusing on the distinction between personal-register and non-personal-register samples, it demonstrates that the latter exhibit systematically larger gender and age biases. Expanding upon a ‘gold standard’ evaluation based on external criteria derived from Eurostat population statistics, an internal-criteria analysis leads to the conclusion that the inferior quality of surveys involving interviewer-driven within-household selection of target respondents results from interviewer discretion. Such interference leads to the selection of individuals with higher levels of readiness and availability, which superficially improves survey outcome rates while yielding samples of inferior quality. The internal-criteria approach provides a straightforward and undemanding way of monitoring sample representativeness, and it is especially useful for large cross-country projects: it requires no data external to the survey results and allows surveys to be compared regardless of differences in sampling frames, sampling designs, and fieldwork execution procedures.

2019-08-10T00:00:00+00:00 Copyright (c) 2019 Piotr Jabkowski, Piotr Cichocki

https://ojs.ub.uni-konstanz.de/srm/article/view/7392
Does mode of administration impact on quality of data? Comparing a traditional survey versus an online survey via a Voting Advice Application
2019-08-10T16:49:25+00:00
Vasiliki Triga vasiliki.triga@cut.ac.cy
Vasilis Manavopoulos v.manavopoulos@cut.ac.cy

This paper compares two modes of administering an election survey: a traditional, door-to-door survey and an identical online version promoted via a Voting Advice Application. Whereas online political surveys are known to suffer from self-selection bias toward politically interested respondents, traditional surveys are plagued by socially desirable responding and are susceptible to satisficing and other fatigue-related effects. Using a propensity score matching methodology, we examine the extent to which such differences exist between the two modes of administration. While we report mixed findings regarding the structure of respondents’ answer patterns, significant differences emerged in relation to social desirability bias, with the offline group being more ‘affected’ than the online group.

2019-03-20T00:00:00+00:00 Copyright (c) 2019 Vasiliki Triga, Vasilis Manavopoulos
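To make the matching step mentioned in the abstract above concrete, here is a minimal propensity score matching sketch; the covariates, the simulated samples, and the 1:1 nearest-neighbour matching with replacement are assumptions chosen for illustration, not the authors' actual specification.

```python
# Bare-bones 1:1 propensity score match between an "online" and an "offline"
# sample. All variable names and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n_online, n_offline = 300, 500

# Hypothetical covariates used to model selection into the online (VAA) mode.
data = pd.DataFrame({
    "age":       np.concatenate([rng.normal(35, 10, n_online), rng.normal(50, 15, n_offline)]),
    "education": np.concatenate([rng.integers(3, 6, n_online), rng.integers(1, 6, n_offline)]),
    "online":    np.concatenate([np.ones(n_online), np.zeros(n_offline)]).astype(int),
})

# Step 1: estimate each respondent's propensity of being in the online sample.
covariates = data[["age", "education"]]
model = LogisticRegression(max_iter=1000).fit(covariates, data["online"])
data["pscore"] = model.predict_proba(covariates)[:, 1]

# Step 2: for every online respondent, find the offline respondent with the
# closest propensity score (nearest-neighbour matching, with replacement).
online = data[data["online"] == 1]
offline = data[data["online"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(offline[["pscore"]])
_, idx = nn.kneighbors(online[["pscore"]])
matched_offline = offline.iloc[idx.ravel()]

# Outcome comparisons (e.g., a social desirability index) would then be run
# on `online` versus `matched_offline` rather than on the raw samples.
print(len(online), "online respondents matched to",
      matched_offline.index.nunique(), "distinct offline respondents")
```

The point of matching here is simply to compare mode effects among respondents with similar selection profiles, so that differences in social desirability or satisficing are not confounded with who chose to answer online.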
https://ojs.ub.uni-konstanz.de/srm/article/view/7385
Doing a Time Use Survey on Smartphones Only: What Factors Predict Nonresponse at Different Stages of the Survey Process?
2019-08-10T16:49:26+00:00
Anne Elevelt a.elevelt@uu.nl
Peter Lugtig p.lugtig@uu.nl
Vera Toepoel v.toepoel@uu.nl

The increasing use of smartphones opens up opportunities for novel ways of survey data collection, but it also poses new challenges. Collecting more and different types of data means that studies can become increasingly intrusive, and we risk over-asking participants, leading to nonresponse. This study documents nonresponse and nonresponse bias in a smartphone-only version of the Dutch Time Use Survey (TUS). Respondents from the Dutch LISS panel were asked to perform five sets of tasks to complete the whole TUS: 1) accept the invitation to participate in the study and install an app, 2) fill out a questionnaire on the web, 3) complete the time use diary on their smartphone, 4) answer pop-up questions, and 5) give permission to record sensor data (GPS locations and call data). Results show that 42.9% of invited panel members responded positively to the invitation to participate in a smartphone survey. However, only 28.9% of these willing panel members completed all stages of the study. Predictors of nonresponse differ somewhat at every stage, and respondents who complete all smartphone tasks differ from the groups who do not participate at some or any stage of the study. Using data collected in previous waves, we show that nonresponse leads to nonresponse bias in estimates of time use. We conclude by discussing implications for using smartphone apps in survey research.

2019-04-11T00:00:00+00:00 Copyright (c) 2019 Anne Elevelt, Peter Lugtig, Vera Toepoel

https://ojs.ub.uni-konstanz.de/srm/article/view/7262
Can Nonprobability Samples be Used for Social Science Research? A cautionary tale
2019-08-10T16:54:22+00:00
Elizabeth S. Zack eszack@umail.iu.edu
John Kennedy kennedyj@indiana.edu
J. Scott Long jslong@indiana.edu

Survey researchers and social scientists are trying to understand the appropriate use of nonprobability samples as substitutes for probability samples in social science research. While cognizant of the challenges presented by nonprobability samples, scholars increasingly rely on these samples due to their low cost and speed of data collection. This paper contributes to the growing literature on the appropriate use of nonprobability samples by comparing two online nonprobability samples, Amazon’s Mechanical Turk (MTurk) and a Qualtrics Panel, with a gold-standard nationally representative probability sample, the GSS. Most research in this area focuses on determining the best techniques for improving point estimates from nonprobability samples, often using gold-standard surveys or census data to determine the accuracy of the point estimates. This paper differs from that line of research in that we examine how probability and nonprobability samples differ when used in multivariate analysis, the research technique used by many social scientists. Additionally, we examine whether restricting each sample to a population well represented in MTurk (Americans age 45 and under) improves MTurk’s estimates. We find that, while Qualtrics and MTurk differ somewhat from the GSS, Qualtrics outperforms MTurk in both univariate and multivariate analysis. Further, restricting the samples substantially improves MTurk’s estimates, almost closing the gap with Qualtrics. With both Qualtrics and MTurk, we find a risk of false positives. Our findings suggest that these online nonprobability samples may sometimes be ‘fit for purpose,’ but should be used with caution.

2019-06-19T00:00:00+00:00 Copyright (c) 2019 Elizabeth S. Zack, John Kennedy
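The multivariate comparison described in the abstract above can be pictured as fitting the same model in each sample and placing the coefficients side by side; the sketch below does this with simulated stand-in data, a made-up outcome, and an ordinary least squares model, none of which reflect the paper's actual variables or specification.

```python
# Illustrative only: fit one regression model in a probability sample and a
# nonprobability sample, then compare coefficients. Data and model are
# hypothetical stand-ins for the GSS/MTurk/Qualtrics comparison.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

def simulate_sample(n, age_mean):
    """Generate a toy sample; the nonprobability sample skews younger."""
    age = rng.normal(age_mean, 12, n)
    education = rng.integers(1, 6, n)
    outcome = 0.5 * education - 0.02 * age + rng.normal(0, 1, n)
    return pd.DataFrame({"age": age, "education": education, "outcome": outcome})

probability_sample = simulate_sample(2000, age_mean=48)    # e.g., a GSS-like sample
nonprobability_sample = simulate_sample(800, age_mean=35)  # e.g., an opt-in panel

coefs = {
    name: smf.ols("outcome ~ age + education", data=df).fit().params
    for name, df in [("probability", probability_sample),
                     ("nonprobability", nonprobability_sample)]
}

# A side-by-side table of coefficients shows where the two samples would lead
# an analyst to different substantive conclusions.
print(pd.DataFrame(coefs).round(3))
```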