The online version of this article (https://doi.org/10.18148/srm/2025.v19i2.8273) contains supplementary material.
Survey response rates have been declining over the years and across countries (de Leeuw & de Heer, 2002; Luiten et al., 2020; Stoop, 2005). This decline is well-documented in the United States (Atrostic et al., 2001; Curtin et al., 2005; Dutwin & Lavrakas, 2017; Williams & Brick, 2017) and Europe (Beullens et al., 2018; de Leeuw et al., 2018). The trend has raised significant concerns among survey specialists and poses serious challenges for researchers and policymakers who depend on survey data to inform decisions and understand social phenomena (cf. National Research Council, 2013; de Leeuw et al., 2020). Understanding the causes and consequences of this decline is crucial for preserving the quality and representativeness of survey-based research.
Research on the causes of nonresponse and on strategies for preventing it has primarily focused on survey design and implementation (e.g., Dillman, 1978; Dillman et al., 2014), the use of incentives (e.g., Singer & Ye, 2013), interviewer behavior (e.g., Morton-Williams, 1993; Groves et al., 1992), and respondent characteristics (e.g., Stoop, 2005). In contrast, less attention has been given to the broader survey climate and respondents’ attitudes toward surveys, even though these are often cited as key theoretical concepts (e.g., Groves & Couper, 1998; Loosveldt & Joye, 2016; Lyberg & Lyberg, 1991). Several comprehensive ‘surveys on surveys’ have been conducted (Goyder, 1986; Kim et al., 2011; Loosveldt & Storms, 2008), but inconsistent measurements across these studies make it difficult to compare survey attitudes across different surveys, time periods, and countries, and limit our ability to perform trend analyses. The scarcity of empirical data on survey attitudes and their effect on nonresponse rates can thus be attributed to the absence of a reliable instrument for measuring such attitudes.
To address this gap, de Leeuw et al. (2019) developed the Survey Attitude Scale (SAS) based on an extensive literature review. The goal was to develop a brief and reliable instrument for measuring survey attitudes across countries that is easy to implement in ongoing surveys and suitable for online and mixed-mode studies.
The literature review of studies on survey attitudes and opinions identified three distinct theoretical dimensions: two that positively influence respondents’ intentions to participate in surveys, and one that has a negative impact (Cialdini, 1984; Dillman et al., 2014; Groves, 1989; Groves & Couper, 1998; Stoop et al., 2010). The first dimension, survey enjoyment, reflects respondents’ perceptions of surveys as a positive and enjoyable experience, as discussed by Cialdini (1984) and Dillman (1978). The second dimension points to a positive survey climate and emphasizes the subjective importance and value that respondents attribute to surveys, as noted by Rogelberg et al. (2001). The third dimension indicates a negative survey climate; surveys are perceived by respondents as a burden, which has a negative influence on motivation and participation (Goyder, 1986; Schleifer, 1986).
For each dimension, three questions were selected based on their performance in prior nonresponse studies and ‘surveys on surveys’ (Groves et al., 1992; Groves et al., 2000; Hox et al., 1995; Loosveldt & Storms, 2008; Rogelberg et al., 2001; Singer et al., 1998; Schleifer, 1986; Stocké, 2006), resulting in a nine-item scale. Three questions per dimension are necessary for conducting statistical analyses of measurement equivalence across countries (e.g., Bollen, 1989). For a detailed account of the SAS development and item selection process, refer to de Leeuw et al. (2022).
Whether the SAS is an effective and appropriate instrument for measuring survey attitudes depends on its reliability and validity. To assess this, the SAS was implemented in three probability-based panel studies: the German GESIS and PPSM panels and the Dutch LISS panel. The scale showed a replicable three-dimensional factor structure—survey enjoyment, survey value, and survey burden. Moreover, measurement equivalence was established cross-culturally between the Netherlands and Germany, and, for the German GESIS panel, measurement equivalence was also confirmed between the online and paper mail modes (de Leeuw et al., 2019). The reliability of the subscales of survey enjoyment, value, and burden was satisfactory, and there were clear indications of construct validity. Furthermore, positive correlations between the survey attitude subscales and respondents’ willingness to participate in future surveys suggest predictive validity for the SAS (de Leeuw et al., 2022). Fiedler et al. (2022) tested the SAS in four online studies involving young, highly educated German students. They replicated the latent structure of the SAS across all samples and found that factor loadings and reliability of the scores supported the theoretical framework.
Beyond assessing reliability and validity, three research questions will be addressed to evaluate the effectiveness of the SAS in understanding survey nonresponse and panel dropout.
The first research question (RQ1) is: are respondents’ survey attitudes as measured by the SAS stable across waves (Kenny & Zautra, 2001)? That is, to what extent are respondents’ survey attitudes consistent over time, as opposed to being influenced by the situation in which they are measured (e.g., survey context, Loosveldt & Joye, 2016)? Understanding the stability of survey attitudes is both practically and theoretically important. If respondents’ survey attitudes remain stable over time, we can measure them at a single point and use this data to profile subpopulations and develop targeted strategies to address nonresponse and panel dropout (Lynn, 2015). Conversely, if respondents’ survey attitudes vary across measurement occasions, we need to measure them at each wave. Moreover, to better understand respondents’ nonresponse behavior, researchers must account for not only individual differences in survey attitudes but also situational factors affecting survey attitudes, and interactions between individual attitudes and the measurement situation (cf. Dillman, 2020).
The second research question (RQ2) is: how effective is the SAS in explaining survey nonresponse and panel dropout beyond well-established predictors, such as respondents’ psychographic and sociodemographic profiles (Groves & Couper, 1998; Stoop et al., 2010)? If respondents’ survey attitudes primarily explain their nonresponse behavior through their association with psychographic and sociodemographic predictors, survey attitudes can be considered mediating mechanisms that help us understand how psychographic and sociodemographic characteristics influence nonresponse. However, if survey attitudes are unique characteristics of respondents, they provide distinct explanatory insights and may also offer additional predictive value.
The third research question (RQ3) is: how effective is the SAS in forecasting survey nonresponse and panel dropout beyond well-established predictors? If survey attitudes are key elements in the nonresponse puzzle, assessing their predictive validity and power will clarify their practical value in addressing declining response rates.
We conducted three studies to address these questions. We first introduce the dataset used across these studies and then present each study in detail. The paper concludes with a summary and discussion of the results.
We used data from six waves of the LISS panel (Longitudinal Internet studies for the Social Sciences), which is administered and managed by the non-profit research institute Centerdata (Tilburg University, the Netherlands). The LISS panel is a probability-based online panel of the Dutch population, established in 2007; the first wave was fielded in 2008. To compensate for panel dropout, refreshment samples were drawn from the Dutch population in 2009 and 2011. These additional cases are included in our analyses, with waves preceding their panel membership treated as missing values. More information about the LISS panel can be found at www.lissdata.nl. For a description of the LISS panel, see Scherpenzeel & Das (2010).
Between 2008 and 2013, the SAS was administered as part of the annual Core Study on Personality (CentERdata, 2022). In total, 9960 LISS respondents completed the SAS at least once during this period. These panel data also include background variables describing respondents’ psychographic and sociodemographic profiles. In addition, CentERdata provided metadata on survey participation, including the number of invitations sent and questionnaires completed for each participant in the LISS panel from 2008 to 2015. Table 1 presents descriptive statistics for all variables used. The variables are unstandardized to retain their original meanings and to allow for a more direct interpretation of the regression coefficients.
Table 1 Operationalization and descriptive statistics of all variables used
Variable | Operationalization | Mean | SD | Min | Max |
Note: LISS files used and syntax are documented in the supplementary material to this paper. All variables are unstandardized. ᵃ Variables selected by experts; see section 4.1
Completed | Number of completed interviews per year | 31 | 19 | 0 | 93 |
Invited | Number of invitations to participate in a survey per year | 43 | 16 | 1 | 95 |
Wave | 2008 = 0, 2015 = 7 | 2.92 | 2.25 | 0 | 7 |
Covariates of survey (non)response | |||||
Femaleᵃ | Female = 1, male = 0 | 0.53 | 0.49 | 0 | 1 |
Ageᵃ | Age in years at first wave | 45.1 | 16.1 | 16 | 95 |
Educationᵃ | School diplomas recoded into years spent in the educational system | 12.72 | 3.38 | 6 | 18 |
Migrantᵃ | Non-Dutch = 1, Dutch = 0 | 0.12 | 0.32 | 0 | 1 |
Type of dwellingᵃ | Self-owned = 1, Other = 0 | 0.75 | 0.43 | 0 | 1 |
Household incomeᵃ | Monthly household income in Euro after taxes | 3098 | 5569 | 0 | 299,660 |
Urbanizationᵃ | Based on surrounding address density (not urban = 1, extremely urban = 5) | 2.98 | 1.27 | 1 | 5 |
SimPCᵃ | Computer and/or internet connection provided by LISS = 1, not = 0 | 0.06 | 0.23 | 0 | 1 |
Household sizeᵃ | Number of household members | 2.81 | 1.37 | 1 | 9 |
Social trustᵃ | You can’t be too careful = 0, most people can be trusted = 10 | 6.07 | 2.11 | 0 | 10 |
Voterᵃ | Respondent voted in at least one national election = 1, not = 0 | 0.89 | 0.31 | 0 | 1 |
Dissatisfaction with leisure timeᵃ | Dissatisfaction with amount of available leisure time (entirely satisfied = 0, entirely dissatisfied = 10) | 2.99 | 2.14 | 0 | 10 |
Agreeablenessᵃ | Big-5: Agreeableness score (very inaccurate/not agreeable at all = 1, very accurate/very agreeable = 5) | 3.87 | 0.49 | 1 | 5 |
Survey Attitude Scale | |||||
Enjoyment: mean | Person-mean of survey enjoyment across waves (tot. disagree = 1, tot. agree = 7) | 4.67 | 0.72 | 1 | 7 |
Enjoyment: deviation | Deviation from the person-mean of survey enjoyment at each wave | −0.001 | 0.97 | −5.20 | 5.10 |
Value: mean | Person-mean of survey value | 5.58 | 0.57 | 1 | 7 |
Value: deviation | Deviation from the person-mean | −0.01 | 0.84 | −5.51 | 3.74 |
Burden: mean | Person-mean of survey burden | 3.06 | 0.62 | 1 | 7 |
Burden: deviation | Deviation from the person-mean | 0.01 | 0.98 | −3.85 | 5.18 |
The first research question (RQ1) examines the extent to which survey attitudes remain stable within individuals or vary depending on the situation (cf. Loosveldt & Joye, 2016; Loosveldt & Storms, 2008; Lynn, 2015). We employ a latent state-trait variance decomposition model to examine this question. The model distinguishes between a trait component, which reflects stability in differences between individuals over time, and a state component, which captures variations within individuals across measurement occasions (Zijlmans & Hamaker, 2014). The approach is grounded in latent state-trait theory (Steyer et al., 1992, 1999), which holds that measurements are not conducted in a situational vacuum. Instead, the sources of variance in (psychological) measurement include individual differences, situational factors, and interactions between persons and situations.
We use a multi-state single-trait multi-method model (Kenny & Zautra, 2001; Schmitt & Steyer, 1993) as represented in Fig. 1. This model decomposes observed variables into a trait component capturing stability across time and situations, and a state component reflecting within-individual temporal variations (Zijlmans & Hamaker, 2014). The model uses these variance decompositions to compute two coefficients that measure the stability of survey attitude for each observed variable: (1) the consistency coefficient, which quantifies the proportion of observed variance attributable to true individual differences, and (2) the occasion specificity coefficient, which specifies the proportion of observed variance due exclusively to situational differences among individuals.
Fig. 1 The latent state-trait model for the survey attitude scale. JOYt, VALt and BURt represent survey enjoyment, survey value and survey burden at wave t, respectively. ξi is the unique trait factor for indicator i. ηt is the common state at wave t with common state variance ζt. τ represents the common trait. Unique states (εit) as well as the factorial structure of waves 4 and 5 are omitted for clarity (indicated by a dashed line).
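Expressed in terms of the variance components of this model, the two coefficients take the following general form (a sketch in our own notation, not the exact parameterization of Fig. 1: σ²_trait collects all stable, trait-related variance in the observed subscale score Y_it, σ²_state the wave-specific state variance, and σ²_error the measurement error variance):

```latex
\operatorname{Var}(Y_{it}) = \sigma^{2}_{\text{trait},it} + \sigma^{2}_{\text{state},it} + \sigma^{2}_{\text{error},it},
\qquad
\mathrm{Con}(Y_{it}) = \frac{\sigma^{2}_{\text{trait},it}}{\operatorname{Var}(Y_{it})},
\qquad
\mathrm{Spe}(Y_{it}) = \frac{\sigma^{2}_{\text{state},it}}{\operatorname{Var}(Y_{it})}
```

Under this decomposition, consistency and occasion specificity together amount to the reliable (non-error) share of the variance of the subscale score at that wave.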
To estimate these coefficients, we apply the model to all six waves (2008–2013) of the LISS panel in which the SAS was included, taking the three subscales of the SAS as separate indicators. We calculate the average subscale score for each person at each panel wave and use these scores as observed variables in the model. These scores are indicated in Fig. 1 as JOY1, VAL1, BUR1, and so on.
To compare respondents’ survey attitudes across time, measurement invariance over time is required. We therefore evaluate the latent state-trait model’s fit at various levels of measurement invariance. The weakest form is configural invariance, in which the factor structure remains the same across measurement occasions. Metric invariance additionally restricts factor loadings to be equal across time points. Finally, scalar invariance further requires that intercepts are equal across time points, allowing for valid comparisons of latent means (Vandenberg & Lance, 2000).
Missing values due to attrition are assumed to be missing at random and are dealt with using full information maximum likelihood. We evaluate model fit using the following indices: the Chi-square test, RMSEA, CFI, TLI, and SRMR. Given our large sample size (N = 9951), a Chi-square difference test for nested models is overly strict. We therefore primarily rely on the comparative fit index (CFI), with a difference greater than 0.01 indicating a significant difference (Cheung & Rensvold, 2002). We use Mplus (Muthén & Muthén, 1998–2017) to obtain estimates and fit indices. For further details on the analyses, see Bons (2015).
Table 2 reports the fit indices of three models with increasing measurement equality constraints. The results indicate that scalar invariance across waves can be assumed, since ∆CFI is smaller than 0.01 between model 3 (scalar invariance) and model 1 (configural invariance). Accordingly, we can fit a second-order latent state-trait model with scalar invariance constraints (equal factor loadings and intercepts over time) to assess the stability of the SAS across waves. All fit indices show that this model fits well: RMSEA = 0.025 (CI: 0.024; 0.027), SRMR = 0.055, CFI = 0.973 and TLI = 0.972.
Table 2 Measurement invariance over time: model-fit indices by type of measurement invariance (RQ1)
Invariance type | χ² | df | CFI | Cumulative ∆CFI | TLI | RMSEA | SRMR |
Model 1: configural | 800.2 | 131 | 0.981 | – | 0.977 | 0.023 | 0.043 |
Model 2: metric | 820.4 | 141 | 0.980 | 0.001 | 0.979 | 0.022 | 0.047 |
Model 3: scalar | 1098.8 | 151 | 0.973 | 0.008 | 0.972 | 0.025 | 0.055 |
Assessing the stability of the SAS, Table 3 reports the consistency and wave-specificity coefficients. Consistency coefficients range from 0.37 to 0.62, indicating that a moderate to large proportion of the variance is attributable to enduring, trait-like differences between respondents. Wave-specificity coefficients range from 0.06 to 0.33, indicating that a small to moderate portion of the variance is attributable to situational, state-like differences within respondents across waves. On average, the trait component of the SAS is 4.1 times as large as its state component for survey enjoyment, 2.0 times as large for survey value, and 6.4 times as large for survey burden.
Table 3 Consistency and specificity coefficients estimated by the latent state-trait model (RQ1)
Coefficient | Wave | Survey Enjoyment | Survey Value | Survey Burden |
Consistency | 1 | 0.53 | 0.46 | 0.37 |
2 | 0.60 | 0.52 | 0.43 | |
3 | 0.60 | 0.54 | 0.42 | |
4 | 0.61 | 0.55 | 0.42 | |
5 | 0.62 | 0.55 | 0.45 | |
6 | 0.60 | 0.56 | 0.44 | |
Specificity | 1 | 0.19 | 0.33 | 0.08 |
2 | 0.15 | 0.27 | 0.07 | |
3 | 0.13 | 0.24 | 0.06 | |
4 | 0.15 | 0.27 | 0.07 | |
5 | 0.13 | 0.24 | 0.06 | |
6 | 0.13 | 0.25 | 0.06 |
Averaged across the subscales, about two thirds of the variance captured by the SAS indicates stable (trait) aspects of respondents’ survey attitude, and one third indicates situational (state) aspects.
Research question 2 (RQ2) investigates how both trait and state aspects of the SAS contribute to explaining survey nonresponse and panel dropout, beyond the psychographic and sociodemographic variables commonly included as predictors in nonresponse studies. We again draw on all six waves (2008–2013) of the LISS panel that included the SAS and treat the three SAS subscales as separate indicators.
To differentiate between trait and state components of the SAS, we calculate person-means across waves and deviations from these means for each subscale of the SAS. We then explore two aspects of individuals’ nonresponse patterns: nonresponse at any given panel wave and panel dropout.
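As an illustration of this split, the person-mean (trait) and deviation (state) components can be computed directly from a long-format panel file. The sketch below is ours and assumes a hypothetical file `sas_long.csv` with columns `person_id`, `enjoyment`, `value`, and `burden` (one row per respondent-wave); these names are placeholders, not the LISS variable names.

```python
import pandas as pd

# Hypothetical long-format SAS data: one row per respondent-wave,
# with mean subscale scores on the 1-7 response scale.
sas = pd.read_csv("sas_long.csv")

for subscale in ["enjoyment", "value", "burden"]:
    # Trait component: person-mean across all waves a respondent completed.
    sas[f"{subscale}_mean"] = sas.groupby("person_id")[subscale].transform("mean")
    # State component: wave-specific deviation from that person-mean.
    sas[f"{subscale}_dev"] = sas[subscale] - sas[f"{subscale}_mean"]
```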
To measure nonresponse, we compute the number of completed interviews per year for each panel member relative to the number of invitations they received. On average, panel members completed 31 interviews per year, with a standard deviation (SD) of 19, or 0.68 interviews (SD = 0.34) per invitation. Approximately 60 % of the variance in nonresponse is between individuals (intra-class correlation = 0.60), while about 40 % is within individuals over time.
To measure panel dropout, we label panel members as having dropped out if they ceased responding to panel invitations at any point during the observed period. For instance, if a respondent completed their last questionnaire in 2009, they are classified as having dropped out in that year.
To be a valuable indicator, the SAS should outperform the psychographic and sociodemographic variables commonly used in nonresponse studies (e.g., Brehm, 1993; Goyder, 1987; Groves, 1989; Groves & Couper, 1998; Stoop, 2005). In addition to the SAS, the LISS panel includes a rich array of demographic (e.g., age, sex, education), psychological (e.g., Big Five personality traits), and sociological variables (e.g., trust).
We rely on expert opinions to identify the most important covariates of survey nonresponse and panel dropout. Before analyzing the data, we presented a list of all available covariates to 31 international experts in survey methodology and statistics. These variables were chosen based on a comprehensive literature review of nonresponse indicators (e.g., Groves & Couper, 1998; Stoop, 2005; Stoop et al., 2010) and their availability in the LISS panel. We then asked the experts to rate each variable’s relevance to nonresponse and attrition. The consensus among experts was high (intercoder reliability = 0.88). We then included the 13 highest-rated variables in our model. Most of these variables were part of the yearly core questionnaire and thus measured annually; for those measured monthly, we used the last value of the year. Descriptive statistics for all employed variables are provided in Table 1. For further details, please refer to the Appendix.
To examine survey nonresponse, we use negative binomial regression (NBR), as linear regression can produce inefficient, inconsistent, and biased estimates with count data (Hox et al., 2017). We specify the NBR as a multilevel model in which repeated measures across years (level 1) are nested within individuals (level 2) to model trends over time. The dependent variable is the annual count of completed interviews, with the annual count of invitations included as an offset parameter to account for differences in invitations across respondents and waves.
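A minimal single-level sketch of such a model (in Python with statsmodels, rather than the multilevel specification estimated in the paper) is shown below; it omits the random intercept and the household clustering, includes only a subset of the covariates, and uses placeholder file and variable names.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical person-year file with counts, covariates, and SAS trait/state components.
d = pd.read_csv("liss_person_years.csv")

# Negative binomial regression for the yearly number of completed interviews,
# with log(invitations) as offset so the model describes completions per invitation.
nbr = smf.glm(
    "completed ~ wave + female + age + enjoyment_mean + enjoyment_dev"
    " + value_mean + value_dev + burden_mean + burden_dev",
    data=d,
    family=sm.families.NegativeBinomial(),
    offset=np.log(d["invited"]),
).fit()

# Exponentiated coefficients are rate ratios, analogous to the Exp(B) values in Table 4.
print(np.exp(nbr.params))
```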
To analyze panel dropout, we use discrete-time survival analysis, which models the conditional probability of dropping out at wave t, given that a respondent is still in the panel. In both analyses, we estimate robust standard errors to account for the clustering by households and employ multiple imputation to account for missing data.¹ We used Stata 15 (Stata Corp, 2017) for both analyses.
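Discrete-time survival models of this kind can be estimated as a binary regression on a person-period file in which each respondent contributes one row per wave until dropout and the outcome equals 1 in the dropout wave. The sketch below is ours (placeholder names, logit link, subset of covariates) and omits the clustered standard errors and multiple imputation used in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical person-period file: one row per respondent-wave while still in the panel;
# dropout = 1 in the wave a respondent stopped responding, 0 otherwise.
pp = pd.read_csv("person_period.csv")

surv = smf.glm(
    "dropout ~ wave + I(wave**2) + education + voter"
    " + enjoyment_mean + enjoyment_dev + value_mean + value_dev"
    " + burden_mean + burden_dev",
    data=pp,
    family=sm.families.Binomial(),
).fit()

# Exponentiated coefficients show how a one-unit change shifts the odds of dropping
# out at wave t, conditional on still being in the panel (cf. Table 5).
print(np.exp(surv.params))
```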
We examine the explanatory power of the SAS for survey nonresponse by comparing four models that are presented in Table 4. The first model (M0) includes only wave as an explanatory variable. The second model (M1) builds on M0 by adding the psychographic and sociodemographic variables. The third model (M2) builds on M0 by incorporating the trait and state components for each SAS subscale. Finally, the fourth model (M3) combines M1 and M2 by including both the psychographic and sociodemographic covariates and the SAS. Note that regression coefficients are exponentiated, so coefficients greater than 1 indicate positive effects, while those less than 1 indicate negative effects.
Table 4 Longitudinal negative binomial regression explaining survey (non)response (RQ2)
Dependent variable: Number of completed interviews per year | M0: Wave only | M1: Covariates | M2: SAS | M3: Cov. + SAS | ||||
Exp(B) | SE | Exp(B) | SE | Exp(B) | SE | Exp(B) | SE | |
Note: Dispersion parameter and Var(u) are not exponentiated. * p < 0.05, ** p < 0.01, *** p < 0.001
Intercept | 0.591*** | 0.006 | 0.426*** | 0.031 | 0.335*** | 0.035 | 0.283*** | 0.034 |
Wave | 0.947*** | 0.002 | 0.947*** | 0.002 | 0.946*** | 0.002 | 0.946*** | 0.002 |
Covariates of survey (non)response | ||||||||
Female | – | – | 1.068*** | 0.016 | – | – | 1.051*** | 0.016 |
Age | – | – | 1.009*** | 0.001 | – | – | 1.008*** | 0.001 |
Education (years) | – | – | 0.997 | 0.002 | – | – | 0.998 | 0.002 |
Education squared | – | – | – | – | ||||
Migrant | – | – | 0.921 | 0.048 | – | – | 0.923 | 0.047 |
Self-owned dwelling | – | – | 1.022 | 0.020 | – | – | 1.032 | 0.020 |
Household income | – | – | 1.000 | > 0.00 | – | – | 1.000 | > 0.00 |
Urbanization | – | – | 0.998 | 0.007 | – | – | 0.996 | 0.007 |
SimPC | – | – | 1.031 | 0.027 | – | – | 0.996 | 0.025 |
Household size | – | – | 0.985* | 0.006 | – | – | 0.986* | 0.006 |
Social trust | – | – | 1.002 | 0.003 | – | – | 1.002 | 0.003 |
Voter | – | – | 1.08 | 0.046 | – | – | 1.07 | 0.044 |
Dissatisfaction with leisure time | – | – | 0.989*** | 0.002 | – | – | 0.99*** | 0.002 |
Big 5: Agreeableness | – | – | 0.981* | 0.008 | – | – | 0.966*** | 0.008 |
Survey attitude scale | ||||||||
Enjoyment: mean | – | – | – | – | 1.123*** | 0.015 | 1.113*** | 0.015 |
Enjoyment: deviation | – | – | – | – | 1.027*** | 0.004 | 1.027*** | 0.004 |
Value: mean | – | – | – | – | 1.046* | 0.018 | 1.036* | 0.018 |
Value: deviation | – | – | – | – | 1.009* | 0.004 | 1.011* | 0.004 |
Burden: mean | – | – | – | – | 0.934*** | 0.013 | 0.94*** | 0.012 |
Burden: deviation | – | – | – | – | 0.987** | 0.004 | 0.988** | 0.004 |
Other parameters | ||||||||
Dispersion parameter | −2.189 | 0.074 | −2.184 | 0.074 | −2.194 | 0.074 | −2.198 | 0.075 |
Variance of random intercept | 0.697 | 0.024 | 0.643 | 0.023 | 0.662 | 0.023 | 0.617 | 0.022 |
R² (level 2) | 0.011 | – | 0.077 | – | 0.050 | – | 0.115 | – |
N (person-years) | 39,622 | – | 39,622 | – | 39,622 | – | 39,622 | – |
M0 shows that response rates decline by approximately 6% per year ((1 − 0.947)/0.947 ≈ 0.06). The variance of the random intercept suggests substantial variation across individuals (SD = 0.83).
M1 shows that women, voters, and older panel participants tend to respond more frequently. Conversely, participants who live in larger households, are less satisfied with their available leisure time, or score higher on agreeableness in the Big Five personality traits tend to respond less often.² Except for agreeableness, the effects of these nonresponse predictors align with expectations.
M2 includes only the SAS and indicates that panel participants are more likely to respond to surveys they find more enjoyable, more valuable, and less burdensome. In particular, the trait aspects of participants’ survey attitudes (i.e., person-means across panel waves) are strong predictors of survey participation. For example, a respondent who perceives survey participation as one unit more enjoyable (on a scale from 1 to 7) is estimated to complete about 12% more interviews per year, as indicated by a person-mean regression coefficient of 1.123. Conversely, a one-unit increase in perceived survey value is associated with only a 3% increase in completed interviews, while a one-unit increase in perceived survey burden corresponds to a 7% decrease in completed interviews. Changes in survey attitude across panel waves, which reflect situational aspects, have a less pronounced but still significant impact on survey participation. These results follow a familiar pattern: changes in survey enjoyment influence participation more than changes in survey value or burden. Effect sizes range from −1% to +3% per unit change in the SAS. In sum, M2 demonstrates that the SAS explains variance in survey nonresponse, most successfully through its stable (trait) component.
Finally, M3 combines M1 and M2 to determine whether the explanatory power of the SAS persists when accounting for psychographic and sociodemographic variables commonly associated with survey nonresponse. It does. The regression coefficients of the SAS and the nonresponse predictors in M3 are nearly identical to those in M1 (which includes only psychographic and sociodemographic predictors) and M2 (which includes only the SAS). This suggests little overlap between respondents’ psychographic and sociodemographic profiles and their survey attitude. This is further evidenced by the fact that the proportion of variance explained in M3 (R² = 0.115) is approximately the sum of the variance explained in M1 (R² = 0.077) and M2 (R² = 0.050). In conclusion, respondents’ survey attitude as measured by the SAS offers a unique and valuable contribution to understanding survey nonresponse.
We examine the explanatory power of the SAS with respect to panel dropout using three survival models. Model 1 (M1) includes the psychographic and sociodemographic variables linked to panel dropout. Model 2 (M2) includes both trait and state components of the three subscales of the SAS. Model 3 (M3) integrates M1 and M2 by including both the psychographic and sociodemographic variables and the SAS.
The results are reported in Table 5. Coefficients are exponentiated, with values greater than 1 indicating a positive relationship with panel dropout, and values less than 1 indicating a negative relationship.
Table 5 Survival analysis explaining panel dropout (RQ2)
Dependent variable: Dropout | M1: Covariates | M2: SAS | M3: Cov. + SAS | |||
Exp(B) | SE | Exp(B) | SE | Exp(B) | SE | |
Note: Time-constant variables are female, age at first wave, and migrant. * p < 0.05, ** p < 0.01, *** p < 0.001
Intercept | 0.082*** | 0.016 | 0.321*** | 0.063 | 0.215*** | 0.058 |
Wave | 2.447*** | 0.085 | 2.586*** | 0.124 | 2.631*** | 0.126 |
Wave squared | 0.855*** | 0.005 | 0.847*** | 0.008 | 0.845*** | 0.008 |
Covariates of survey (non)response | ||||||
Female | 0.965 | 0.028 | – | – | 0.966 | 0.030 |
Age | 0.999 | 0.001 | – | – | 1.001 | 0.001 |
Education (years) | 0.983** | 0.005 | – | – | 0.98*** | 0.006 |
Education squared | ||||||
Migrant | 1.002 | 0.061 | – | – | 0.995 | 0.064 |
Self-owned dwelling | 1.004 | 0.047 | – | – | 0.975 | 0.048 |
Household income | 1.000** | > 0.00 | – | – | 1.000** | > 0.00 |
Urbanization | 1.004 | 0.016 | – | – | 1.014 | 0.017 |
SimPC | 0.510*** | 0.050 | – | – | 0.564*** | 0.060 |
Household size | 0.988 | 0.016 | – | – | 0.984 | 0.016 |
Social trust | 0.997 | 0.011 | – | – | 1.000 | 0.012 |
Voter | 0.703*** | 0.041 | – | – | 0.701*** | 0.043 |
Dissatisfaction with leisure time | 1.033** | 0.011 | – | – | 1.022 | 0.012 |
Big 5: Agreeableness | 1.037 | 0.045 | – | – | 1.286*** | 0.066 |
Survey attitude scale | ||||||
Enjoyment: mean | – | – | 0.740*** | 0.02 | 0.736*** | 0.020 |
Enjoyment: deviation | – | – | 0.950 | 0.038 | 0.944 | 0.038 |
Value: mean | – | – | 0.844*** | 0.027 | 0.833*** | 0.028 |
Value: deviation | – | – | 0.862** | 0.037 | 0.845*** | 0.038 |
Burden: mean | – | – | 1.160*** | 0.030 | 1.153*** | 0.030 |
Burden: deviation | – | – | 1.026 | 0.033 | 1.026 | 0.034 |
N (person-years) | 39,622 | – | 39,622 | – | 39,622 | – |
M1 reveals that panel dropout increases over time, as evidenced by the significant coefficients for both wave and wave squared. Comparing M1 in Tables 4 and 5, we see that different sets of covariates explain survey nonresponse and panel dropout. Panel participants with higher education, those who were provided with internet and computer equipment upon joining the panel (SimPC), and those who voted in national elections experience lower dropout rates. Conversely, dropout rates are higher among participants with greater household income and those dissatisfied with their leisure time. In contrast to nonresponse, age, gender, and household size do not predict panel dropout.
Consistent with the results for survey nonresponse, M2 shows that perceiving surveys as more enjoyable, more valuable, and less burdensome is associated with lower dropout rates. For instance, a respondent who finds surveys one unit more enjoyable (on a scale from 1 to 7) is estimated to be 26% less likely to drop out at each wave. Panel dropout, like nonresponse, is primarily influenced by the enduring (trait) aspects of survey attitude. However, situational changes in perceived survey value also affect dropout: if participants perceive surveys as one unit more valuable than usual, they are about 14% less likely to drop out.
M3 demonstrates that the explanatory power of the SAS remains significant even when accounting for psychographic and sociodemographic predictors of panel dropout. The regression coefficients for the SAS in M3 are nearly identical to those in M1 and M2, with the exception of agreeableness, which becomes significant and increases in magnitude in M3. This again indicates minimal overlap in explanatory power between survey attitude and the other predictors of panel dropout.
Taken together, respondents’ survey attitudes, as measured by the SAS, explain both nonresponse and panel dropout beyond what can be accounted for by their psychographic and sociodemographic profiles. In particular, perceived survey enjoyment and survey burden are notable predictors of both outcomes. The enduring (trait) aspects of survey attitude are more effective in explaining nonresponse and panel dropout than the situational (state) aspects. However, changes in perceived survey value also explain panel dropout and show an impact comparable to the effect of respondents’ overall perception of survey value.
Finally, in research question 3 (RQ3) we examine the SAS’s ability to forecast survey nonresponse and panel dropout on new data, thereby extending beyond the validity tests conducted in prior studies (de Leeuw et al., 2019, 2022). Specifically, we assess the predictive validity of the SAS through out-of-sample forecasts on data from 2014 and 2015. The SAS was included in the core questionnaire of the LISS panel from 2008 to 2013. In addition, CentERdata provided us with the number of invitations and completed questionnaires for the LISS panel in 2014 and 2015, which enabled us to calculate survey nonresponse and panel dropout rates for those years.
We assess the predictive validity of the SAS by examining how well the explanatory models from Study 2—fitted on data from 2008 to 2013—can accurately forecast survey nonresponse and panel dropout in 2014 and 2015. For survey nonresponse, we measure the predictive performance of the negative binomial models by comparing the forecasted response rates with the observed rates in 2014 and 2015. For panel dropout, we evaluate the predictive performance of the survival models by calculating the accuracy in predicting which respondents from 2013 continue to participate and which drop out in 2014 and 2015. Note that predictive accuracy is therefore determined at an aggregate (survey) level rather than at the individual level.
We examine the predictive performance of four models: M1 (covariates only), M2 (trait component of the SAS only), M3 (covariates and trait component of the SAS), and M4 (covariates and both trait and state components of the SAS). To determine whether incorporating more, and more recent, information improves predictions, we calculate the trait component of the SAS across three timeframes: 2008, 2008–2010, and 2008–2013. We use data from 2013 for the state component of the SAS and the covariates. Finally, to predict survey nonresponse, we incorporate the number of invitations in 2014 and 2015 as offset parameters in the negative binomial model.
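To make the forecasting step for nonresponse concrete, the sketch below (ours, with hypothetical file and variable names and a simplified single-level model) fits a negative binomial model on the 2008–2013 person-years, scores it on a 2014 holdout file, and correlates predicted with observed response rates, mirroring the quantity reported in Table 6.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical files: person-years 2008-2013 for fitting, and a 2014 holdout file
# with the same predictors plus the observed invitations and completions in 2014.
train = pd.read_csv("liss_person_years_2008_2013.csv")
holdout = pd.read_csv("liss_2014.csv")

formula = ("completed ~ wave + female + age + enjoyment_mean + enjoyment_dev"
           " + value_mean + value_dev + burden_mean + burden_dev")
nbr = smf.glm(formula, data=train, family=sm.families.NegativeBinomial(),
              offset=np.log(train["invited"])).fit()

# Forecast 2014 completions using the 2014 invitations as the offset, then
# correlate predicted with observed response rates (cf. Table 6).
predicted_rate = nbr.predict(holdout, offset=np.log(holdout["invited"])) / holdout["invited"]
observed_rate = holdout["completed"] / holdout["invited"]
print(np.corrcoef(predicted_rate, observed_rate)[0, 1])
```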
Table 6 reports the correlation ρ between the model-predicted and observed response rates in 2014 and 2015. A higher correlation indicates better predictive accuracy.
Table 6 Correlation ρ between model-predicted and observed response rates in 2014 and 2015 (RQ3)
M1: Covariates | M2: SAS (trait) | M3: Covariates + SAS (trait) | M4: Covariates + SAS (trait and state) | |||
Calculation base | 2013 | 2008 | 2008–10 | 2008–13 | 2008–13 | 2008–13
Response rate in 2014 | 0.364 | 0.237 | 0.239 | 0.242 | 0.363 | 0.372 |
Response rate in 2015 | 0.304 | 0.162 | 0.178 | 0.181 | 0.305 | 0.311 |
The results indicate that respondents’ psychographic and sociodemographic profiles (M1) are more effective at predicting survey nonresponse than the SAS (M2). The correlation between predicted and observed response rates for M1 is 0.36 in 2014 and 0.30 in 2015, compared to 0.24 in 2014 and 0.18 in 2015 for M2. M2 also shows that the trait component of the SAS reaches most of its predictive power with just one measurement, with little to no improvement when additional waves are included. M3 indicates that combining the SAS trait component with respondents’ covariates does not enhance predictive accuracy beyond what M1 achieves. However, M4 shows that including the state component of the SAS improves predictive performance somewhat. Survey nonresponse is thus best predicted by respondents’ psychographic and sociodemographic profiles together with the state component of the SAS, both based on the most recent data from 2013. Even though the improvement from including the state component is modest, this finding suggests that situational aspects of respondents’ survey attitudes in one wave contribute to predicting survey nonresponse in subsequent waves.
Table 7 presents the percentage of correctly predicted panel dropouts and stayers for 2014 and 2015 among respondents who participated in 2013. In this subsample, 708 out of 4706 respondents dropped out of the panel. A random selection of 708 dropouts would therefore yield an average predictive accuracy of 15 %. However, instead of predicting dropout randomly, we predict the 708 respondents with the highest model-predicted log-hazard rates to drop out.
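One way to compute this accuracy measure is sketched below (ours); it assumes a hypothetical file holding, for each respondent active in 2013, the predicted log-hazard from the survival model and the observed dropout status in 2014–2015.

```python
import pandas as pd

# Hypothetical file: one row per respondent still active in 2013, with the model's
# linear predictor (log-hazard) and the observed dropout indicator for 2014-2015.
risk = pd.read_csv("dropout_risk_2013.csv")

n_dropouts = int(risk["dropped_out"].sum())  # 708 in the subsample analyzed here

# Flag the n_dropouts respondents with the highest predicted log-hazard as dropouts.
risk["predicted_dropout"] = 0
risk.loc[risk["log_hazard"].nlargest(n_dropouts).index, "predicted_dropout"] = 1

# Share of observed dropouts correctly flagged; because exactly n_dropouts respondents
# are flagged, this equals the share of flagged respondents who actually dropped out.
accuracy = risk.loc[risk["dropped_out"] == 1, "predicted_dropout"].mean()
print(f"{accuracy:.0%}")
```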
Table 7 Percentage of accurate predictions for remaining in the panel or dropping out (RQ3)
M1: Covariates | M2: SAS (trait) | M3: Cov. + SAS (trait) | M4: Cov. + SAS (trait & state) | |||
Calculation base | 2013 | 2008 | 2008–10 | 2008–13 | 2008–13 | 2008–13
% dropout 2014–15 | 72 | 69 | 72 | 76 | 77 | 77 |
The results indicate that respondents’ survey attitudes are more effective at predicting panel dropout than the 13 covariates representing their psychographic and sociodemographic profiles. Using respondents’ covariates from 2013, M1 accurately predicts 72% of cases. In contrast, M2 achieves an accuracy of 76% when the trait component of the SAS is calculated using all available data from 2008 to 2013. For the trait component to match or outperform the covariates, it must be based on data from at least three waves. M3 reveals that adding covariates to the SAS does not improve predictive accuracy beyond M2. Similarly, M4 shows that incorporating the state component of the SAS as well does not further improve accuracy.
To conclude, while respondents’ psychographic and sociodemographic profiles are better at forecasting nonresponse at single waves, the SAS—in particular the enduring (trait) aspects of respondents’ survey attitudes—is better at forecasting overall panel dropout.
Survey nonresponse has been increasing across countries and over time, posing a significant challenge for survey-based research. This rise in nonresponse cannot be fully explained by changes in survey design, technology, or sociodemographic composition. As early as 1991, Lyberg and Lyberg introduced the concept of ‘survey climate’ to describe these nonresponse trends in Sweden. Although several ‘surveys on surveys’ have investigated potential indicators of survey climate, such as public opinion about surveys and reasons for (non)participation, these studies often relied on non-comparable questionnaires and lengthy, interviewer-driven surveys.
De Leeuw et al. (2019) developed the Survey Attitude Scale (SAS) to offer a short and reliable instrument for measuring survey attitudes across survey modes (e.g., interviews, self-administered, online, and paper-and-pencil) and across countries. The SAS consists of three subscales: ‘survey enjoyment,’ which reflects the intrinsic, individual perception of surveys as a positive experience; ‘survey value,’ which reflects the subjective importance and value of surveys and points to a positive survey climate; and ‘survey burden,’ which reflects a negative survey climate. Previous research demonstrated satisfactory reliability, evidence of construct validity, and evidence of measurement equivalence between Germany and the Netherlands, as well as between online and offline modes.
This article further investigates the usefulness of the SAS in understanding and addressing survey nonresponse through three studies, each of which tackles a key question: To what extent is survey attitude a stable, respondent-specific trait, as opposed to being influenced by situational factors? How effectively does the SAS explain survey nonresponse and panel dropout? And how accurately does it forecast nonresponse and panel dropout in out-of-sample contexts?
In Study 1, we employ latent state-trait analysis to assess the stability of the SAS. The findings reveal that approximately two-thirds of the variance captured by the SAS reflects the enduring (trait) aspects of respondents’ survey attitude, while the remaining one-third reflects situational (state) aspects, such as the specific topics and questions presented to respondents in each wave of the LISS panel. Since the SAS shows considerable stability across waves, it may be used to profile subpopulations and develop targeted strategies to reduce nonresponse and panel dropout.
In Study 2, we employ negative binomial regression and survival analysis fitted on data from 2008 to 2013 to compare the explanatory power of the SAS with that of respondents’ psychographic and sociodemographic profiles. In studies of nonresponse indicators and in weighting adjustment, sociodemographic and psychographic variables are often used as key variables. Thus, to be of theoretical and practical use, the SAS should explain nonresponse and panel dropout when controlling for these key variables. The results indicate that respondents’ survey attitudes, as measured by the SAS, significantly explain nonresponse and panel dropout beyond what can be accounted for by respondents’ psychographic and sociodemographic profiles. Notably, survey enjoyment and survey burden are identified as significant explanatory factors. While stable aspects of respondents’ survey attitudes are more effective at explaining nonresponse and dropout than situational changes, situational factors that influence respondents’ perceived survey value also display explanatory power. Therefore, while respondents’ psychographic and sociodemographic profiles contribute to explaining nonresponse and dropout, they do not fully capture respondents’ survey attitudes, which emerge as unique respondent characteristics.
In Study 3, we use the models estimated in Study 2 to assess whether respondents’ survey attitudes can forecast survey nonresponse and panel dropout on new data from 2014 and 2015. The findings indicate that, while respondents’ psychographic and sociodemographic profiles are more effective at predicting nonresponse at individual waves, the SAS is better at forecasting overall panel dropout. The SAS matches the predictive accuracy of respondents’ psychographic and sociodemographic profiles when trained on data from at least three waves and exceeds it when trained on more waves. We therefore recommend that panel managers include the SAS in the initial waves of a panel to identify respondents with a high likelihood of dropping out.
To conclude, incorporating the SAS in the initial wave(s) to measure respondents’ survey attitudes, alongside collecting their psychographic and sociodemographic characteristics, provides a valuable tool for identifying participants likely to miss a wave or drop out of panel surveys. Researchers can use this information to proactively address potential issues by tailoring their approach to at-risk participants. Strategies might include increasing contact between waves, personalized outreach, offering assistance, adjusting invitation language, providing targeted incentives, and employing varied data collection methods (see Lynn, 2015, 2017). Given that the SAS effectively explains and predicts missingness, it can also serve as an auxiliary variable in methods for handling missing values, such as weighting or imputation techniques (Enders, 2010).
Maximizing survey enjoyment and minimizing burden is central to several methodological frameworks on survey nonresponse, such as leverage-saliency theory (Groves et al., 2000), social exchange theory (Dillman, 1978, 2020), and gamification theory (Puleston, 2012, 2013). Our results align with this perspective. While factors such as societal survey frequency and respondents’ psychographic and sociodemographic profiles are beyond researchers’ control, surveys can be designed to be brief, enjoyable, and easy to complete, and invitations can emphasize the survey’s importance and legitimacy. The SAS essentially evaluates how effectively these principles are implemented from the participants’ perspective. Our results show that measuring both the enduring and situational aspects of respondents’ survey attitudes provides a valuable tool for forecasting nonresponse and panel dropout and can help to identify and engage participants who are likely to miss a wave or drop out of the panel.
This study is not without limitations. First, it relies on Dutch data from a probability-based online panel, which may not capture the variation in survey attitudes found in other contexts. Nonresponse trends differ among countries, and survey attitudes may likewise differ across countries. Replicating this study in multiple countries would thus be beneficial. The ESS CRONOS-2 Panel, covering 12 European countries, has recently incorporated an online version of the Survey Attitude Scale in its first and fifth waves (ESS-CRONOS-2, 2024). This international comparative study will allow researchers to explore variation in survey attitudes across countries. Second, as with other nonresponse studies, the data exclude individuals who declined initial panel participation. This exclusion may lead to an underestimation of regression coefficients if respondents with negative survey attitudes are systematically omitted.
The LISS panel data used were collected by the non-profit research institute Centerdata (Tilburg University, the Netherlands). Since 2019, funding for the panel’s ongoing operations has come from the Domain Plan SSH and ODISSEI. The initial set-up of the LISS panel in 2007 was funded through the MESS project by the Netherlands Organization for Scientific Research (NWO). LISS data from the years 2008 up to and including 2015 were used (https://www.lissdata.nl/).
The Survey Attitude Scale (SAS) was developed by Edith de Leeuw and Joop Hox. The latent state-trait analyses (Study 1) were conducted by Hugo Bons and Joop Hox, based on Bons’ (2015) master’s thesis. Benjamin Rosche and Joop Hox analyzed survey nonresponse and panel dropout (Studies 2 and 3), which are revised and expanded versions of Rosche, Hox, and de Leeuw’s (2020) technical report. The study was designed by Edith de Leeuw, Benjamin Rosche, and Joop Hox, who also co-authored and revised the draft manuscripts. Edith de Leeuw did most of the preparatory work, prepared presentations, descriptions of the SAS background and construction, and wrote draft reports. All authors reviewed and approved the final manuscript.
We thank Annette Scherpenzeel, Corrie Vis, and Miquelle Marchand (LISS-CentERdata) for their knowledgeable assistance in procuring the LISS data. We also extend our gratitude to the 31 international experts in survey methodology who provided their expertise by rating the importance of nonresponse indicators.
Atrostic, B. K., Bates, N., Burt, G., & Silberstein, A. (2001). Nonresponse in U.S. government household surveys: Consistent measures, recent trends, and new insights. Journal of Official Statistics, 17(2), 209–226.
Beullens, K., Loosveldt, G., Vandenplas, C., & Stoop, I. (2018). Response rates in the European Social Survey: increasing, decreasing, or a matter of fieldwork efforts? Survey Methods: Insights from the Field. https://surveyinsights.org/?p=9673 (Accessed May 2023).
Bollen, K. A. (1989). Structural equations with latent variables. New York: Wiley.
Bons, H. I. (2015). Stability of the survey attitude scale over time: a latent state-trait analysis. MSc thesis, Methodology and Statistics for the Behavioural, Biomedical, and Social Sciences, Utrecht University.
Brehm, J. (1993). The phantom respondent: opinion surveys and political representation. Ann Arbor: University of Michigan Press.
CentERdata (2022). LISS-data from 2008–2015. https://www.lissdata.nl/. Accessed 07.2024.
Cheung, G. W., & Rensvold, R. B. (2002). Evaluating goodness-of-fit indices for testing measurement invariance. Structural Equation Modeling, 9(2), 233–255.
Cialdini, R. B. (1984). Influence: the new psychology of modern persuasion. New York: Morrow.
Curtin, R., Presser, S., & Singer, E. (2005). Changes in telephone survey nonresponse over the past quarter century. Public Opinion Quarterly, 69(1), 87–96. https://doi.org/10.1093/poq/nfi002.
Dillman, D. A. (1978). Mail and telephone surveys: the total design method. New York: Wiley.
Dillman, D. A. (2020). Towards survey response rate theories that no longer pass each other like strangers in the night. In P. S. Brenner (Ed.), Understanding survey methodology (pp. 31–54). Springer. https://doi.org/10.1007/978-3-030-47256-6_2.
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: the tailored design method. New York: Wiley.
Dutwin, D., & Lavrakas, P. J. (2017). Trends in telephone outcomes. Appendix D to The future of U.S. general population telephone survey research, AAPOR Task Force Report 2017.
Enders, C. K. (2010). Applied missing data analysis. New York: Guilford.
ESS-CRONOS‑2 (2024). https://www.europeansocialsurvey.org
Fiedler, I., Euler, T., Jungermann, N., & Schwabe, U. (2022). Validating the survey attitude scale (SAS): are measurements comparable among different samples of highly qualified from German higher education? General Online Research (GOR) Conference, Berlin. https://doi.org/10.13140/RG.2.2.16658.73929/1.
Goyder, J. (1986). Surveys on surveys: limitations and potentialities. Public Opinion Quarterly, 50(4), 27–41.
Goyder, J. (1987). The silent minority: nonrespondents on sample surveys. Cambridge: Polity Press.
Groves, R. M. (1989). Survey errors and survey costs. New York: Wiley.
Groves, R. M., & Couper, M. P. (1998). Nonresponse in household interview surveys. New York: Wiley.
Groves, R. M., Cialdini, R. B., & Couper, M. P. (1992). Understanding the decision to participate in a survey. Public Opinion Quarterly, 56(4), 475–495.
Groves, R. M., Singer, E., & Corning, A. (2000). Leverage-saliency theory of survey participation. Public Opinion Quarterly, 64(3), 299–308.
Hox, J., Moerbeek, M., & Van de Schoot, R. (2017). Multilevel analysis: techniques and applications (3rd edn.). New York: Routledge.
Hox, J. J., de Leeuw, E. D., & Vorst, H. (1995). Survey participation as reasoned action: a behavioral paradigm for survey nonresponse? Bulletin de Méthodologie Sociologique, 48, 52–67.
Kenny, D. A., & Zautra, A. (2001). Trait-state models for longitudinal data. In L. Collins & A. Sayer (Eds.), New methods for the analysis of change (pp. 241–265). Washington: American Psychological Association.
Kim, J., Gerhenson, C., Glaser, P., & Smith, T. (2011). The polls-trend: trends in surveys on surveys. Public Opinion Quarterly, 75(1), 165–191.
de Leeuw, E. D., & de Heer, W. (2002). Trends in household survey nonresponse: a longitudinal and international comparison. In R. M. Groves, D. A. Dillman, J. L. Eltinge & R. J. A. Little (Eds.), Survey nonresponse (pp. 41–54). New York: Wiley.
de Leeuw, E., Hox, J., & Luiten, A. (2018). International nonresponse trends across countries and years: an analysis of 36 years of labor force survey data. Survey Methods: Insights from the Field. https://surveyinsights.org/?p=10452 (Accessed May 2023).
de Leeuw, E., Hox, J., Silber, H., Struminskaya, B., & Vis, C. (2019). Development of an international survey attitude scale: measurement equivalence, reliability, and predictive validity. Measurement Instruments for the Social Sciences. https://doi.org/10.1186/s42409-019-0012-x.
de Leeuw, E., Luiten, A., & Stoop, I. (2020). Preface to the special issue on nonresponse. Journal of Official Statistics, 36(3), 463–468. https://doi.org/10.2478/jos-2020-0024.
de Leeuw, E., Hox, J., Silber, H., Struminskaya, B., & Vis, C. (2022). The survey attitude scale. In ZIS Open Access Repository for Measurement Instruments. https://doi.org/10.6102/zis325_exz.
Loosveldt, G., & Joye, D. (2016). Defining and assessing survey climate. In C. Wolf, D. Joye, T. W. Smith & Y.-C. Fu (Eds.), The Sage handbook of survey methodology (pp. 67–76). Los Angeles: SAGE.
Loosveldt, G., & Storms, V. (2008). Measuring public opinions about surveys. International Journal of Public Opinion Research, 20(1), 74–89. https://doi.org/10.1093/ijpor/edn006.
Luiten, A., Hox, J., & de Leeuw, E. (2020). Survey nonresponse trends and fieldwork effort in the 21st century: results of an international study across countries and surveys. Journal of Official Statistics, 36(3), 469–487. https://doi.org/10.2478/jos-2020-0025.
Lyberg, I., & Lyberg, L. (1991). Nonresponse research at Statistics Sweden. In Proceedings of the Survey Research Methods Section of the American Statistical Association (pp. 78–87). http://www.asasrms.org/Proceedings/papers/1991_012.pdf (Accessed July 2024).
Lynn, P. (2015). Targeting response inducement strategies on longitudinal surveys. In U. Engel, B. Jann, P. Lynn, A. Scherpenzeel & P. Sturgis (Eds.), Improving survey methods: lessons from recent research (pp. 322–338). New York: Routledge.
Lynn, P. (2017). From standardised to targeted survey procedures for tackling nonresponse and attrition. Survey Research Methods, 11(1), 93–103.
Morton-Williams, J. (1993). Interviewer approaches. Dartmouth Publishing.
Muthén, L., & Muthén, B. (1998–2017). Mplus user’s guide (8th edn.). Los Angeles: Muthén & Muthén.
National Research Council (2013). Nonresponse in social science surveys: a research agenda. Washington: The National Academies Press.
Puleston, J. (2012). Gamification 101: from theory to practice, part 1 & 2. Quirk’s Marketing Research Media.
Puleston, J. (2013). Gamification of market research. In C. A. Hill, E. Dean & J. Murphy (Eds.), Social media, sociality, and survey research (pp. 253–293). Wiley.
Rogelberg, S. G., Fisher, G. G., Maynard, D. C., Hakel, M. D., & Horvath, M. (2001). Attitudes toward surveys: development of a measure and its relationship to respondent behavior. Organizational Research Methods, 4(1), 3–25.
Rosche, B., Hox, J., & de Leeuw, E. (2020). Survey attitude as indicator for survey climate and as a predictor of nonresponse and attrition in a probability-based online panel. Technical Report. Utrecht University.
Scherpenzeel, A., & Das, M. (2010). True longitudinal and probability-based Internet panels: evidence from the Netherlands. In P. Ester & L. Kaczmirek (Eds.), Social and behavioral research and the internet (pp. 77–104). New York: Taylor & Francis.
Schleifer, S. (1986). Trends in attitudes toward and participation in survey research. Public Opinion Quarterly, 50(1), 17–26.
Schmitt, M. J., & Steyer, R. (1993). A latent state-trait model (not only) for social desirability. Personality and Individual Differences, 14(4), 519–529.
Singer, E., & Ye, C. (2013). The use and effect of incentives in surveys. Annals of the American Academy of Political and Social Science, 645(1), 112–141.
Singer, E., van Hoewyk, J., & Maher, M. (1998). Does the payment of incentives create expectation effects? Public Opinion Quarterly, 62, 152–164. https://doi.org/10.1086/297838.
Stata Corp (2017). Stata statistical software release 15. College Station: Stata Corp LLC.
Steyer, R., Ferring, D., & Schmitt, M. J. (1992). States and traits in psychological assessment. European Journal of Psychological Assessment, 8(2), 79–98.
Steyer, R., Schmitt, M., & Eid, M. (1999). Latent state-trait theory and research in personality and individual differences. European Journal of Personality, 13(5), 389–408.
Stocké, V. (2006). Attitudes towards surveys: attitude accessibility and the effect on respondents’ susceptibility to nonresponse. Quality and Quantity, 40(2), 259–288.
Stoop, I. A. L. (2005). The hunt for the last respondent: nonresponse in sample surveys. The Hague: SCP.
Stoop, I., Billiet, J., Koch, A., & Fitzgerald, R. (2010). Improving survey response: lessons learned from the European Social Survey. Chichester: Wiley.
Vandenberg, R. J., & Lance, C. E. (2000). A review and synthesis of the measurement invariance literature: suggestions, practices, and recommendations for organizational research. Organizational Research Methods, 3(1), 4–70.
Williams, D., & Brick, M. (2017). Trends in U.S. face-to-face household survey nonresponse and level of effort. Journal of Survey Statistics and Methodology, 6(2), 186–211.
Zijlmans, E. A. O., & Hamaker, E. L. (2014). Distinguishing between-person constructs from within-person constructs in latent state-trait models. Technical Report. Utrecht University.