Asking for Panel Consent in Web Surveys: Choice, Opt-in, or Opt-out?

Survey Research Methods
ISSN 1864-3361
DOI: 10.18148/srm/8366

Oliver Lipps (oliver.lipps@fors.unil.ch), Swiss Centre of Expertise in the Social Sciences (FORS), Lausanne, Switzerland
Lukas Lauener (lukas.lauener@fors.unil.ch)
Anke Tresch (ankedaniela.tresch@unil.ch)
University of Bern, Bern, Switzerland; University of Lausanne, Lausanne, Switzerland
2025, European Survey Research Association

Some surveys ask respondents for consent to be recontacted for follow-up surveys after the initial part of the survey has been completed. Based on an experiment, we compare three options for asking for this panel consent: choice (yes/no), opt-in, and opt-out. We analyse panel consent rates and compare consenters with non-consenters with respect to a comprehensive set of socio-demographic characteristics, political attitudes, and survey-related variables in a probability-based web survey. In a second step, we analyse consenters’ actual participation in the first follow-up wave.

The opt-out option yields higher panel consent rates than the other two options. Based on socio-demographic variables, panel consenters and non-consenters are most similar to each other in the choice design, and most different in the opt-out design. Based on typically biased variables, such as political interest or how the survey was perceived, the opt-out design performs better in terms of consent bias than the opt-in design, followed by the choice design. When it comes to actually participating in the first follow-up wave, the three panel consent options show patterns similar to those for giving consent. Overall, these findings speak in favour of the opt-out design, followed by the opt-in design.

Supplementary Information

The online version of this article (https://doi.org/10.18148/srm/8366) contains supplementary material.

1 Introduction

1.1 Informed consent and how to ask for it

Data protection laws stipulate that respondents must provide “informed consent” to the processing of their personal data. This means they should be fully informed of the associated risks and benefits of their participation, the study procedures and objectives, and whom to contact for further inquiries. Additionally, it must be clearly communicated that participation is voluntary and can be withdrawn at any time (see, for example, the U.S. Privacy Act of 1974). Informed consent can be obtained in various ways, depending on national data protection laws. The two primary methods are opt-in, where respondents must actively agree, and opt-out, where consent is assumed unless explicitly declined. There is evidence that opt-out methods yield larger sample sizes. For example, to match administrative records to the Current Population Survey (CPS), the U.S. Census Bureau moved from an opt-in to an opt-out approach in 2006. Consent rates subsequently neared 100%, whereas refusal rates had previously reached up to 30% (Fulton, 2012). In a multi-survey study, Fulton (2012) further found that consent rates under opt-out procedures were significantly higher (nearly 100%) than under opt-in procedures (averaging 69%), regardless of whether a social security number or a health-related identifier was requested.

While opt-in is legally required in many countries (under the General Data Protection Regulation (GDPR) in the European Union (EU), for example), other countries have more leeway when it comes to asking for consent. For example, under the U.S. Federal Trade Commission (FTC) Act,1 opt-out is common practice for storing contact addresses (Johnson et al., 2002). However, even if opt-out is legally permissible, it might be ethically undesirable. This is because, by default, the opt-out option implies agreement if no action is taken, which may be misleading for some survey respondents. At the same time, optimizing the balance between achieving a high response rate and respecting people’s right to refuse is challenging (Martin & Marker, 2007). As these authors put it, “[i]t has been clear on a number of occasions that there are different interpretations of exactly what is meant by informed consent and what this entails for ethically acceptable procedures in survey research” (Martin & Marker, 2007, p. 2261). From the perspective of the respondent, failure to consent does not necessarily equate to deliberate refusal (Singer, 1978); people often forget to consent in a low-cost, low-benefit situation (Quandt & Ohr, 2004), or they simply do not understand the request for consent (Sakshaug et al., 2021).

When opt-out is not possible, and opt-in is not desirable, a third option is choice, although this has rarely been used. In a choice design, participants are asked to explicitly agree or refuse to participate by selecting one of two options (i.e., by ticking one of two boxes indicating either consent or refusal). Choice can be seen as a balanced approach, requiring all respondents—both those who consent and those who do not—to take action. Unlike the opt-out design, which may inflate consent rates simply because no action is needed, and the opt-in design, which may lower consent rates by requiring a proactive step, the choice option treats all participants equally. This approach may thus ensure that both consenters and non-consenters engage with the decision, leading to more deliberate and informed responses.

1.2 Asking for consent—empirical evidence

There is an abundance of research on optimizing consent requests across various domains, ranging from organ donation for biobanks (e.g., Johnson & Goldstein, 2003), contact tracing (e.g., Altmann et al., 2020) and transferring contact information to a third-party data-collection agency (e.g., Sakshaug et al., 2016) to making multiple simultaneous requests (e.g., Beuthner et al., 2022; Walzenbach et al., 2022). In survey research, the most prominent consent requests relate to linking survey data with other data such as passively collected (e.g., GPS) data (Felderer & Blom, 2019; Keusch et al., 2019), administrative data (Bacher, 2023; Hülle, 2024; Jäckle et al., 2021; Knies et al., 2012; Kreuter et al., 2016; Sakshaug et al., 2012; Sala et al., 2012; Yang et al., 2019) or medical records (e.g., Hutchings et al., 2021).

In longitudinal surveys, the time dimension introduces activities that differ from the standard practices of cross-sectional surveys (Lessof, 2009). For example, there is a requirement for “panel consent” to store contact information, as outlined in the EU GDPR (European Union, 2016, Article 13). As Hülle (2024, p. 2) aptly describes, “[w]ithout panel consent, a respondent may not be contacted again to participate in subsequent waves, and thus a longitudinal data structure cannot be established”. The factors behind obtaining panel consent clearly differ from those behind consent to linking survey data: consent to data linkage is likely to be more affected by trust in the survey organizers or institutions, whereas agreeing to be recontacted in panel studies is likely more influenced by participants’ interest in the survey topic, the effect of the survey on them, and their enjoyment of the survey process.

In web surveys, panel consent requests become particularly challenging, since no interviewer is available to motivate respondents or answer their questions (Sakshaug et al., 2020). This challenge is probably amplified in probability surveys, compared with nonprobability surveys, since respondents neither self-select into the survey nor are they chosen based on quotas (Blom et al., 2016; Edwards & Biddle, 2021). In the randomized multi-mode German panel survey “Legitimation of Inequality Over the Life-Span (LINOS)”, Sakshaug et al. (2020) reported a panel consent rate of 97% for computer-assisted personal interviews (CAPI) and 46% for self-administered modes. Follow-ups via postcard—and particularly telephone calls—increased the self-administered consent rate to 72%. Similarly, Hülle (2024) found a 95% consent rate in the “German Quality of Life and Social Participation” computer-assisted telephone interview (CATI) panel study. Witte et al. (2023) reported consent rates of 84–90% in the mail-based “German Emigration and Remigration Panel Study”. Additionally, Tourangeau and Ye (2009) found panel consent rates ranging from 78% (with gain framing) to 88% (with loss framing) in a random digit dialling survey. However, none of these studies compared different panel consent options. While there is some empirical evidence regarding the effects of different survey modes and socio-demographic characteristics on panel consent, only a limited number of studies have experimentally tested various panel consent options. One such study, by Sakshaug et al. (2016), examined consent for transferring the federal contact data of individuals sampled from an employment register to a third-party survey agency and for conducting a subsequent telephone survey, using both opt-in and opt-out designs. The findings indicated that the sample in the opt-out design was larger and less biased in terms of administrative variables such as demographics, employment, wages, job-seeking measures, and benefit receipt. Additionally, there is some evidence suggesting that actual participation may exhibit offsetting biases: for example, while older persons are more likely to participate in the initial survey, they are less likely to agree to the linkage request after responding. Another study (Montoy et al., 2016), asking patients to consent to further HIV tests, found that acceptance rates were 38% in the opt-in option, 51% in the choice option, and 66% in the opt-out option. The positive opt-out effect was, however, smaller for those with high-risk behaviours. This demonstrates that the relative advantage of the opt-out design may fade with increasing concern about the topic.

Regarding the relationships between socio-demographic variables and consent, there is, for example, empirical evidence that a migration background (Hülle, 2024; Sakshaug et al., 2020) and lower educational attainment reduce linkage consent (Jäckle et al., 2021; Yang et al., 2019). Regarding survey-related variables, respondents are generally more likely to give linkage consent:

These relationships align with findings that individuals are more likely to participate in surveys if they are more politically interested (Krosnick et al., 2014), have greater trust in institutions (Helliwell & Putnam, 2004), participate in elections more often (Verba, 1995), have a genuine interest in the survey topic (Dillman et al., 2014; Groves et al., 2004), experience a lower survey burden (Groves & Couper, 1998; Groves et al., 2004), and encounter clear, easy-to-understand questions (Schuman & Presser, 1996).

1.3 Research questions and contributions

Given the low consent rates in web surveys, and the lack of research on panel consent options—especially in the general population or in ongoing surveys—we investigate the following research questions:

1. Which consent option (choice, opt-in, opt-out) yields the highest panel consent rate in a probability-based online survey?

2. Which consent option provides the least panel consent bias for socio-demographic characteristics, political attitudes, and survey-related variables?

Socio-demographic variables are sometimes included in the sampling frame and can easily be corrected for in multivariate models. In addition, they are generally less important for consent bias, with the exception of educational attainment (Sakshaug et al., 2012). We will therefore focus on bias related to education, and political and survey-related variables, which are more important for assessing consent bias. For the sake of transparency and completeness, we first examine bias on socio-demographic variables (other than education), before turning to the more critical variables that determine bias in our empirical analyses.

Since people can still refuse to participate in the follow-up wave, even after having given panel consent, we additionally analyse:

3. What is the actual participation rate and participation bias for socio-demographic characteristics, political attitudes, and survey-related variables in the first follow-up survey wave after having given panel consent?

Again, when analysing the biases in actual participation, we focus on biases regarding political and survey-related variables, as well as education. Our article extends the limited research on how to ask for panel consent by experimenting with three consent options, investigating their implications in terms of panel consent rates and bias. In addition, it (1) is based on a probability-based sample using the web as survey mode, (2) includes the choice option, and (3) analyses actual participation and bias in the follow-up wave.

We hypothesize that the opt-out design yields the most panel consenters and the opt-in design the fewest. With respect to bias, there is empirical evidence that concern with the survey topic interacts with the consent design: for example, Montoy and colleagues (2016) found that people with higher HIV risk were most strongly overrepresented among those consenting to an HIV test in the opt-in design, less so in the choice design, and least in the opt-out design. For our study, we hypothesize that consenters in the three designs have similar distributions of socio-demographic variables. Yet, in line with Montoy et al. (2016), we expect the consenting group under the opt-out design to contain relatively fewer people who are particularly concerned by the survey topic and therefore typically overrepresented in political surveys (i.e., the higher educated, the politically interested, left voters, those with positive feelings about the survey, etc.) than under the choice and, in particular, the opt-in design.

Finally, panel consent bias may be offset to some extent when it comes to actually participating in the follow-up survey.

2 Data and methods

2.1 Data

We use data from an experiment implemented in the Panel Survey of the Swiss Election Study 2019 (Selects, 2024). A probability sample of 25,575 Swiss nationals, aged 18 or older and living in Switzerland, was drawn from an individual register sampling frame maintained by the Swiss Federal Statistical Office (FSO). The initial survey consisted of three parts, with the third part scheduled to take place after the federal elections of October 2019. At the end of Part 3 of the questionnaire, yearly “short follow-up surveys” were announced to complement this initial three-part survey (the precise wording of the consent requests is provided later in this section). Descriptive statistics of the socio-demographic frame variables are listed for the gross sample, and for Part 1 and Part 3 respondents, in Table A.1 in the Appendix.

For Part 1 (AAPOR RR2 response rate: 31%), conducted between 20 May and 8 July, 2019, sample members were pre-notified by a letter with information on the survey, explaining that it consisted of three parts. They then received a second letter including the URL to participate in the survey, a personal login code, and a postal cheque of 10 Swiss francs (about 10 Euros) that respondents could cash at any post office. Up to two postal reminders were then sent.

Part 2 (conditional AAPOR RR2 response rate: 68%) was conducted among respondents of Part 1 during the election campaign, between 2 September and 17 October, 2019. The invitations were again sent by letter, followed by up to two reminders by e‑mail (for respondents with a known e‑mail address) or letter (otherwise). To boost enrolment in the survey and to reduce attrition, participants who responded to all three parts of the survey in 2019 were entered in a lottery to win one of five iPads; this was communicated in the reminders.

The fieldwork for Part 3 (conditional AAPOR RR2 response rate: 65%) started one day after the federal elections of 20 October and lasted until 9 December, 2019. All respondents from Part 1 were recontacted, regardless of their participation in Part 2, and up to three reminders were sent by e‑mail (for respondents with a known e‑mail address) or letter (otherwise). As in Part 2, sample members were told in the reminder letters that they would participate in the iPad lottery if they responded to all three parts of the survey. At the end of Part 3, an experiment was conducted to determine the best way of asking respondents for panel consent, in this case to participate in short annual follow-up surveys until the next federal election in 2023. In the experiment, three consent request designs were employed and worded as follows:2

If the respondent did not click the box, an additional question was asked: “You did not click the box to be recontacted for follow-up surveys. We understand your decision and thank you again for your participation in the Selects 2019 survey. If you did not click the box by mistake, you can still click it below. We would be very pleased to count on your help to continue our study.”

Respondents were randomly assigned to the three design groups, but the groups were of different sizes: because we expected a lower consent rate in the opt-in design, we assigned fewer respondents to this group to reduce the risk of losing too many respondents. Similarly, we assigned more respondents to the choice design because we considered this option the most appropriate from an ethical point of view.
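For illustration, such an unequal random allocation could be implemented as in the following minimal Python sketch; the allocation shares shown are assumptions chosen only to roughly match the realized group sizes in Table 1, not the shares actually used in the fieldwork.

```python
import numpy as np

rng = np.random.default_rng(seed=2019)  # fixed seed so the assignment is reproducible

n_part3 = 5125  # illustrative size; assignment took place within the Part 3 questionnaire

# Hypothetical allocation shares: most respondents to choice, fewest to opt-in.
designs = ["choice", "opt-in", "opt-out"]
shares = [0.45, 0.21, 0.34]

assignment = rng.choice(designs, size=n_part3, p=shares)
```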

The first follow-up wave (conditional AAPOR RR2 response rate: 85%) was conducted one year after the federal elections, between 28 September and 2 November, 2020. A week in advance, consenting respondents from Part 3 received a pre-notification letter informing them about the upcoming survey. Sample members received up to three reminders (two by e‑mail, one by letter). Respondents were offered 10 Swiss francs for their participation in the follow-up wave, or 20 Swiss francs if we considered them less likely to participate.3

The analytical sample comprises 4655 respondents who answered the consent question from Part 3.4 We imputed all independent variables using chained equations (van Buuren et al., 1999). Eight of the variables used in the analyses (see below) contained missing values (education, left-right position, political interest, participation in 2019 federal election, trust in institutions, survey is interesting, length of the survey is adequate, survey questions are easy to understand). Of all respondents, 4238 had no missing values, 367 had one, 17 had two, 28 had three, and five had four missing values.
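To illustrate this imputation step, the following Python sketch uses scikit-learn's IterativeImputer, a chained-equations-style single imputation; it is not the procedure from the replication files, and the variable names and codings are placeholders.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates IterativeImputer)
from sklearn.impute import IterativeImputer

# Placeholder for the partially observed covariates, coded numerically.
covariates = pd.DataFrame({
    "education": [1, 3, np.nan, 2, 3],
    "political_interest": [4, np.nan, 2, 3, 1],
    "left_right": [5, 6, 4, np.nan, 7],
})

# Single imputation: each variable with missing values is regressed on the others in turn.
imputer = IterativeImputer(max_iter=10, random_state=0)
covariates_imputed = pd.DataFrame(
    imputer.fit_transform(covariates),
    columns=covariates.columns,
    index=covariates.index,
)
```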

2.2 Variables and methods

To examine panel consent bias, we used socio-demographic variables as well as political attitudes and survey-related variables that relate to consent, as identified in the literature (see Background section). Specifically, we used the following variables and categories:

Socio-demographic variables (mostly from the sampling frame) (Lipps & Pekari, 2021; Sakshaug et al., 2020), modelled as nominal:

Political attitudes (mostly measured in Part 1):

Survey-related variables (measured in Part 3), modelled as linear:

Education stands out in the group of socio-demographic variables because it is not included in the sampling frame and—more importantly—it strongly influences nonresponse. Therefore, we include education in the group of critical variables alongside political attitudes and survey-related factors.

Our analytic strategy is to first calculate the panel consent rates across the three designs, and then compare consenters with non-consenters across the three designs with respect to the variables listed above. Specifically, we calculate the r‑indicator in each design (Schouten et al., 2009) first with respect to the socio-demographic variables, and then with respect to the critical political, educational, and survey-related variables. The r‑indicator calculates the similarity between consenters and non-consenters in terms of the covariates, and is defined as:

r(ρ) = 1 − 2 S(ρ)

with ρ denoting the predicted response probabilities and S(ρ) their standard deviation. The r‑indicator ranges between 0 and 1, where 1 indicates perfect representativeness (i.e., all individuals have the same predicted consent probability) and 0 indicates maximal deviation from representativeness. Compared with pseudo-R2 measures (Hemmert et al., 2018), the r‑indicator is comparable across different datasets, normalized, and easy to interpret (Schouten et al., 2009, 2012).
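A minimal sketch of this computation, assuming consent propensities are estimated with a logistic regression on the covariates of interest (function and variable names are illustrative, not taken from the replication files):

```python
import pandas as pd
import statsmodels.api as sm

def r_indicator(covariates: pd.DataFrame, consent: pd.Series) -> float:
    """r(rho) = 1 - 2 * S(rho), with rho the predicted consent propensities."""
    # Dummy-code categorical covariates and add an intercept.
    X = sm.add_constant(pd.get_dummies(covariates, drop_first=True).astype(float))
    rho = sm.Logit(consent.astype(float), X).fit(disp=0).predict(X)
    return 1.0 - 2.0 * rho.std(ddof=1)

# The indicator is computed separately within each design group, e.g. once for the
# socio-demographic covariates and once for the political/education/survey covariates.
```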

2.3 Timing of obtaining panel consent, sample, and conditioning effects

Scholars agree that the ideal time to obtain panel consent is during the first wave of fieldwork, typically at the end of the questionnaire (Hülle, 2024; Lessof, 2009). However, there may be reasons to delay this request. For instance, new funding or events like the COVID-19 pandemic may prompt additional waves (e.g., Haas et al., 2021), which require panel consent. Introducing the panel nature of the study early can overwhelm respondents, increase privacy concerns, or lead to nonresponse because of the perceived long-term commitment (Bianchi et al., 2017; Eisnecker & Kroh, 2017; Lugtig, 2014; Tourangeau & Ye, 2009). Delaying the consent request can help build rapport (Sakshaug & Huber, 2016; Sakshaug et al., 2020) and aligns with loss framing by emphasizing the importance of responses already given (Sakshaug et al., 2019, 2021). In some cases, as in our study, the first wave may have multiple parts, with consent requested at the end of the final part. By not asking earlier, we prioritized maximizing Part 2 and Part 3 participation (a larger N, smaller standard errors, and probably smaller bias) over longitudinal participation (more longitudinal respondents and more within-person variance).

It is possible that Part 3 respondents are already more cooperative, leaving fewer people with characteristics typical of nonrespondents (i.e., Part 3 respondents might already be—at least to some degree—panel-conditioned). To test this, we compared the gross sample (N = 25,575) with Part 1 respondents (N = 7939) and Part 3 respondents (N = 5125). Using socio-demographic variables from the sampling frame and logistic regression models, we found an r‑indicator of 0.84 for Part 1 and 0.87 for Part 3. This suggests that the Part 1 sample is slightly more biased than the Part 3 sample, even though only Part 1 respondents were invited to participate in Part 3. We also compared Part 1 and Part 3 respondents on political variables ascertained in Part 1 (political interest, left-right position) and on education. Those with lower education and political interest, along with right-wing voting tendencies, are often underrepresented in surveys, especially political surveys (Groves et al., 2004; Lipps & Pekari, 2016). As expected, political interest and education were significantly higher (at the 5% level), and right-leaning positions significantly lower (at the 10% level), among Part 3 than among Part 1 respondents, aligning with known conditioning effects. In evaluating panel consent bias, we will therefore consider lower political interest and education, and a more right-leaning political position, as reducing this bias.

3 Results

3.1 Panel consent

Table 1 provides descriptive statistics of panel consent rates across the three designs. In line with our expectation, the opt-out option produces a higher mean consent rate than opt-in, with the choice option in a middle position, closer to opt-in. The difference between opt-in and choice is insignificant (Pr(|T| > |t|) = 0.11), while opt-out differs significantly from the other two designs at the 1% significance level.
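The pairwise comparisons reported here correspond to two-sample t-tests on the binary consent indicator (as the Pr(|T| > |t|) notation suggests); a minimal sketch, using illustrative data rather than the survey's consent indicators:

```python
import numpy as np
from scipy import stats

# Illustrative 0/1 consent indicators for two design groups (not the survey data).
rng = np.random.default_rng(0)
consent_choice = rng.binomial(1, 0.63, size=2114)
consent_optin = rng.binomial(1, 0.60, size=958)

t_stat, p_value = stats.ttest_ind(consent_choice, consent_optin, equal_var=True)
print(f"t = {t_stat:.2f}, Pr(|T| > |t|) = {p_value:.3f}")
```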

To examine whether the different designs produce different levels of bias with respect to the socio-demographic variables, we compare consenters with non-consenters in each of the three designs. First, we calculate r‑indicators based on these variables (see Table 1). While the r‑indicator in the choice design amounts to 0.897, it amounts to 0.881 in the opt-in design and only 0.822 in the opt-out design. This means that consenters are least similar to non-consenters in the opt-out design with respect to socio-demographic variables.

Table 1 Panel consent rates by design

Design           N      Mean^a   SE      r-indicator (socio-demo)   r-indicator (polit/edu/survey)
Choice (yes/no)  2114   0.631    0.010   0.897                      0.549
Opt-in            958   0.600    0.016   0.881                      0.614
Opt-out          1583   0.708    0.011   0.822                      0.664

^a If the (weak) violation of the randomization (see Footnote 4) is accounted for, the choice-adjusted mean amounts to 0.601 for opt-in and to 0.703 for opt-out

For the combined political attitudes, education, and survey-related variables, the r‑indicators are 0.549 for choice, 0.614 for opt-in, and 0.664 for opt-out. This means that consenters are most similar to non-consenters in the opt-out design, and least similar in the choice design, in terms of the consent-critical variables.

Given that these variables are the focus of our bias analysis, we examine the relationship between each of them and the three consent options separately. Specifically, we estimate eight pooled (across all three designs) logistic models including the interaction of the design dummies with each of these (linear) political/survey variables. Fig. 1 depicts the predicted effects on panel consent depending on these variables in the three designs.
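A minimal sketch of one such pooled model and its prediction step, assuming the pooled Part 3 data sit in a DataFrame with a design factor, a 0/1 consent indicator, and one linear covariate (all names and the synthetic data below are illustrative; the published figures plot predicted effects from the estimated models):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the pooled data (names and values are illustrative only).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "design": rng.choice(["choice", "opt-in", "opt-out"], size=600),
    "political_interest": rng.integers(1, 5, size=600),
})
df["consent"] = rng.binomial(1, 0.50 + 0.05 * df["political_interest"])

# Pooled logistic model with a design x covariate interaction (one model per covariate).
model = smf.logit("consent ~ C(design) * political_interest", data=df).fit(disp=0)

# Predicted consent probabilities over the covariate range, separately by design.
grid = pd.DataFrame(
    [(d, x) for d in ["choice", "opt-in", "opt-out"]
            for x in np.linspace(1, 4, num=13)],
    columns=["design", "political_interest"],
)
grid["p_consent"] = model.predict(grid)
```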

Fig. 1 Predicted effects on panel consent depending on selected variables, by design

All variables have a significant effect on consent in all three options (except for trust in institutions and left-right position in the opt-out design). According to our theoretical considerations, values closer to the 0-line indicate less bias. For most variables, the effect on consent does not differ across the design options as variable values increase; exceptions are interest in the survey and the assessment of the survey length. For these, the effect of higher variable values on consent is smaller in the opt-out option than in the choice option. According to our criteria, this supports the opt-out option as the best solution, since consenters and non-consenters are more similar under opt-out. This finding is in line with the r-indicator for political attitudes, education, and survey-related variables being highest in the opt-out design and lowest in the choice design.

3.2 Actual participation in the follow-up wave

We repeat these analyses using actual participation in the follow-up wave as the dependent variable. We use all Part 3 respondents as the baseline and code non-consenters as non-participants (0). We include non-consenters so that the number and representativeness of follow-up participants are assessed relative to all Part 3 respondents, not only to those who consented.

As with the distribution of consent across the three designs (see Table 1), we list the mean participation rates and the r‑indicators in Table 2.

Table 2 shows that the opt-out design produces the highest mean participation rate in the follow-up wave, followed by the choice design. Again, the difference between opt-in and choice is insignificant (Pr(|T| > |t|) = 0.50), while opt-out differs significantly from the other two designs at the 1% level. The r‑indicators for the socio-demographic variables are similar to those from the panel consent analysis. This means that respondents in the follow-up wave are least similar to nonrespondents in the opt-out design. With respect to political attitudes, education, and survey-related variables, however, participants are most similar to non-participants in the opt-out design and least similar in the choice design; this is in line with the result obtained for consenters vs non-consenters in Table 1. The higher value for the choice design (0.591 vs 0.549) suggests an offsetting effect compared to consent, as the values of the other options are almost unchanged (0.623 vs 0.614 for opt-in, 0.662 vs 0.664 for opt-out).

Table 2 Participation in the follow-up wave by experimental design

Design           N      Mean    SE      r-indicator (socio-demo)   r-indicator (polit/edu/survey)
Choice (yes/no)  2114   0.518   0.011   0.897                      0.591
Opt-in            958   0.505   0.016   0.855                      0.623
Opt-out          1583   0.575   0.012   0.797                      0.662

Replicating Fig. 1 on actual participation in the follow-up wave, Fig. 2 plots the effects of political attitudes, education, and survey-related variables.

Fig. 2 Predicted effects on participation in the follow-up wave depending on selected variables, by design

We find the patterns from Fig. 1 largely reproduced in Fig. 2. Exceptions are that education no longer has a significant effect in the opt-in design, and the confidence intervals of the survey length assessment in the opt-out and choice designs now overlap. Overall, however, these variables have similar effects on panel consent and on participation in the follow-up wave across the three designs.

4 Conclusion

In this paper, we tested three options for requesting survey respondents’ panel consent: choice, opt-in, and opt-out. Three research questions guided our empirical analyses. We first analysed the three options in terms of consent rates (i.e., which option yields the highest level of consent). Second, using a selection of socio-demographic characteristics from the sampling frame, as well as political attitudes, education, and survey-related variables, we sought to understand which of the three options produces the least panel consent bias among respondents. Third, we analysed the rates of actual participation in the first follow-up wave, and biases among all respondents, using the same variables as in the consent analysis. We expected the highest panel consent rates in the opt-out option and the lowest in the opt-in option. While we anticipated similar socio-demographic distributions among consenters and participants across all three designs, we also expected more individuals with typical nonresponse characteristics among the opt-out consenters.

Consistent with findings from Montoy et al. (2016) and Sakshaug et al. (2016), and in line with our expectations, panel consent is significantly higher in the opt-out option, although there is no significant difference between the other two options. While consenters in the opt-out option are the least similar to non-consenters in terms of socio-demographic variables, they are more similar to non-consenters in terms of typical nonrespondent characteristics, such as political interest and survey perception. This pattern still holds for actual participation in the follow-up wave, although we find some evidence that sample bias in actual participation offsets consent bias to some extent. Overall, our results show that the opt-out design results in better population representation among consenters. It should be noted that the opt-out design is not permissible in some countries. In these cases, the opt-in design would be the second-best solution because it produces less bias than choice regarding typical nonrespondent characteristics.

This research has some shortcomings. For example, we worked with (single) imputed data, although we do not think that the results are sensitive to how the independent variables are imputed. In addition, instead of using Part 1 respondents (who had a 31% response rate), we conducted the experiment on Part 3 respondents, who account for 20% of the original sample. We do not believe this compromises the generalizability of our results, however. The comparison between Part 1 and Part 3 respondents on political variables shows that our sample in Part 3 is slightly more conditioned than in Part 1, similar to how different surveys vary in response rates and conditioning levels depending on general nonresponse levels and fieldwork effort. Since our key finding—that the opt-out design yields the highest level of panel consent—aligns with theory, it is likely robust to similarly conditioned samples. Given the sample’s conditioning, our results may be slightly weaker than they would have been had we used a less conditioned sample, such as respondents to Part 1.

This research can be extended in various ways. For example, future studies could correlate consent and/or participation with additional variables such as previous participation behaviour (in our case, (non)participation in Part 2), item nonresponse, or reporting behaviour such as straightlining (Sakshaug et al., 2012). They could also experiment more explicitly with the choice design based on theoretical concepts, such as gain framing or loss framing statements cross-referenced with different default answer options (see, for example, Sakshaug et al., 2019 for a framing experiment linking interview data with administrative records).

These additional design elements notwithstanding, our research clearly demonstrates that the opt-out design results in a higher level of consent and better representation of typically underrepresented groups. This result also holds for actual participation in follow-up surveys, indicating that behaviour in low-cost situations (giving panel consent) may well translate into a corresponding behaviour in high-cost situations (actually participating in the next wave). Wherever an opt-out design complies with informed consent requirements, therefore, we would strongly recommend that it be the chosen option in (panel) surveys.

Supplementary Information

Table A1: socio-demographic distributions of the gross sample, Part 1 respondents, and Part 3 respondents
Main do file (Asking for panel consent in web surveys.do) to reproduce the results
Data file (Panel_consent_variables.dta), not distributed with the Selects Panel scientific use file
Do file (labels.do) to reproduce the figures (called in the main do file)

Acknowledgments

We would like to thank Nicolas Pekari and the “Data Collection and Analyses” (DCA) team at FORS for the collection, processing, and preparation of the Selects panel data. Many thanks go to the two anonymous reviewers and the editor for their valuable suggestions and constructive feedback on a previous version of the article, which greatly improved the quality of this work.

References

Altmann, S., Milsom, L., Zillessen, H., Blasone, R., Gerdon, F., Bach, R., Kreuter, F., Nosenzo, D., Toussaert, S., & Abeler, J. (2020). Acceptability of app-based contact tracing for COVID-19: Cross-country survey study. JMIR mHealth and uHealth, 8(8), e19857. https://doi.org/10.2196/19857. 

Bacher, J. (2023). Willingness to consent to data linkage in Austria—Results of a pilot study on hypothetical willingness for different domains. Survey Methods: Insights from the Field. https://surveyinsights.org/?p=18071

Beuthner, C., Keusch, F., Silber, H., Weiß, B., & Schröder, J. (2022). Consent to data linkage for different data domains—the role of question order, question wording, and incentives. International Journal of Social Research Methodology. https://doi.org/10.31235/osf.io/qh93g.

Bianchi, A., Biffignandi, S., & Lynn, P. (2017). Web-face-to-face mixed-mode design in a longitudinal survey: effects on participation rates, sample composition, and costs. Journal of Official Statistics, 33(2), 385–408. https://doi.org/10.1515/jos-2017-0019.

Blom, A. G., Bosnjak, M., Cornilleau, A., Cousteaux, A. S., Das, M., Douhou, S., & Krieger, U. (2016). A comparison of four probability-based online and mixed-mode panels in Europe. Social Science Computer Review, 34(1), 8–25. https://doi.org/10.1177/0894439315574825.

Das, M., & Couper, M. P. (2014). Optimizing opt-out consent for record linkage. Journal of Official Statistics. https://doi.org/10.2478/jos-2014-0030.

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: the tailored design method. Wiley.

Edwards, B., & Biddle, N. (2021). Consent to data linkage: experimental evidence from an online panel. In P. Lynn (Ed.), Advances in longitudinal survey methodology (pp. 181–203). Hoboken: Wiley.

Eisnecker, P., & Kroh, M. (2017). The informed consent to record linkage in panel studies: optimal starting wave, consent refusals, and subsequent panel attrition. Public Opinion Quarterly, 81(1), 131–143. https://doi.org/10.1093/poq/nfw052.

European Union (2016). Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data. http://data.europa.eu/eli/reg/2016/679/2016-05-04. Accessed 9 Oct 2024.

Felderer, B., & Blom, A. G. (2019). Acceptance of the automated online collection of geographical information. Sociological Methods & Research, 51(2), 866–886. https://doi.org/10.1177/0049124119882480.

Fulton, J. A. (2012). Respondent consent to use administrative data. Doctoral dissertation, University of Maryland, College Park.

Groves, R. M., & Couper, M. P. (1998). Nonresponse in Household Interview Surveys. Wiley series in survey methodology.

Groves, R. M., Presser, S., & Dipko, S. (2004). The role of topic interest in survey participation decisions. Public Opinion Quarterly, 68(1), 2–31. https://doi.org/10.1093/poq/nfh002.

Haas, G. C., Müller, B., Osiander, C., Schmidtke, J., Trahms, A., Volkert, M., & Zins, S. (2021). Development of a new COVID-19 panel survey: the IAB high-frequency online personal panel (HOPP). Journal for labour market research. https://doi.org/10.5164/IAB.HOPP_W01-W07.DE.EN.V2.

Helliwell, J. F., & Putnam, R. D. (2004). The social context of well-being. Philosophical Transactions of the Royal Society of London. Series B, 359, 1435–1446. https://doi.org/10.1098/rstb.2004.1522.

Hemmert, G. A. J., Schons, L. M., Wieseke, J., & Schimmelpfennig, H. (2018). Log-likelihood-based pseudo‑R2 in logistic regression: deriving sample-sensitive benchmarks. Sociological Methods & Research, 47(3), 507–531. https://doi.org/10.1177/0049124116638107.

Hülle, S. (2024). Assessing and improving data quality in linked panel surveys. Unpublished doctoral dissertation, Faculty of Sociology, University of Bielefeld.

Hutchings, E., Loomes, M., Butow, P., et al. (2021). A systematic literature review of attitudes towards secondary use and sharing of health administrative and clinical trial data: a focus on consent. Systematic Reviews, 10, 132. https://doi.org/10.1186/s13643-021-01663-z.

Jäckle, A., Beninger, K., Burton, J., & Couper, M. P. (2021). Understanding data linkage consent in longitudinal surveys. In P. Lynn (Ed.), Advances in Longitudinal Survey Methodology (pp. 122–150). Hoboken: Wiley.

Johnson, E. J., & Goldstein, D. (2003). Do defaults save lives? Science, 302(5649), 1338–1339. https://doi.org/10.1126/science.1091721.

Johnson, E. J., Bellman, S., & Lohse, G. L. (2002). Defaults, framing and privacy: Why opting in-opting out. Marketing letters, 13(1), 5–15. https://doi.org/10.1023/A:1015044207315.

Keusch, F., Struminskaya, B., Antoun, C., Couper, M. P., & Kreuter, F. (2019). Willingness to participate in passive mobile data collection. Public Opinion Quarterly, 83(S1), 210–235. https://doi.org/10.1093/poq/nfz007.

Knies, G., Burton, J., & Sala, E. (2012). Consenting to health-record linkage: evidence from a multi-purpose longitudinal survey of a general population. BMC Health Services Research, 12, 1–6. https://doi.org/10.1186/1472-6963-12-52.

Kreuter, F., Sakshaug, J. W., & Tourangeau, R. (2016). The framing of the record linkage consent question. International Journal of Public Opinion Research, 28(1), 142–152. https://doi.org/10.1093/ijpor/edv006.

Krosnick, J. A., Kim, N., & Lavrakas, P. (2014). Survey research. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social psychology 2nd edn. New York: Cambridge University Press.

Lessof, C. (2009). Ethical issues in longitudinal surveys. In P. Lynn (Ed.), Methodology of longitudinal surveys (pp. 35–54). Chichester: Wiley.

Lipps, O., & Pekari, N. (2016). Sample representation and substantive outcomes using web with and without incentives compared to telephone in an election survey. Journal of Official Statistics, 32(1), 165–186. https://doi.org/10.1515/jos-2016-0008.

Lipps, O., & Pekari, N. (2021). Sequentially mixing modes in an election survey. In Survey Methods: Insights from the Field (SMIF). https://doi.org/10.13094/SMIF-2021-00003.

Lugtig, P. (2014). Panel attrition: separating stayers, fast attriters, gradual attriters, and lurkers. Sociological Methods & Research, 43(4), 699–723. https://doi.org/10.1177/0049124113520305.

Martin, J., & Marker, D. A. (2007). Informed consent: interpretations and practice on social surveys. Social Science & Medicine, 65(11), 2260–2271. https://doi.org/10.1016/j.socscimed.2007.08.004.

Montoy, J. C. C., Dow, W. H., & Kaplan, B. C. (2016). Patient choice in opt-in, active choice, and opt-out HIV screening: randomized clinical trial. BMJ. https://doi.org/10.1136/bmj.h6895.

Quandt, M., & Ohr, D. (2004). Worum geht es, wenn es um nichts geht (“How to decide about nothing?”)? KZfSS Kölner Zeitschrift für Soziologie und Sozialpsychologie, 56(4), 683–707. https://doi.org/10.1007/s11577-004-0109-x.

Sakshaug, J. W., & Huber, M. (2016). An evaluation of panel nonresponse and linkage consent bias in a survey of employees in Germany. Journal of Survey Statistics and Methodology, 4(1), 71–93. https://doi.org/10.1093/jssam/smv034.

Sakshaug, J., Couper, M., Ofstedal, M.-B., & Weir, D. (2012). Linking survey and administrative data: mechanisms of consent. Sociological Methods and Research, 41, 535–569. https://doi.org/10.1177/0049124112460381.

Sakshaug, J. W., Schmucker, A., Kreuter, F., Couper, M. P., & Singer, E. (2016). Evaluating active (opt-in) and passive (opt-out) consent bias in the transfer of federal contact data to a third-party survey agency. Journal of Survey Statistics and Methodology, 4(3), 382–416. https://doi.org/10.1093/jssam/smw020.

Sakshaug, J. W., Stegmaier, J., Trappmann, M., & Kreuter, F. (2019). Does benefit framing improve record linkage consent rates? A survey experiment. Survey Research Methods, 13(3), 289–304. https://pmc.ncbi.nlm.nih.gov/articles/PMC7447194/ (accessed Nov 15, 2024).

Sakshaug, J. W., Hülle, S., Schmucker, A., & Liebig, S. (2020). Panel survey recruitment with or without interviewers? Implications for nonresponse, panel consent, and total recruitment bias. Journal of Survey Statistics and Methodology, 8(3), 540–565. https://doi.org/10.1093/jssam/smz012.

Sakshaug, J. W., Schmucker, A., Kreuter, F., Couper, M. P., & Holtmann, L. (2021). Respondent understanding of data linkage consent. Survey Methods: Insights from the Field. https://doi.org/10.13094/SMIF-2021-00008.

Sala, E., Burton, J., & Knies, G. (2012). Correlates of obtaining informed consent to data linkage: respondent, interview, and interviewer characteristics. Sociological Methods and Research, 41, 414–439. https://doi.org/10.1177/0049124112457330.

Schouten, B., Cobben, F., & Bethlehem, J. (2009). Indicators for the representativeness of survey response. Survey Methodology, 35(1), 101–113. Statistics Canada, Catalogue No. 12-001-X.

Schouten, B., Bethlehem, J., Beullens, K., Kleven, Ø., Loosveldt, G., Luiten, A., & Skinner, C. (2012). Evaluating, comparing, monitoring, and improving representativeness of survey response through R‑indicators and partial R‑indicators. International Statistical Review, 80(3), 382–399. https://doi.org/10.1111/j.1751-5823.2012.00189.x.

Schuman, H., & Presser, S. (1996). Questions and answers in attitude surveys: experiments on question form, wording, and context. SAGE.

Selects (2024). Selects 2019 Panel Survey (waves 1–7) (Version 5.0.0) [Data set]. FORS. https://doi.org/10.48573/115z-fd63.

Singer, E. (1978). Informed consent: consequences for response rate and response quality in social surveys. American Sociological Review, 43(2), 144–162. https://doi.org/10.2307/2094696.

Tourangeau, R., & Ye, C. (2009). The framing of the survey request and panel attrition. Public Opinion Quarterly, 73(2), 338–348. https://doi.org/10.1093/poq/nfp021.

U.S. Department of Justice (1974). “U.S. Privacy Act of 1974, PL 93–579”, 5 U.S.C. 552a.

Van Buuren, S., Boshuizen, H., & Knook, D. (1999). Multiple imputation of missing blood pressure covariates in survival analysis. Statistics in Medicine, 18, 681–694. https://doi.org/10.1002/(SICI)1097-0258.

Verba, S. (1995). Voice and equality: civic voluntarism in American politics. Harvard UP.

Walzenbach, S., Burton, J., Couper, M. P., Crossley, T. W., & Jäckle, A. (2022). Experiments on multiple requests for consent to data linkage in surveys. Journal of Survey Statistics and Methodology, 11(3), 518–540. https://doi.org/10.1093/jssam/smab053.

Witte, N., Schaurer, I., Schröder, J., Décieux, J. P., & Ette, A. (2023). Enhancing participation in probability-based online panels: two incentive experiments and their effects on response and panel recruitment. Social Science Computer Review, 41(3), 768–789. https://doi.org/10.1177/08944393211054939.

Yang, D., Fricker, S., & Eltinge, J. (2019). Methods for exploratory assessment of consent-to-link in a household survey. Journal of Survey Statistics and Methodology, 7(1), 118–155. https://doi.org/10.1093/jssam/smaa026.