Effects of Changing the Incentive Strategy on Panel Performance: Experimental Evidence from a Probability-Based Online Panel of Refugees

Survey Research Methods
ISSN 1864-3361
doi: 10.18148/srm/2025.v19i2.8437
Jean Philippe Décieux, Jean.Decieux@bib.bund.de
Sabine Zinn, szinn@diw.de, German Institute for Economic Research, Berlin, Germany
Andreas Ette, Andreas.Ette@bib.bund.de, Federal Institute for Population Research (BiB), Wiesbaden, Germany
University of Bonn, Bonn, Germany; Humboldt University of Berlin, Berlin, Germany
© 2025 European Survey Research Association

This study investigated how changing the mode of incentive administration between two panel waves, spaced six months apart, affected longitudinal survey response. A split-ballot incentive experiment compared shifting from an unconditional pre-paid incentive in the first wave to a conditional post-paid incentive in the second wave with consistently using a conditional post-paid incentive across both waves. Social Exchange Theory, Self-Perception Theory, and Leverage-Salience Theory form the theoretical framework for understanding response behavior. The main performance indicators for evaluating both incentive strategies were wave-specific and total panel participation, panel consent, and cumulative response. Results also shed light on data quality, demographic composition, and survey costs. The experiment was implemented in the context of the IAB-BiB/FReDA-BAMF-SOEP survey “Refugees from Ukraine in Germany,” a probability-based register sample. Multivariate analyses indicated that unconditional pre-paid incentives were superior only from a single-wave perspective, and that a constant conditional post-paid incentive strategy substantially outperformed the strategy that changed the mode from an initial pre-paid to a post-paid incentive. This was mainly the case in terms of participation and survey costs, while data quality and demographic bias were similar across the groups. Based on these results, we derive practical implications.

Supplementary Information

The online version of this article (https://doi.org/10.18148/srm/2025.v19i2.8437) contains supplementary material.

1 Introduction

In times of decreasing response rates, various measures have been developed to motivate potential respondents to participate in surveys (DeBell, 2022; Décieux & Heinz, 2024; Fan & Yan, 2010; Fumagalli, Laurie, & Lynn, 2013; Sun et al., 2020). Among these measures, incentives stand out as a commonly used strategy in which respondents are offered rewards in exchange for their survey participation (Booth, Wong, Brown, & Fitzsimons, 2024; Hanly, Savva, Clifford, & Whelan, 2014; Spreen et al., 2019). Indeed, incentives are one of the key strategies for encouraging panelists to participate continuously (Brosnan, Kemperman, & Dolnicar, 2021).

Previous research on incentives has largely focused on assessing the impact of different incentive strategies on a variety of outcomes. These studies demonstrated the positive effect of incentives on survey participation, especially when cash is paid in advance of participation (Warriner, Goyder, Gjertsen, Hohner, & McSpurren, 1996). However, most of these studies focused on single-wave participation, that is, on participation in cross-sectional surveys (e.g. Krieger, 2018; Pforr et al., 2015; Spreen et al., 2019) or in panel recruitment waves or one single panel wave (e.g. Krieger, 2018; Pforr et al., 2015; Stanley, Roycroft, Amaya, Dever, & Srivastav, 2020; Witte, Schaurer, Schröder, Décieux, & Ette, 2022). Several studies found a positive effect of incentives on panel consent rates (e.g. Stanley et al., 2020; Witte et al., 2022).

However, the state-of-the-art report by Scheepers & Hoogendoorn-Lanser (2018) revealed that systematic research on the effects of distinct incentive strategies on panel participation is still limited. To the best of our knowledge, little has changed since then. That is, the potential of incentives for sustainable panel participation remains relatively unexplored. Exceptions are the recent studies by Becker (2023) and by Beste, Frodermann, Trappmann, and Unger (2023), both of which found a valuable effect of an unconditional pre-paid incentive in an ongoing panel study. This limited knowledge about panel incentivization is unsatisfactory for at least three reasons. First, in panel studies, incentives are considered to be even more crucial than in cross-sectional surveys because—via the response rate—they may support panel stability and prevent cumulative attrition, which would otherwise result in smaller and increasingly selective panel samples (Lipps, Jaquet, Lauener, Tresch, & Pekari, 2022; Lynn & Lugtig, 2017; Singer & Ye, 2013). Second, within an ongoing panel, incentives can change respondents’ likelihood of participation: incentives might influence intangible factors, e.g. feelings of loyalty, involvement, or commitment to a study (Brüggen, Wetzels, De Ruyter, & Schillewaert, 2011; Jäckle & Lynn, 2008; Laurie & Lynn, 2009; Scheepers & Hoogendoorn-Lanser, 2018). Third, in times of rising overall survey costs, the efficient and sustainable use of incentives is one of the factors that can be adjusted to keep large panel projects feasible (e.g. Lipps et al., 2023; Olson, Wagner, & Anderson, 2020).

Given continuously escalating survey expenses and the fluctuating nature of participation behavior over time, the development of appropriate incentive strategies thus remains an ongoing endeavor within most (long-term) panel studies (e.g. the Panel Study of Income Dynamics in the United States, Schoeni, Stafford, McGonagle, and Andreski (2013); the Swiss Household Panel, Lipps et al. (2019)). However, incentive schemes are often implemented only on the basis of interview comments, advice from survey practitioners, suggestions from field institutes, budget constraints, or even pure guesswork (e.g. Göritz & Neumann, 2016). As a result, these implementations lack a reliable foundation, typically making it impossible to ascertain whether an adjusted incentive strategy genuinely impacts response behavior or whether other factors do (e.g. greater public interest due to higher media coverage) (Laurie & Lynn, 2009; Lipps et al., 2022; Scheepers & Hoogendoorn-Lanser, 2018).

The aim of the research presented in this paper was to experimentally analyze the effect of a change in the incentive strategy—from an unconditional pre-paid to a conditional promised post-paid incentive—on participation behavior and panel consent, both crucial determinants for reducing survey errors, during a running panel. To develop a more comprehensive assessment of the overall panel performance, further analyses focus on the effects on estimates of nonresponse bias and demographic composition as well as on survey costs. The analyses were conducted on a probability-based web-push panel consisting of two waves spaced six months apart, focusing on refugees from Ukraine who arrived in Germany shortly after the start of Russia’s full-scale invasion of Ukraine in February 2022. We aimed to answer the following research question: How does changing the incentive mode from a wave 1 unconditional pre-paid to a wave 2 conditional post-paid incentive affect (longitudinal) survey response? We considered different theoretical mechanisms to explain respondent behavior. Specifically, we explored the implications of Social Exchange Theory (SET), Self-Perception Theory (SPT), and Leverage-Salience Theory (LST). Our analyses were tailored to refugees, whose participation behavior notably diverges from that of other demographic groups due to their inherently higher levels of intrinsic motivation (Jacobsen & Siegert, 2023; Steinhauer, Décieux, Siegert, Ette, & Zinn, 2024).

With our research, we contribute to existing knowledge on incentive usage in surveys by experimentally investigating opportunities and barriers for sustainable panel participation with an additional focus on survey costs, data quality, and bias in estimates of demographic characteristics. The remainder of this article is organized as follows: First, we provide a tailored review of the literature on existing studies, explore what is known about efficient incentive strategies, and identify gaps. Then, we establish a theoretical framework for how changing the incentive from an unconditional pre-paid to a conditional post-paid incentive might work in terms of participation behavior. We then describe the study design, data, and analysis methods used to generate findings, followed by a presentation of the results. Finally, we critically discuss our findings and conclude our paper with some guidelines for practice.

2 Previous Research on Incentive Strategies in Longitudinal Studies

While there is a large body of research focusing on how incentives can affect survey response in a single wave (see Krieger, 2018; Lipps et al., 2019 for an overview), research that takes a panel perspective on the relation between incentives and participation behavior is scarce. This gap is particularly notable in the realm of online panel surveys. For instance, it is common for online studies to move from an initial unconditional pre-paid incentive to a conditional post-paid incentive, especially in register-based post-recruitment web push surveys (Göritz & Neumann, 2016; Neman et al., 2022; Witte et al., 2022). However, the consequences of such a transition have not been thoroughly explored, and its effects are not yet conclusively understood. The few studies that exist can be categorized into two groups: studies that use constant incentive strategies over panel waves, and those that alter incentive strategies.

2.1 Wave-constant Incentive Strategies during a Panel

Among the few existing studies to consider wave-constant incentive strategies, Castiglioni, Pforr, and Krieger (2008) compared the effectiveness of a monetary, unconditional pre-paid incentive over three waves with a pure post-paid incentive over the same period. The results suggested that continuous post-paid incentives are better than pre-paid incentives at keeping respondents within a panel. Jäckle and Lynn (2008) also provided evidence of the positive effect of continued incentive payments on attrition, bias, and item nonresponse, using data from a large-scale, multi-wave, mixed-mode incentive experiment. However, contrary to the study by Castiglioni et al. (2008), Jäckle and Lynn (2008) found that pre-paid incentives, in particular, can significantly reduce attrition at all waves. Complementary to that result, Becker (2023) revealed that constant pre-paid incentives might especially motivate respondents with strong reciprocal preferences compared to those invitees who had weak reciprocal preferences. He also pointed out that this effect declined as more time passed since the gratuities were given.

2.2 Changing Incentive Strategies across Waves

Most studies looking at incentive strategy change across panel waves have focused on either changes in the monetary value of the incentives or changes in the mode of incentive administration—mainly changing from conditional post-paid to unconditional pre-paid incentives. Concerning changes in the monetary value, Rodgers (2011) found that substantially increasing a cash incentive (from $20 to $50) had a positive effect on the response rate for a biennial data collection for at least the next four waves. In line with the overall result of a positive effect of increasing the incentive amount during a panel, Laurie’s (2007) experimental study within an annual household panel suggested that even modest and regular increases in monetary incentives can improve survey outcomes and prove more effective than occasional substantial increases. Laurie interpreted this finding as showing that even a rather symbolic (small or modest) increase in the amount respondents are paid can signal appreciation of their loyalty. Laurie and Lynn (2009) concluded that it is practically inevitable for long-running panel surveys to increase financial incentives over time. They argued that an increase is a strategy to ensure that the incentives remain meaningful for respondents and to signal to long-serving sample members that their contributions are still valued. A recent study by Beste et al. (2023) took a similar perspective and found, within their experimental approach, that doubling a pre-paid incentive significantly increased participation of low-propensity respondents in an annual panel. These findings align with the studies by Becker, Möser, and Glauser (2019) within a biannual panel and Yu et al. (2017) with an irregular but multi-year panel interval, both of which also found an activating effect of additional cash incentives in converting refusals from previous waves into respondents. Cabrera-Álvarez and Lynn (2023) showed similar spillover effects. They discovered that changing incentives can even spill over to other household members in households where at least one adult had completed the interview in the last wave. The results of an incentive experiment in a panel study with a two-year interval by Zagorsky and Rhoton (2008) were consistent with this discovery. They concluded that, especially for those individuals who previously refused panel participation, additional incentives can significantly and substantially increase response rates.

In order to investigate changing incentive modes from a panel perspective, Kretschmer and Müller (2017) conducted an incentive experiment in an annual panel. In their study, half of the respondents were randomly assigned into a condition in which the incentive changed from a conditional post-paid cash incentive into an unconditional pre-paid incentive. In the other condition respondents constantly received a post-paid incentive. Results from the study showed that the change towards a pre-paid incentive had a moderately positive but significant effect on response rates and led to a more balanced sample composition. In a complementary study, Felderer, Müller, Kreuter, and Winter (2018) examined the effect of changing from a conditional lottery to an unconditional pre-paid cash incentive on the response rate within an annual panel. They noted that this move had a positive effect. Hence, overall, there is evidence that increasing the incentive amount over the course of the panel as well as changing the mode to unconditional pre-paid incentives positively correlates with rising response rates.

However, there are also studies cautioning against changing the incentive mode over the course of a panel. For example, Lipps et al. (2022) conducted an incentive experiment in which they randomly split an annual panel sample that had received post-paid incentives in the previous waves into a post-paid and a lottery portion. They found that the response rates of the two experimental groups did not differ notably. Lipps and his colleagues also asked the respondents about their reasons for participating in the study. Most did not mention the incentive but, for example, interest in the topic of the study. Jäckle and Lynn’s (2008) study also reinforces the idea that changing the incentive strategy over time is not problematic. Their experiment revealed that a change from a conditional to an unconditional incentive had no lasting effect on survey response within an annual panel interval. These two findings suggest that substantial cost reductions can be achieved if incentive payments are reduced for persons who respond minimally or not at all to incentive changes.

There are also a handful of studies arguing that the impact of an incentive falls primarily on participation behavior in the current wave but more or less disappears over subsequent waves. For example, Göritz (2008) investigated the influence of a single bonus incentive after panel registration on short-term and long-term participation within a monthly panel. Her experiment included a treatment group that received an advance gift after registration and a control group without any gift. The initial provision of an advance gift increased the initial participation rate. However, this positive effect faded away over time. During the ongoing panel, Göritz (2008) divided the panelists into two groups: one group that was offered redeemable loyalty points and another group that was offered participation in cash lotteries. In the beginning, there was no significant difference in response rates between the two groups. However, as the study progressed, loyalty points became more effective in generating participation than the cash lottery. Largely in line with the result of a gradually decreasing effect of a single bonus incentive, several studies with different wave intervals reported that payments in one wave did not necessarily have to be repeated in subsequent waves, as there seems to be nearly no carry-over effect of a previous wave’s incentive to the following waves (Becker et al., 2019, with a biannual; Creighton, King, & Martin, 2007, with a triannual; and Singer & Kulka, 2002, with a monthly panel interval). That is, response behavior in one wave seems to be rather independent of the treatments of previous waves. Looking at biannual panels, Haas, Volkert, and Senghaas (2022) as well as Becker and Glauser (2018) have shown that incentives only seem to have a direct positive effect on response in the current wave but do not increase response rates in a subsequent wave.

In general, research on incentive changes across panel waves is thus deficient in three respects: first, the number of existing studies is limited; second, the results of the few existing studies are inconclusive; and third, the panel intervals have typically not been taken into account in these studies. Moreover, studying ways to maintain refugee panel stability is crucial, as previous research in Germany shows that while refugees are initially more willing to participate than other groups, their responsiveness quickly declines in later waves due to reduced motivation and declining address quality (see e.g. Jacobsen & Siegert, 2023). Furthermore, earlier studies have shown that foreign individuals may react differently to incentives than general populations (Feskens, Hox, Schmeets, & Wetzels, 2008).

3 Sociological Views on a Pre-paid to Post-paid Incentive Change

Sociological-theoretical frameworks for understanding the impacts of incentives on response behavior typically adopt a cross-sectional viewpoint, accounting for panel scenarios almost exclusively through extrapolation. The most common theories rely on an economic, rational choice perspective weighing costs and benefits of participation with a focus on extrinsic factors such as external rewards, e.g. cash incentives (Spreen et al., 2019).

Social Exchange Theory (SET; see Thibaut & Kelley, 1959) frames the consequences of any social interaction in terms of costs and rewards. Applied to survey response, participation yields high rewards when it is linked to the personal interests of sample members or when an attractive incentive is offered. Such rewards are offset by the costs that discourage people from participating, such as the time required to participate, alternative activities, and uncertainty and fears about unfamiliar situations or loss of anonymity (Keusch, 2015; Kocar & Lavrakas, 2023). Monetary incentives as rewards are considered in relation to the perceived costs of participating, and people are expected to participate when the anticipated rewards outweigh the perceived costs (Dillman, Smyth, & Christian, 2014; Groves, Cialdini, & Couper, 1992; Keusch, 2015). In the case of a pre-paid incentive, the reward lies not only in the value of the incentive but also in the advance trust of the survey administrator, which inspires a feeling of goodwill on the part of the potential respondent. Following this argument, pre-paid incentives lead to a reciprocity effect (Greenberg & Dillman, 2021). Consequently, within our experiment, we expected that offering a pre-paid incentive would result in a higher response rate in wave 1 and greater panel consent.

The effect of changed incentive modes on the overall participation during the course of the panel is theoretically ambiguous (Singer & Ye, 2013). In our study, there is a six-month interval between the two waves, which can be regarded as short. From the literature on panel conditioning, we know that shorter intervals between waves increase the probability that details from previous waves are recalled and reactivated (see e.g. Bergmann & Barth, 2018; Struminskaya & Bosnjak, 2021). There is therefore good reason to assume that the decision to participate in the second wave is not independent of participation in wave 1. Given the lack of previous experimental evidence and inconclusive results from existing studies, predicting the impact of transitioning from pre-paid to post-paid incentives on response behavior during the panel is challenging. Existing empirical research does not provide a basis for predicting the direction of any potential effect.

From the perspective of the rational choice reasoning underlying SET, and following the argumentation of Singer and Kulka (2002) or Creighton et al. (2007), payments do not necessarily need to be repeated to sustain the positive effect of a pre-paid incentive. Thus, it is conceivable that feelings of goodwill and reciprocity from wave 1 due to the pre-paid incentive will still be remembered as a benefit in wave 2.

An alternative perspective is offered by Self-Perception Theory (SPT; see Bem, 1972). According to this theory, people develop an understanding of their attitudes through rational analysis of their own previous behaviors (Helgeson, Voss, & Terpening, 2002; Tybout & Yalch, 1980). In the context of panel study participation, this could mean, for example, that non-material factors such as previous survey experience are also considered relevant for deciding whether to continue participating. Previous experience, such as a personally favorable incentive in the initial recruitment, may have a lasting “foot-in-the-door” effect (Kocar & Lavrakas, 2023). In other words, once respondents have been persuaded to participate once, many of them tend to maintain their previous decision (or behave in a way consistent with their previous behavior) and remain loyal in the future (Thibaut & Kelley, 1959).

So far, we have based our theoretical considerations either on spillover effects of a favorable pre-paid incentive in the first wave or on intrinsic behavioral factors. A combination of both aspects is also conceivable: respondents assess bundles of internal and external factors as costs and benefits and compare them with an internal threshold (Brosnan et al., 2021; Kocar & Lavrakas, 2023). They evaluate these factors and decide whether to participate (again) in a study. Leverage-Salience Theory (LST; Fan & Yan, 2010) extends this line of thinking. Specifically, LST postulates that an evaluation of unfavorable experiences in the past may lead to a shift in perception, and thus to a decision against further participation (Keusch, 2015). Hence, a negative perception of the switch from a pre-paid incentive in wave 1 to a post-paid incentive in wave 2 would lead to a loss of the initial positive feelings (confidence boost, goodwill) that the initial pre-paid incentive created. That is, the reciprocity effect from wave 1 fades out in wave 2, and wave 2 participation is less likely for participants for whom the incentive mode has been changed (Brosnan et al., 2021; Kocar & Lavrakas, 2023). In the most favorable scenario, only those wave 1 respondents who had participated solely because of the reciprocity effect of the pre-paid incentive are less inclined to participate again in wave 2. In the least favorable scenario, respondents experiencing the incentive change who would have participated even without an incentive or additional incentive become disengaged in wave 2: they expected something that did not come, namely the pre-paid incentive, and feel let down (cp. e.g. Zagorsky & Rhoton, 2008). By integrating these three theoretical perspectives, we can derive four hypotheses about the potential effects of transitioning the incentive strategy from an unconditional pre-paid model to a post-paid approach, as opposed to consistently utilizing a post-paid incentive.

First, all three theories collectively suggest that:

Hypothesis 1 (H1): Pre-paid incentives in wave 1 result in higher response rates and greater panel consent compared to post-paid incentives in wave 1.

Drawing on Social Exchange Theory (SET), it is reasonable to hypothesize that the additional benefit of a prepaid incentive provided in wave 1 continues to positively influence participation in wave 2. This assumption leads to the following hypothesis:

Hypothesis 2 (H2): The wave 2 response rate is higher for units that transition from a pre-paid to a post-paid incentive compared to those that consistently receive post-paid incentives.

However, the expected difference in response gains for wave 2 is likely to diminish relative to the constant post-paid group, consistent with findings reported by Göritz (2008).

As far as Self-Perception Theory (SPT) is concerned, internal factors (such as loyalty) are likely to be decisive for the respondents’ motivation to participate in wave 2. It can then be assumed that the external factor “post-paid incentive” has a similar effect in both groups.

Hypothesis 3 (H3): There is no difference in wave 2 response rates between units that switched from pre-paid to post-paid incentives and those that remained post-paid throughout.

Combining this assumption with the expectation that the initial pre-paid incentive in wave 1 boosts the response rate for the group undergoing the incentive change ultimately implies a higher overall cumulative response rate for this group. Building on Leverage-Salience Theory (LST), which suggests that respondents may become disengaged due to an incentive change, gives rise to the following hypothesis:

Hypothesis 4 (H4): Wave 2 response rates are lower for units with a change from pre-paid to post-paid incentives than for those that receive post-paid incentives constantly.

In that case, the cumulative response across both waves would be similar in the two groups.

4 Design, Data, and Variables

4.1 Experimental Design

To test our hypotheses, we conducted a split-ballot experiment over the first two waves of the recent IAB-BiB/FReDA-BAMF-SOEP survey “Refugees from Ukraine in Germany” (Brücker et al., 2023). All potential survey participants were randomly assigned to either the control or the treatment group. The control group (W1 post-paid group) received a post-paid incentive after completion of wave 1 and a second post-paid incentive after completion of wave 2. The treatment group (W1 pre-paid group) instead received a pre-paid incentive in wave 1 and a post-paid incentive after completion of wave 2. Hence, members of the control group experienced a constant incentive strategy, whereas members of the treatment group experienced a change in the incentive mode and thus in the incentive strategy (see Table 1). A third group receiving a pre-paid incentive in both waves was not implemented. We address this point in our discussion of the study results.

Table 1 Experimental setting of the incentive change experiment

Group              Wave 1                 Wave 2
Control Group      Post-paid Incentive    Post-paid Incentive
Treatment Group    Pre-paid Incentive     Post-paid Incentive

4.2 Data

The IAB-BiB/FReDA-BAMF-SOEP survey “Refugees from Ukraine in Germany” is a probability-based semiannual panel that was established following the start of the Russian full-scale invasion of Ukraine in February 2022. The sample contained 48,000 Ukrainian refugees aged between 18 and 70 who had fled from Ukraine to Germany between February 24, 2022 and June 8, 2022. For the recruitment of wave 1, all sampled individuals were invited by postal mail to take part in a web survey. The envelope contained basic information about the content of the survey, data protection information, a URL to access the online questionnaire, a personalized password, and an electronically readable QR code (for detailed information on the design see Steinhauer et al., 2024). Implementing the experimental design, 50% of all invited potential survey participants were randomly promised a post-paid 10 Euro incentive in return for survey completion. Respondents received this post-paid incentive along with a thank you letter a few weeks after finishing the survey. The other 50% of potential participants received a 5 Euro unconditional pre-paid incentive that was included with their first invitation letter.

Following the standard procedure for online panel recruitment (Kaczmirek, Phillips, Pennay, Lavrakas, & Neiger, 2019; Witte et al., 2022), the invitation referred to the survey and to becoming part of the innovative project but did not mention the panel design explicitly. At the end of the online survey, participants were asked for panel consent and to provide an email address and additional contact information (postal address and phone number). In wave 2, this contact information was used to contact respondents, and wherever possible email addresses were used as the contact mode to recruit respondents for wave 2. Field time for the first wave of the survey was between August and October 2022, and overall, 11,754 respondents provided complete interviews in wave 1 (and an additional 640 incomplete interviews), with an average response rate of 25% (RR1; AAPOR, 2019). Of those, 88% consented to panel participation, yielding a gross sample of 10,395 for wave 2 (Steinhauer et al., 2024). Of those, 6754 individuals participated in wave 2 between January and March 2023, resulting in a response rate of 65% (RR1; AAPOR, 2019; for detailed information on the wave 2 design see Torregroza, Leschny, & Gilberg, 2024). For all following analyses, 183 individuals were excluded because they were not part of the sample frame (AAPOR code 4.1 and 4.2, e.g. because of an earlier arrival date in Germany). Another 10,076 addresses were excluded from the initial wave 1 gross sample because the provided postal address did not exist (AAPOR code 3.18). Together, these exclusions reduced the sample size from 48,000 to 37,741 addresses.

4.3 Measures

The treatment variable incentive change forms the central explanatory variable of the present study. Respondents within the control group, who received a post-paid incentive in both waves, were assigned the code “0”. Conversely, respondents in the treatment group, who encountered a shift from a pre-paid incentive in wave 1 to a post-paid incentive in wave 2, were assigned the code “1”.

We evaluated the results of the experiment by means of four aspects of survey performance. First, we measured basic response figures by initially calculating response in wave 1, using AAPOR response rate RR5 and coding individuals who answered at least 80% of all applicable questions in the questionnaire as “1”. All other individuals who were invited to the survey but did not participate at all or answered less than 80% of all applicable questions (overall, 640 individuals started answering the questionnaire but dropped out) were coded “0”. Then, we analyzed panel consent, i.e. the proportion of respondents providing their consent to be contacted again for participation in the panel study. Wave 1 participants who provided panel consent were coded “1”, whereas those who did not were coded “0”. Thereafter, we coded response in wave 2 in the same manner as in wave 1. Of all individuals who provided panel consent in wave 1, those who participated in wave 2 were coded “1”, whereas non-respondents in wave 2 were coded “0”. The cumulative response rate across both waves differentiates individuals from the initial gross sample who participated in both waves (coded “1”) from those who only participated in wave 1 or did not participate at all (coded “0”).
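
For illustration, a minimal Stata sketch of this outcome coding is given below; the variable names (share_answered_w1, panel_consent, and so on) are assumptions for the sake of the example, not the names used in the released data.

    * Minimal sketch of the binary outcome coding described above (variable names are assumed)
    gen byte resp_w1    = (share_answered_w1 >= 0.80 & !missing(share_answered_w1))
    gen byte consent_w1 = (panel_consent == 1)                                       if resp_w1 == 1
    gen byte resp_w2    = (share_answered_w2 >= 0.80 & !missing(share_answered_w2))  if consent_w1 == 1
    gen byte resp_cum   = (resp_w1 == 1 & resp_w2 == 1)   // participated in both waves vs. everyone else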

Second, to determine whether the two distinct incentive strategies resulted in varying biases in the estimates of demographic statistics, we estimated nonresponse and non-consent bias with respect to sex, age, marital status, municipality size, and duration of stay in Germany—information sourced from the population register from which our sample was drawn. We treated sex as a binary variable (male, female), age as a categorical variable (18–29, 30–39, 40–49, 50–59, and 60–70), marital status as a categorical variable (single, married, divorced, widowed), and size of the municipality as a categorical variable (less than 100,000 inhabitants, between 100,000 and 500,000 inhabitants, and more than 500,000 inhabitants). Duration of stay in Germany was calculated as the difference between the date of registration and the start of fieldwork in wave 1 and then categorized into less than 3 months, 3 months, 4 months, and 5 months or more. We contacted registration offices to compile the gross sample, and some residents’ registration offices did not provide this demographic information. The analyses therefore include an “unknown” category to control for items missing for that reason. Table A1 shows the related descriptive statistics for the gross sample, the wave 1 sample, and the wave 2 sample. Binary logistic regressions were used for modelling wave-specific and cumulative nonresponse as well as panel consent as a function of sex, age, marital status, size of municipality, and duration of stay in Germany. A fully interacted model with main effects and interactions between the incentive mode change variable and all these demographic variables ensures the detection of treatment-induced effects. Besides average marginal effects, we used predicted probability plots to evaluate the influence of the experiment within the demographic groups under consideration. These graphical representations allow a more intuitive assessment of the effect of the incentive strategy on estimates of demographic biases (Best & Wolf, 2014).
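
As a rough sketch of this estimation strategy (variable and dataset names are assumptions; the released do-files contain the exact specifications), the wave 1 response model could be run in Stata as follows; the same specification applies to consent, wave 2 response, and cumulative response.

    * Fully interacted logit and average marginal effects for wave 1 response (variable names assumed)
    logit resp_w1 i.treat##(i.female i.agegrp i.marstat i.munsize i.staydur), vce(cluster municipality)
    margins, dydx(treat)                 // average marginal effect of the incentive change (as in Table 3)
    margins agegrp, at(treat = (0 1))    // predicted probabilities by age group and incentive strategy
    marginsplot, xdimension(agegrp)      // predicted probability plot in the spirit of Fig. 2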

The occurrence of item nonresponse during the two panel waves served as the third measure of the impact of the changed incentive mode, here on data quality. Specifically, we constructed two count variables of the number of missing item values using all non-filtered items from the core questionnaire in wave 1 (n = 43) and wave 2 (n = 40). Items for which respondents with complete or incomplete interviews provided no answer or an ineligible answer were counted as missing values. Then, we estimated Poisson regressions with the number of missing values as the dependent variable and an indicator for the incentive group as the independent variable, with the control group coded as “0” and the treatment group as “1”. We ran regressions for wave 1, wave 2, and waves 1 and 2 combined. The fourth aspect of panel performance that we addressed concerns the actual costs of conducting the survey. These costs are calculated for a fixed number $n_{w2}$ of complete interviews after the second panel wave. The total survey costs $C^{g}$ for the two experimental groups $g$ across both waves are computed as follows (see Table 2).

Table 2 Total survey costs for the experimental groups

              Control group ($g = 0$)                                     Treatment group ($g = 1$)
Wave 1 cost   $C_{w1}^{0} = n_{w1}^{0} I_{post} + N_{w1}^{0} P_{w1}^{0}$  $C_{w1}^{1} = N_{w1}^{1} I_{pre} + N_{w1}^{1} P_{w1}^{1}$
Wave 2 cost   $C_{w2}^{0} = n_{w2} I_{post} + N_{w2}^{0} P_{w2}^{0}$      $C_{w2}^{1} = n_{w2} I_{post} + N_{w2}^{1} P_{w2}^{1}$
Total costs   $C^{0} = C_{w1}^{0} + C_{w2}^{0}$                           $C^{1} = C_{w1}^{1} + C_{w2}^{1}$

Here $I_{pre}$ is the amount of the pre-paid incentive (i.e. 5 EUR) and $I_{post}$ the amount of the post-paid incentive (i.e. 10 EUR). The number $n_{w1}^{g}$ specifies the wave 1 net sample size in experimental group $g$, $g \in \{0,1\}$. That is, $n_{w1}^{g} = n_{w2}/(pc^{g} \cdot r_{w2}^{g})$, with $r_{w2}^{g}$ being the group-specific wave 2 response rate and $pc^{g}$ the related panel consent rate (after wave 1). $N_{w1}^{1}$ is the wave 1 gross sample size of the treatment group ($g=1$), calculated as $N_{w1}^{1} = n_{w2}/(r_{w1}^{1} \cdot pc^{1} \cdot r_{w2}^{1})$, with $r_{w1}^{1}$ being the corresponding wave 1 response rate; $N_{w1}^{0}$ is defined analogously for the control group. $N_{w2}^{g}$ denotes the wave 2 gross sample size of group $g$, i.e. the wave 1 respondents who consented to the panel ($N_{w2}^{g} = n_{w1}^{g} \cdot pc^{g} = n_{w2}/r_{w2}^{g}$). $P_{wi}^{g}$ represents the mean expenses incurred for dispatching invitations and reminders during wave $i$ within experimental group $g$. This calculation considers the varying costs associated with sending letters, contingent upon whether participation in the survey occurs early or late, thus necessitating additional reminder letters. As a result, these costs vary across waves and groups. Specifically, the costs were as follows: $P_{w1}^{0} = 2.28$ EUR, $P_{w1}^{1} = 2.22$ EUR, $P_{w2}^{0} = 3.79$ EUR, and $P_{w2}^{1} = 4.13$ EUR. Comparing survey costs between the two incentive-experimental groups alone is insufficient to fully capture the quality of the data collected. Therefore, we also assess the incurred survey costs in relation to the estimated demographic nonresponse biases across the two groups.
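
To make the formulas in Table 2 concrete, the following Stata sketch plugs in rounded response and consent rates read off Fig. 1 and the results section (roughly 30%/33% wave 1 response, 90%/86% panel consent, and 70%/61% wave 2 response for the control and treatment groups, respectively). These rounded rates are an assumption for illustration only, so the totals merely approximate the figures reported in Fig. 4.

    * Back-of-the-envelope cost illustration for n_w2 = 5,000 complete wave 2 interviews
    local n_w2  = 5000
    local Ipre  = 5                                   // pre-paid incentive (EUR)
    local Ipost = 10                                  // post-paid incentive (EUR)
    * control group (g = 0): r_w1 ~ 0.30, pc ~ 0.90, r_w2 ~ 0.70 (rounded, assumed)
    local N_w1_0 = `n_w2' / (0.30 * 0.90 * 0.70)      // wave 1 gross sample
    local n_w1_0 = `n_w2' / (0.90 * 0.70)             // wave 1 net sample
    local N_w2_0 = `n_w2' / 0.70                      // wave 2 gross sample (consenters)
    local C0 = `n_w1_0'*`Ipost' + `N_w1_0'*2.28 + `n_w2'*`Ipost' + `N_w2_0'*3.79
    * treatment group (g = 1): r_w1 ~ 0.33, pc ~ 0.86, r_w2 ~ 0.61 (rounded, assumed)
    local N_w1_1 = `n_w2' / (0.33 * 0.86 * 0.61)      // wave 1 gross sample (all receive the pre-paid incentive)
    local N_w2_1 = `n_w2' / 0.61                      // wave 2 gross sample (consenters)
    local C1 = `N_w1_1'*(`Ipre' + 2.22) + `n_w2'*`Ipost' + `N_w2_1'*4.13
    display "Approximate total costs, control group (EUR):   " %9.0fc `C0'
    display "Approximate total costs, treatment group (EUR): " %9.0fc `C1'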

5 Results

5.1 Wave-Specific and Cumulative Response and Panel Consent

The descriptive results on the effect of the incentive change on survey participation are consistent with available findings from other contexts and populations. Fig. 1 shows that 33% of all successfully contacted sample members in the W1 pre-paid group participated in the first wave of the survey. In contrast, only 30% of the successfully contacted sample members in the W1 post-paid group participated.

Fig. 1 Response and consent rates by experimental group. Source: IAB-BiB/FReDA-BAMF-SOEP survey 2022/23

From a longitudinal perspective, however, this initial pre-paid incentive had only marginal advantages compared to the constant post-paid incentive (control) group. Fig. 1 shows that the rate at which the 11,754 wave 1 participants provided their consent to participate in future survey waves was 4 percentage points lower in the treatment group than in the control group. Even more pronounced are the differences between the two incentive strategies concerning participation behavior in wave 2. The change within the treatment group from a pre-paid incentive in wave 1 to a post-paid incentive in wave 2 resulted in a response rate of 61%. In contrast, the control group that received a post-paid incentive in both waves showed a 9-percentage-point higher response rate. Thus, being in the treatment group and thereby experiencing a change in the incentive strategy was related to a significant decrease in the probability of responding in wave 2. Overall, the 19% cumulative response rate of the W1 post-paid group was significantly higher than that of the W1 pre-paid group (17%).

These results are robust even when controlling for the basic demographic characteristics of sex, age, marital status, municipality size, and duration of stay. Table 3 presents the average marginal effects of the related binary logistic regressions. The percentage point differences in response and consent between the two experimental groups were identical to the descriptive results presented in Fig. 1. In sum, our results provide support for the notion that a constant post-paid incentive strategy is more effective in recruiting participants for a two-wave panel study compared to an incentive strategy changing from a pre-paid incentive to a post-paid incentive.

Table 3 Average marginal effects of logistic regressions on the effect of incentive change on response and consent

                                            Response W1        Consent W1         Response W2        Cumulative Response
                                            AME       S.E.     AME       S.E.     AME       S.E.     AME       S.E.
Incentive change (ref. constant post-paid)   0.033***  0.005   −0.040***  0.005   −0.090***  0.008   −0.014***  0.004
Female (ref. male)                           0.042***  0.008   −0.011     0.008    0.004     0.012    0.024***  0.006
Age (ref. 18–29)
  30–39                                     −0.023**   0.007   −0.002     0.009    0.030*    0.014   −0.006     0.005
  40–49                                     −0.010     0.009   −0.001     0.008    0.038*    0.017    0.003     0.006
  50–59                                     −0.026*    0.011   −0.029*    0.012    0.019     0.019   −0.017*    0.008
  60–70                                     −0.013     0.011   −0.075***  0.012   −0.012     0.019   −0.026**   0.008
  Unknown                                    0.004     0.018   −0.013     0.007    0.006     0.011    0.002     0.011
Marital status (ref. single)
  Married                                    0.025**   0.008   −0.002     0.009    0.028     0.014    0.021***  0.005
  Divorced                                  −0.003     0.015   −0.008     0.014   −0.004     0.029   −0.005     0.012
  Widowed                                   −0.041*    0.018    0.006     0.018   −0.044     0.037   −0.034**   0.012
  Unknown                                   −0.003     0.012   −0.007     0.007   −0.006     0.012   −0.005     0.008
Municipality size (ref. >500k)
  <100k                                      0.042*    0.018    0.001     0.009   −0.018     0.018    0.019     0.011
  100–500k                                   0.049**   0.016    0.006     0.005    0.010     0.009    0.033***  0.010
Duration of stay (ref. ≥5 months)
  <3 months                                  0.046**   0.016    0.020     0.019    0.000     0.027    0.033*    0.013
  3 months                                   0.018*    0.008    0.030*    0.013   −0.010     0.014    0.014*    0.006
  4 months                                   0.002     0.006    0.010     0.009   −0.007     0.010    0.000     0.005
  Unknown                                    0.024     0.013    0.013     0.010   −0.016     0.010    0.012     0.008
Observations                                37,741             11,754             10,394             37,741

Source: IAB-BiB/FReDA-BAMF-SOEP survey 2022/23

Cluster-robust standard errors on the municipality level have been computed to account for possible effects of the decentralized population registers in Germany

* p < 0.05, ** p < 0.01, *** p < 0.001

5.2 Estimated Nonresponse and Non-Consent Bias

Evaluating simple response or consent rates alone is not sufficient for assessing the performance of an incentive strategy. It is also important that an incentive strategy does not lead to biases by motivating some groups in the population more or less than others (unit nonresponse) or by reducing the motivation to answer the questionnaire conscientiously (item nonresponse). Examining the average marginal effects presented in Table 3 alongside the predicted probability plots for wave 1 response and cumulative response in Fig. 2, we find some disparities between the treatment and the control groups across the measured demographic groups. However, these differences are generally minor when considered collectively. For example, in wave 1 the pre-paid incentive was particularly successful in recruiting married and middle-aged individuals, whereas the post-paid incentive yielded better results among the youngest participants (aged 18–29). Furthermore, in wave 1 the response-rate advantage of women over men was slightly larger in the treatment group than in the control group. Similarly, younger individuals exhibited slightly higher response rates than older ones, married individuals surpassed singles, and recently arrived refugees showed a higher predicted probability of responding than those who arrived earlier. Notably, the cumulative response rate for waves 1 and 2 showed hardly any significant differences between the treatment and the control groups with respect to the composition of demographic groups. In summary, any differences in estimated bias were small, indicating similar response and consent behavior within the pre- to post-paid (treatment) and post- to post-paid (control) groups.

Fig. 2 Predicted response rates for wave 1 and the cumulative response rate for waves 1 and 2 by incentive strategy and demographic characteristics. Source: IAB-BiB/FReDA-BAMF-SOEP survey 2022/23

5.3 Item Nonresponse

Along with unit nonresponse as a major source of bias in survey data, item nonresponse can also limit the analytical potential of data. Overall, the results in Fig. 3 show small differences in the prevalence of item nonresponse across both incentive groups in wave 1, wave 2, and both waves combined. In wave 1, survey participants in the treatment group—who received a pre-paid incentive—had a slightly but significantly higher risk of missing items, whereas participants in the control group—who received a post-paid incentive—showed a reduced risk of item nonresponse. This effect vanished in wave 2, when both groups received a post-paid incentive. Likewise, no significant difference in the occurrence of item nonresponse was found when combining all item nonresponses from wave 1 and wave 2.

Fig. 3 Effect of treatment on item nonresponse in wave 1, wave 2, and both waves combined. Coefficients of Poisson regressions (with counts of missing items as dependent variables) reflecting the effects of the treatment along waves. Analyses are based on complete (wave 1: 11,754; wave 2: 6659) and incomplete (wave 1: 640; wave 2: 395) interviews. Additional analyses based on complete interviews show the same results. Condition “Wave 1 + Wave 2” includes only respondents who participated in both waves
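
For reference, the Poisson models behind Fig. 3 can be sketched in Stata as follows; the variable names (n_missing_w1, n_missing_w2, treat, resp_cum) are assumptions for illustration.

    * Sketch of the Poisson regressions of missing-item counts on the treatment indicator (names assumed)
    poisson n_missing_w1 i.treat                       // wave 1
    poisson n_missing_w2 i.treat                       // wave 2
    gen n_missing_both = n_missing_w1 + n_missing_w2 if resp_cum == 1
    poisson n_missing_both i.treat                     // both waves combined (respondents of both waves only)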

5.4 Survey Costs

The results of our cost calculations are presented in Fig. 4. Since our analyses of nonresponse and non-consent bias reveal minimal differences in relevant demographic factors between the two incentive-experimental groups, the quality adjustment of survey costs remains consistent across both groups. Therefore, comparing costs without quality adjustments provides the same insights as comparing quality-adjusted costs. All in all, there is clear evidence that costs of the W1 pre-paid group exceeded the costs of the W1 post-paid group. Although recruitment in the W1 pre-paid group was slightly faster and the number of reminder letters smaller, the higher incentive costs combined with a lower response rate in wave 2 resulted in overall survey costs of 339,000 Euros for 5000 interviews at the end of wave 2. These costs exceeded the costs of the control group by 70%.

Fig. 4 Survey costs by incentive strategy and number of complete interviews after two panel waves, in 1,000 Euros. Source: Own calculations based on the experiences of the IAB-BiB/FReDA-BAMF-SOEP survey 2022/23

6 Discussion and Conclusion

This article presents an incentive experiment on the effectiveness of two well-established monetary incentive strategies in a probability-based online two-wave panel study. There has not been much research in this area to date, despite the growing number of (online) longitudinal survey studies worldwide. Thus, our study contributes to a highly relevant field of research. In our experiment, we investigated theoretically and empirically the effects of a change in the type of incentive from an unconditional pre-paid incentive in wave 1 to a conditional post-paid incentive in wave 2 on respondent participation and different aspects of data quality. The participation rates in the individual waves, the cumulative response rate after two waves, nonresponse and non-consent bias, selection bias due to item nonresponse, and the survey costs serve as evaluation measures. The panel aspect, i.e. the overall result after two waves, is crucial for our study, because only such a perspective can provide an understanding of how incentives can be used cost-effectively to achieve the best possible data quality within a panel.

Concerning the response rate in wave 1, our findings indicated that the unconditional pre-paid incentive significantly boosted the response rate compared to the conditional post-paid incentive, thus confirming the first part of our first hypothesis (H1). This observation aligns with prior research conducted in the German context. An explanation for the efficacy of pre-paid incentives over post-paid incentives in a cross-sectional setting (which includes the initial wave of a panel) is that pre-paid incentives activate social norms of exchange and reciprocity, facets not elicited by post-paid incentives. Thus, from a single-wave perspective, our experiment confirms previous findings that pre-paid incentives are more effective than post-paid incentives in optimizing participation. For the sake of thoroughness, it should also be noted that the pre-paid incentive in wave 1 in the treatment group has at least one advantage from a longitudinal perspective. The higher response rate in the initial wave facilitates the gathering of additional information about subsequent dropouts. This aspect proves beneficial for addressing selective dropout in statistical analysis, such as through survey weighting techniques.

However, aside from that very specific benefit, the multi-wave panel perspective unmistakably revealed that this positive effect disappeared immediately, starting with panel consent, thus refuting the second part of our first hypothesis (H1). Furthermore, our study demonstrated that transitioning from a pre-paid to a post-paid incentive strategy resulted in a significant decline in continued panel participation. Indeed, the decrease in willingness to participate after the shift to post-paid incentives in wave 2 was so substantial that even the cumulative response rate across both waves suffered considerably, resulting in fewer overall participants in the treatment group (W1 pre-paid) compared to the control group (W1 post-paid). This finding is in line with our fourth hypothesis (H4) and refutes hypotheses two and three (H2, H3). From both a survey methodological and a theoretical standpoint, this discovery is notably unexpected. In wave 1, the more enticing pre-paid incentive successfully attracted a larger number of participants. Therefore, following the logic of SPT, one might have anticipated a foot-in-the-door effect stemming from the higher baseline established in wave 1, which could have potentially bolstered the cumulative response (Kocar & Lavrakas, 2023). However, contrary to expectations, this was not observed in our study, as the response rate decreased significantly within the treatment group. In consequence, it became apparent that a significant portion of respondents within the treatment group were not enticed by the post-paid incentive. This suggests that the treatment group lost for wave 2 not only those wave 1 participants who had been motivated exclusively by the social norms of exchange and reciprocity triggered by the pre-paid incentive, but also those who were initially incentivized by the pre-paid incentive in wave 1 and thus developed increased expectations that the post-paid incentive in wave 2 did not fulfill. From a theoretical perspective, this could be interpreted as a negative development of the incentive leverage point, reinforced by unfulfilled expectations arising from the experiences in wave 1 (Brosnan et al., 2021; Kocar & Lavrakas, 2023). At least partly, these results can be seen as complementary to the findings of Göritz (2008), who found that the effects of initially generous incentives gradually decreased over time, and of Castiglioni et al. (2008), who showed that constant post-paid incentives (compared to pre-paid incentives) better retained respondents in a three-wave panel.

Although response rates are a crucial factor in determining incentive strategies, they should not be the sole evaluation criterion (Zinn & Wolbring, 2023). Therefore, we also examined the sample composition of the two experimental groups across the study’s two waves, along with assessing the respective answer quality and survey costs. Concerning answer quality and sample composition, we detected only minimal differences between the incentive strategy groups. In terms of costs, however, it was evident that the constant post-paid strategy was significantly more economical.

To summarize, using a constant conditional post-paid incentive strategy offers more advantages than switching from an unconditional pre-paid to a conditional post-paid incentive. This is supported not only by the higher cumulative response rates, but also by the lower survey costs associated with the pure post-paid strategy (with comparable data quality and sample composition for both incentive strategies).

Despite employing an experimental approach, our study has limitations. Chief among these is the fact that the experiment was conducted within a migrant cohort. Although our data come from a probability-based sample, indicating high data quality and some degree of generalizability, particularly to similar demographics, validating our findings in a general population panel represents a crucial next step. Additionally, it is worth noting that the pre-paid incentive offered in wave 1 to the treatment group was € 5, while the post-paid incentive in wave 1 for the control group was € 10. From a single-wave perspective, it might have been more straightforward to offer identical values to both groups, as research suggests that the value of a post-paid incentive alone can positively influence participation (e.g. Witte et al., 2022). However, since both groups received the same € 5 post-paid incentive for wave 2, and the control group effectively experienced a reduction in their incentive value—thereby potentially failing to meet the expectations set in wave 1—our results may be considered conservative from a panel perspective. Furthermore, it would have been potentially informative to also test the performance of a constant pre-paid condition. Due to budgetary restrictions and practical reasons, this was not possible within the IAB-BiB/FReDA-BAMF-SOEP survey “Refugees from Ukraine in Germany”. However, this extended approach is an interesting step for future research. Starting with the higher panel consent in wave 1 in the W1 post-paid group, our results at least indicate an increasingly better performance of the constant post-paid incentive strategy as the panel progresses. Moreover, our focal study is a panel with a semiannual (six-month) wave interval. In contrast, many other studies have utilized intervals of one or two years between panel waves. It remains uncertain whether our findings can be generalized to such contexts. Theoretically, it is plausible that the interval length may exert an influence similar to panel conditioning (e.g. Kraemer et al., 2023; Rettig & Struminskaya, 2023), potentially affecting recall effects. However, existing research lacks evidence supporting this notion (as outlined in our section on Previous Research on Incentive Strategies in Longitudinal Studies). To gain clarity here, future research could experimentally explore the impact of varying panel wave intervals, for instance by varying the distance between panel waves within a longitudinal experiment on incentive change. Finally, our study solely captures insights from two waves. It would be intriguing to investigate whether the observed pattern extends to additional waves. This exploration could delve into the reverberating effects of altering incentive strategies in subsequent waves, as well as the implications when such changes occur in a later wave.

We were surprised that changes in incentive strategies are still relatively under-researched, given how widely they are used in practice. This article has convincingly argued that incentive changes during an ongoing panel study carry inherent risks, even with respect to practices identified as superior, such as unconditional pre-paid incentivization. Therefore, it is essential for future research to thoroughly investigate the effects of incentives, especially from a panel perspective. Our study highlights that established best practices derived from cross-sectional or single-wave perspectives may not be directly applicable to panel incentive strategies. Instead, survey managers are advised to pursue a longitudinal incentive strategy from the outset.

7 Practical Implications

For single-wave settings such as cross-sectional surveys, our findings suggest that pre-paid incentives are most effective in increasing response rates. However, in a panel study that starts with an unconditional pre-paid incentive during recruitment, i.e. in the first wave, and then moves to a conditional post-paid incentive, the dynamic shifts for the entire panel. We see that changing the incentive strategy from pre-paid to post-paid incentives leads to lower cumulative response rates over the two waves, combined with higher survey costs. Therefore, in cases in which budget constraints do not allow for a continuous pre-paid incentive across all waves, or when the post-paid strategy is predetermined for an ongoing panel, it is advisable to implement a post-paid incentive in the first wave, possibly in parallel with a pre-paid incentive, to increase the response rate of the opening wave.

We advise panel managers to carefully consider their incentive strategy, preferably planning ahead and aligning their incentive strategies with their budget constraints. For smaller budgets, we suggest a sustainable distribution of the incentive budget throughout the panel’s duration, rather than allocating it indiscriminately to all participants at the outset and then transitioning to less appealing alternatives. Moreover, based on our findings and those of Göritz (2008), constant conditional post-paid incentive strategies seem to surpass the initially advantageous effects of a single-wave unconditional pre-paid incentive in terms of panel stability and cost, while maintaining response quality and minimizing bias. As the panel progresses and gathers more systematic information about respondents and their response behavior, a tailored incentive strategy (such as adjusting the monetary incentive amount) can still be implemented to optimize response rates in a targeted manner. This could involve offering slightly more attractive incentives to groups with a high risk of dropping out and a different approach for participants who are already willing to participate.

We extend our gratitude to Annette Jäckle and Peter Lynn, whose support enabled us to present our research at their colloquium in Essex and whose insightful comments significantly enhanced it. Our thanks also go to the entire IAB-BiB/FReDA-BAMF-SOEP survey team for facilitating our experiment. Additionally, we are grateful to Markus Grabka for his guidance during our initial steps, as well as to the reviewers and editors of Survey Research Methods, who provided valuable additional insights.

The data collection for this research was funded by the Federal Ministry of Labour and Social Affairs (BMAS), the Federal Employment Agency (BA), the Federal Ministry of Education and Research (BMBF), and the Federal Ministry of the Interior and Community (BMI). The study was part of a collaborative effort involving the IAB, BAMF, SOEP, and the Federal Institute for Population Research (BiB).

Data and Source Code

All data work can be carried out directly based on the IAB-BiB/FReDA-BAMF-SOEP data, which will be available from July 2024 from the SOEP RDC (https://www.diw.de/de/diw_01.c.924972.de/soep.iab-bib_freda-bamf-soep.2022-2023.1.). All statistical analyses were performed with Stata SE 18. The Stata do-files for replicating our empirical results are available alongside this paper and at https://github.com/andreasette/incentive-change.
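For orientation only, the following Stata lines sketch one plausible specification for the kind of multivariate participation model reported above, namely a logistic regression of wave-2 participation on the experimental incentive group; the data set name, variable names, and covariates are hypothetical and do not reproduce the actual do-files.

* Hypothetical sketch of a participation model; see the published do-files for
* the actual specifications. File and variable names are invented.
use "freda_bamf_soep_ukraine.dta", clear
logit participation_w2 i.incentive_group i.female i.agegroup, or
margins incentive_group   // predicted participation probabilities by experimental group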

1Supplementary Information

The supplementary material includes the complete replication package, comprising information on the datasets needed and the Stata do-files necessary to reproduce the results presented in the article, as well as an additional table with descriptive statistics of our analytical sample.

References

AAPOR (2019). Standard definitions: final dispositions of case codes and outcome rates for surveys. 9th edition.

Baker, R., Blumberg, S. J., Brick, J. M., Couper, M. P., Courtright, M., Dennis, J. M., et al. (2010). Research synthesis: AAPOR report on online panels. Public Opinion Quarterly, 74(4), 711–781. https://doi.org/10.1093/poq/nfq048.

Becker, R. (2023). The researcher, the incentive, the panelists and their response: the role of strong reciprocity for the panelists’ survey participation. Survey Research Methods, 17(3), 223–242. https://doi.org/10.18148/srm/2023.v17i3.7975.

Becker, R., & Glauser, D. (2018). Are prepaid monetary incentives sufficient for reducing panel attrition and optimizing the response rate? An experiment in the context of a multi-wave panel with a sequential mixed-mode design. Bulletin of Sociological Methodology/Bulletin de Méthodologie Sociologique, 139(1), 74–95. https://doi.org/10.1177/0759106318762456.

Becker, R., Möser, S., & Glauser, D. (2019). Cash vs. vouchers vs. gifts in web surveys of a mature panel study—main effects in a long-term incentives experiment across three panel waves. Social Science Research. https://doi.org/10.1016/j.ssresearch.2019.02.008.

Bem, D. J. (1972). Self-perception theory. In Advances in experimental social psychology (Vol. 6, pp. 1–62). Amsterdam: Elsevier.

Bergmann, M., & Barth, A. (2018). What was I thinking? A theoretical framework for analysing panel conditioning in attitudes and (response) behaviour. International Journal of Social Research Methodology, 21(3), 333–345. https://doi.org/10.1080/13645579.2017.1399622.

Best, H., & Wolf, C. (2014). Logistic regression. In The Sage handbook of regression analysis and causal inference (pp. 153–172).

Beste, J., Frodermann, C., Trappmann, M., & Unger, S. (2023). Case prioritization in a panel survey based on predicting hard to survey households by machine learning algorithms: an experimental study. Survey Research Methods, 17(3), 243–268. https://doi.org/10.18148/srm/2023.v17i3.7988.

Blom, A. G., Gathmann, C., & Krieger, U. (2015). Setting up an online panel representative of the general population: the German Internet panel. Field Methods, 27(4), 391–408. https://doi.org/10.1177/1525822X15574494.

Booth, C., Wong, E., Brown, M., & Fitzsimons, E. (2024). Evaluating the effect of monetary incentives on web survey response rates in the UK millennium cohort study. Survey Research Methods, 18(1), 47–58. https://doi.org/10.18148/srm/2024.v18i1.8210.

Brosnan, K., Kemperman, A., & Dolnicar, S. (2021). Maximizing participation from online survey panel members. International Journal of Market Research, 63(4), 416–435. https://doi.org/10.1177/1470785319880704.

Brücker, H., Ette, A., Grabka, M. M., Kosyakova, Y., Niehues, W., Rother, N., et al. (2023). Ukrainian refugees in Germany: evidence from a large representative survey. Comparative Population Studies. https://doi.org/10.12765/CPoS-2023-16.

Brüggen, E., Wetzels, M., De Ruyter, K., & Schillewaert, N. (2011). Individual differences in motivation to participate in online panels: the effect on response rate and response quality perceptions. International Journal of Market Research, 53(3), 369–390. https://doi.org/10.2501/ijmr-53-3-369-390.

Cabrera-Álvarez, P., & Lynn, P. (2023). Short-term impact of increasing the value of unconditional and conditional incentives in Understanding Society.

Cabrera-Álvarez, P., & Lynn, P. (2025). Benefits of increasing the value of respondent incentives during the course of a longitudinal mixed-mode survey. International Journal of Social Research Methodology. https://doi.org/10.1080/13645579.2024.2443630.

Callegaro, M., & DiSogra, C. (2008). Computing response metrics for online panels. Public Opinion Quarterly, 72(5), 1008–1032.

Castiglioni, L., Pforr, K., & Krieger, U. (2008). The effect of incentives on response rates and panel attrition: results of a controlled experiment. Survey Research Methods, 2(3), 151–158.

Creighton, K. P., King, K. E., & Martin, E. A. (2007). The use of monetary incentives in Census Bureau longitudinal surveys. Survey methodology, Vol. 2007–2002. Washington, D.C.: U.S. Census Bureau.

DeBell, M. (2022). The visible cash effect with prepaid incentives: evidence for data quality, response rates, generalizability, and cost. Journal of Survey Statistics and Methodology. https://doi.org/10.1093/jssam/smac032.

Décieux, J. P., & Heinz, A. (2024). Does a short-term deadline extension affect participation rates of an online survey? Experimental evidence from an online panel. International Journal of Social Research Methodology, 27(3), 1–5. https://doi.org/10.1080/13645579.2022.2153475.

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: the tailored design method. Hoboken: John Wiley & Sons, Inc.

DiSogra, C., & Callegaro, M. (2015). Metrics and design tool for building and evaluating probability-based online panels. Social Science Computer Review, 34(1), 26–40. https://doi.org/10.1177/0894439315573925.

Fan, W., & Yan, Z. (2010). Factors affecting response rates of the web survey: a systematic review. Computers in Human Behavior, 26(2), 132–139.

Felderer, B., Müller, G., Kreuter, F., & Winter, J. (2018). The effect of differential incentives on attrition bias: evidence from the PASS wave 3 incentive experiment. Field Methods, 30(1), 56–69. https://doi.org/10.1177/1525822x17726206.

Feskens, R., Hox, J., Schmeets, H., & Wetzels, W. (2008). Incentives and ethnic minorities: results of a controlled randomized experiment in the Netherlands. Survey Research Methods, 2(3), 159–165. https://doi.org/10.18148/srm/2008.v2i3.71.

Fumagalli, L., Laurie, H., & Lynn, P. (2013). Experiments with methods to reduce attrition in longitudinal surveys. Journal of the Royal Statistical Society: Series A (Statistics in Society), 176(2), 499–519. https://doi.org/10.1111/j.1467-985X.2012.01051.x.

Göritz, A. S. (2008). The long-term effect of material incentives on participation in online panels. Field Methods, 20(3), 211–225.

Göritz, A. S., & Neumann, B. P. (2016). The longitudinal effects of incentives on response quantity in online panels. Translational Issues in Psychological Science, 2(2), 163.

Greenberg, P., & Dillman, D. (2021). Mail communications and survey response: a test of social exchange versus pre-suasion theory for improving response rates and data quality. Journal of Survey Statistics and Methodology. https://doi.org/10.1093/jssam/smab020.

Groves, R. M., Cialdini, R. B., & Couper, M. P. (1992). Understanding the decision to participate in a survey. Public Opinion Quarterly, 56(4), 475–495. https://doi.org/10.1086/269338.

Haas, G.-C., Volkert, M., & Senghaas, M. (2022). Effects of prepaid postage stamps and postcard incentives in a web survey experiment. Field Methods. https://doi.org/10.1177/1525822x221132401.

Han, D., Montaquila, J. M., & Brick, J. M. (2013). An evaluation of incentive experiments in a two-phase address-based sample mail survey. Survey Research Methods, 7(3), 207–218. https://doi.org/10.18148/srm/2013.v7i3.5402.

Hanly, M. J., Savva, G. M., Clifford, I., & Whelan, B. J. (2014). Variation in incentive effects across neighbourhoods. Survey Research Methods, 8(1), 19–30. https://doi.org/10.18148/srm/2014.v8i1.5485.

Helgeson, J. G., Voss, K. E., & Terpening, W. D. (2002). Determinants of mail-survey response: survey design factors and respondent factors. Psychology & Marketing, 19(3), 303–328.

Jäckle, A., & Lynn, P. (2008). Respondent incentives in a multi-mode panel survey: cumulative effects on nonresponse and bias. Survey Methodology, 34(1), 105–117.

Jacobsen, J., & Siegert, M. (2023). Establishing a panel study of refugees in Germany: first wave response and panel attrition from a comparative perspective. Field Methods. https://doi.org/10.1177/1525822x231204817.

Kaczmirek, L., Phillips, B., Pennay, D., Lavrakas, P. J., & Neiger, D. (2019). Building a probability-based online panel: life in Australia. CSRM methods series, Vol. 2/2019. Canberra: Center for Social Research and Methods, Australian National University.

Keusch, F. (2015). Why do people participate in web surveys? Applying survey participation theory to Internet survey data collection. Management Review Quarterly, 65(3), 183–216. https://doi.org/10.1007/s11301-014-0111-y.

Kocar, S., & Lavrakas, P. J. (2023). Social-psychological aspects of probability-based online panel participation. International Journal of Public Opinion Research. https://doi.org/10.1093/ijpor/edad012.

Kraemer, F., Silber, H., Struminskaya, B., Sand, M., Bosnjak, M., Koßmann, J., & Weiß, B. (2023). Panel conditioning in a probability-based longitudinal study: a comparison of respondents with different levels of survey experience. Journal of Survey Statistics and Methodology. https://doi.org/10.1093/jssam/smad004.

Kretschmer, S., & Müller, G. (2017). The wave 6 NEPS adult study incentive experiment. Methods, data, analyses: a journal for quantitative methods and survey methodology (mda), 11(1), 7–28.

Krieger, U. (2018). A penny for your thoughts. The use of cash incentives in face-to-face surveys. Mannheim: Universität Mannheim.

Laurie, H. (2007). The effect of increasing financial incentives in a panel survey: an experiment on the British Household Panel Survey, wave 14.

Laurie, H., & Lynn, P. (2009). The use of respondent incentives on longitudinal surveys. In Methodology of longitudinal surveys (pp. 205–233).

Lipps, O., Herzing, J. M. E., Pekari, N. M., Stähli, E., Pollien, A., Riedo, G., & Reveilhac, M. (2019). Incentives in surveys. Lausanne: FORS.

Lipps, O., Jaquet, J., Lauener, L., Tresch, A., & Pekari, N. (2022). Cost efficiency of incentives in mature probability-based online panels. In Survey methods: insights from the field (SMIF).

Lipps, O., Felder, M., Lauener, L., Meisser, A., Pekari, N., Rennwald, L., & Tresch, A. (2023). Targeting incentives in mature probability-based online panels. In Survey methods: insights from the field (SMIF).

Lynn, P. (2017). From standardised to targeted survey procedures for tackling non-response and attrition. Survey Research Methods, 11(1), 93–103. https://doi.org/10.18148/srm/2017.v11i1.6734.

Lynn, P., & Lugtig, P. (2017). Total survey error for longitudinal surveys. In P. P. Biemer, E. D. DeLeeuw, S. Eckman, B. Edwards, F. Kreuter, L. E. Lyberg, N. C. Tucker & B. T. West (Eds.), Total survey error in practice (pp. 279–298). Hoboken: John Wiley & Sons, Inc.

Neman, T. S., Dykema, J., Garbarski, D., Jones, C., Schaeffer, N. C., & Farrar-Edwards, D. (2022). Survey monetary incentives: digital payments as an alternative to direct mail. Survey Practice. https://doi.org/10.29115/SP-2021-0012.

Olson, K., Wagner, J., & Anderson, R. (2020). Survey costs: where are we and what is the way forward? Journal of Survey Statistics and Methodology. https://doi.org/10.1093/jssam/smaa014.

Pforr, K., Blohm, M., Blom, A. G., Erdel, B., Felderer, B., Fräßdorf, M., et al. (2015). Are incentive effects on response rates and nonresponse bias in large-scale, face-to-face surveys generalizable to Germany? Evidence from ten experiments. Public Opinion Quarterly, 79(3), 740–768. https://doi.org/10.1093/poq/nfv014.

Rettig, T., & Struminskaya, B. (2023). Memory effects in online panel surveys: investigating respondents’ ability to recall responses from a previous panel wave. Survey Research Methods, 17(3), 301–322. https://doi.org/10.18148/srm/2023.v17i3.7991.

Rodgers, W. L. (2011). Effects of increasing the incentive size in a longitudinal study. Journal of Official Statistics, 27(2), 279–299.

Scheepers, E., & Hoogendoorn-Lanser, S. (2018). State-of-the-art of incentive strategies—implications for longitudinal travel surveys. Transportation Research Procedia, 32, 200–210. https://doi.org/10.1016/j.trpro.2018.10.036.

Schoeni, R. F., Stafford, F., Mcgonagle, K. A., & Andreski, P. (2013). Response rates in national panel surveys. The ANNALS of the American Academy of Political and Social Science, 645(1), 60–87. https://doi.org/10.1177/0002716212456363.

Singer, E., & Kulka, R. A. (2002). Paying respondents for survey participation. Studies of welfare populations: Data collection and research issues, 4, 105–128.

Singer, E., & Ye, C. (2013). The use and effects of incentives in surveys. The ANNALS of the American Academy of Political and Social Science, 645(1), 112–141. https://doi.org/10.1177/0002716212458082.

Spreen, T. L., House, L. A., & Gao, Z. (2019). The impact of varying financial incentives on data quality in web panel surveys. Journal of Survey Statistics and Methodology, 8(5), 832–850. https://doi.org/10.1093/jssam/smz030.

Stanley, M., Roycroft, J., Amaya, A., Dever, J. A., & Srivastav, A. (2020). The effectiveness of incentives on completion rates, data quality, and nonresponse bias in a probability-based Internet panel survey. Field Methods, 32(2), 159–179. https://doi.org/10.1177/1525822X20901802.

Steinhauer, H. W., Décieux, J. P., Siegert, M., Ette, A., & Zinn, S. (2024). Establishing a probability sample in a crisis context: the example of Ukrainian refugees in Germany in 2022. AStA Wirtschafts- und Sozialstatistisches Archiv. https://doi.org/10.1007/s11943-024-00338-0.

Struminskaya, B., & Bosnjak, M. (2021). Panel conditioning: types, causes, and empirical evidence of what we know so far. Advances in Longitudinal Survey Methodology. https://doi.org/10.1002/9781119376965.ch12.

Sun, H., Newsome, J., McNulty, J., Levin, K., Langetieg, P., Schafer, B., & Guyton, J. (2020). What works, what doesn’t? Three studies designed to improve survey response. Field Methods. https://doi.org/10.1177/1525822X20915464.

Thibaut, J. W., & Kelley, H. H. (1959). The social psychology of groups. New York: Routledge.

Torregroza, S., Leschny, K., & Gilberg, R. (2024). Technical report IAB-BIB-BAMF-SOEP Ukrainian refugees in Germany (wave 2).

Tybout, A. M., & Yalch, R. F. (1980). The effect of experience: a matter of salience? Journal of Consumer Research, 6(4), 406–413. https://doi.org/10.1086/208783.

Warriner, K., Goyder, J., Gjertsen, H., Hohner, P., & McSpurren, K. (1996). Charities, no; lotteries, no; cash, yes: main effects and interactions in a Canadian incentives experiment. Public Opinion Quarterly, 60(4), 542–562. https://doi.org/10.1086/297772.

West, B. T., Zhang, S., Wagner, J., Gatward, R., Saw, H.-W., & Axinn, W. G. (2023). Methods for improving participation rates in national self-administered web/mail surveys: evidence from the United States. PLOS ONE, 18(8), e0289695. https://doi.org/10.1371/journal.pone.0289695.

Witte, N., Schaurer, I., Schröder, J., Décieux, J. P., & Ette, A. (2022). Enhancing participation in probability-based online panels: two incentive experiments and their effects on response and panel recruitment. Social Science Computer Review, 41(3). https://doi.org/10.1177/08944393211054939.

Yu, S., Alper, H. E., Nguyen, A.-M., Brackbill, R. M., Turner, L., Walker, D. J., et al. (2017). The effectiveness of a monetary incentive offer on survey response rates and response completeness in a longitudinal study. BMC Medical Research Methodology, 17(1), 77. https://doi.org/10.1186/s12874-017-0353-1.

Zagorsky, J. L., & Rhoton, P. (2008). The effects of promised monetary incentives on attrition in a long-term panel survey. Public Opinion Quarterly, 72(3), 502–513. https://doi.org/10.1093/poq/nfn025.

Zinn, S., & Wolbring, T. (2023). Recent methodological advances in panel data collection, analysis, and application. Survey Research Methods, 17(3), 219–222. https://doi.org/10.18148/srm/2023.v17i3.8317.