An Increase Matters, Not the Actual Value: Early Bird Incentives in Longitudinal Surveys

Survey Research Methods
ISSN 1864-3361
DOI: 10.18148/srm/2025.v19i1.8478
Pablo Cabrera-Álvarez pcabre@essex.ac.uk
Peter Lynn plynn@essex.ac.uk
Institute for Social and Economic Research, University of Essex
European Survey Research Association, 2025

This paper provides unique new evidence regarding the effect on response rates of increasing the value of an early bird incentive (EBI) sent to respondents completing an online questionnaire during the first five weeks of fieldwork. The experiment analysed in this paper, embedded in wave 12 of Understanding Society, a longitudinal mixed-mode survey, tested an increase of the EBI from £10 to £20. The experiment additionally covered a subsample that was being administered the web-first mixed-mode design for the first time, having previously been administered a CAPI-only design and, therefore, never having been offered the EBI. This allowed us to explore the mechanisms that drive the effect of the incentive increase on response rates. We also examined the effect of the increased incentive on fieldwork efforts and sample composition. We found that increasing the value of the incentive had a positive effect on response rates for panel members who had been offered the EBI previously, whereas the higher value had no significant effect on those who had not previously been offered an EBI. The effect was particularly pronounced for certain low-response propensity groups.

Supplementary Information

The online version of this article (https://doi.org/10.18148/srm/2025.v19i1.8478) contains supplementary material.

1 Introduction

Time-limited conditional incentives, also called early bird incentives (EBIs), are increasingly used in survey research to enhance response rates and prompt a faster response. Several studies have shown the ability of this type of incentive to increase response rates during the period they are active and shorten the time between the arrival of the survey invitation and the completion of the questionnaire (e.g., Calderwood et al. 2023; Coopersmith et al. 2016; Fomby, Sastry, and McGonagle 2017; Ward et al. 2014). This ability to prompt a faster survey response can help curb fieldwork efforts, such as the number of calls made to contact a sample unit or the number of reminders sent, which consequently can reduce survey costs (Lynn, Thomson, and Brook 1998). This reduction in costs can be particularly notable in the case of sequential mixed-mode surveys in which a self-completion mode is followed up with one or more interviewer-administered modes.

However, there are important knowledge gaps regarding the possible effect on response rates of an increase in the value of the incentive. This paper attempts to fill some of those gaps. First, we are unaware of previous research addressing a change in the value of an EBI over waves of a longitudinal survey: previous research into changing incentive values has been restricted to unconditional incentives provided simply to encourage participation (e.g., Laurie and Lynn 2009; Rodgers 2011). Second, no previous research has attempted to disentangle the effect due to the value of the incentive and the effect caused by sample members’ perception that the value of an extant incentive has been raised. The experimental design reported in this paper allowed us to separate the effect of a change in the value of an incentive from the effect of the absolute value of the incentive. Third, only a few previous experiments have considered the effect of increasing the value of an EBI on sample composition (Friedel et al. 2022).

This paper presents the results of an experiment embedded in wave 12 of Understanding Society, a mixed-mode longitudinal household study based on a probability sample of residents of the United Kingdom. The experiment trialled an increase in the value of an EBI from £10 to £20 to enhance web completion during the web-only phase of the fieldwork and thereby reduce fieldwork efforts in the interviewer-administered phase. The experiment also covered a random subsample that had been issued CAPI-only in the previous waves and transitioned to a web-first design at wave 12. This subsample allowed us to explore whether any effect of the increase in the incentive is due to the higher absolute value of the EBI or to the fact that the value of an extant incentive was increased, as we would expect any effect of the absolute value also to be observed amongst the sample being offered the EBI for the first time.

The effect of the increase in the incentive value is evaluated in terms of individual response rates after the five-week web-only fieldwork phase and at the end of the fieldwork, the complete household response rate after the web-only phase (a proxy for savings in fieldwork efforts and costs), and sample composition. We find evidence suggesting that the effect on response rates is driven by the increase in the value of the incentive rather than its absolute value. The results also show that fieldwork efforts during the interviewer-administered phase were lower in the group that received the increased EBI. No difference was observed in the sample profile due to the higher incentive.

2 Background

Survey incentives have proven to be an effective intervention to increase response rates in cross-sectional and longitudinal surveys (e.g., Laurie and Lynn 2009; Mercer et al. 2015; Singer and Ye 2013; Toepoel 2012). In recent years, conditional incentives that encourage response within a defined time period, also known as early bird incentives, have attracted the interest of researchers and practitioners as a potentially cost-effective intervention, i.e. one for which the subsequent saving in field costs may outweigh the cost of the EBI (see Lynn, Thomson and Brook (1998) for an early exposition of this argument). The main objective of an EBI is to encourage response during the period that the incentive is active and, as a result, minimise fieldwork efforts, such as the number of reminders sent or calls in interviewer-administered surveys, which could reduce survey costs.

Several experiments have shown the effectiveness of early bird monetary incentives in increasing response rates, at least during the time they are active (e.g., Calderwood et al. 2023; Carpenter and Burton 2018; Friedel et al. 2022; McGonagle 2020). Economic exchange theory can explain this positive effect on prompting a faster response (Biner and Kidd 1994): sample members weigh the costs and benefits when deciding whether to participate in the survey, and the EBI increases the weight of the benefits. However, early bird incentives also involve a time-limited offer, as they are active for a specific time period. As discussed in Calderwood et al. (2023), regret-avoidance decision-making theories can help explain the effectiveness of time-limited incentives: sample members respond more promptly to the questionnaire in order to avoid regretting missing out on the extra incentive offered (Zeelenberg and Pieters 2007).

The effectiveness of early bird incentives has been explored in several experiments covering different survey modes and designs. Some experiments have shown that EBIs can improve response rates in telephone surveys (Fomby et al. 2017; McGonagle, Sastry, and Freedman 2022) and postal surveys (LeClere and Amaya 2012), and reduce interviewing efforts in face-to-face surveys (Brown and Calderwood 2014; Kochanek, Krishnamurty, and Michael 2010). In web surveys, some experiments have shown that EBIs can effectively raise response rates. Some of these studies reported an increase in response rates for the group receiving the EBI limited to the period in which the incentive was active, while response rates at the conclusion of fieldwork tended to be similar for those receiving the EBI and the control group. In the National Immunization Survey in the United States, which surveys a sample of households where underage children reside, an experiment tested the effect of adding an EBI in the form of a $10 gift card offered to those who completed the web survey in the first ten days of fieldwork, before the telephone stage began (Ward et al. 2014). The results showed that those offered the EBI responded earlier to the survey than those receiving a $1 unconditional incentive or no incentive. In a study of US high school principals, an experiment showed that a $50 EBI combined with a $50 conditional incentive increased the response rate during the time-limited period compared to a group offered just the $50 conditional incentive (Coopersmith et al. 2016). Nevertheless, at the end of the fieldwork, there was no difference in the response rates of the two groups.

Some other experiments have tested the effect of EBIs in longitudinal studies with sequential mixed-mode designs, in which an interviewer-administered mode follows a web survey. In this type of design, the main objective of the EBI is to increase the response rate during the web-only phase in order to reduce the interviewers' workload at the following stage. Next Steps, a cohort study that follows a sample of people born in 1989–90 and recruited in 2004 from secondary schools in England, tested the ability of an EBI to increase response during the web-only phase of the fieldwork (Calderwood et al. 2023). The web-only phase was followed by CATI for non-respondents, and subsequently by CAPI. The £20 EBI increased the response rate at the end of the web phase compared to the control group, which was offered a £10 conditional incentive. However, this effect did not translate into differences in response rates at the end of the fieldwork or in the sample composition between the control and treatment groups. In another study, at wave 8 of Understanding Society, when a portion of the sample was moved from a CAPI-only design to a web-first sequential mixed-mode design, the research team tested the effect of offering a £10 EBI in addition to the usual conditional or unconditional incentives to foster an earlier response to the questionnaire (Carpenter and Burton 2018). This study employed a quasi-experimental design based on the allocation of random subsample batches over months to organise the fieldwork. The first month of fieldwork served as a reference point, with the EBI being offered for the first time to the second-month sample. The web response rate in the second monthly sample was twice as high (36%) as in the first month (19%), when the EBI was not offered. In a four-wave survey to evaluate the YouthBuild program in the US using a web and telephone sequential mixed-mode design, the research team tested a $40 early bird incentive for completing the survey during the web phase of the fieldwork against a $25 conditional incentive. The results showed a higher web response rate for the group receiving the early bird incentive, although there was no difference in the final response rate after the telephone interviewing (Goble, Stein, and Schwartz 2014).

Other experiments have shown that EBIs can have a positive effect on response rates that extends beyond the end of the time-limited period. For instance, an experiment embedded in a survey of participants in a training programme for unemployed citizens in the United States showed the positive effect of offering a $50 EBI versus no incentive on the final response rate of the web survey, as well as speeding up response times (De Santis et al. 2016). Likewise, in the recruitment of a booster sample for the German Internet Panel (GIP), the research team tested the effect on response rates, sample composition and fieldwork costs of adding a €20 or €50 EBI to the €5 prepaid incentive offered to the control group (Friedel et al. 2022). The results showed that the €20 and €50 EBIs had a similar positive effect on the final response rate, although the incentives did not improve sample representativeness. Also, at wave 10 of the DAB panel study, a longitudinal survey that follows a sample of young adults in Switzerland, the researchers tested adding a CHF 10 or CHF 20 early bird incentive to a CHF 10 unconditional incentive. The experiment was restricted to those who took more than seven days to respond ("late respondents") and non-respondents in the previous wave. The results pointed to a positive effect of the EBI during the period in which it was active and, among the previous wave late respondents, at the end of the fieldwork (Möser, Glauser, and Becker 2023). The push-to-web survey "Food and You 2", in which a sample of addresses received an invitation letter with the login details for the web questionnaire, also tested an early bird incentive to boost the web response rate (Smith et al. 2021). The invitation letter was followed by three reminders, the second of which, mailed 30 days after the initial invitation, contained a paper questionnaire. In the experiment, the control group was offered a £10 conditional incentive; the first treatment group was offered a £15 early bird incentive if they completed the web survey within eight days of the initial invitation being mailed and £10 afterwards, while for the second treatment group the £15 early bird incentive was reduced to £5 after eight days. The results showed the effectiveness of the EBI in increasing the web response rate during the first eight days of fieldwork, and for the group offered the £15 EBI followed by £10, the final response rate was higher than that of the control group. However, the final response rate of the group for which the incentive was reduced to £5 after eight days was lower than that of the control group.

Leverage-salience theory states that survey features, such as incentives and their characteristics, appeal to sample members differently (Groves, Singer, and Corning 2000). A practical implication of this theory is that some sample subgroups may be more motivated to respond after being offered a higher early bird incentive. The evidence about the effect of EBIs on differential response rates is scarce. The EBI experiments in Next Steps (Calderwood et al. 2023), Food and You 2 (Smith et al. 2021) and the recruitment of a boost sample for the GIP (Friedel et al. 2022) described above looked into the effect of the EBI on sample composition and did not find significant differences. However, other incentive experiments have found that some subgroups are more likely than average to react positively to incentives (e.g., Groves et al. 2000; Laurie 2007; Mack et al. 1998; Martin, Abreu, and Winters 2001), especially subgroups less likely to participate, such as males, younger sample members, those from ethnic minorities or those on lower incomes.

The evidence listed in the previous paragraphs supports using EBIs to increase response rates in different contexts. In longitudinal studies, the experiments embedded in Next Steps or Understanding Society showed that the EBI helped boost response rates in web-first mixed-mode designs. However, to the authors’ knowledge, there is a lack of evidence about whether increasing the value of the EBI would help boost response rates further. Two hypotheses can explain the effect of an increase in the value of the EBI on response rates. The first hypothesis relies on social exchange theory and focuses on the logic of reciprocity (Dillman, Smyth, and Christian 2014). Panel members may perceive the change in the value of the incentive and interpret it as a token of appreciation, which could activate a reciprocity mechanism that would ultimately be responsible for the increase in response rates. The second hypothesis refers to economic exchange theory and relies on the absolute higher value of the incentive, which would alter the cost-benefit calculation since, for a larger number of sample members, the new incentive value would compensate for the costs of participating (Biner and Kidd 1994).

The most closely related evidence comes from studies that assessed changes in the value of unconditional or conditional incentives in longitudinal surveys. The Health and Retirement Study (HRS) tested an increase in the value of the unconditional incentive from $20 to $30 or $50. The response rate was higher for the group receiving the $50 incentive, and this difference persisted over the subsequent four waves (Rodgers 2011). In the British Household Panel Survey, an experiment tested the effect of raising the adult incentive from £7 to £10 and the child incentive from £4 to £5. These relatively small increases resulted in higher response rates, especially among previous wave non-respondents (Laurie 2007). Likewise, an experiment in the Innovation Panel of Understanding Society tested different incentives (types and values) to increase the response rate of a subsample transitioning from a CAPI-only to a web-first sequential mixed-mode design (Gaia 2017). The experimental design did not allow the researchers to infer that the change in the value was the sole cause of the increase in response rates, but panel members receiving the higher incentives had a higher response rate, similar to that of the CAPI-only subsample. The Panel Study of Income Dynamics (PSID) increased the value of the conditional incentive for the remaining non-respondents in the final weeks of data collection from $70 to $150. Although this was not an experimental design, the results suggest that the higher incentive had a positive effect on the response rate in the wave in which it was implemented, and the number of attempts needed to interview the late respondents in the following wave decreased (McGonagle 2020).

3Research questions

The experiment was embedded in a sequential mixed-mode survey, where a telephone interview attempt for the non-respondents followed a web-only fieldwork phase. In this context, increasing the value of the EBI sought to boost the response rate during the web-only phase of the fieldwork as a route to reducing resources allocated to the interviewer-administered stage of the fieldwork. The first research question addresses the effect of the incentive value increase on the response rates after the five-week web-only fieldwork and at the end of the CATI phase.

RQ 1

Does an increased EBI, for sample members previously offered a lower-value EBI, affect (a) response rates at the end of the five-week web-only phase of the fieldwork, and (b) final response rates at the end of fieldwork?

The experimental design allows us to disentangle the two hypotheses that may explain the effect of increasing the value of the incentive in a panel study: the effect of offering a higher absolute value and the impact of perceiving the change in the value (Biner and Kidd 1994; Dillman et al. 2014).

RQ 2

Are any effects of the increased EBI on response rates driven by the absolute value of the EBI or by the fact that the higher value represents an increase?

Leverage-salience theory states that incentives and the incentive characteristics can be more salient for some sample members than others (Groves et al. 2000). The third research question analyses variation in response rates across sample subgroups to identify whether some panel members were more strongly affected by the increase in the value of the EBI than others.

RQ 3

Does any effect of the higher EBI incentive vary across sample subgroups?

The main objective of early bird incentives is to reduce survey costs by prompting an early response (e.g., Calderwood et al. 2023; Carpenter and Burton 2018). The second outcome of interest is the full household response rate at the end of the web-only period, a proxy for fieldwork efforts during the interviewer-administered phase. In a household survey such as Understanding Society, where all adults (aged 16 and older) in the household are invited to respond to an individual questionnaire, a substantial reduction in fieldwork efforts occurs when all adults in a household complete their interviews online, so that the household is not issued to CATI. Thus, we used the complete household response rate, which refers to households where all adult interviews were completed, as an indicator of the impact of the higher incentive on fieldwork efforts.

RQ 4

Does the higher EBI increase the complete household response rate at the end of the web-only phase?

Finally, differential response propensities across groups due to the higher EBI could affect sample composition. Therefore, the fifth research question assesses the impact of the increased incentive on sample composition.

RQ 5

Does the increase in the EBI affect sample composition?

4Data and methods

4.1 The survey

Understanding Society is a national probability-based survey that started in 2009. At wave 2, it incorporated the sample of the former British Household Panel Survey (BHPS). The target population of Understanding Society comprises individuals of all ages residing in the United Kingdom. Adult panel members aged 16 and over are invited to take the survey annually alongside other adult household members.

The study has multiple sample components. The main component is the General Population Sample (GPS), which comprises two elements: a clustered and stratified probability sample of more than 24,000 households selected in Great Britain in 2009–10 and a simple random sample of approximately 2000 households selected in Northern Ireland in 2009 (Lynn 2009). The BHPS, which started in 1991, consisted of a stratified and clustered probability sample of more than 5000 households; boost samples for Wales and Scotland were added in 1999, and a simple random sample of households from Northern Ireland in 2001 (Freed Taylor et al. 2018). In addition, Understanding Society includes two boost samples: the Ethnic Minority Boost (EMB) sample, selected in 2009–10 from areas with a high concentration of persons from an ethnic minority background (Berthoud et al. 2009), and the Immigrant and Ethnic Minority Boost (IEMB), selected at wave 6 (2014–15) (Lynn et al. 2018). At wave 12, Verian (formerly Kantar) and NatCen Social Research, the research agencies responsible for the fieldwork, issued 22,400 households. The wave 12 cross-sectional response rate (AAPOR RR6) was 69.5%. Online Appendixes A and B provide more details about the Understanding Society design and cumulative response rates. The data from Understanding Society wave 12 used in this analysis are publicly available through the UK Data Service (University of Essex 2023).

Understanding Society has evolved from a face-to-face design, with a few non-respondent cases issued to the phone, to a web-first sequential mixed-mode design. Up to wave 6, almost all households were issued to CAPI, with just a few being contacted by phone during a mop-up period at the end of the fieldwork. The web mode was offered for the first time at wave 7, but only to the wave 6 non-respondents. From wave 8, an increasing proportion of panel members have been invited to complete the survey online, with those who do not respond online being issued to CAPI. From waves 8 to 11, before the COVID-19 crisis, three fieldwork protocols coexisted in the survey:

  1. a random subsample of households (20% of the total) remained in a CAPI-only design;
  2. most of the remaining households (70% of the total) had been moved to a web-first protocol (invitation to complete online, with CAPI follow-up);
  3. households outside the CAPI-only subsample but with a low predicted probability of responding online (10% of the total) were allocated to a "CAPI-first" design (Lynn 2017).

Wave 12 fieldwork was launched in January 2020, and due to the COVID-19 pandemic and the suspension of all face-to-face fieldwork in the United Kingdom from mid-March 2020, all households were moved to a sequential web and CATI mixed-mode design (Burton, Lynn, and Benzeval 2020). This protocol consisted of a 5-week web-only period, after which interviewers started contacting non-respondents on the phone. The web survey remained open throughout the whole fieldwork period.

The incentive strategy in Understanding Society combines unconditional and conditional incentives sent or offered to panel members based on previous participation. In addition, an EBI is offered to those completing the web questionnaire within the first five weeks of fieldwork. Table 1 summarises the incentive strategy in place at the start of wave 12 of Understanding Society. Individuals who had responded at the previous wave received a £10 unconditional incentive, while those in responding households who had not completed the individual questionnaire or were new household entrants received the same amount upon completing the questionnaire. Panel members in households that had not participated in the previous wave received a £20 incentive conditional upon completing the individual questionnaire. The incentives were sent in the form of gift cards valid at some of the most popular retailers in the country. The unconditional incentive gift card was sent with the invitation letter, while the conditional and early bird incentives were mailed after completion of the questionnaire.

Table 1 Incentive strategy at wave 12 of Understanding Society

| | Responding household (previous wave): responding adult and rising 16 | Responding household (previous wave): non-responding adult and new entrants | Non-responding household (previous wave): non-responding adult, rising 16 and new entrants |
| Unconditional incentive | £10 | None | None |
| Incentive conditional on completing individual questionnaire | None | £10 | £20 |
| Early-bird incentive conditional on completing web questionnaire during first 5 weeks of fieldwork (web-first protocol only) | £10 | £10 | £10 |

4.2 Experimental design

In order to manage the fieldwork, the sample of Understanding Society is divided randomly into 24 monthly samples. The EBI experiment was fielded in six monthly samples of wave 12, covering April to September 2020, coinciding with the onset of the COVID-19 crisis when Understanding Society adopted a web and telephone sequential mixed-mode design. Consequently, the previously CAPI-only random subsample was shifted to the web-first mixed-mode protocol and received the EBI for the first time, while the previously web-first subsample had already been offered an EBI at previous waves. In both sub-samples, households were randomly assigned to be offered either a £10 or a £20 EBI.

The experimental design is presented in Table 2. Panel members from households randomly allocated to the control group were offered the usual £10 EBI, whilst those in the higher EBI group were offered a £20 EBI. Survey respondents had to complete the web questionnaire before the five-week deadline to receive the gift card in their mailbox. Both experimental groups received, in addition to the EBI, the unconditional or conditional incentive based on their previous wave participation (see Table 1). The invitation letter and emails that panel members received at the beginning of the fieldwork included a reference to the values and deadline of the EBI (Fig. 1).
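To make the allocation step concrete, the following Python sketch randomly assigns households to the £10 or £20 EBI arm separately within each prior-protocol subsample. It is a minimal illustration under stated assumptions, not the study's allocation code: the data frame, column names, 50/50 split and seed are all hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(12)  # fixed seed so the illustrative split is reproducible

# Hypothetical household-level frame: one row per issued household,
# sized to match the totals in Table 2 (2625 web-first + 590 CAPI-only households)
households = pd.DataFrame({
    "hh_id": range(1, 3216),
    "prev_protocol": ["web-first"] * 2625 + ["capi-only"] * 590,
})

# Randomise households between the two EBI arms within each prior-protocol subsample
households["ebi_arm"] = households.groupby("prev_protocol")["hh_id"].transform(
    lambda g: np.where(rng.random(len(g)) < 0.5, "control_10", "higher_ebi_20")
)

print(pd.crosstab(households["prev_protocol"], households["ebi_arm"]))
```

All adults in a household share the household's arm, mirroring the household-level randomisation described above.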

Table 2 Summary of the experimental allocation by previous wave fieldwork protocol

| Fieldwork protocol at previous wave (Wave 11) | | Control (£10) | Higher EBI (£20) |
| Web-first | n households | 1326 | 1299 |
| | n adults | 2571 | 2540 |
| CAPI-only | n households | 302 | 288 |
| | n adults | 555 | 518 |

Fig. 1 Text excerpts from the invitation letter and email referring to the value of the early bird incentive

4.3 Methods and variables

Research questions RQ1, RQ2 and RQ3 address different aspects related to individual response rates at the end of the web-only period and at the end of the fieldwork after the CATI phase. The web response rate at the end of the web-only fieldwork was calculated based on the AAPOR RR6 (AAPOR 2023):

$$RR_{web} = \frac{I_{webo}}{I_{web} + P_{web} + I_{CATI} + P_{CATI} + Pr + IR + HR + NC + O + U} \qquad (1)$$

where Iwebo represents the web interviews completed during the web-only phase of the fieldwork, Iweb represents the web interviews and Pweb the partials completed in the web mode during the whole fieldwork, ICATI and PCATI correspond to the interviews and partials from the CATI mode, Pr denotes the proxy interviews, where another household member responded to a shorter version of the questionnaire on behalf of a non-respondent panel member, IR is individual refusals, HR is household refusals, NC is non-contacted households, O is other non-interviews, and U is untraced households. Partials refer to individual questionnaires completed up to the household finance module. The final response rate adds up all the web and CATI interviews and partials:

$$RR_{final} = \frac{I_{web} + P_{web} + I_{CATI} + P_{CATI}}{I_{web} + P_{web} + I_{CATI} + P_{CATI} + Pr + IR + HR + NC + O + U} \qquad (2)$$

Most research questions involve testing the difference in response rates between the control and treatment groups. For this, we used predicted response probabilities from logistic regression models with the response indicator as the dependent variable and the experimental allocation as the independent variable. For analyses testing differences across sample subgroups, the moderator variable and its interaction with the experimental allocation indicator were added to the models. The predicted probabilities from these models were used to test the differences (Mize 2019).
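As an illustration of this estimation strategy, the Python sketch below fits a logistic regression of early web response on the experimental allocation and contrasts the predicted response probabilities for the two arms, together with a one-sided test of the treatment coefficient. The file and variable names (web_resp, higher_ebi) are hypothetical, and the sketch omits the survey weights and design adjustments applied in the actual analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import norm

# Hypothetical respondent-level data: web_resp (0/1), higher_ebi (0/1)
df = pd.read_csv("wave12_ebi_sample.csv")  # placeholder file name

# Logistic regression of early web response on the experimental arm
fit = smf.logit("web_resp ~ higher_ebi", data=df).fit(disp=0)

# Average predicted response probabilities under each arm and their difference
p_control = fit.predict(df.assign(higher_ebi=0)).mean()
p_treated = fit.predict(df.assign(higher_ebi=1)).mean()
print(f"control={p_control:.3f}  higher EBI={p_treated:.3f}  diff={p_treated - p_control:.3f}")

# One-sided p-value (H1: the higher EBI increases the response propensity)
z = fit.tvalues["higher_ebi"]
print(f"one-sided p = {1 - norm.cdf(z):.4f}")
```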

The first research question (RQ1) compares the response rate of the experimental groups at the end of the web-only phase of the fieldwork and the end of the fieldwork. This research question focuses on the effect of increasing the value of the incentive and, therefore, the analysis was restricted to the subsample that had been web-first in the previous wave1. The second research question (RQ2) seeks to understand how the increase in the value of the EBI affects the response rate. To address this question, we compare the effect due to the increase in the value of the EBI in the (previously) web-first subsample to the treatment effect in the (previously) CAPI-only subsample, which is an estimated impact of offering a higher absolute EBI value. Since the (previously) web-first group excludes a small proportion of households with particularly low predicted web response propensities, we removed the equivalent households from the CAPI-only subsample. To identify the low web response propensity households in the CAPI-only subsample, we replicated the procedure used to allocate households to the web-first protocol based on the predicted web response propensities of the households and their members (see Lynn 2017). Online Appendix C compares the profiles of the web-first and CAPI-only subsamples after removing the low web propensity cases.
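One way to operationalise this comparison is a single logistic model with a treatment-by-subsample interaction, from which the two treatment effects and their difference (the "second difference") can be derived as contrasts of predicted probabilities. The sketch below is a hedged illustration: variable names (web_resp, higher_ebi, web_first) are assumed, and weighting is again ignored.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wave12_ebi_sample.csv")  # placeholder; web_first = 1 if previously web-first

# Allow the EBI effect to differ between the (previously) web-first
# and (previously) CAPI-only subsamples
fit = smf.logit("web_resp ~ higher_ebi * web_first", data=df).fit(disp=0)

def avg_effect(web_first: int) -> float:
    """Average predicted treatment effect within one prior-protocol subsample."""
    sub = df[df["web_first"] == web_first]
    return (fit.predict(sub.assign(higher_ebi=1))
            - fit.predict(sub.assign(higher_ebi=0))).mean()

second_diff = avg_effect(1) - avg_effect(0)
print(f"web-first effect={avg_effect(1):.3f}  CAPI-only effect={avg_effect(0):.3f}  "
      f"second difference={second_diff:.3f}")
```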

The third research question (RQ3) explores the effect of increasing the value of the EBI across the groups defined by a set of moderators. Two sets of logistic regressions were fitted to compute the heterogeneous effects in the previous wave web-first subsample: simple models that included only the experimental allocation indicator, a moderator, and the interaction term between the two, and multivariate models that included all moderators and their interaction terms. The reason for producing both sets of models (simple and multivariate) is the level of missingness in some of the moderators and the effect that excluding part of the sample could have on the estimates of some heterogeneous effects. The estimates from the simple models are presented in the results section, and the effect estimates from the multivariate models are presented in Online Appendix D.

In the analysis of RQ3, we used a set of variables that might moderate the relationship between the incentive value and the propensity to respond to the survey. These moderators fall into three groups: demographic characteristics, internet use measures, and variables about past participation in the study. We tested the effect of the incentive across the groups defined by gender, age, ethnic background, personal net income, education and Internet use. Finally, we also included indicators of previous wave household and individual response, as well as a variable identifying regular respondents to the survey, i.e. those responding in at least two of every three waves.
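A sketch of the "simple" moderation models is given below: for each moderator, a logistic regression with the treatment indicator, the moderator and their interaction is fitted, and the treatment effect is computed within each moderator category as a contrast of predicted probabilities. The moderator variable names are assumptions, and the standard errors of the contrasts (obtained in the paper via the approach in Mize 2019) are omitted for brevity.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wave12_ebi_sample.csv")  # placeholder file name

moderators = ["gender", "age_group", "education", "ethnicity", "income_quartile",
              "daily_internet", "response_pattern", "last_wave_response"]  # assumed names

for mod in moderators:
    # Simple model: treatment, one moderator and their interaction
    fit = smf.logit(f"web_resp ~ higher_ebi * C({mod})", data=df).fit(disp=0)
    for level in df[mod].dropna().unique():
        sub = df[df[mod] == level]
        effect = (fit.predict(sub.assign(higher_ebi=1))
                  - fit.predict(sub.assign(higher_ebi=0))).mean()
        print(f"{mod}={level}: higher-EBI effect = {effect:+.3f}")
```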

The fourth research question (RQ4) examines the effect of increasing the value of the early-bird incentive on the full household response rate at the end of the web-only phase. This analysis was carried out on the ex-web-first subsample. We calculated the full household web response rate (FHWRR), which corresponds to the proportion of households where all eligible adults completed their interviews within the web-only phase in addition to the household questionnaire. The full household web response rate is based on the AAPOR RR5 (AAPOR 2023), where the partials are not considered as respondents:

$$FHWRR = \frac{I_{web}}{I_{web} + P_{web} + HR + NC + O + U} \qquad (3)$$

where Iweb represents households where the household and all individual questionnaires were completed during the web-only phase, while Pweb refers to households where at least the household questionnaire and one individual questionnaire were completed online, but not all individual questionnaires. Note that we omit the CATI mode from this calculation as the analysis relates to the web-only phase. Finally, the fifth research question (RQ5) uses the sample of wave 12 respondents from the (previously) web-first group. The sample profile was compared between the experimental groups for a set of target variables to evaluate the impact of increasing the value of the incentive on the sample composition using a chi-squared test adjusted for the complex sample design of the study. This analysis included a mix of demographic, attitudinal and health-related variables.
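As an illustration of how the complete household web response rate defined above can be derived from adult-level records, the snippet below flags households in which every eligible adult completed the web interview during the web-only phase and compares the proportion of such households across arms. The column names are hypothetical, and the unweighted proportion stands in for the weighted, design-adjusted rate reported in the paper (which also requires the household questionnaire to be complete).

```python
import pandas as pd

# Hypothetical adult-level records: hh_id, ebi_arm, web_only_complete (0/1)
adults = pd.read_csv("wave12_adults.csv")  # placeholder file name

# A household counts as complete if every eligible adult finished online
# within the five-week web-only phase (min of the 0/1 flags equals 1)
hh = (adults.groupby(["hh_id", "ebi_arm"])["web_only_complete"]
      .agg(all_done="min", n_adults="size")
      .reset_index())

fhwrr = hh.groupby("ebi_arm")["all_done"].mean()
print(fhwrr)  # share of issued households fully complete online, by arm
```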

The analyses described above were weighted to account for the unequal selection probabilities and the allocation of the experiment to six monthly samples of wave 12. The statistical tests of the differences relevant to RQ1, RQ3, and RQ4 were one-sided since we only expected a positive (or null) effect when offering a higher EBI. The significance level was set to 5%. The analysis was carried out using Stata 17 (StataCorp 2021). A description of the variables used in the analysis can be found in Online Appendix B.

4.4 Results

Table 3 presents the individual response rates by experimental condition for the web-first subsample and the equivalent group from the CAPI-only subsample. The left-hand panel of Table 3 shows that the increase in the value of the EBI from £10 to £20 had a positive effect on response rates. The response rate after the five-week web-only period (RQ1.a) was 4.9 p.p. higher for the group receiving the higher incentive, while at the end of the fieldwork, after the CATI interviewing (RQ1.b), the difference eroded to 2.1 p.p. and was not significant.

Table 3 Individual response rates and standard errors after the web-only period and at the end of the fieldwork for the full sample by experimental group and previous wave fieldwork protocol

| | Previous wave web-first: Control (£10) | HEBI¹ (£20) | Dif | Previous wave CAPI-only: Control (£10) | HEBI¹ (£20) | Dif | Second dif² |
| | % (S.E.) | % (S.E.) | % (S.E.) | % (S.E.) | % (S.E.) | % (S.E.) | % (S.E.) |
| Web response rate | 61 (1.4) | 66 (1.4) | 5** (2.0) | 60 (3.1) | 61 (3.4) | 1 (4.4) | 4 (4.9) |
| Final response rate | 77 (1.3) | 79 (1.2) | 2 (1.8) | 76 (2.8) | 79 (2.7) | 3 (3.9) | −1 (4.3) |
| n | 2571 | 2540 | | 555 | 518 | | |

Web response rate refers to the panel members responding online in the first 5 weeks of the fieldwork, excluding those who responded online after the beginning of the CATI phase of the fieldwork
¹ Higher Early Bird Incentive
² Second difference between the estimated effect of the higher early bird incentive in the previous wave web-first group and the previous wave CAPI-only group
* p < 0.05, ** p < 0.01, *** p < 0.001

The second research question (RQ2) sought to shed some light on the reason underlying the effect of the increase in the EBI value on the response rate. The last column in Table 3 (Second dif) shows that the incentive effect at the end of the web-only period was more prominent among the panel members allocated to the web-first protocol in the previous waves compared to those transitioning from a CAPI design, although the difference of 4.2 p.p. between the two treatment effects was not significant. At the end of the fieldwork, after the CATI phase, the increase in response rates in the CAPI-only subsample was 1.0 p.p. higher than in the web-first subgroup, although this difference was not significant.
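Using the rounded figures in Table 3, the second difference is simply the treatment effect in the (previously) web-first subsample minus the treatment effect in the (previously) CAPI-only subsample:

$$\text{Web: } (66 - 61) - (61 - 60) = 5 - 1 = 4 \text{ p.p.}, \qquad \text{Final: } (79 - 77) - (79 - 76) = 2 - 3 = -1 \text{ p.p.}$$

The unrounded estimates reported in the text are 4.2 p.p. and −1.0 p.p., respectively.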

Table 4 presents the heterogeneous effects of the increase in the value of the EBI for a set of sample subgroups (RQ3). At the end of the web-only period, the increase in the value of the incentive had a more pronounced effect among males, younger panel members (16–44 years old), those without a university degree, ethnic minorities, and those on lower incomes. Regarding technology use, panel members who use the Internet on a daily basis were more likely to respond after receiving the higher EBI. Regular respondents, those who took part in at least two-thirds of the waves they were invited to, also reacted more strongly to the higher EBI, as did previous wave respondents and previous wave non-respondents from households where other members responded to the survey. At the end of the fieldwork, those with an ethnic minority background and those who had not participated at the previous wave despite living in a responding household showed a significantly higher response propensity after receiving the increased EBI.

Table 4 Differences in marginal effects and standard errors of the higher early bird incentive on early web response and final response by sample subgroups

| | Web response rate: Diff. (S.E.) | Final response rate: Diff. (S.E.) | n |
| Gender | | | |
| Male | 7.2** (2.5) | 3.6 (2.3) | 2390 |
| Female | 2.8 (2.3) | 0.8 (1.9) | 2719 |
| Age groups | | | |
| 16–29 | 10.4** (4.2) | 2.9 (4.1) | 1040 |
| 30–44 | 8.9** (3.8) | 5.0 (3.6) | 1037 |
| 45–64 | 1.2 (3.0) | −0.1 (2.4) | 1776 |
| 65+ | 3.6 (3.3) | 2.4 (2.6) | 1258 |
| Education | | | |
| No degree | 5.1* (2.4) | 2.0 (2.0) | 3578 |
| University degree | 5.1 (3.1) | 2.5 (2.4) | 1320 |
| Ethnic background | | | |
| Ethnic minority | 11.6* (5.6) | 11.9** (5.0) | 847 |
| White British | 3.8* (2.1) | 0.6 (1.8) | 4164 |
| Individual net income (quartiles) | | | |
| Q1 (Bottom) | 10.2** (3.6) | 3.5 (3.1) | 1263 |
| Q2 | 7.2* (3.6) | 1.4 (2.9) | 1255 |
| Q3 | −1.6 (3.4) | −0.7 (2.9) | 1235 |
| Q4 (Top) | 3.6 (3.4) | 3.7 (2.7) | 1178 |
| Uses Internet daily | | | |
| Soft-users and non-users | 2.3 (4.8) | 2.7 (4.5) | 713 |
| Daily users | 5.9** (2.1) | 2.1 (1.7) | 4121 |
| Response pattern | | | |
| Irregular respondent | 3.0 (3.4) | 1.2 (3.8) | 885 |
| Regular respondent | 4.8** (2.0) | 1.7 (1.5) | 4226 |
| Last wave response | | | |
| Respondent | 4.9** (1.9) | 2.0 (1.4) | 4098 |
| Non-respondent (responding household) | 9.5** (4.1) | 8.7* (4.8) | 522 |
| Non-respondent (non-responding household) | 0.1 (5.1) | −3.6 (6.3) | 491 |

Web response rate refers to the panel members responding online in the first 5 weeks of the fieldwork, excluding those who responded online after the beginning of the CATI phase of the fieldwork. The estimates in each cell correspond to the difference in the predicted probabilities of response from a set of logistic regression models, each including a moderator, the experimental allocation variable and the interaction term. Estimates and standard errors are expressed as percentage points. Analysis is restricted to the (previously) web-first subsample
* p < 0.05, ** p < 0.01, *** p < 0.001

The fourth research question (RQ4) addresses the possibility that the positive effect of the increase in the value of the EBI on response during the web-only fieldwork could translate into a reduction in the cases issued to the interviewers in the subsequent phase of the fieldwork. Table 5 presents complete household response rates after the web-only phase for the control and higher EBI groups. The higher EBI increased the complete household response rate by 4.2 p.p., from 54.6% to 59%.

Table 5 Complete household response rate at the end of the web-only phase by experimental group

| | Control (£10) | Higher EBI (£20) | Difference |
| | % (S.E.) | % (S.E.) | % (S.E.) |
| Full household response rate | 55 (1.5) | 59 (1.6) | 4* (2.2) |
| n | 1328 | 1302 | |

The base for the calculations is households issued to wave 12 fieldwork (quarters 2 and 3); weighted estimates. These estimates are predicted from a logistic regression model that included the last wave fieldwork protocol and the interaction term with the experimental allocation. Analysis is restricted to the ex-web-first subsample
* p < 0.05, ** p < 0.01, *** p < 0.001

Regarding sample composition (RQ5), there is no evidence that the increase in the EBI altered the sample composition for the variables included in the analysis. The table can be found in the Online Appendix E.

5 Discussion

The first research question (RQ1) addressed the effect of increasing the value of an EBI from £10 to £20 in a longitudinal study. The results show that increasing the value of the EBI boosted response rates, at least during the time-limited period when the EBI was active. The 4.9 p.p. increase in the response rate observed at the end of the web-only period translated to a 2.1 p.p. increase at the end of the fieldwork, suggesting that the increased EBI might also have a positive effect on the final survey response rate, although this difference was not significant. The observed positive impact of increasing the value of the EBI is in line with the results obtained in other studies after raising the value of unconditional incentives (Laurie 2007; Rodgers 2011). In those experiments, higher response rates were achieved by a relatively small increase in the value of the incentive, from £7 to £10 (Laurie 2007), or by a more substantial increase, from $20 to $50 (Rodgers 2011). In this case, where the incentive value was raised by £10 (or 100%), it is unclear whether a somewhat smaller increase could have achieved a similar boost in response rates. Another notable point, which aligns with the findings of previous studies (Calderwood et al. 2023), is that the withdrawal of the increased EBI, which is time-limited and offered exclusively during the first five weeks of fieldwork, did not harm the final response rate.

The second research question (RQ2) focused on whether the increase in response rates is driven by the higher absolute value of the incentive or by panel members' perception that the incentive value has been raised. From a survey practice perspective, it would be useful to know how effective it is to progressively increase the value of the incentive versus offering a higher incentive from an earlier wave of the study. The results show that while increasing the EBI to £20 boosted response rates at the end of the web-only phase of the fieldwork by 4.9 p.p., offering a £20 instead of a £10 EBI did not affect response rates for the subsample that was exposed to the incentive for the first time at wave 12. It therefore seems to be the perception that the EBI value has increased, rather than the higher absolute value of the EBI per se, that brought about the improved response. This finding suggests that offering subsequent increases in the incentives might be a more cost-effective way to maintain response rates than providing a higher incentive upfront. However, we only experimented with a single increase of £10 (or 100%); a larger absolute or relative increase in the value might have a different effect on response propensities (Rodgers 2011). It is therefore unclear how the magnitude of the incentive value increase affects the trade-off between the part of the effect due to the absolute value and the part due to the perceived change in the value. More experimental research is needed to assess this hypothesis.

The third research question (RQ3) looked at the change in response rates across groups formed by a set of moderators. Although previous experiments on the effect of EBIs had not found a significant impact on sample composition (Calderwood et al. 2023; Friedel et al. 2022; Smith et al. 2021), the results of this experiment showed that the increase in the value of the EBI had a more prominent effect on the web response rates of some subgroups that are less likely to participate and more prone to drop out of the study. For instance, younger panel members (16–44), those on lower incomes, those with an ethnic minority background, and previous wave non-respondents from responding households exhibited higher response rates at the end of the web-only period; in the case of ethnic minority panel members and previous wave non-respondents from responding households, this effect was substantial and endured until the end of the fieldwork. However, the higher response rates of these subgroups due to the higher EBI were not enough to alter the composition of the sample of respondents. In fact, regarding the sample of respondents (RQ5), we did not find any significant differences across the eleven variables used in the analysis.

The fourth research question (RQ4) explored whether an increase in response rates due to the higher EBI could beneficially impact fieldwork efforts. There is a mechanism that connects an increase in individual response rates due to a higher EBI with a reduction in fieldwork efforts. In the context of a sequential mixed-mode survey, higher participation rates at the first, self-completion, phase of fieldwork will result in less fieldwork effort being necessary at the second, interviewer-administered phase. The results show that the complete household response rate increased after raising the value of the incentive, meaning that the telephone interviewers indeed had fewer households to contact, suggesting that survey costs could have been reduced. This finding is consistent with, but expands upon, previous research, which showed that using EBIs could reduce fieldwork efforts or positively impact survey costs (Calderwood et al. 2023; McGonagle et al. 2022), by suggesting that increasing the value of the EBI could further reduce the necessary fieldwork efforts.

The generalisation of the experiment’s results needs to consider various limitations. First, interpretation must consider the specificities of the survey context. The experiment was embedded in wave 12 of a household panel, and the EBI was offered along with another conditional or unconditional incentive. Second, fieldwork coincided with the COVID-19 lockdown in the United Kingdom, which could have affected the reaction of panel members to the EBI.

Supplementary Information

The online appendix contains additional information about the data used, Understanding Society, the detailed results of the analysis of research question 5, and the models underlying the marginal effects presented in the main text.

References

AAPOR (2023). Standard definitions. Final dispositions of case codes and outcome rates for surveys. Alexandria: American Association for Public Opinion Research.

Berthoud, R., Fumagalli, L., Lynn, P., & Platt, L. (2009). Design of the Understanding Society ethnic minority boost sample. Understanding society working papers, Vol. 2009–02.

Biner, P. M., & Kidd, H. J. (1994). The interactive effects of monetary incentive justification and questionnaire length on mail survey response rates. Psychology & Marketing, 11(5), 483–492. https://doi.org/10.1002/mar.4220110505.

Brown, M., & Calderwood, L. (2014). Can encouraging respondents to contact interviewers to make appointments reduce fieldwork effort? Evidence from a randomized experiment in the UK. Journal of Survey Statistics and Methodology, 2(4), 484–497. https://doi.org/10.1093/jssam/smu017.

Burton, J., Lynn, P., & Benzeval, M. (2020). How understanding society: the UK household longitudinal study adapted to the COVID-19 pandemic. Survey Research Methods, 14(2), 235–239. https://doi.org/10.18148/SRM/2020.V14I2.7746.

Cabrera-Álvarez, P., & Lynn, P. (2023). Increasing the Value of an Early Bird Incentive in a Mixed-Mode Longitudinal Survey. Understanding Society Working Papers, 2023(11), 1–28.

Calderwood, L., Peycheva, D., Wong, E., & Silverwood, R. (2023). Effects of a time-limited push-to-web incentive in a mixed-mode longitudinal study of young adults. Survey Research Methods. https://doi.org/10.18148/SRM/2023.V17I2.7980.

Carpenter, H., & Burton, J. (2018). Adaptive push-to-web: experiments in a household panel study. Understanding Society Working Papers, 2018(05), 1–11.

Coopersmith, J., Klein Vogel, L., Bruursema, T., & Feeney, K. (2016). Effects of incentive amount and type of web survey response rates. Survey Practice, 9(1), 1–10. https://doi.org/10.29115/SP-2016-0002.

De Santis, J., Callahan, R., Marsh, S., & Perez-Johnson, I. (2016). Early bird incentives: results from an experiment to determine response rate and cost effects

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys. The tailored design method (4th edn.). Hoboken: Wiley & Sons.

Fomby, P., Sastry, N., & McGonagle, K. A. (2017). Effectiveness of a time-limited incentive on participation by hard-to-reach respondents in a panel study. Field Methods, 29(3), 238–251. https://doi.org/10.1177/1525822X16670625.

Freed Taylor, M., Brice, J., Buck, N., & Prentice-Lane, E. (Eds.). (2018). British Household Panel Survey user manual volume A: introduction, technical report and appendices. Colchester: University of Essex.

Friedel, S., Felderer, B., Krieger, U., Cornesse, C., & Blom, A. G. (2022). The early bird catches the worm! Setting a deadline for online panel recruitment incentives. Social Science Computer Review. https://doi.org/10.1177/08944393221096970.

Gaia, A. (2017). The effect of respondent incentives on panel attrition in a sequential mixed-mode design. Understanding Society Working Papers, 2017(03), 1–22.

Goble, L., Stein, J., & Schwartz, L. K. (2014). Approaches to increase survey participation and data quality in an at-risk, youth population. FedCASIC Conference, Washington, D.C, 19.03.

Groves, R. M., Singer, E., & Corning, A. (2000). Leverage-saliency theory of survey participation: description and an illustration. Public Opinion Quarterly, 64(3), 299–308. https://doi.org/10.1086/317990.

Kochanek, K., Krishnamurty, P., & Michael, R. (2010). The “early bird” field innovation on a 30-year-old longitudinal survey. Panel Survey Methodology Workshop, Mannheim, July.

Laurie, H. (2007). The effect of increasing financial incentives in a panel survey: an experiment on the British Household Panel Survey. ISER Working Paper Series, 2007(05), 1–22.

Laurie, H., & Lynn, P. (2009). The use of respondent incentives on longitudinal surveys. In Methodology of longitudinal surveys (pp. 205–233). Chichester: John Wiley.

LeClere, F., & Amaya, A. (2012). Household early bird incentives: leveraging family influence to improve household response rates. In Section on survey research methods—JSM (pp. 4156–4165).

Lynn, P. (2009). Sample design for Understanding Society. Understanding society working papers, Vol. 2009–01.

Lynn, P. (2017). Pushing household panel survey participants from CAPI to web. 28th International Workshop on Household Survey Nonresponse, Utrecht, Netherlands.

Lynn, P., Thomson, K., & Brook, L. (1998). An experiment with incentives on the British Social Attitudes Survey. Survey methods newsletter, Vol. 12–14.

Lynn, P., Nandi, A., Parutis, V., & Platt, L. (2018). Design and implementation of a high-quality probability sample of immigrants and ethnic minorities: lessons learnt. Demographic Research, 38, 513–548. https://doi.org/10.4054/DemRes.2018.38.21.

Mack, S., Huggins, V., Keathley, D., & Sundukchi, M. (1998). Do monetary incentives improve response rates in the survey of income and program participation. In Proceedings of the American Statistical Association, survey research methods section. San Diego. (pp. 529–534).

Martin, E., Abreu, D., & Winters, F. (2001). Money and motive: effects of incentives on panel attrition in the survey of income and program participation. Journal of Official Statistics, 17(2), 267–284.

McGonagle, K. A. (2020). The effects of an incentive boost on response rates, fieldwork effort, and costs across two waves of a panel study. Methods, Data, Analyses, 14(2), 10. https://doi.org/10.12758/mda.2020.04.

McGonagle, K. A., Sastry, N., & Freedman, V. A. (2022). The effects of a targeted "early bird" incentive strategy on response rates, fieldwork effort, and costs in a national panel study. Journal of Survey Statistics and Methodology. https://doi.org/10.1093/jssam/smab042.

Mercer, A., Caporaso, A., Cantor, D., & Townsend, R. (2015). How much gets you how much? Monetary incentives and response rates in household surveys. Public Opinion Quarterly, 79(1), 105–129. https://doi.org/10.1093/poq/nfu059.

Mize, T. (2019). Best practices for estimating, interpreting, and presenting nonlinear interaction effects. Sociological Science, 6(4), 81–117. https://doi.org/10.15195/v6.a4.

Möser, S., Glauser, D., & Becker, R. (2023). Use of incentives in the DAB panel study. Conference of the European Survey Research Association, Milan, July.

Rodgers, W. L. (2011). Effects of increasing the incentive size in a longitudinal study. Journal of Official Statistics, 27(2), 279–299.

Singer, E., & Ye, C. (2013). The use and effects of incentives in surveys. The Annals of the American Academy of Political and Social Science, 645, 112–141.

Smith, P., King, L., Candy, D., Bridge, R., & Armstrong, B. (2021). Incentivising early responses in a push-to-web survey: an experiment. In Social Research Practice (pp. 4–12).

StataCorp (2023). Stata statistical software: release 18

Toepoel, V. (2012). Effects of incentives in surveys. In L. Gideon (Ed.), Handbook of survey methodology for the social sciences (pp. 209–223). New York: Springer.

University of Essex, Institute for Social and Economic Research (2023). Understanding society. In UK data service 9th edn. https://doi.org/10.5255/UKDA-Series-2000053.

Ward, C., Stern, M., Vanicek, J., Black, C., Knighton, C., & Wilkinson, L. (2014). Evaluating the effectiveness of early bird incentives in a web survey. FedCASIC Conference, Washington DC, March.

Zeelenberg, M., & Pieters, R. (2007). A theory of regret regulation 1.0. Journal of Consumer Psychology, 17(1), 3–18. https://doi.org/10.1207/s15327663jcp1701_3.