Evaluating the Effect of Monetary Incentives on Web Survey Response Rates in the UK Millennium Cohort Study

Survey Research Methods
ISSN 1864-3361
doi: 10.18148/srm/2024.v18i1.8210
Charlotte Booth charlotte.booth@ucl.ac.uk
Erica Wong erica.wong@ucl.ac.uk
Matt Brown matt.brown@ucl.ac.uk
Emla Fitzsimons e.fitzsimons@ucl.ac.uk
Centre for Longitudinal Studies, University College London, London, UK
2024 European Survey Research Association

A major objective for longitudinal studies is retaining participants over time. The Millennium Cohort Study (MCS) is the largest on-going nationally representative birth cohort study of young people in the UK. Seven waves of data collection took place face-to-face between 2001 and 2019, with no monetary incentives offered. Throughout 2020 to 2021, participants were invited to take part in three web surveys focused on understanding the effects of the COVID-19 pandemic. In the third web survey, conditional monetary incentives were introduced in the form of a randomised experiment (N = 13,351). Three quarters of the issued sample were offered a £10 voucher for completing the survey, while the remaining quarter were offered no incentive as usual. Regression analyses were conducted to examine the effect of the incentive on response rates and various aspects of data quality. The incentive boosted the response rate by 6 percentage points, from a baseline of 22%, with a smaller incentive effect observed among previous non-respondents. Regarding data quality, the incentivised group showed slightly lower break-off rates and spent more time completing the survey. In conclusion, the incentive had a large positive effect on response rates and a small positive effect on some aspects of data quality. Future research should evaluate the effect of incentives in relation to other modes of data collection and consider alternative strategies to improve response rates for previous non-respondents, who are considered harder to reach.

1 Introduction

The Millennium Cohort Study (MCS) is the largest on-going nationally representative birth cohort study of young people in the UK. It has followed the lives of 18,818 individuals born in 2000–02, up to the present day. Nationally representative birth cohort studies, such as MCS, provide an optimal resource for observing naturally occurring trends in population health, economic and social domains, understanding changes in inequalities, and identifying pervasive risk factors and intervention targets.

One of the major challenges for longitudinal studies is retaining participants over time, as participants cannot be replaced and their loss in any wave can lead to greater cumulative losses over time (Williams & Brick, 2018; Lepkowski & Couper, 2002). Declining response rates affect sample stability and representativeness, increasing the possibility of non-response bias. Statistical methods, such as inverse probability weighting and multiple imputation, can partly correct for non-response bias in survey estimates (Silverwood et al., 2021). However, collecting observed data and retaining participants remains the primary goal in longitudinal studies.

Prior to the COVID-19 pandemic, seven waves of face-to-face data collection had been conducted in the MCS, with high engagement and relatively low attrition observed. The response rate was 72% of the issued sample at baseline (Plewis et al., 2007), and 74% of the issued sample at the last major wave at age 17 (Fitzsimons et al., 2020). During the COVID-19 pandemic, three web surveys were conducted at separate timepoints throughout 2020–21, when participants were aged 19–21 years old. Due to the exceptional circumstances, the surveys had to be conducted rapidly and remotely via web, which was the first time that all participants were invited to complete a survey online.

While web surveys offer a cost-effective way to collect large amounts of data rapidly (Cornesse et al., 2021), response rates are typically much lower than in other modes of data collection (Braekman et al., 2022; Daikeler et al., 2020; Dillman et al., 2014). Indeed, the first two COVID-19 web surveys achieved response rates of 27% and 24%, respectively, which were much lower than those typically achieved face-to-face. A randomised experiment was conducted in the third and final web survey, in which some participants were offered a £10 shopping voucher and others were offered no incentive as usual, in order to evaluate the effect of an incentive on response rates and data quality.

2 The effect of incentives on response rates

As web surveys tend to achieve much lower response rates than other modes, incentives are often used to reduce attrition. There is ample evidence of their effectiveness in web surveys (Göritz, 2006; McMaster et al., 2017), including evidence of greater effects of incentives in online than in offline surveys (Göritz, 2006, 2016). For example, a recent experimental study found that web surveys produced higher response rates than paper/postal questionnaires in a longitudinal survey of young adults when conditional ($10 voucher) incentives were offered (McMaster et al., 2017).

As evidenced in several meta-analyses and reviews, cash (or cash-like) incentives tend to be more effective than gifts (Singer & Ye, 2013), lotteries (Göritz & Wolff, 2007), charity donations, or loyalty points (Göritz & Neumann, 2016). Although unconditional incentives, where payment is given upfront, are generally more effective than conditional incentives, where payment is given upon completion of a survey (Singer & Ye, 2013), some evidence from longitudinal studies suggests that the opposite can be true (Castiglioni et al., 2008; Göritz, 2015; Collins et al., 2000; Coughlin et al., 2011).

One issue for longitudinal studies is whether incentive-induced response at one wave affects response at later waves. Studies have shown that any conditioning effects may be positive and enduring, as the positive effect of incentives on response rates seems to persist over later waves even without repeated incentive payments (Laurie & Lynn, 2009; Mack et al., 1998; Scherpenzeel et al., 2002; Castiglioni et al., 2008; Sundukchi, 1999). For example, Jäckle and Lynn's (2008) incentive experiment in a postal/telephone longitudinal study of young adults found a lasting but diminishing effect of incentives on response at later waves. More recent research found that offering incentives for early registration had a positive effect on response rates for the next several waves of the German Internet Panel (Friedel et al., 2023). Therefore, as well as examining the concurrent effect of incentives on response rates, we examined whether they had any effect on participation in a subsequent web survey, conducted six months later.

3 Differential effects of incentives

Incentives are sometimes used to engage participants who may be less likely to take part (Nicolaas & Stratford, 2005; Knibbs et al., 2018). Incentives have been found to have stronger effects on sociodemographic groups with typically lower response propensities, such as males, those from lower socioeconomic backgrounds, and ethnic minority groups (Laurie, 2007; Felderer et al., 2018; McGonagle et al., 2013; Martin et al., 2001; Ryu et al., 2006). For example, in the UK Department for Education's omnibus survey of pupils and their parents/carers, incentives doubled response rates on average, but tripled response rates for low-income families, as proxied by free school meal eligibility (Knibbs et al., 2018).

There is also evidence that the effect of incentives can vary by geographic area, over and above effects explained by socioeconomic status or civic engagement levels (Hanly et al., 2014). For example, differential regional effects have been found in experimental studies in Ireland (Hanly et al., 2014), the Netherlands (Wetzels et al., 2008), and the US (Westra et al., 2015).

However, experimental evidence for differential effects of incentives in longitudinal studies has been somewhat inconsistent, with some studies in the US, UK, and Switzerland finding that incentive effects do not vary by sociodemographic characteristics (Cabrera-Alvarez & Lynn, 2023; Lipps et al., 2022; LeClere et al., 2012; Suzer-Gurtekin et al., 2016; Jäckle & Lynn, 2008). Further, experimental studies from the US Survey of Income and Program Participation (SIPP) suggest that any differential effects by subgroup may disappear across waves within the same study: incentives were initially effective in boosting response among Black households and households in poverty (Mack et al., 1998), but incentives offered in later waves did not elicit the same differential effects (Sundukchi, 1999; Westra et al., 2015).

Research suggests that incentives can be effective in converting refusals (Fomby et al., 2016), both within a particular wave and from previous waves. For example, non-respondents in the large-scale longitudinal World Trade Center Health Registry (WTCHR) survey who received an incentive five months into data collection achieved a response rate 30% higher than those who did not (Yu et al., 2017). Incentives have also been shown to have a greater effect on those who refused to take part in a previous wave of a longitudinal survey, compared to those who previously took part (Zagorsky & Rhoton, 2008).

Because previous results were inconsistent and not directly applicable to our population of interest, it is unclear whether incentives can be expected to have a differential effect on subgroups in the MCS, particularly because the COVID-19 surveys were part of a unique set of additional surveys administered online between typical waves. Yet, to build on previous research, we explored whether the incentive had any differential effects according to participants’ sex, ethnicity, country of residence, socioeconomic background, or whether they had responded to the last major wave of data collection at age 17 (in 2018–19).

4 Effects of incentives on data quality

In addition to boosting response rates or representativeness, it is also possible that incentives affect other measures of data quality, such as break-off rates, item non-response, responses to free text items, or survey completion time. Previous findings are somewhat mixed but encouraging: incentives either have no effect on data quality (Ryu et al., 2006; Tzamourani & Lynn, 2000; Singer et al., 1999), or, where they do, the effects seem to be positive (Jäckle & Lynn, 2008; Stanley et al., 2020).

Several experimental studies have found that incentives increase survey completeness (Medway & Tourangeau, 2015; McGonagle & Freedman, 2017). For example, in a longitudinal study of women in the US, incentives increased the number of items answered (Zagorsky & Rhoton, 2008). In an experimental study examining item non-response to earnings questions in the SIPP, those who received incentives had 1% lower item non-response than those who did not receive an incentive (Ayromloo & Wilkin, 2022). Similarly, Singer et al. (2000) found that incentives of any kind lowered item non-response, although the effect size was small. However, Yu et al. (2017) found no effect of incentives on response completeness across different measures of mental health and post-traumatic stress among a sample of people directly exposed to the 9/11 terrorist attacks in New York, suggesting that incentives may not always affect item non-response, even to sensitive questions.

Incentives also seem to have small positive effects on other measures of effort, although not universally. Singer et al. (1999), for example, found no effect on the number of words given in open text responses. Yet, in a web panel survey, those receiving incentives took longer to complete the survey and showed lower item non-response (Stanley et al., 2020). Further, in a large web survey of university students, minimal differences in straight-lining and item non-response were observed, but those who received incentives were more likely to complete the survey and took longer in doing so (Cole et al., 2015).

5 Research aims

Our primary aim was to evaluate the effect of introducing a conditional incentive (a £10 voucher) on response rates in the MCS and to examine differential effects across key participant characteristics. Our secondary aim was to investigate whether the incentive had any effect on data quality, including break-off rates, item non-response, straight-lining, answering a free text question, and survey completion time. Our tertiary aim was to examine whether the incentive had any lasting effect on response to a further web survey conducted six months later, in which all participants were offered the same conditional incentive to take part.

6 Method

6.1 Participants and design

The MCS is an on-going longitudinal study that recruited 18,552 families with babies aged nine months to take part in the first wave of data collection in 2001–03. The total sample at baseline amounted to 18,818 cohort children, including twins and triplets. Recruitment took place through Child Benefit records using a clustered random sample design, in order to achieve a diverse and representative sample of children born in the UK at the turn of the century (Calderwood et al., 2020; Fitzsimons et al., 2020; Joshi & Fitzsimons, 2016). Seven major waves of face-to-face data collection took place between 2001 and 2019, when participants were aged 9 months, and 3, 5, 7, 11, 14, and 17 years.

In early 2020, the COVID-19 pandemic resulted in a series of government enforced lockdowns across the UK, including closures of schools, businesses, and non-essential retail. These unprecedented events had major repercussions for surveys and data collection. In response to the pandemic, a cross-cohort series of three web surveys were conducted with participants in five major longitudinal cohort studies in the UK including the MCS (Brown et al., 2021).

The first survey (COV-1) was conducted during the first national lockdown in May 2020, the second survey (COV-2) during a national re-opening phase in September–October 2020, and the third survey (COV-3) during a subsequent national lockdown in February–March 2021. Response rates for the MCS were slightly lower at COV-2 (24%) than at COV-1 (27%). This was related to the fact that a larger sample was issued at COV-2 (N = 13,547) than at COV-1 (N = 9946), due to the inclusion of postal invitations, whereas COV-1 used email invitations only (meaning that only those who had previously provided an email address could be included).

6.2 Incentive experiment

An incentive experiment was conducted among MCS participants at COV-3. Three quarters (75%) of the issued sample at COV-3 (N = 13,351) were randomly allocated to a group who were offered a £10 incentive (incentive group), while the remaining 25% were offered no incentive as usual (control group). Participants were randomised to the incentive or control group at the family level, so that twins and triplets were allocated to the same arm. MCS parents were also invited to take part in the survey, but they were not offered incentives and were not analysed in the current study.
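For illustration only, the following is a minimal sketch of how a family-level 75/25 allocation of this kind could be implemented; the data frame, the column names (member_id, family_id), and the random seed are hypothetical and are not taken from the study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2021)

# Hypothetical issued sample: each row is a cohort member; 'family_id'
# groups twins/triplets together (all names and values are illustrative).
sample = pd.DataFrame({
    "member_id": range(8),
    "family_id": [1, 2, 2, 3, 4, 5, 5, 6],
})

# Randomise at the family level with a 75/25 split, so that all cohort
# members within the same family fall into the same experimental arm.
families = sample["family_id"].unique()
allocation = pd.Series(
    rng.choice(["incentive", "control"], size=len(families), p=[0.75, 0.25]),
    index=families,
)
sample["group"] = sample["family_id"].map(allocation)
print(sample)
```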

Participants in the incentive group were notified about the incentive in the survey invitation. The incentive was a shopping voucher that could be redeemed in a wide variety of online and physical stores. After completing the web survey, on the final screen, participants received further instructions on how to claim their voucher (instructions were also sent via post or email). Participants could choose either an electronic 'love2shop' or Amazon voucher, or a physical 'love2shop' voucher (sent to their home address).

6.3 Mode of data collection

Invitations to participate in the web survey were sent by post and email (where email addresses were held). Non-respondents received three email reminders (or one postal reminder if no email address was held) and two text message reminders (where mobile numbers were held). In an attempt to boost the response rate, COV-3 (unlike COV-1 and COV-2) involved a telephone phase, in which a subset of non-respondents was invited to take part by telephone. Given the short fieldwork period, it was not possible to issue all non-respondents, and as such, priority was given to those who took part in the previous COVID-19 surveys (Brown et al., 2021).

The incentive experiment continued during telephone fieldwork, with those who had been randomised to the incentive group and who ended up completing by telephone receiving the same incentive. A total of 3609 participants responded by web (27% of the issued sample) and 863 responded by telephone (6% of the issued sample). However, because allocation to the telephone phase was no longer random (it depended on initial non-response to the web survey), telephone respondents were treated as non-respondents for the purpose of this 'intent-to-treat' analysis.

6.4 Data availability

MCS data are available to download from the UK Data Service website for research purposes (https://ukdataservice.ac.uk/). The paradata used in this study, in particular variables indicating non-response and survey completion time, are available on request via the CLS Data Access Committee (clsdata@ucl.ac.uk).

6.5 Missing data

The main analysis sample (N = 13,328) comprised those who were issued at COV-3 and had no missing data on any of the participant characteristics of interest (i.e., sex, ethnicity, UK nation, child poverty, parent level of education). Although some missing data were observed (n = 23), the proportions within treatment conditions remained balanced (i.e., 25% control, 75% incentive), suggesting that little bias was introduced by excluding missing cases.

6.6 Statistical analyses

Descriptive statistics were explored first, using t-tests to check for any baseline differences between groups. Following this, a series of eight linear probability regression models were estimated with robust standard errors, to examine whether the incentive boosted the response rate (N = 13,328). The first model examined the unadjusted association between the incentive treatment and the response rate. The second model additionally adjusted for the following participant characteristics: (i) sex (female, male), (ii) ethnicity (White, other non-White ethnic minority), (iii) UK nation (England, Scotland, Wales, Northern Ireland), (iv) childhood poverty (lowest income quintile, above lowest income quintile), (v) parent's highest level of education (university degree or above, lower than university degree), and (vi) non-response at the last major wave of data collection at age 17. A further six models were run, each adding an interaction between the incentive treatment and one of the participant characteristics, to test for differential responsivity to the incentive.
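The sketch below illustrates the modelling approach just described (linear probability models with heteroskedasticity-robust standard errors, with and without covariates, and with an interaction term), using simulated stand-in data; all variable names and the simulated response probabilities are assumptions for illustration and do not reproduce the MCS data or estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: variable names mirror the characteristics listed
# above, but the values are synthetic, not the MCS data themselves.
rng = np.random.default_rng(1)
n = 13_328
df = pd.DataFrame({
    "incentive":      rng.binomial(1, 0.75, n),
    "female":         rng.binomial(1, 0.50, n),
    "white":          rng.binomial(1, 0.82, n),
    "nation":         rng.choice(["England", "Wales", "Scotland", "NI"], n),
    "child_poverty":  rng.binomial(1, 0.23, n),
    "parent_degree":  rng.binomial(1, 0.28, n),
    "nonresponse_17": rng.binomial(1, 0.20, n),
})
df["responded"] = rng.binomial(1, 0.22 + 0.06 * df["incentive"])

covars = ("female + white + C(nation, Treatment('England')) + "
          "child_poverty + parent_degree")

# (i) Unadjusted linear probability model, heteroskedasticity-robust (HC1) SEs
m1 = smf.ols("responded ~ incentive", data=df).fit(cov_type="HC1")

# (ii) Adjusted for participant characteristics
m2 = smf.ols(f"responded ~ incentive + nonresponse_17 + {covars}",
             data=df).fit(cov_type="HC1")

# (iii) One of the six interaction models: incentive x previous non-response
m3 = smf.ols(f"responded ~ incentive * nonresponse_17 + {covars}",
             data=df).fit(cov_type="HC1")

print(m1.params["incentive"], m2.params["incentive"],
      m3.params["incentive:nonresponse_17"])
```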

Following this, associations between the incentive treatment and various indicators of data quality were investigated in the productive sample of web respondents, controlling for the participant characteristics of interest (N = 3601). Five models were tested on the following data quality outcomes: (i) survey break-off, indicated by starting the survey but not completing it (6%; n = 223); (ii) item non-response, indicated by skipping a sensitive question on total household income (20%; n = 674); (iii) straight-lining, indicated by selecting the same response category on the 3-item social provisions scale (7%; n = 174); (iv) skipping free text, indicated by skipping a free text question about general experiences during the pandemic (58%; n = 1958); and (v) completion time, the total time taken to complete the survey in minutes (restricted to > 1 and < 61 min; mean = 31.14, SD = 11.02). Models 2–5 were restricted to those who completed the web survey without breaking off (N = 3378). Straight-lining had fewer observations due to the way the questionnaire was routed (N = 2501), and completion time had fewer observations due to the removal of extreme values (N = 2688).
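As a companion sketch for the data quality models, the example below estimates a weighted linear model for survey break-off with robust standard errors, in line with the note to Table 3 that design and attrition weights were applied; the synthetic data, the 'weight' variable, and the trimmed covariate set are illustrative assumptions rather than details of the actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic productive web sample; 'weight' stands in for the combined
# sample design and attrition weight, and 'broke_off' for the break-off flag.
rng = np.random.default_rng(3)
n = 3601
web = pd.DataFrame({
    "incentive":      rng.binomial(1, 0.78, n),
    "female":         rng.binomial(1, 0.60, n),
    "nonresponse_17": rng.binomial(1, 0.07, n),
    "broke_off":      rng.binomial(1, 0.06, n),
    "weight":         rng.uniform(0.5, 2.0, n),
})

# Weighted linear model for survey break-off with robust SEs
# (covariate set trimmed for brevity).
m_breakoff = smf.wls("broke_off ~ incentive + female + nonresponse_17",
                     data=web, weights=web["weight"]).fit(cov_type="HC1")
print(m_breakoff.params["incentive"])
```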

Finally, in preparation for MCS wave 8 (at age 23, taking place in 2023–24), a short web survey was conducted in Autumn 2021 to gauge participants' engagement with the on-going study, in which all participants were offered a £10 voucher conditional upon completion. Although data from this survey are not deposited, for the purpose of this study we compared response rates to the Autumn web survey by COV-3 incentive treatment group, using a t-test.
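A minimal sketch of this comparison is given below, assuming simple 0/1 response indicators and a two-sample t-test with unequal variances; the group sizes follow Table 4, but the simulated 33% response probability is illustrative only.

```python
import numpy as np
from scipy import stats

# Hypothetical 0/1 response indicators for the Autumn 2021 web survey,
# split by COV-3 treatment group (group sizes follow Table 4; the
# simulated 33% response probability echoes the reported rates).
rng = np.random.default_rng(7)
control_autumn   = rng.binomial(1, 0.33, 3328)
incentive_autumn = rng.binomial(1, 0.33, 10_000)

# Two-sample t-test comparing response rates between the former
# incentive and control groups.
t_stat, p_value = stats.ttest_ind(incentive_autumn, control_autumn,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```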

7 Results

7.1 Participant characteristics

Between group differences in participant characteristics were explored using t-tests (Table 1). Due to the randomisation, we did not expect any group differences, and this held for the most part (i.e., for sex, child poverty, parent education, and non-response at age 17). However, some small group differences were observed: a slightly higher proportion of English participants and ethnic minority participants were in the incentive group than in the control group, while a slightly higher proportion of Scottish participants were in the control group than in the incentive group. These differences were very minimal and were controlled for in subsequent analyses.

Table 1 Between group differences in participant characteristics (N = 13,328)

                          Control group        Incentive group
                          %       n            %        n          t          Std. Err.
Female sex                50      1648         50       4967       –0.15      0.01
White ethnicity           83      2759         81       8082        2.67**    0.01
England                   59      1948         63       6253       –4.10***   0.01
Wales                     15       511         14       1401        1.92      0.01
Scotland                  12       408         11       1065        2.57**    0.01
Northern Ireland          10       329          9        918        1.21      0.01
Child poverty             23       774         23       2341       –0.18      0.01
Parent higher education   28       920         29       2681        0.94      0.01
Non-response age 17       19       639         20       1956       –0.45      0.01
Total                     25      3328         75      10000

Robust standard error (Std. Err.)
*p < 0.05 **p < 0.01 ***p < 0.001

7.2 Impact of incentive on response rate

Table 2 shows parameter estimates from the following regression models: (i) the unadjusted model, without any covariates, (ii) the adjusted model, including covariates, and (iii) the only interaction model that was significant (i.e., incentive treatment by previous non-response at age 17). No differential (interaction) effects were observed between the incentive and any of the following participant characteristics: sex, ethnicity, UK nation, child poverty, or parental education (results not shown).

Table 2 Parameter estimates for the effect of the incentive on survey response (N = 13,328)

                          (i) Unadjusted        (ii) Adjusted          (iii) Interaction
                          B         Std. Err.   B          Std. Err.   B          Std. Err.
Constant                   0.22***  0.01         0.15***   0.01         0.14***   0.01
Incentive                  0.06***  0.01         0.07***   0.01         0.07***   0.01
Female sex                                       0.15***   0.01         0.15***   0.01
White ethnicity                                  0.06***   0.01         0.06***   0.01
Wales a                                         –0.04***   0.01        –0.04***   0.01
Scotland a                                       0.00      0.01         0.00      0.01
Northern Ireland a                              –0.02      0.01        –0.02      0.01
Child poverty                                   –0.08***   0.01        –0.08***   0.01
Parent higher education                          0.11***   0.01         0.11***   0.01
Non-response age 17                             –0.26***   0.01        –0.22***   0.01
Incentive x Non-response                                               –0.05***   0.01

Robust standard error (Std. Err.)
a England was the reference category
*p < 0.05 **p < 0.01 ***p < 0.001

The unadjusted model showed a significant increase in the response rate due to the incentive (B = 0.06, p < 0.001), from 22% in the control group to 29% in the incentive group. In the adjusted model, all covariates were found to predict survey response. Overall, females were much more likely to respond (B = 0.15, p < 0.001), as were those who had a parent with a higher level of education (B = 0.11, p < 0.001). White participants were slightly more likely to respond than ethnic minority participants (B = 0.06, p < 0.001). Non-respondents at age 17 were much less likely to respond (B = −0.26, p < 0.001), as were those who had experienced poverty in childhood (B = −0.08, p < 0.001). Compared to those living in England, those living in Wales were slightly less likely to respond (B = −0.04, p < 0.001).

A significant interaction was observed between the incentive treatment and previous non-response (B = −0.05, p < 0.001). Follow-up analyses of the marginal effects revealed that, although response among previous non-respondents was still boosted by the incentive (from 4% in the control group to 7% in the incentive group), the boost was significantly smaller than that observed for previous respondents (from 26% to 34%).
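For illustration, the sketch below shows how such marginal effects can be read off an interaction model as predicted response rates for the four incentive-by-prior-response cells; the simulated data only loosely echo the reported figures and are not the MCS data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data whose cell probabilities loosely echo the reported marginal
# effects (4%/7% for previous non-respondents, 26%/34% for respondents).
rng = np.random.default_rng(11)
n = 13_328
d = pd.DataFrame({
    "incentive":      rng.binomial(1, 0.75, n),
    "nonresponse_17": rng.binomial(1, 0.20, n),
})
p = np.where(d["nonresponse_17"] == 1,
             0.04 + 0.03 * d["incentive"],
             0.26 + 0.08 * d["incentive"])
d["responded"] = rng.binomial(1, p)

m = smf.ols("responded ~ incentive * nonresponse_17", data=d).fit(cov_type="HC1")

# Predicted response rates for the four incentive-by-prior-response cells,
# i.e. the marginal effects underlying the comparison in the text.
grid = pd.DataFrame({"incentive":      [0, 1, 0, 1],
                     "nonresponse_17": [0, 0, 1, 1]})
grid["predicted_response"] = m.predict(grid)
print(grid)
```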

Sensitivity analyses were conducted by re-running each of the interaction models without any other covariates, to examine whether the inclusion of covariates affected results (results not shown). Results remained the same in all models: the only significant interaction was between the incentive treatment and previous non-response, with a similar effect size (B = −0.04, p = 0.042). The marginal effects were also similar, with the response rate increasing from 2.7% (control) to 4% (incentive) for previous non-respondents, and from 27.3% to 35% for previous respondents.

7.3 Incentives and data quality

Analyses were conducted on the productive sample of web respondents only (N = 3601), to examine whether the incentive had any effect on data quality (Table 3). The incentive had a small negative effect on survey break-off (B = −0.05, p < 0.01): participants in the incentive group were slightly less likely to quit the survey than those in the control group (5 vs. 9%). The incentive had a large positive effect on time taken to complete the survey (B = 5.26, p < 0.001), with incentivised participants spending approximately 5 min longer completing the survey. The incentive had a small positive effect on skipping the free text question (B = 0.05, p < 0.01): participants in the incentive group were slightly more likely to skip this question than those in the control group (58 vs. 56%). The incentive did not have any significant impact on item non-response or straight-lining.

Table 3 Parameter estimates for the effect of the incentive on data quality for the productive sample at COV‑3

                       1) Survey break-off   2) Item non-response   3) Straight-lining    4) Skipping free text   5) Completion time
                       B         Std. Err.   B         Std. Err.    B         Std. Err.   B         Std. Err.     B          Std. Err.
Constant                1.19***  0.04         0.26***  0.06          0.10***  0.03         0.86***  0.03          23.41***   1.50
Incentive              –0.05**   0.02        –0.02     0.03          0.01     0.01         0.05**   0.02           5.26***   0.68
Female sex              0.03*    0.02         0.03     0.03         –0.01     0.01        –0.12***  0.01           0.78      0.75
White ethnicity        –0.08**   0.04        –0.05     0.05         –0.04*    0.02        –0.04*    0.02           2.53      1.76
Wales a                 0.01     0.02        –0.02     0.03          0.02     0.02         0.01     0.02           0.43      0.95
Scotland a              0.02     0.02        –0.08***  0.03          0.01     0.02        –0.02     0.02           0.41      1.38
Northern Ireland a      0.07**   0.03        –0.09***  0.03          0.02     0.02         0.03*    0.02          –0.12      1.01
Child poverty           0.04     0.03         0.01     0.05         –0.00     0.02         0.07***  0.02          –2.50*     1.41
Parent education       –0.02     0.02         0.01     0.03         –0.01     0.01        –0.10***  0.02          –0.92      0.57
Non-response age 17    –0.01     0.05        –0.03     0.07          0.03     0.07         0.15***  0.02           1.38      2.07
N                       3601                  3378                   2501                  3378                    2688

Sample design and attrition weights were applied; Models 2–5 were restricted to those who completed the survey; Models 3 and 5 had fewer observations due to questionnaire routing and/or outlier exclusion. Robust standard error (Std. Err.)
a England was the reference category
*p < 0.05 **p < 0.01 ***p < 0.001

7.4 Incentive and later survey participation

To investigate whether the COV‑3 incentive treatment had a lasting impact on later survey participation, we compared response rates achieved between the two groups in a web survey conducted six months later (where all participants were offered a £10 voucher). As shown in Table 4, the difference in response to the later web survey between those in the COV-3 incentive group and those in the control group (33% vs. 33%) was small and not significant (t = −0.54, p = 0.589).

Table 4 Response rates at the COV‑3 and later web survey by incentive group (N = 13,328)

Group        Productive COV-3      Productive Autumn web     Total issued
             %        n            %         n               n
Control      22        741         33        1086             3328
Incentive    29       2861         33        3314            10000

Table shows proportion (n) productive at each timepoint out of total issued sample

8 Discussion

Monetary incentives were introduced in the MCS for the first time during data collection for the third COVID-19 web survey. A randomised incentive experiment was conducted in which 75% of the issued sample were offered a £10 shopping voucher to participate, while the remaining 25% were offered no incentive as usual. The incentive boosted the response rate by 6 percentage points from a baseline of 22%. In terms of sample size, this boost amounted to an additional 840 participants, which was not insubstantial. No differential incentive effects were observed for participants' sex, ethnicity, UK nation, parental education, or childhood poverty status. This supports previous research from longitudinal studies that found little evidence for differential incentive effects (LeClere et al., 2012; Suzer-Gurtekin et al., 2016; Jäckle & Lynn, 2008), and provides up-to-date evidence in a large-scale sample of young people in the UK.

However, previous non-respondents (at the last major wave) showed a weaker response to the incentive than previous respondents. While previous non-respondents still showed the expected increase in response under the incentive (from 4 to 7%), it was not as pronounced as that observed for previous respondents (from 26 to 34%). This contrasts with a previous US study that compared the effect of $40 (relative to $0) incentives offered to non-respondents in a long-standing cohort study, which found a greater incentive response among non-respondent cohort members (from 19 to 38%) than among their family members (from 71 to 78%), who had previously responded (Zagorsky & Rhoton, 2008). These differences could be attributed to the much larger incentive value offered in the US study, or to the very low baseline response rate among previous non-respondents in the current study.

An incentive value greater than £10 may have yielded higher response rates, as higher-value incentives have typically shown a greater impact on response (Booker et al., 2011; Laurie, 2007). However, offering large incentives is not always feasible or cost-effective (Borsch-Supan et al., 2013). Some evidence from a review of face-to-face surveys in the US showed only marginally higher response rates with increasing incentive values (i.e., 1–2% per $5 increase). This calls into question the benefit of offering higher-value incentives, particularly for large-scale surveys. There are also cultural norms around incentives and participant expectations to consider. In the UK, an incentive value of £5 or £10 for surveys is typical, while in the US amounts tend to be much higher (e.g., the PSID has offered incentives ranging from $75–$300) (Nicolaas et al., 2019).

In the interest of cost-effectiveness, many studies use targeted incentives for hard-to-reach participants, with the aim of reducing non-response bias. Evidence from longitudinal studies in the US, where the use of differential incentives is common practice, shows that they effectively bring in reluctant respondents, decrease non-response bias, and are cost-effective because they are only given to a subsample of participants (Westra et al., 2015; Singer et al., 2000). Therefore, the use of targeted or larger incentives for non-respondents could be considered alongside other engagement strategies to increase response rates among hard-to-reach cohort members. However, the ethical implications of offering differential incentives need to be considered, and failing to offer incentives to previous respondents could have detrimental effects on participant loyalty (Laurie & Lynn, 2009). Our results suggest that previous non-respondents may actually show lower responsivity to certain types of incentives; therefore, other incentivisation or engagement strategies may need to be considered.

Leverage-salience theory proposes that different survey design attributes, including incentives, have different 'leverages' for certain groups in their decision to take part. Monetary incentives may work best for those who are less engaged for other reasons (e.g., low interest in the research topic, or low civic responsibility) (Groves et al., 2000). This may help to explain the lack of differential effects found among other sociodemographic groups, as there may have been other environmental, personal, and/or survey attributes that were more important in the decision to take part. It is difficult to ascertain whether the COVID-19 pandemic, the timing and frequency of the surveys, or the push-to-web design may have affected overall response rates or sensitivity to the incentive. Additional response enhancement and participant engagement strategies are often used in the MCS (Brown & Calderwood, 2014; Carpenter & Burton, 2018), perhaps diminishing the importance of incentives for some participants relative to other engagement efforts.

In terms of data quality, the incentive appeared to have a small positive effect on some outcomes, which was consistent with previous findings (Cole et al., 2015; Stanley et al., 2020). Those in the incentive group showed lower break-off rates, perhaps because they were incentivised conditional upon completion. On average, they also spent longer completing the survey, possibly because they felt a sense of remuneration for the time spent. However, those in the incentive group were slightly more likely than controls to skip the free text question at the end. This may have been because they were already spending on average 5 min longer completing the survey, bringing their total average completion time to around 30 min. Given that the survey was advertised as lasting around 20 min, these respondents may have felt that their responses were already sufficient. No association was found between the incentive treatment and item non-response or straight-lining, which was consistent with other findings in the literature (Singer et al., 1999; Stanley et al., 2020).

Finally, there were no lasting effects of the incentive treatment at COV-3 on participation in a further web survey that took place six months later, when all participants were offered the same incentive to take part. Going forward, it is expected that MCS participants will be offered larger cash-like incentives at all future follow-ups, as remuneration for their time and to encourage continued participation in the study.

8.1 Strengths and limitations

A major strength of the current study was the inclusion of a large, highly powered, and nationally representative sample of young people in the UK, offering new insights into the effects of incentives on survey response in this generation. Further, the experimental design and randomisation process reduced the risk of bias from specific sociodemographic groups. However, our preliminary analyses showed that the groups were not entirely equivalent regarding ethnicity or UK nation, as slightly higher proportions of White and Scottish participants were in the control group, and slightly higher proportions of English and ethnic minority participants were in the incentive group. These differences were very minimal and were controlled for in subsequent analyses, and thus were very unlikely to have affected results. Further, to increase clarity in our findings, the small proportion of telephone survey respondents were not analysed, because the incentive experiment had initially been implemented in the web response phase, and the data quality indicators observed were contingent on response mode.

Finally, there were some limitations regarding our measures of data quality, as we relied on existing variables within the survey. Household income was used to reflect a sensitive question, which is typically prone to item non-response (Angel et al., 2019). However, it is possible that MCS cohort members were unable to answer this question due to a lack of knowledge about household finances, as many respondents lived with their parents. Our measure of straight-lining was not optimal, because the social provisions scale only contained 3 items, which would not have required much effort to complete. Usually, a longer scale would be used to measure straight-lining, although one was not available in this survey. Finally, the completion time measure was highly positively skewed, resulting in the exclusion of extreme values, as some participants took far longer than 60 min to complete the survey, which may have reflected long breaks taken during completion.

8.2 Conclusions

Near-ubiquitous internet use among young adults, coupled with the rapidly rising costs of interviewer-administered surveys, has meant that large-scale longitudinal surveys are increasingly looking to move from interviewer-administered to online modes of data collection (Couper & McGonagle, 2019; Cornesse et al., 2021). This study contributes to growing evidence that conditional incentives can be used to increase response rates in web surveys of young adults, with some small positive effects on data quality, including lower break-off rates and more time spent completing surveys. However, the observed boost in response rate from the incentive was relatively modest and did not achieve nearly as great a response as typical face-to-face surveys. Future research should continue to examine the longer-term effects of incentives, evaluate their cost-effectiveness, and examine how they may interact with other engagement strategies and data collection modes to increase response and minimise non-response bias in the MCS and other longitudinal studies.

Acknowledgements

This work was supported by the Economic and Social Research Council, through the Centre for Longitudinal Studies, Resource Centre Grant, ES/M001660/1. It would not have been possible to conduct this research without the contributions of Millennium Cohort Study participants and their families. We would also like to acknowledge the substantial input from many colleagues in the design and implementation of the study.

References

Angel, S., Disslbacher, F., Humer, S., & Schnetzer, M. (2019). What did you really earn last year?: explaining measurement error in survey income data. Journal of the Royal Statistical Society Series A: Statistics in Society, 182(4), 1411–1437.

Ayromloo, S. S., & Wilkin, K. R. (2022). Money Talks: The Effects of Monetary Incentives on Earnings Non-Response in the SIPP. SEHSD Working Paper No. 2022–02 and SIPP Working Paper No. 301.

Becker, R., Moser, S., & Glauser, D. (2019). Cash vs. vouchers vs. gifts in web surveys of a mature panel study—Main effects in a long-term incentives experiment across three panel waves. Social Science Research, 81, 221–234.

Booker, C. L., Harding, S., & Benzeval, M. (2011). A systematic review of the effect of retention methods in population-based cohort studies. BMC Public Health, 11, 249.

Borsch-Supan, A., Krieger, U., & Schroder, M. (2013). Respondent incentives, interviewer training and survey participation. SHARE Working Paper Series (12–2013). Munich: Munich Center for the Economics of Aging (MEA).

Braekman, E., Demarest, S., Charafeddine, R., Drieskens, S., Berete, F., Gisle, L., Van der Heyden, J., & Van Hal, G. (2022). Unit response and costs in web versus face-to-face data collection: comparison of two cross-sectional health surveys. Journal of Medical Internet Research, 24(1).

Brown, M., & Calderwood, L. (2014). Can encouraging respondents to contact interviewers to make appointments reduce fieldwork effort? Evidence from a randomized experiment in the UK. Journal of Survey Statistics and Methodology, 2, 483–497.

Brown, M., Goodman, A., Peters, A., Ploubidis, G. B., Sanchez, A., Silverwood, R., & Smith, K. (2021). COVID-19 Survey in Five National Longitudinal Studies: Waves 1, 2 and 3 User Guide (Version 3). London: UCL Centre for Longitudinal Studies and MRC Unit for Lifelong Health and Ageing.

Cabrera-Alvarez, P., & Lynn, P. (2023). Short-term impact of increasing the value of unconditional and conditional incentives in Understanding Society. Understanding Society Working Paper 2023-08. Colchester: University of Essex.

Calderwood, L., Connelly, R., Dex, S., George, A., Hancock, M., Hansen, K., et al. (2020). Millennium cohort study: user guide surveys 1–5 (9th edition). London: UCL Centre for Longitudinal Studies.

Carpenter, H., & Burton, J. (2018). Adaptive push-to-web: Experiments in a household panel study. Understanding Society Working Paper 2018-05. Colchester: University of Essex.

Castiglioni, L., Pforr, K., & Krieger, U. (2008). The effect of incentives on response rates and panel attrition: results of a controlled experiment. Survey Research Methods, 2(3), 151–158.

Cole, J. S., Sarraf, S. A., & Wang, X. (2015). Does use of survey incentives degrade data quality? Paper presented at the Association for Institutional Research Annual Forum, Denver, May 2015.

Collins, R. L., Ellickson, P. L., Hays, R. D., & McCaffrey, D. F. (2000). Effects of incentive size and timing on response rates to a follow-up wave of a longitudinal mailed survey. Evaluation Review, 24(4), 347–363.

Cornesse, C., Felderer, B., Fikel, M., Krieger, U., & Blom, A. G. (2021). Recruiting a probability-based online panel via postal mail: experimental evidence. Social Science Computer Review, 40(5), 1259–1284.

Coughlin, S. S., Aliaga, P., Barth, S., Eber, S., Malliard, J., Mahan, C., & Williams, M. (2011). The effectiveness of a monetary incentive on response rates in a survey of recent US veterans. Survey Practice, 4(1), 1–8.

Couper, M. P., & McGonagle, K. A. (2019). Recent developments in web-based data collection for longitudinal studies. Panel Study of Income Dynamics Technical Series Paper #19-03. Ann Arbor: Institute of Social Research, University of Michigan.

Daikeler, J., Bošnjak, M., & Manfreda, K. L. (2020). Web versus other survey modes: an updated and extended meta-analysis comparing response rates. Journal of Survey Statistics and Methodology, 8(3), 513–539.

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: the tailored design method (4th edn.). New Jersey: Wiley.

Felderer, B., Müller, G., Kreuter, F., & Winter, J. (2018). The effect of differential incentives on attrition bias: evidence from the PASS wave 3 incentive experiment. Field Methods, 30(1), 56–69.

Fitzsimons, E., Haselden, L., Smith, K., Gilbert, E., Calderwood, L., Agalioti-Sgompou, V., Veeravalli, S., Silverwood, R., & Ploubidis, G. (2020). Millennium cohort study age 17 sweep (MCS7): user guide. London: UCL Centre for Longitudinal Studies.

Fomby, P., Sastry, N., & McGonagle, K. A. (2016). Effectiveness of a time-limited incentive on participation by hard-to-reach respondents in a panel survey. Field Methods, 29(3), 238–251.

Friedel, S., Felderer, B., Krieger, U., Cornesse, C., & Blom, A. G. (2023). The early bird catches the worm! Setting a deadline for online panel recruitment incentives. Social Science Computer Review, 41(2), 370–389.

Göritz, A. S. (2006). Incentives in web studies: methodological issues and a review. International Journal of Internet Science, 1(1 Suppl), 58–70.

Göritz, A. S. (2015). Incentive effects. In U. Engel, B. Jann, P. Lynn, A. Scherpenzeel & P. Sturgis (Eds.), Improving survey methods: lessons from recent research (pp. 339–350). New York: Routledge.

Göritz, A. S., & Neumann, B. P. (2016). The longitudinal effects of incentives on response quantity in online panels. Translational Issues in Psychological Science, 2(2), 163–173.

Göritz, A. S., & Wolff, H. G. (2007). Lotteries as incentives in longitudinal web studies. Social Science Computer Review, 25(1), 99–100.

Groves, R. M., Singer, E., & Corning, A. (2000). Leverage-saliency theory of survey participation: description and an illustration. Public Opinion Quarterly, 64(3), 299–308.

Hanly, M., Savva, G., Clifford, I., & Whelan, B. (2014). Variation in incentive effects across neighbourhoods. Survey Research Methods, 8(1), 19–30.

Jäckle, A., & Lynn, P. (2008). Respondent incentives in a multi-mode panel survey: cumulative effects on nonresponse and bias. Survey Methodology, 34(1), 105–117.

Joshi, H., & Fitzsimons, E. (2016). Study Profile: The UK Millennium Cohort Study: the making of a multi-purpose resource for social science and policy in the UK. Longitudinal and Life Course Studies, 7(4), 409–430.

Knibbs, S., Lindley, L., Swordy, D., Stevens, J., & Clemens, S. (2018). Omnibus survey of pupils and their parents/carers (Department for Education): Research report wave 4. London: Ipsos MORI.

Laurie, H. (2007). The effect of increasing financial incentives in a panel survey: an experiment on the British household panel survey. Institute for Social and Economic Research Working Paper 2007-05. Colchester: University of Essex.

Laurie, H., & Lynn, P. (2009). The use of respondent incentives on longitudinal surveys. In P. Lynn (Ed.), Methodology of longitudinal surveys. Chichester: Wiley.

LeClere, F., Plummer, S., Vanicek, J., Amaya, A., & Carris, K. (2012). Household early bird incentives: leveraging family influence to improve household response rates. Presented at the Joint Statistical Meetings, San Diego.

Lepkowski, J. M., & Couper, M. P. (2002). Nonresponse in the second wave of longitudinal household surveys. In R. M. Groves, D. A. Dilman, J. L. Ettinge & R. J. A. Little (Eds.), Survey Nonresponse (pp. 259–272). New York: Wiley.

Lipps, O., Jaquet, J., Lauener, L., Tresch, A., & Pekari, N. (2022). Cost efficiency of incentives in mature probability-based online panels. In Survey methods: insights from the field (SMIF).

Mack, S., Huggins, V., Keathley, D., & Sundukchi, M. (1998). Do monetary incentives improve response rates in the survey of income and program participation? In The American Statistical Association (Ed.), Survey research methods section (pp. 529–534). American Statistical Association: Washington, DC.

Martin, E., Abreu, D., & Winters, F. (2001). Money and motive: effects of incentives on panel attrition in the survey of income and program participation. Journal of Official Statistics, 17(2), 267–284.

McGonagle, K. A., & Freedman, V. A. (2017). The effects of a delayed incentive on response rates, response mode, data quality and sample bias in a national representative mixed mode study. Field Methods, 29(3), 221–237.

McGonagle, K., Schoeni, R., & Couper, M. (2013). The effects of a between-wave incentive experiment on contact update and production outcomes in a panel study. Journal of Official Statistics, 29(2), 261–276.

McMaster, H. W., LeardMann, C. A., Speigle, S., & Dillman, D. (2017). An experimental comparison of web-push vs. paper-only survey procedures for conducting an in-depth health survey of military spouses. BMC Medical Research Methodology, 17(73).

Medway, R. L., & Tourangeau, R. (2015). Response quality in telephone surveys: do prepaid cash incentives make a difference? Public Opinion Quarterly, 79(2), 524–543.

Nicolaas, G., & Stratford, N. (2005). A plea for the tailored use of respondent incentives. In C. van Dijkum, J. Blasius & C. Durand (Eds.), Recent developments and applications in social research methodology. CD-ROM proceedings of the RC33 Sixth International Conference on Social Science Methodology, Workshop No. 321. Amsterdam: Budrich.

Nicolaas, G., Corteen, E., & Davies, B. (2019). The use of incentives to recruit and retain hard-to-get populations in longitudinal studies. UK Research and Innovation Report for the Economic and Social Research Council.

Plewis, I., Calderwood, L., Hawkes, D., Hughes, G., & Joshi, H. (2007). The millennium cohort study: technical report on sampling (4th edn.). London: UCL Centre for Longitudinal Studies.

Ryu, E., Couper, M. P., & Marans, R. W. (2006). Survey incentives: cash vs. in-kind; face-to-face vs. mail; response rate vs. nonresponse error. International Journal of Public Opinion Research, 18(1), 89–106.

Scherpenzeel, A., Zimmermann, E., Budowski, M., Tillmann, R., Wernli, B., Gabadinho, A. (2002). Experimental Pre-Test of the Biographical Questionnaire. Working Paper, No. 5-02. Neuchatel: Swiss Household Panel. http://aresoas.unil.ch/workingpapers/WP5_02.pdf

Silverwood, R., Narayanan, M., Dodgeon, B., Ploubidis, G. (2021). Handling missing data in the national child development study: user guide (version 2). London: UCL Centre for Longitudinal Studies.

Singer, E., Ye, C. (2013). The use and effects of incentives in surveys. The ANNALS of the American Academy of Political and Social Science, 645(1), 112–141.

Singer, E., Hoewyk, J. V., Gebler, N., Raghunathan, T., McGonagle, K. (1999). The effects of incentives on response rates in interviewer-mediated surveys. Journal of Official Statistics, 15(2), 217–230.

Singer, E., Van Hoewyk, J., Maher, M. P. (2000). Experiments with incentives in telephone surveys. Public Opinion Quarterly, 64, 171–188.

Stanley, M., Roycroft, J., Amaya, A., Dever, J. A., Srivastav, A. (2020). The effectiveness of incentives on completion rates, data quality, and nonresponse bias in a probability-based Internet panel survey. Field Methods, 32(2), 159–179.

Sundukchi, M. (1999). SIPP 1996: Some Results from the Wave 7 Incentive Experiment. Memorandum from Sundukchi for Documentation. Census Bureau.

Suzer-Gurtekin, Z. T., ElKasabi, M., Liu, M., Lepkowski, J. M., Curtin, R. T., McBee, R. (2016). Effect of a pre-paid incentive on response rates to an address-based sampling (ABS) web-mail survey. Survey Practice, 9(4).

Tzamourani, P., Lynn, P. (2000). Do respondent incentives affect data quality? Evidence from an experiment. Survey Methods Newsletter, 20(2), 3–7.

Westra, A., Sundukchi, M., Mattingly, T. (2015). Designing a multipurpose longitudinal incentives experiment for the survey of income and program participation. Proceedings of the 2015 Federal Committee on Statistical Methodology (FCSM) Research Conference.

Wetzels, W., Schmeets, H., van den Brakel, J., Feskens, R. (2008). Impact of prepaid incentives in face-to-face surveys: a large-scale experiment with postage stamps. International Journal of Public Opinion Research, 20(4). 

Williams, D., Brick, J. M. (2018). Trends in U.S. face-to-face household survey nonresponse and level of effort. Journal of Survey Statistics and Methodology, 6(2), 186–211.

Yu, S., Alper, H. E., Nguyen, A. M., Brackbill, R. M., Turner, L., Walker, D. J., Maslow, C. B., Zweig, C. (2017). The effectiveness of a monetary incentive offer on survey response rates and response completeness in a longitudinal study. BMC Medical Research Methodology, 17(77).

Zagorsky, J., Rhoton, P. (2008). The effects of promised monetary incentives on attrition in a long-term panel survey. Public Opinion Quarterly, 72(3), 502–513.