The online version of this article (https://doi.org/10.18148/srm/2025.v19i3.8451) contains supplementary material.
Declining response rates may undermine survey data quality by reducing the sample size available for analysis, with implications for the precision of survey estimates. Moreover, if respondents and non-respondents differ on key concepts of interest for research, low survey participation may lead to non-response bias (Groves et al., 2009).
In the context of longitudinal studies, non-response is particularly problematic, as missing data at a specific survey wave limits the potential of the information collected at earlier and later time points. While statistical adjustments, such as multiple imputation or inverse probability weighting, may be implemented after data collection with the aim of restoring sample representativeness, maximising respondents’ participation remains vitally important for achieving a representative sample of the target population.
Furthermore, in longitudinal studies, non-response in a particular wave seems to break the respondent’s habit of participation in the study and may ultimately lead to attrition (Lugtig, 2014). Hence, keeping sample members engaged with the study is of primary importance.
Monetary incentives are one tool that survey practitioners can employ to increase survey participation. It has long been recognised that the offer of monetary incentives typically has a positive impact on response rates, and the use of incentives is widespread, especially in the United States and, to a lesser extent, in the United Kingdom. The impact of incentives is, however, not necessarily uniform, and much evidence suggests that the effect of incentives can be greatest for sub-groups with a lower propensity to respond (Laurie, 2007; Zagorsky & Rhoton, 2008). These findings present a case for differential incentive strategies, which involve offering higher value incentives to those less likely to respond and which, if successful, could reduce non-response bias. Furthermore, under tight budget constraints which may limit the possibility of offering monetary incentives (or incentives of the same value) to all sample members, targeting incentives (or higher value incentives) at sample members who are more likely to be responsive to them seems a cost-effective strategy. The provision of larger incentives to hard-to-reach respondents may also reduce the fieldwork effort required (e.g. the number of calls necessary to complete cases) and ultimately reduce survey costs.
However, the evidence on the efficacy of targeted incentives is limited and mixed. In this research we provide novel empirical findings on the effect of targeting higher value monetary incentives at prior wave non-respondents, using experimental data from a large-scale cohort study in England: the Next Steps Age 32 survey.
In the following we review evidence from the literature on the use of targeted incentives. First, we present the survey methodological and economic literature which sets out the theoretical basis for the use of targeted incentives. Second, we discuss the evidence on whether incentives have a differential effect on response rates, depending on sample members’ characteristics or response behaviours, and thus have an impact on reducing non-response bias. Third, we present examples of the use of targeted incentives in different contexts. Finally, we report results from studies which have adopted targeted incentives in an experimental setting.
As with other survey design features, incentives may affect participation differently across respondent subgroups. This consideration is at the core of leverage-saliency theory, which postulates that a single survey design attribute—in this context, the incentive level—can exercise different levels of “leverage” on how different sample members decide whether to participate (Groves et al., 2000). Monetary incentives may have a larger effect on sample members who value the monetary reward more highly, and a smaller effect on those who have other motivations for survey participation (e.g. altruistic motives, interest in the survey topic, commitment or habit).
The recognition that different sample members may react differently to survey design features is the rationale behind the implementation of targeted survey designs. These are designs in which i) survey design features are varied between sample subgroups (with the aim of minimising survey error and optimising survey costs) and ii) these variations are planned in advance of fieldwork rather than during data collection (for a discussion see Lynn, 2017). Longitudinal studies provide an optimal setting for the implementation of targeted designs, thanks to the wealth of information available on sample members, from prior survey waves.
One form of targeted design is to offer different levels of incentives to different subgroups of sample members in order to maximise response rates and representativeness under a fixed (and often tight) budget. Based on the evidence that less cooperative sample members tend to react more to monetary incentives than highly engaged respondents (Laurie, 2007; Zagorsky & Rhoton, 2008), several studies (e.g. the Panel Study of Income Dynamics, the Swiss Household Panel) have implemented tailored designs where higher incentives are offered to prior wave non-cooperative sample members. The underlying principle is that reducing the value of incentives offered to engaged sample members, who do not need a monetary reward to participate, would not reduce participation amongst this group, but that increasing the value of incentives offered to those less engaged could boost participation amongst this group and thereby increase overall survey response.
Furthermore, targeted incentives may reduce bias by increasing participation among otherwise under-represented groups. Economic theory would suggest that the population subgroups which are most responsive to monetary incentives are those who value money the most (Felderer et al., 2018). Indeed, economic models of survey participation suggest that respondents may see incentives as compensation for their time and effort (Philipson, 1997). Hence, incentives of modest value may motivate low-income respondents, who have a lower opportunity cost of time (Felderer et al., 2018). Other theories—e.g. social exchange theory (Lipps, 2010)—stress that respondents do not perceive incentives as a payment for their time and effort in completing the survey, but rather as a demonstration of trust that the respondent will answer the survey or as a symbolic sign of appreciation reciprocated by survey completion. However, it still seems reasonable to assume that the opportunity to receive additional money would be more highly valued by those with lower incomes or those in poverty, who are often under-represented in social surveys.
Moreover, incentives of higher value tend to have a greater impact on response (Börsch-Supan et al., 2013; Laurie, 2007; Rodgers, 2002; Singer, Van Hoewyk, et al., 1999; for a review see Booker et al., 2011), but offering incentives of high value to all sample members may not be feasible under tight budget constraints. Targeting higher value incentives at sample members with lower response propensity could be a cost-effective strategy, and the additional costs may be offset by savings in fieldwork effort if the incentives reduce the number of contact attempts and interviewer visits required (see Blohm & Koch, 2013).
One way to assess the effectiveness of targeted monetary incentives is to compare biases in sample composition in studies which have assigned incentives to one random treatment group and no incentives (or lower incentive amounts) to a control group. Since incentives are offered to a random subsample (and assuming that the randomisation process led to subsamples that are balanced in key socio-demographic and survey response behaviour variables), significant differences in sample composition between incentivised and un-incentivised subsamples should be interpreted as evidence of the differential efficacy of incentives.
Some studies have shown that monetary incentives increase participation of typically underrepresented population subgroups, such as people on low incomes (Felderer et al., 2018; Mack et al., 1998)—including those eligible for free school meals in England (Knibbs et al., 2018)—sample members with low education (Singer et al., 2000), and ethnic minorities in the US (Mack et al., 1998). However, the evidence is mixed. For example, Knibbs et al. (2018) do not find evidence of differences in responsiveness to incentives by ethnicity (white versus non-white), and Singer et al. (2000) only found an effect for education, but not for other demographic characteristics.
Indeed, in a review of different studies, Singer et al. (2000) found three studies supporting the indication that incentives increase participation among otherwise underrepresented sample members, five studies finding no effects on sample composition, and one study finding mixed results. Similarly, a review of the evidence from 10 experiments implemented in cross-sectional and longitudinal surveys in Germany shows mixed results on the efficacy of incentives in reducing non-response bias (Pforr et al., 2015; see also Blohm & Koch, 2013; Börsch-Supan et al., 2013; Felderer et al., 2018). Also, evidence from several longitudinal studies in the US, the UK and Switzerland suggests a lack of support for the hypothesis that responsiveness to incentives varies across population subgroups (Cabrera-Álvarez & Lynn, 2025; Jäckle & Lynn, 2008; LeClere et al., 2012; Lipps et al., 2023; Lipps et al., 2022; Suzer-Gurtekin et al., 2016).
With respect to offering incentives to sample members who are typically non-cooperative, some evidence from longitudinal studies shows that prior wave non-respondents react more positively to incentives than prior wave respondents (Laurie, 2007), while one more recent study (Booth et al., 2024) finds the opposite. However, it should be noted that this mixed evidence might arise from differences in target populations, incentive amounts/increases, and the maturity of the panel. Indeed, the experiment reported by Laurie (2007) was implemented in a longitudinal study of the general population (the British Household Panel Survey, wave 14) and tested a minor increase in the incentive value (from £7 to £10). Conversely, Booth and colleagues (2024) focus on a specific population subgroup—a cohort of young people aged approximately 20 years old in the United Kingdom, i.e. the Millennium Cohort Study Covid-19 Wave 3 survey, where a £10 incentive was offered for the first time in the history of the study.
Table 1 Incentive levels by experimental group

| Group | n | “Early-bird” + standard (£) | Standard (£) |
|---|---|---|---|
| Not-targeted | 1521 | 30 | 20 |
| Targeted approach | 1512 | – | – |
| Prior wave respondents | 904 | 25 | 15 |
| Prior wave non-respondents | 608 | 35 | 25 |
| Total | 3033 | – | – |

Note: The “early-bird” incentive is offered to respondents who participate in the survey online within the first three weeks; the standard incentive is offered thereafter. The table includes only eligible cases.
Targeted incentives have been implemented in the US context since the late 1990s (Nicolaas et al., 2019), while the adoption of such designs in the European and UK context is more recent. For example, in the US-based Survey of Program Dynamics, incentives ($40, unconditional) were offered only to households who did not participate in prior waves or showed reluctance to participate (Kay et al., 2001). Similarly, in the 2003–2004 wave of the National Longitudinal Survey of Youth 1997, different levels of incentives ($35, $30, $25 or $20) were offered depending on sample members’ participation over the prior three waves (Bureau of Labor Statistics, undated; as quoted in Lynn, 2017).
An example of the use of targeted incentives in Europe is the Swiss Household Panel, where targeted incentives were adopted in the 2007 wave. Fifty Swiss Francs were offered to households who refused participation in the prior survey wave, which led to a significantly higher response rate amongst this group compared to that achieved by a “roughly similar” sample in the prior wave (Lipps, 2010, p. 87). However, as this study is observational rather than experimental, it does not allow for the determination of a causal link (Lipps, 2010).
Targeted incentives were also employed in another Swiss study—the Panel Survey of the Swiss Election Study (Lipps et al., 2023; Lipps et al., 2022). In wave 5, a conditional incentive of 20 CHF was offered to participants identified as having a low likelihood of responding. In contrast, those with a high predicted response probability were randomly assigned to receive either a 10 CHF conditional incentive or entry into a lottery offering three prizes of 500 CHF each. Notably, response rates among the lottery group declined only slightly compared to the more expensive 10 CHF incentive group (Lipps et al., 2022).
Encouraged by these findings, the study adopted a revised approach in the following wave. All high-propensity participants were assigned to the lottery condition, while experimentation focused on the low-propensity group. This group was randomly divided to receive either the 20 CHF conditional incentive or entry into the same lottery. However, response rates in the lottery group were 8 percentage points lower than in the 20 CHF incentive group.
Given that the low-propensity group was offered double the amount previously provided to the high-propensity group, it remains unclear whether the difference between the lottery entry and the cash incentive in the lower-propensity group results from greater responsiveness to cash incentives per se, or simply to the higher monetary value offered. A threshold effect may also be at play, in which a minimum incentive amount is required to effectively motivate participation.
Additionally, carryover effects were observed. Among high-propensity participants, those assigned to the lottery in both waves had a response rate 3 percentage points lower than those who received the 10 CHF incentive in wave 5 and were assigned to the lottery in wave 6. This suggests that results from wave 5 should be interpreted with caution, as prior incentive experiences may influence subsequent participation behavior.
In the UK context, targeted incentives have been adopted in a few longitudinal and cross-sectional surveys. For example, in Understanding Society: the UK Household Longitudinal Study, from wave 6 (2014–15) to wave 12 (2020–21), sample members living in households where everyone refused participation at the prior wave (or where it was not possible to establish contact at the prior wave) were offered higher incentives (£20 versus £10) than prior wave respondents or non-responding adults in partially productive households at the prior wave (e.g. see Carpenter, 2021 for wave 12, and the wave 6–11 technical reports for the prior waves). However, as this design was not implemented experimentally, it is not possible to assess whether the provision of higher incentives to some sample subgroups increased response rates compared to designs where incentives are equal for all sample members.
The first wave of the COVID Social Mobility and Opportunities Study (COSMO)—a new longitudinal study which recruited 16-year-olds through schools—sought to improve response and the representation of those from disadvantaged backgrounds by offering higher value incentives (£20 versus £10) to students attending schools with a higher proportion of students receiving free school meals. For each young person invited, a parent or guardian was also invited to take part and was offered the same incentive as their child. The impact of the incentive was assessed using a regression discontinuity design (Anders et al., 2023). The authors found that the higher incentives appear to have led to higher response rates among young people and increased participation of ‘full households’ where both a young person and their parent took part. The representativeness of the achieved sample was assessed against data available for the population from the sampling frame, and was slightly better in the higher incentive group than in the lower incentive group.
Other UK-based examples of the use of targeted incentives include:
the Skills and Employment Survey (2017) where sample members living in London were offered a £15 conditional incentive (versus £10 for all other sample members), to increase participation in this area (Glendinning et al., 2018).
the Omnibus Survey of Pupils and their Parent/Carer waves 5 and 6 where an incentive was offered to school pupils eligible for free school meals (FSM) and their parents/carers, if both completed the survey (Ipsos Mori, 2019; Lindley et al., 2019), and
Growing up in Scotland wave 9, where a pre-paid £15 incentive was sent to families that are under-represented in the study, e.g. teenage mothers, single parents, and sample members living in deprived areas (as reported in Nicolaas et al., 2019).
In some cases, survey designs using targeted incentives have been implemented experimentally, to evaluate their efficacy. In the US context, in the 2006 Survey of Recent College Graduates, sample members expected to have a low response propensity were experimentally assigned to either receiving or not receiving a conditional incentive for survey participation; then, later in the fieldwork period, conditional incentives were offered (again experimentally) to sample members who had not yet participated. Targeted incentives led to a substantial increase in survey participation (Zukerberg et al., 2007, as quoted in Lynn, 2017).
In waves 8 and 9 of the Census Bureau’s Survey of Income and Program Participation, prior wave non-respondents were assigned to receive either $40, $20 or no incentive (Abreu & Winters, 1999; Martin et al., 2001). The authors found that offering incentives (versus no incentive) to prior wave non-respondents increased response, but the larger incentive amount ($40 versus $20) did not have a significant effect.
Similar results emerged from the US-based National Longitudinal Survey of Women. Sample members who had previously refused to participate were offered either a $20 conditional incentive, a $40 conditional incentive or no incentive. Incentives again led to higher rates of participation and higher levels of data quality (in terms of item completion), but no reduction in survey costs (Zagorsky & Rhoton, 2008).
Targeted incentives were also found to be effective in the 2014 Child Development Supplement to the U.S. Panel Study of Income Dynamics (Fomby et al., 2017). Specifically, a random subsample of hard-to-reach families—i.e. those whose predicted probability of non-response fell in the top quartile of the distribution—were offered a time-limited $50 incentive conditional on the primary caregiver completing a 75-min interview and eligible children participating in a 30-min interview, over the three-week U.S. winter holiday period. The incentive led to a significant increase in completed interviews over the time-limited period and did not lower final response rates (after the time-limited incentive was withdrawn). Within the hard-to-reach group, the incentive was most effective in achieving cooperation from those who had the highest predicted non-response probabilities. However, as the experiment was implemented with a specific demographic (primary caregivers), it remains unclear to what extent these results are generalisable to different population subgroups, and the seasonal effect of implementing the approach during the Christmas holiday period may also have played an important role.
Table A1, in the online supplement, shows a summary of the incentive amounts offered by each study. To facilitate comparison, these amounts are adjusted for inflation, exchange rates and Purchasing Power Parity.
The presented literature has several limitations. First, most studies have applied targeted incentives in a non-experimental setting; hence, it is not possible to evaluate their effectiveness (as opposed to designs where incentives are allocated equally across sample members).
Second, based on the experimental studies comparing designs with targeted incentives against designs without targeted incentives, it is hard to understand to what extent the results are generalisable to different contexts: these studies are all embedded in the US context, are limited to specific subpopulations, and, in the case analysed by Fomby et al. (2017), also to a specific timeframe. To the best of our knowledge there have been no experimental applications of a targeted incentive approach in a large-scale study in the UK.
Third, while the few available experimental studies find support for the efficacy of targeted incentive designs, those studies always compare offering versus not offering monetary incentives to a target group (usually one expected to be under-represented in the study). The comparison is therefore between a survey design in which a significantly higher overall budget is allocated to incentives and designs in which a lower overall budget is allocated to incentives. Other things being equal—i.e. in the absence of variation across the two designs in the fieldwork effort/costs required to contact respondents and secure their participation—it does not seem surprising that designs providing on average significantly higher incentives would lead to higher overall response rates.
In summary, while these experimental studies can provide valuable information about whether incentives can be effective at boosting response amongst particular subgroups, they do not allow us to understand whether a fixed incentive budget is most effectively used by allocating incentives equally across all sample members, or whether it would be more effective to offer higher incentives to certain sub-groups and to fund this by reducing the incentive offered to the remaining groups. To the best of our knowledge, this latter research question has not yet been analysed in the literature.
In this study we use experimental data from the Next Steps Age 32 survey to compare a targeted incentive approach, in which prior wave non-respondents are offered a higher value incentive funded by offering a lower value incentive to prior wave respondents, with a non-targeted approach in which all sample members are offered the same value incentive. We answer the following research questions:
Does the targeted incentive approach—with higher value monetary incentives for prior wave non-respondents and lower incentives for prior wave participants—lead to overall higher response rates than a non-targeted approach where all participants are offered an equal incentive?
We hypothesise that offering lower incentives to engaged prior wave respondents will have little impact on response rates within this group, as this subgroup is expected to be highly engaged in the study and less motivated to participate by the incentive. In addition, we expect that offering higher incentives to prior wave non-respondents could substantially boost participation amongst this group. We hypothesise that, by boosting participation amongst prior wave non-respondents without reducing participation amongst prior wave respondents, the overall response rate will be increased relative to the non-targeted approach.
Furthermore, targeted incentives may be used not only to increase overall response rates but also to reduce non-response bias, by increasing participation of subgroups of sample members which might otherwise be underrepresented. Hence, our second research question is:
What is the effect on sample representativeness/non-response bias of the targeted incentive approach (i.e. offering higher monetary incentives to prior wave non-respondents and lower incentives to prior wave respondents) compared to the non-targeted approach where all participants are offered the same incentive?
We hypothesise that the targeted approach will boost participation amongst prior wave non-respondents and that by doing so non-response bias will be reduced.
Besides improvements in response rates and representativeness, another potential benefit of targeted monetary incentives is the potential to achieve cost savings. Our third, fourth, and fifth research questions are:

Is the targeted incentive approach more cost-effective, in terms of incentive cost per achieved interview, than the non-targeted approach?

Does the targeted incentive approach reduce the fieldwork effort (number of calls per case) required to achieve interviews?

Does the targeted incentive approach increase the share of interviews completed online?
We expect the targeted approach to lead to a higher number of interviews overall (with the number of prior wave respondents interviewed expected to be constant across the two designs and the number of interviews with prior wave non-respondents expected to be higher in the targeted design). We hypothesise that the targeted design will be more cost-effective (lower per-interview incentive cost) than the non-targeted design. This is because in both designs we expect most interviews to be conducted with prior wave respondents, and in the targeted design the cost saving from the lower incentive amount paid to prior wave respondents would more than offset the higher cost of incentives offered to prior wave non-respondents. Furthermore, we hypothesise that the targeted approach will require less fieldwork effort (number of calls per case), as sample members with lower cooperation propensity will be encouraged to participate promptly; on a similar basis, given the web-first mixed-mode design, we also expect to achieve a higher share of web interviews in the targeted approach.
Next Steps is a longitudinal study following approximately 16,000 participants in England, born in 1989–90. The study began in 2004, when participants were aged 14, and it was known as the Longitudinal Study of Young People in England (LSYPE). The target population was young people who were in Year 9 in English state and independent schools and pupil referral units in February 2004. After the first wave of data collection, participants were interviewed yearly until age 20 (wave 7, in 2010), then at age 25 in 2015 (wave 8) and age 32 in 2023 (wave 9). Waves 1 to 7 were run by the Department for Education. During this period, only participants who took part in the prior wave were issued in the following wave. The study was then paused for five years until the Age 25 Survey in 2015, when it was re-launched by the Centre for Longitudinal Studies, University College London. During the Age 25 Survey, efforts were made to trace and contact everyone who had ever taken part in the study (Bailey et al., 2017; Calderwood et al., 2021). At the Age 32 survey, the issued sample comprised all cases who had ever participated in the study, with the exception of those who had permanently withdrawn, those known to have died, those regarded as permanently untraced and those in prison or on probation.
Age 32 survey fieldwork was carried out in four main batches, and the incentive experiment analysed in this study was implemented in the first. A stratified random sub-sample consisting of 25% of all cases to be issued (n = 3113) was selected for issue to the first batch of fieldwork, in which the incentive experiment was conducted.
The survey used a sequential mixed-mode approach in which sample members were first invited to complete the survey online. After a three-week online-only period, interviewers started attempting contact with sample members, either by telephone or face-to-face (the latter for sample members who were unproductive at the prior wave or who did not provide a telephone number). In addition to offering face-to-face interviews, interviewers were able to offer self-completion of the survey on a device (a small tablet) handed to the sample member and collected at a later agreed time, video interviews (using Microsoft Teams) and, in exceptional circumstances, a telephone interview. The web survey also remained open during the interviewer-led fieldwork period.
Topics covered included family and relationships, housing, employment and income, education, health and wellbeing, identity and attitudes, childhood, and other life events. The median survey duration was 55 min online (or on a tablet provided by interviewers) and 88 min for in-person interviews. In addition to the main questionnaire, sample members were invited to complete a cognitive assessment, to provide a saliva sample for DNA extraction, to consent to data linkage, and to consent to their live-in partner being contacted and asked to consent to linkage of their administrative records.
Next Steps cohort members have been offered incentives for survey participation since the study’s inception. Incentive amounts and conditions have varied over time. In wave 1 (at age 13/14) all cohort members were offered a £5 high street voucher conditional on survey participation, while at waves 2 and 3 the £5 voucher was unconditional and at wave 4 the voucher amount increased to £8. From wave 5 to wave 7, incentives were offered to web respondents only—this shift coincided with the switch from face-to-face interviewing to a mixed-mode design (web followed by telephone and face-to-face interviewing) (Department for Education, 2011). In wave 8, i.e. the Age 25 survey, respondents received an “early bird” £20 incentive conditional on completing the survey online during the first three weeks of fieldwork, and a £10 conditional incentive after that period (Calderwood et al., 2023).
The incentive experiment analysed in this study was implemented in the first batch of fieldwork of the Age 32 Survey. The “early bird” incentive approach used in the previous wave, in which a higher incentive was offered for web completion within the first three weeks, was maintained; however, the incentive levels also varied experimentally, depending on prior wave participation.
Specifically, 50% of sample members were randomly assigned to a targeted incentive group, and 50% to a non-targeted incentive group. Stratification was implemented during experimental allocation in order to limit random variability in observed characteristics between the experimental and control groups. The stratification variables were the same as those used when allocating the sample to batches (participation history, region of residence, and sex).
In the targeted incentive group, prior wave respondents were offered a £15 conditional incentive while prior wave non-respondents were offered a £25 conditional incentive. In the non-targeted group, sample members were offered a £20 incentive regardless of prior wave participation. In addition, all sample members who completed the survey online within the first three weeks of fieldwork received an additional £10 “early bird” conditional incentive. Table 1 shows incentive levels by experimental groups. Cohort members were also sent an additional £5 if they provided a saliva sample (see Tab. 1).
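As an illustration of the allocation just described, the sketch below shows one way a stratified 50/50 random assignment and the corresponding incentive amounts could be implemented in Python. The data frame and column names (e.g. `prior_wave_respondent`, `participation_history`) are hypothetical and not the study’s actual variables.

```python
import numpy as np
import pandas as pd

def stratified_assignment(sample: pd.DataFrame,
                          strata=("participation_history", "region", "sex"),
                          seed: int = 2022) -> pd.DataFrame:
    """Split the sample 50/50 into targeted/non-targeted groups within strata
    (illustrative sketch; column names are hypothetical)."""
    rng = np.random.default_rng(seed)
    out = sample.copy()
    out["group"] = "non-targeted"
    for _, idx in out.groupby(list(strata)).groups.items():
        shuffled = rng.permutation(np.asarray(list(idx)))
        out.loc[shuffled[: len(shuffled) // 2], "group"] = "targeted"
    return out

def conditional_incentive(row: pd.Series) -> int:
    """Base conditional incentive (GBP) implied by the experimental design;
    the GBP 10 'early-bird' payment for web completion within three weeks
    would be added on top for eligible respondents."""
    if row["group"] == "non-targeted":
        return 20
    return 15 if row["prior_wave_respondent"] else 25
```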
The sample used for this analysis excludes ineligible cases—hence, it also excludes sample members who completed the web survey from abroad (i.e. “productive ineligibles”). This research is based on Next Steps Age 32 data and waves 1–8 (University College London et al., 2023).
At the start of fieldwork, participants were sent an advance mailing by post and by email (where email addresses were held) (Ipsos, 2024). The postal and email invitations included the weblink to the survey, the respondent’s unique access code, and three booklets—one providing general information about the survey, one covering administrative data linkage and one covering saliva collection. Information about the incentive was provided in the letter and email.
Reminders were sent throughout the web fieldwork period by email (n = 2), by text (n = 3) and by post (if no email address was held) (n = 1). Information about the incentives was repeated in the reminders. Despite variability in the contact details held, all participants were sent at least one reminder.
After the three-week web-only fieldwork period had passed, interviewers attempted to contact non-respondents. Interviewers made contact by telephone and in person and were required to make multiple calls at different times on different days. Where interviewers discovered that participants had moved from the issued address, they followed tracing procedures to try to establish contact at a new address.
First, we explore any differences between the experimental and control groups by socio-demographic characteristics using a series of chi-square tests. We compare sex, ethnicity and parental socio-economic status (the National Statistics Socio-economic Classification, NS-SEC, of the main parent), measured at baseline (wave 1), and find no significant differences across subgroups.
To answer RQ1 (i.e. the effect of targeted incentives on overall response rates), we compare response rates achieved in the targeted and non-targeted designs. We do this overall and separately for prior wave respondents and non-respondents in the targeted and non-targeted designs, and use a series of chi-square tests to test significance. To answer RQ2, first, we compare the distribution of sex, ethnicity, and parental socio-economic status in the issued sample and among respondents in the targeted and non-targeted subsamples. We test for significant differences between the targeted and non-targeted groups using chi-square tests. Second, we compare the achieved samples in the targeted and non-targeted groups. The comparison is based on key variables of interest for research purposes. These dimensions include: i. education (whether the respondent’s highest educational qualification is at least an undergraduate degree); ii. economic activity status (whether the respondent is employed or self-employed versus being a student, in unpaid work, taking care of the home, unemployed or any other status); iii. home ownership (the respondent’s housing situation, categorised as owning property, renting, or other arrangements); iv. marital status (whether the respondent is married or in a civil partnership versus any other status); v. parenthood (whether the respondent has at least one child); vi. self-reported health (grouped as excellent, very good, or good versus fair or poor); vii. mental health (captured through a set of four variables measuring whether, in the past two weeks, the respondent experienced any of the following symptoms “nearly every day” or “more than half the days”: “feeling nervous, anxious, or on edge”; “being unable to stop or control worrying”; “having little interest or pleasure in activities”; or “feeling down, depressed, or hopeless”). Again, we test for significant differences across groups using chi-square tests.
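A minimal sketch of the chi-square comparisons used throughout this section, assuming a respondent-level data frame with a `group` indicator (targeted/non-targeted) and the variable being compared; the column names are assumptions for illustration.

```python
import pandas as pd
from scipy.stats import chi2_contingency

def compare_groups(respondents: pd.DataFrame, var: str) -> float:
    """P-value of a chi-square test of whether the distribution of `var`
    differs between targeted and non-targeted respondents."""
    table = pd.crosstab(respondents["group"], respondents[var])
    chi2, p, dof, expected = chi2_contingency(table)
    return p

# e.g. compare_groups(respondents, "has_degree") for the education measure,
# or compare_groups(issued_sample, "responded") for the RQ1 response rate
# comparisons applied to the full issued sample.
```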
To answer RQ3, we multiply the number of respondents by the value of the conditional monetary incentives offered to them and then compute the percentage variation in cost across the two designs. We also divide the total incentive costs of the targeted and non-targeted designs by the number of achieved interviews in each design.
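The cost comparison reduces to summing the incentives payable to respondents in each arm and dividing by the number of achieved interviews. The sketch below uses the incentive amounts from Table 1; the column names are hypothetical.

```python
import numpy as np
import pandas as pd

def incentive_cost_per_interview(respondents: pd.DataFrame) -> pd.Series:
    """Average conditional incentive cost per achieved interview, by design.

    Assumed columns: 'group' ('targeted'/'non-targeted'),
    'prior_wave_respondent' (bool) and 'early_web' (bool: completed online
    within the first three weeks, earning the extra GBP 10).
    """
    base = np.where(
        respondents["group"].eq("non-targeted"), 20,
        np.where(respondents["prior_wave_respondent"], 15, 25),
    )
    cost = base + 10 * respondents["early_web"].astype(int)
    # The GBP 5 saliva-sample payment is common to both arms and omitted here.
    return cost.groupby(respondents["group"]).mean()
```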
To answer RQ4, we compute the average number of call records per case (e.g. face-to-face visits, phone calls to the cohort member or a stable contact) and use a t-test to assess whether the average number of calls to achieve an interview differs significantly across the two designs. Finally, to answer RQ5, we calculate the share of interviews by mode of data collection (web versus face-to-face) and use a chi-square test to estimate any statistical difference across the two designs.
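The fieldwork-effort and mode comparisons can be sketched in the same way; a Welch t-test is used here as one reasonable implementation of the t-test mentioned above, and the column names are again hypothetical.

```python
import pandas as pd
from scipy.stats import chi2_contingency, ttest_ind

def fieldwork_comparisons(sample: pd.DataFrame, respondents: pd.DataFrame):
    """P-values for RQ4 (calls per case) and RQ5 (share of web interviews)."""
    targeted = sample["group"].eq("targeted")
    _, p_calls = ttest_ind(sample.loc[targeted, "n_calls"],
                           sample.loc[~targeted, "n_calls"],
                           equal_var=False)

    web = respondents["mode"].eq("web")
    _, p_mode, _, _ = chi2_contingency(pd.crosstab(respondents["group"], web))
    return p_calls, p_mode
```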
To answer RQ1 we investigate whether the targeted incentive approach leads to overall higher response rates than the non-targeted approach. Overall, after the web-only fieldwork period (i.e. the first three weeks of fieldwork), response rates do not differ between the targeted (40%) and non-targeted (40%) approaches (see Tab. 2). The response rate amongst prior wave respondents—who were offered £15 in the targeted incentive group and £20 in the non-targeted group—was slightly lower in the targeted incentive group than in the non-targeted incentive group (57% versus 58%); however, the difference was not statistically significant. The response rate among prior wave non-respondents—who were offered £25 in the targeted incentive group and £20 in the non-targeted group—was slightly higher in the targeted incentive group (15%) than in the non-targeted group (13%); however, again the difference was not statistically significant.
Table 2 Survey response rate by experimental group

| | Targeted (%) | Non-targeted (%) | n | P |
|---|---|---|---|---|
| Early web completion | | | | |
| Prior wave respondent | 57 | 58 | 1811 | 0.560 |
| Prior wave non-respondent | 15 | 13 | 1220 | 0.533 |
| Total | 40 | 40 | 3031 | 0.876 |
| After face-to-face | | | | |
| Prior wave respondent | 70 | 73 | 1811 | 0.180 |
| Prior wave non-respondent | 27 | 25 | 1220 | 0.512 |
| Total | 53 | 54 | 3031 | 0.578 |

Note: P-values from Pearson chi-squared tests for the equality of the means. In the non-targeted (fixed) incentive group all sample members are offered a £20 conditional incentive; in the targeted incentive group, prior wave respondents are offered a £15 conditional incentive and prior wave non-respondents a £25 conditional incentive; all sample members are offered an additional £10 incentive conditional on survey completion by web in the first three weeks of fieldwork.
When we analyse response after the face-to-face fieldwork period, we observe a similar pattern. Overall, response rates are not significantly different between the targeted and the non-targeted incentive groups (53% versus 54%). Among prior wave respondents, the response rate was slightly lower in the targeted incentive group than in the non-targeted group, though the difference was not statistically significant (70% versus 73%). Prior wave non-respondents participated at a slightly higher rate in the targeted group than in the non-targeted incentive group (27% versus 25%), though again, the difference was not statistically significant.
To answer RQ2 (i.e. whether the targeted incentives successfully reduce non-response bias), we first compare the distribution of socio-demographic variables in the issued sample with the samples achieved in the targeted and non-targeted groups (Tab. 3). The comparison of the distributions and confidence intervals indicates that both approaches lead to an overrepresentation of females, white respondents, and respondents with high parental socio-economic status. However, there are no significant differences between the two approaches; therefore, neither seems clearly preferable in terms of its impact on bias.
Table 3 Issued sample and sample composition, after face-to-face

| | Issued sample, % (95% C.I.) | Targeted, % (95% C.I.) | Not-targeted, % (95% C.I.) | P-value |
|---|---|---|---|---|
| Male | 49 (47.6–51.2) | 49 (40.3–47.3) | 45 (40.8–47.7) | 0.974 |
| Female | 51 (48.8–52.4) | 56 (52.7–59.7) | 56 (52.3–59.2) | |
| White | 67 (64.9–68.3) | 70 (67.2–73.7) | 70 (67.2–73.5) | 0.728 |
| Non-white | 33 (31.7–35.1) | 30 (26.3–32.8) | 30 (26.5–32.8) | |
| Higher SES | 36 (34.4–37.9) | 41 (37.3–44.2) | 42 (38.5–45.4) | 0.625 |
| Intermediate SES | 19 (17.4–20.2) | 20 (17.4–23.0) | 18 (15.1–20.4) | 0.213 |
| Lower SES | 45 (43.2–46.8) | 39 (35.6–42.5) | 40 (36.9–43.7) | 0.612 |

Note: The targeted and not-targeted columns show sample composition after face-to-face fieldwork. P-values from chi-square tests comparing the targeted versus not-targeted approaches.
Furthermore, we compare the achieved samples in the targeted and non-targeted groups with respect to key variables of interest for research purposes (Tab. 4). Our analysis reveals no significant differences between the two groups across the dimensions considered (i.e. education, economic activity status, home ownership, marital status, parenthood, self-reported health, and mental health).
Table 4 Sample achieved (age 32 measures), after face-to-face

| | Targeted, % (95% C.I.) | Targeted N | Not-targeted, % (95% C.I.) | Not-targeted N | P-value |
|---|---|---|---|---|---|
| Has a degree or higher education qualification | 50 (46.5–53.5) | 778 | 47 (43.7–50.7) | 794 | 0.272 |
| Employed or self-employed | 86 (84.0–88.8) | 786 | 87 (84.7–89.3) | 801 | 0.712 |
| Home ownership: own | 56 (52.2–59.9) | 648 | 57 (53.5–61.1) | 663 | 0.826 |
| Home ownership: rent | 30 (26.4–33.5) | 648 | 30 (26.2–33.2) | 663 | |
| Home ownership: other | 14 (11.4–16.7) | 648 | 13 (10.4–15.5) | 663 | |
| Married or in civil partnership | 21 (17.2–23.8) | 581 | 21 (17.6–24.1) | 609 | 0.874 |
| Has child(ren) | 43 (39.2–46.1) | 781 | 41 (37.2–44.1) | 795 | 0.419 |
| Self-reported health: excellent, very good, or good | 86 (84.0–88.8) | 773 | 85 (82.8–87.8) | 789 | 0.526 |
| Feeling nervous, anxious, or on edge: nearly every day or more than half the days | 25 (21.6–28.0) | 702 | 27 (23.8–30.4) | 702 | 0.330 |
| Not being able to stop or control worrying: nearly every day or more than half the days | 22 (18.4–24.5) | 208 | 22 (18.8–24.8) | 720 | 0.877 |
| Little interest or pleasure in doing things: nearly every day or more than half the days | 19 (15.6–21.4) | 703 | 20 (17.1–22.9) | 726 | 0.478 |
| Feeling down, depressed or hopeless: nearly every day or more than half the days | 17 (14.1–19.7) | 704 | 18 (15.1–20.8) | 719 | 0.606 |

Note: P-values from chi-square tests.
To answer RQ3, we compare the incentive costs of the two approaches by multiplying the number of respondents in each experimental group by the incentive they were offered (accounting also for the “early bird” incentive). The targeted approach was 13% less expensive than the non-targeted approach (Tab. 5). As the number of interviews varies across the two designs, we calculate the average incentive costs per interview, which is approximately £27.5 in the non-targeted design and approximately £24.6 in the targeted design. Hence, in terms of incentive payment the targeted approach appears to be cost-effective. This calculation however does not include the cost of implementing the more complex targeted incentive design (e.g. dispatching incentives of different levels, tailoring survey invitation materials, interviewer training).
Table 5 Costs and fieldwork effort comparisons by targeted versus non-targeted experimental group

| | Targeted | Non-targeted |
|---|---|---|
| Overall incentive budget (% difference from non-targeted) | −12.5 | – |
| Incentive cost per interview | £24.6 | £27.5 |
| Average calls per case (face-to-face) | 2.5 | 2.5 |
| Average calls per case (telephone) | 1.3 | 1.4 |
| Share of web interviews | 88% | 88% |
Furthermore, with respect to RQ4, no differences are found in the number of calls (face-to-face or telephone) required to reach the final outcome in the targeted versus non-targeted group, as confirmed by t-tests. This result suggests little difference in fieldwork effort across the two designs.
Finally, answering RQ5, we observe that the share of web interviews is similar across the two experimental groups: 88% of respondents in the targeted design participated online compared with 88% in the non-targeted design (P-value = 0.894) (Tab. 5).
We use novel experimental data from the Next Steps cohort study to test whether offering higher value incentives to prior wave non-respondents (and lower value incentives to prior wave respondents) leads to overall higher response rates, compared to offering the same monetary incentive to all sample members. Contrary to our hypothesis, we find that the targeted incentive approach does not lead to significantly higher response rates. Nor do we see a clear indication that the targeted design improves sample representativeness, or that the achieved samples differ between the targeted and non-targeted groups on dimensions of interest for research purposes. Incentive costs were lower in the targeted design, but there was no significant impact on fieldwork effort (i.e. calls per case) or on the share of web interviews.
The results appear to contradict the theoretical literature (whether leverage-saliency theory or social exchange theory), which suggests that incentives will appeal differently to different population subgroups or to respondents with different response propensities.
The decision to vary the incentive amounts by £10 across the two groups was driven by the goal of ensuring that prior wave respondents in the targeted group were not offered an amount which was too low to motivate survey response (i.e. to avoid negative impacts on their participation) while also managing the expectations of prior wave non-respondents on the incentive amount to be offered in future waves. In addition, the study team had concerns around the ethics of offering vastly different sums to different sub-groups for completing the same survey. It is possible that a larger difference between incentives offered to prior wave respondents and non-respondents would have led to different conclusions on the efficacy of a targeted incentive approach. Similarly, the same difference in incentives across the two groups, but at a different level might also have led to different conclusions.
Our expectation that reducing the incentive value offered to prior wave respondents would not lead to a reduction in response was specific to the amount we offered. If the amount offered had been drastically reduced we would expect to observe lower participation rates, at least among some prior wave survey respondents. Similarly, if the amount offered to prior wave non-respondents had been significantly increased, this may well have increased participation amongst this group.
Recent findings lend support to this line of reasoning. Lipps et al. (2022) showed that among sample members classified as having a “high response propensity” a 10 CHF conditional incentive led to only a marginal increase in response rates compared to entry into a lottery. In contrast, at the subsequent wave, “low response propensity” sample members responded significantly more positively to a 20 CHF conditional incentive compared to the same lottery (Lipps et al., 2023). This suggests that beyond the general tendency for low-response propensity individuals to be more sensitive to incentives, there may also be a threshold effect—a minimum level of monetary incentive required to effectively motivate participation. Future research may further experiment with incentive amounts with the aim of finding the optimal allocation for response maximisation (and minimisation of non-response bias) under specific budget constraints.
A different approach to targeting may also have led to different results. In this case targeting was based on participation in the prior wave which happened around 7 years before the Age 32 Survey. Although other studies (e.g. Lynn et al., 2024) have adopted prior wave participation as a proxy of co-operation of sample members, there are many other characteristics that could have been used as the basis for targeting.
It must also be acknowledged that non-contact is a significant contributor to non-response, particularly among prior wave non-respondents. Where letters and emails were not received, as contact details held by the study were out of date and interviewers were unable to trace participants to a new address, the offer of the higher incentive would never have been seen. As a result, this subgroup could not have been influenced by the incentive.
Prior experimental research on targeted incentives has typically found a positive effect. We do not view our results as contradicting this prior evidence, but rather as not fully comparable with it. Prior experimental research has compared offering versus not offering monetary incentives to a target group. Hence, the comparison is between a design in which a lower overall budget is allocated to incentives and a targeted design in which a higher budget is allocated to incentives. Conversely, we explore the impact on survey participation of increasing incentives for some subgroups of sample members while decreasing incentives for others. Furthermore, none of these experimental studies focused on the UK context. As such, we consider this work a novel contribution to the literature.
The evidence presented here should not discourage further attempts at targeting incentives: further experimentation is needed to identify whether there are other population subgroups that might be more responsive to targeted incentive approaches (or whether targeting based on more precise measures of response propensity might prove more successful); in this respect, longitudinal surveys offer an ideal setting for testing differential effectiveness of incentives due to the availability of information on panel members from prior survey waves. Further research may also consider how to ideally allocate the overall budget for incentives across the targeted and non-targeted designs.
This research has some limitations. First, we acknowledge that the implementation of this experiment in a specific survey wave of a cohort study does not allow us to evaluate the effectiveness of incentives for maximising response among other age groups/cohorts, nor across multiple countries, nor at different levels of maturity of the panel. Furthermore, survey designs which include incentives targeted to non-cooperative sample members are applicable only to longitudinal studies, where information about participation at prior survey waves is available.
Finally, besides the effectiveness of varying incentive levels by population subgroup, survey practitioners need to consider the ethical aspects of this design, in particular whether offering different levels of monetary incentives to respondents who complete the same survey violates expectations of equity. Nicolaas and colleagues (2019) argue that the use of targeted incentives appears to be fair if conceptualised within the motivations that persuade sample members to take part in surveys. Survey participation may be driven by altruistic motives but also by individualistic reasons, such as self-interest (e.g. the importance of the study for the respondent or those close to them) or survey-specific factors (a sense of obligation towards the survey sponsor, the relevance of the study). In this context, it does not seem unfair to compensate hard-to-persuade respondents who may not attach similar value to the motivations which persuade others, or for whom participation may come at greater cost, as equity does not necessarily imply equality of treatment.
Another concern is the potential detrimental effect on response or attrition that may arise if respondents become aware of being offered lower incentives than other sample members. While awareness of unequal treatment is unlikely to occur in general population studies, it may arise when survey sample members belong to the same institutions (e.g. school, workplace, etc.). In this respect, the empirical evidence is reassuring: Singer, Groves and Corning (1999) note that while most respondents perceive targeted incentives as unfair, this perception does not affect subsequent survey participation.
AAPOR (2023). Standard definitions: Final dispositions of case codes and outcome rates for surveys. The American Association for Public Opinion Research. https://aapor.org/wp-content/uploads/2024/03/Standards-Definitions-10th-edition.pdf. Accessed 8 Aug 2024.
Abreu, D. A., & Winters, F. (1999). Using monetary incentives to reduce attrition in the survey of income and program participation. Proceedings of the American Statistical Association, Survey Research Methods Session.
Anders, J., Calderwood, L., Adali, T., Yarde, J., & Taylor, L. (2023). Do targeted higher-value conditional incentives improve survey response and representation in longitudinal studies? Evidence from the COVID Social Mobility and Opportunities Study (COSMO) in England. Paper presented at the 10th Conference of the European Survey Research Association, Milan.
Bailey, J., Breeden, J., Jessop, C., & Wood, M. (2017). Next Steps age 25 survey: Technical report. NatCen Social Research. https://doc.ukdataservice.ac.uk/doc/5545/mrdoc/pdf/age_25_technical_report.pdf
Blohm, M., & Koch, A. (2013). Respondent incentives in a national face-to-face survey: Effects on outcome rates, sample composition and fieldwork efforts. Methods, Data, Analyses, 7(1), 89–122. https://doi.org/10.12758/mda.2013.004
Booker, C. L., Harding, S., & Benzeval, M. (2011). A systematic review of the effect of retention methods in population-based cohort studies. BMC Public Health, 11, 249. https://doi.org/10.1186/1471-2458-11-249
Booth, C., Wong, E., Brown, M., & Fitzsimons, E. (2024). Evaluating the effect of incentives on web survey response rates in the UK Millennium Cohort Study. Survey Research Methods, 18(1), 47–58. https://doi.org/10.18148/srm/2024.v18i1.8210
Börsch-Supan, A., Krieger, U., & Schröder, M. (2013). Respondent incentives, interviewer training and survey participation. SHARE Working Paper Series 12-2013. Munich: Survey of Health, Ageing and Retirement in Europe—European Research Infrastructure Consortium (SHARE-ERIC). https://share-eric.eu/fileadmin/user_upload/SHARE_Working_Paper/WP_Series_12_2013.pdf
Bureau of Labor Statistics (undated). National Longitudinal Survey of Youth 1997: Index to the NLSY97 cohort.
Cabrera-Álvarez, P., & Lynn, P. (2025). Benefits of increasing the value of respondent incentives during the course of a longitudinal mixed-mode survey. International Journal of Social Research Methodology. https://doi.org/10.1080/13645579.2024.2443630
Calderwood, L., Peycheva, D., Henderson, M., Silverwood, R., Mostafa, T., & Rihal, S. (2021). Next Steps: Sweep 8—age 25 user guide (3rd edn.). London: UCL Centre for Longitudinal Studies.
Calderwood, L., Peycheva, D., Wong, E., & Silverwood, R. (2023). Effects of a time-limited push-to-web incentive in a mixed-mode longitudinal study of young adults. Survey Research Methods, 17, 147–157. https://doi.org/10.18148/srm/2023.v17i2.7980
Carpenter, H. (2021). UK Household Longitudinal Study wave 11 technical report. https://www.understandingsociety.ac.uk/sites/default/files/downloads/documentation/mainstage/technical-reports/wave-11-technical-report.pdf
Department for Education (2011). LSYPE user guide to the datasets: Wave 1 to wave 7. https://doc.ukdataservice.ac.uk/doc/5545/mrdoc/pdf/lsype_user_guide_wave_1_to_wave_7.pdf
Felderer, B., Müller, G., Kreuter, F., & Winter, J. (2018). The effect of differential incentives on attrition bias: Evidence from the PASS wave 3 incentive experiment. Field Methods, 30(1), 56–69. https://doi.org/10.1177/1525822X17726206
Fomby, P., Sastry, N., & McGonagle, K. A. (2017). Effectiveness of a time-limited incentive on participation by hard-to-reach respondents in a panel study. Field Methods, 29(3), 238–251. https://doi.org/10.1177/1525822X16670625
Glendinning, R., Young, V., & Bogdan, A. (2018). Skills and employment survey 2017: Technical report. GfK UK Social Research. https://www.cardiff.ac.uk/research/explore/find-a-project/view/626669-skills-and-employment-survey-2017
Groves, R. M., Singer, E., & Corning, A. (2000). Leverage-saliency theory of survey participation: Description and an illustration. Public Opinion Quarterly, 64(3), 299–308. http://www.jstor.org/stable/3078721
Groves, R. M., Fowler, F. J. Jr, Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey methodology (2nd edn.). Hoboken: John Wiley & Sons.
Ipsos (2024). Next Steps—sweep 9 survey technical report. https://cls.ucl.ac.uk/cls-studies/next-steps/next-steps-age-32-sweep/
Ipsos Mori (2019). Omnibus survey of pupils and their parents or carers: Wave 6 research report. https://www.gov.uk/government/publications/pupils-and-their-parents-or-carers-omnibus-wave-1-survey
Jäckle, A., & Lynn, P. (2008). Respondent incentives in a multi-mode panel survey: Cumulative effects on nonresponse and bias. Survey Methodology, 34(1), 105–117.
Kay, W. R., Boggess, S., Selvavel, K., & McMahon, M. F. (2001). The use of targeted incentives to reluctant respondents on response rate and data quality. Proceedings of the Survey Research Methods Section, American Statistical Association.
Knibbs, S., Lindley, L., Swordy, D., Stevens, J., & Clemens, S. (2018). Omnibus survey of pupils and their parents/carers: Research report wave 4. https://assets.publishing.service.gov.uk/media/5b8fdfdded915d1ed1494d18/Omnibus_survey_of_pupils_and_their_parents_or_carers-wave_4.pdf
Laurie, H. (2007). The effect of increasing financial incentives in a panel survey: An experiment on the British Household Panel Survey, wave 14. ISER Working Paper Series 2007-05. Institute for Social and Economic Research.
LeClere, F., Plumme, S., Vanicek, J., Amaya, A., & Carris, K. (2012). Household early bird incentives: Leveraging family influence to improve household response rates. American Statistical Association Joint Statistical Meetings, Section on Survey Research, San Diego.
Lindley, L., Clemens, S., Knibbs, S., Stevens, J., & Bagge, L. (2019). Omnibus survey of pupils and their parents or carers: Wave 5 research report. Department for Education. https://www.gov.uk/government/publications/pupils-and-their-parents-or-carers-omnibus-wave-1-survey
Lipps, O. (2010). Effects of different incentives on attrition and fieldwork effort in telephone household panel surveys. Survey Research Methods, 4(2), 81–90. https://doi.org/10.18148/srm/2010.v4i2.3538
Lipps, O., Jaquet, J., Lauener, L., Tresch, A., & Pekari, N. (2022). Cost efficiency of incentives in mature probability-based online panels. Survey Methods: Insights from the Field. https://doi.org/10.13094/SMIF-2022-00007
Lipps, O., Felder, M., Lauener, L., Meisser, A., Pekari, N., Rennwald, L., & Tresch, A. (2023). Targeting incentives in mature probability-based online panels. Survey Methods: Insights from the Field. https://doi.org/10.13094/SMIF-2023-00010
Lugtig, P. (2014). Panel attrition: Separating stayers, fast attriters, gradual attriters, and lurkers. Sociological Methods & Research, 43(4), 699–723. https://doi.org/10.1177/0049124113520305
Lynn, P. (2017). From standardised to targeted survey procedures for tackling non-response and attrition. Survey Research Methods, 11(1), 93–103. https://doi.org/10.18148/srm/2017.v11i1.6734
Lynn, P., Bianchi, A., & Gaia, A. (2024). The impact of day of mailing on web survey response rate and response speed. Social Science Computer Review, 42(1), 352–368. https://doi.org/10.1177/089443932311738
Mack, S., Huggins, V., Keathley, D., & Sundukchi, M. (1998). Do monetary incentives improve response rates in the survey of income and program participation? Proceedings of the Section on Survey Methodology, American Statistical Association.
Martin, E., Abreu, D., & Winters, F. (2001). Money and motive: Effects of incentives on panel attrition in the survey of income and program participation. Journal of Official Statistics, 17(2), 267.
Nicolaas, G., Corteen, E., & Davies, B. (2019). The use of incentives to recruit and retain hard-to-get populations in longitudinal studies. NatCen Social Research. https://www.ukri.org/wp-content/uploads/2020/06/ESRC-220311-NatCen-UseOfIncentivesRecruitRetainHardToGetPopulations-200611.pdf
Pforr, K., Blohm, M., Blom, A. G., Erdel, B., Felderer, B., Fräßdorf, M., Hajek, K., Helmschrott, S., Kleinert, C., Koch, A., Krieger, U., Kroh, M., Martin, S., Saßenroth, D., Schmiedeberg, C., Trüdinger, E.-M., & Rammstedt, B. (2015). Are incentive effects on response rates and nonresponse bias in large-scale, face-to-face surveys generalizable to Germany? Evidence from ten experiments. Public Opinion Quarterly, 79(3), 740–768. https://doi.org/10.1093/poq/nfv014
Philipson, T. (1997). Data markets and the production of surveys. The Review of Economic Studies, 64(1), 47–72. https://doi.org/10.2307/2971740
Rodgers, W. (2002). Size of incentive effects in a longitudinal study. Proceedings of the Survey Research Methods Section of the American Statistical Association.
Singer, E., Groves, R. M., & Corning, A. D. (1999). Differential incentives: Beliefs about practices, perceptions of equity, and effects on survey participation. Public Opinion Quarterly, 63(2), 251–260. https://doi.org/10.1086/297714
Singer, E., Van Hoewyk, J., Gebler, N., Raghunathan, T., & McGonagle, K. (1999). The effect of incentives on response rates in interviewer-mediated surveys. Journal of Official Statistics, 15(2), 217–230.
Singer, E., Van Hoewyk, J., & Maher, M. P. (2000). Experiments with incentives in telephone surveys. Public Opinion Quarterly, 64(2), 171–188. https://doi.org/10.1086/317761
Suzer-Gurtekin, Z. T., Elkasabi, M., Liu, M., Lepkowski, J. M., Curtin, R., & McBee, R. (2016). Effect of a pre-paid incentive on response rates to an Address-Based Sampling (ABS) web-mail survey. Survey Practice, 9(4), 1–7. https://doi.org/10.29115/sp-2016-0025
University College London, UCL Institute of Education, & Centre for Longitudinal Studies (2023). Next Steps: Sweeps 1–8, 2004–2016. SN: 5545, 16th edition. https://doi.org/10.5255/UKDA-SN-5545-8
Zagorsky, J. L., & Rhoton, P. (2008). The effects of promised monetary incentives on attrition in a long-term panel survey. Public Opinion Quarterly, 72(3), 502–513. https://doi.org/10.1093/poq/nfn025
Zukerberg, A., Hall, D., & Henly, M. (2007). Money can buy me love: Experiments to increase response through the use of monetary incentives. Washington, DC: US Census Bureau.