The online version of this article (https://doi.org/10.18148/srm/2025.v19i2.8178) contains supplementary information.
The accurate measurement of household consumption expenditure is important for official statistics as well as economic research across a wide range of areas (Browning, Crossley, & Winter, 2014). National budget surveys, such as the Consumer Expenditure Survey in the United States or the Living Costs and Food Survey in the United Kingdom, are the traditional method of collecting data on household expenditure, typically employing a combination of recall surveys and expenditure diaries (Browning, Crossley, & Winter, 2014; Silberstein & Scott, 1991). In recall surveys, which are typically used to capture relatively infrequent expenses, respondents are asked to report how much they spent on different expenditure categories over a specified period. Diary-based methods complement the recall approach and are designed to collect data about more frequent expenses: respondents are asked to keep a diary over a specified period in which they can record detailed information about every single expense close in time to the actual purchase event.
Both types of expenditure surveys are burdensome, often asking about a large number of expenditures over an extended period of time. In recent years, concerns have grown that the quality of expenditure survey data is declining (Browning, Crossley, & Winter, 2014). In line with the general trend of decreasing response rates in surveys (Luiten, Hox, & de Leeuw, 2020; Williams & Brick, 2018), the response rates in national budget surveys were found to decline across several countries (Barrett, Levell, & Milligan, 2015). Another concern is measurement error, which has been reported for both recall surveys and expenditure diaries (Browning, Crossley, & Winter, 2014; National Research Council, 2013). In recall surveys, measurement error arises if respondents forget to report expenses, report incorrect amounts, or report expenses that occurred prior to the specified recall period (also known as telescoping) (Fricker et al., 2015; Geisen et al., 2011; Maki & Garner, 2010; Neter & Waksberg, 1964). Diary-based methods aim to facilitate respondents’ recall of expenditures by shortening the recall period. In practice, however, a large proportion of respondents complete their diary at the end of the study period, resulting in recall errors similar to those in retrospective survey questions, although the recall period in surveys is typically longer than that for diaries (Browning, Crossley, & Winter, 2014; Silberstein & Scott, 1991). An additional problem with diary-based methods is that the level of reporting tends to decline over the course of the study period, a phenomenon known as diary fatigue. Previous research on two-week expenditure diaries, for example, has shown that the level of reported expenditure is significantly lower in the second week than in the first week, and lower on later days than on earlier days of either week (Ahmed, Brzozowski, & Crossley, 2006; Brzozowski, Crossley, & Winter, 2017; McWhinney & Champion, 1974; Silberstein & Scott, 1991; Stephens, 2003; Turner, 1961).
This paper reports on a novel approach using smartphone technology to reduce the burden of reporting on expenditures and improve the quality of reporting in a probability household panel of the general population in Great Britain. Rather than reporting the details of each expenditure, respondents were asked to download an app on their smartphone to scan receipts, thereby limiting direct data entry to non-receipted payments. Respondents were asked to use the app to report their purchases of goods and services over the period of one month. The app directed respondents to use the built-in camera of their mobile device to photograph all paper receipts that they received at a point of sale. In a separate diary section of the app, they were also able to manually enter other expenditures, including online payments, regular payments made by standing order or direct debit, non-receipted payments outside structured shopping environments, for example at market stalls, or payments for which they did not retain or lost the paper receipt before scanning. The data collection approach aims to combine the advantages of existing forms of expenditure measurement. The receipt-scanning component allows capturing expenditure information directly from sales receipts and, thus, collecting detailed data about each purchase as well as reducing burden and measurement error due to recall. The diary component aims to ensure that non-receipted purchases are also covered. Since most study participants are likely to carry around their smartphone throughout the day (Keusch, Wenz, & Conrad, 2022), implementing this data collection procedure within a mobile app, rather than relying on external scanners or using paper diaries, has the advantage that they have access to the receipt scanning app at all times. Obtaining paper receipts at a point of sale might serve as a physical cue to record the spending activity, and respondents are able to scan the receipts or manually report their expenses shortly after the purchase event. Scanning whole receipts enables collecting information about multiple expenditures in one action, rather than requiring respondents to enter each item purchased separately. In addition, regular notifications can be implemented within the app that remind and encourage respondents to report their expenditures every day rather than at the end of the study period. This receipt scanning study is, to the best of our knowledge, the first to be implemented on a sample that is representative of the general population.
We have previously reported on participation in the app study and potential non-participation bias (Jäckle et al., 2019). In this paper, we focus on the quality of data provided by those who participated in the app study. We compare the quality of expenditure data with benchmark data from the Living Costs and Food Survey, the national budget survey in the United Kingdom. While we cannot assess the quality of either data source at the participant level, and the benchmark data are not free from survey errors, this approach enables an initial investigation of the extent to which the app-based estimates of consumer expenditure mirror, in aggregate, those from a high-quality survey. We address the following research questions:
Major research programmes have been initiated to improve the measurement of consumer expenditure in budget surveys (Browning, Crossley, & Winter, 2014). Examples include the Gemini Project initiated by the U.S. Bureau of Labor Statistics in 2009 to redesign the Consumer Expenditure Survey and the Conference on Research in Income and Wealth sponsored by the National Bureau of Economic Research in 2011 (Carroll, Crossley, & Sabelhaus, 2015). At the same time, the collection and use of expenditure data from other sources has rapidly expanded (Browning, Crossley, & Winter, 2014; Jäckle et al., 2021). Measures of consumer expenditure are, for example, created based on process-generated data from online financial aggregators that link individuals’ financial accounts and provide summaries of their income and expenditures (e.g., Angrisani, Kapteyn, & Samek, 2018; Baker, 2018; Gelman et al., 2014; Kuchler & Pagel, 2021). Similarly, transactional data from store loyalty cards scanned at a point of sale (e.g., Andreyeva et al., 2012; Felgate et al., 2012; Newing et al., 2014; Panzone et al., 2016; Tin et al., 2007) or credit and debit cards (e.g., Agarwal et al., 2007; Gross & Souleles, 2002) provide new forms of expenditure measurement. Home scanner data collected by market research organisations are also being used for research on consumer expenditure: consumers are asked to scan the barcodes of all items they purchased with a barcode scanner installed in their home and the barcode data are then linked to other data such as prices and nutritional information (e.g., Leicester, 2015; Leicester & Oldfield, 2009; Aguiar & Hurst, 2007; Broda et al., 2009; Griffith et al., 2009; Griffith & O’Connell, 2009; Lusk & Brooks, 2011; Zhen et al., 2009). The advantage of these data sources over existing forms of expenditure measurement is that they do not rely on the study participants’ ability to recall and report information and are, thus, not susceptible to the associated recall errors (Browning, Crossley, & Winter, 2014; Jäckle et al., 2021). In addition, they can collect expenditure data at a much more detailed level and a higher frequency, which reduces burden on respondents and allows measuring changes in expenditure patterns over time. However, there are also a number of limitations to these data, which mean that they are not necessarily suitable as substitutes for consumer expenditure surveys. Users of consumer expenditure data vary in their requirements: some require full COICOP (Classification of Individual Consumption by Purpose; United Nations, 2000) expenditure classifications, while for others less detailed classifications are sufficient; some require data that identify the spending of individual households, others only need expenditure for the consumer sector as a whole; some require the geolocation of the consumer or of the point of sale where the purchase was made; and some require additional information about the characteristics of the household, such as its composition or income. While bank transaction, credit card, and debit card data could cover all electronic payments made by consumers in a country if data from all banks/card issuers can be obtained, the data only include the total value of a purchase, and no information on individual items. The Merchant Category Codes identify the type of retailer where a card payment was made, and can be coded to high-level COICOP classifications, which may be sufficient for some data users (e.g., Alatrista-Salas et al., 2021; Hoseini & Valizadeh, 2021).
Payments by direct debits and standing orders are included in bank transactions data, but the string that identifies the transaction is not necessarily codable. In addition, the data do not identify consumers, or include information about their characteristics, and do not include cash purchases. Studies based on financial aggregator data have the advantage that the accounts of a consumer are linked across banks; however, they mostly rely on convenience samples that are not representative of the general population and have limited sociodemographic information (see Angrisani, Kapteyn, & Samek, 2018 for an exception). Store scanner data include the product barcode or Universal Product Code (UPC), quantity and price of goods sold, but do not identify individual consumers. Till receipt data include all information on the shopping receipts of a retailer, but again do not identify consumers. Loyalty card data are linked to an individual consumer and can include information about the account holder from when they signed up for the card but provide incomplete data on expenditure. Home scanner data can be expensive to purchase, rely on convenience samples, and have been shown to record lower levels of spending in comparison with consumer expenditure surveys in both the United Kingdom and the United States (Leicester, 2015; Zhen et al., 2009).
Receipt scanning apps have become popular methods of data collection in the market research industry, with the primary aim of studying shopping behaviour (Jäckle et al., 2021). Examples of such apps include ReceiptPal, Receipt Hog, Ibotta, and Worldpanel Plus. In academic research, however, only a small body of literature has used receipts for data collection (e.g., Cullen et al., 2007; DeWalt et al., 1990; French et al., 2009; Martin et al., 2006; Rankin et al., 1998; Ransley et al., 2001; Smith et al., 2013a; Smith et al., 2013b; Tang et al., 2016; Weerts & Amoran, 2011). In these studies, rather than using receipt scanning apps, respondents have been asked to keep and annotate paper receipts and return them via postal mail. A major limitation of existing studies is that they are typically based on small-scale volunteer samples and conducted in small geographic areas, such as food shoppers recruited at local supermarkets.
Prior research on the quality of data generated from receipt scanning studies is even more limited and has mainly focused on representation error (Jäckle et al., 2019; Ransley et al., 2001; Smith et al., 2013a). For example, Jäckle et al. (2019), relying on the same receipt scanning app as the present study, examined potential biases in which types of sample members have access to mobile devices (coverage bias) and in which types of sample members participate in the app study (participation bias). They found evidence for extensive coverage bias: mobile device users differed from non-users in terms of sociodemographic characteristics and financial behaviours. Conditional on coverage, comparatively less participation bias was found, but the differences between study participants and non-participants mirrored the coverage bias patterns.
In previous work, Lessof (2022, Chapter 4) studied indicators of process quality in the receipt scanning app data. Participants used the app, on average, on 21.7 study days, with approximately one third of participants using the app on at least 28 days. They reported an average of 0.89 spending events per day, with almost two thirds of spending events recorded, on average, by receipt photographs. The receipts were photographed, on average, 7.7 hours after the spending event; 3% of participants photographed their receipts within less than an hour, and the vast majority (95%) did so within 24 hours. Each of these process quality indicators declined over the course of the study period. For each additional study day, for example, the predicted probability of using the app decreased by 0.6 percentage points.
Our study builds on this previous work by focusing on the outcome quality of the data provided by respondents who used the receipt scanning app, by comparing the reported expenditure with benchmark data from the Living Costs and Food Survey.
The Spending Study was implemented on the Understanding Society Innovation Panel, a nationally representative household panel in Great Britain (University of Essex, Institute for Social and Economic Research, 2022). The Innovation Panel is based on a stratified, clustered sample of households in England, Scotland, and Wales (Lynn, 2009). The original sample, first interviewed in 2008 (wave 1), consists of approximately 1500 households, supplemented with refreshment samples of approximately 500 households each in waves 4, 7, 10, 11, and 14. The interviews are conducted annually with all household members aged 16+ and focus on individuals’ socio-economic, health, housing, and family situation. The Spending Study was implemented on the Innovation Panel sample between waves 9 and 10. Fieldwork for wave 9 ran from May to September 2016. Sample households were randomly allocated to survey mode: one third of households were allocated to face-to-face interviews and the other two thirds to a sequential mixed-mode (web, face-to-face) design. In the final phase of fieldwork, nonrespondents in both experimental allocations were given the option to complete the survey online or by telephone. The household response rate was 85% (AAPOR RR5; The American Association for Public Opinion Research, 2016), with 85% of individuals within those households completing the individual interviews (Institute for Social and Economic Research, 2021). More details about the Innovation Panel survey design and fieldwork are available in the online documentation.
In October 2016, all sample members aged 16+ in households where at least one adult had completed the wave 9 interview were invited to the Spending Study (Jäckle et al., 2018). They received an invitation letter by post and, if their email address was known, by email. The invitation letter informed sample members about the study and contained a URL to an online registration survey, at the end of which they were given login details and instructions on how to download the receipt scanning app on their smartphone or tablet. The app was available for iOS and Android devices and could be downloaded from the Apple App Store or the Google Play Store. Sample members who did not complete the registration survey were followed up with reminder emails sent twice per week for three weeks, followed by a final reminder letter sent by post after four weeks. Participants were asked to use the app every day for one month to report their purchases of goods and services and received the following incentives for their study participation: £6 or £2 for downloading the app (with households randomly allocated to either of the two incentive conditions), £0.50 for every day on which they used the app, a bonus of £10 if they used the app every day for one month, and £3 for completing a debrief survey.
There were three sections in the receipt scanning app: 1) Scan receipts, where respondents were asked to take photos of their receipts; 2) Direct entry, where respondents were asked to report purchases for which they did not have any receipts, by indicating the amount and selecting the expenditure categories included in that purchase; and 3) Report no purchases today, where respondents were asked to confirm that they had not purchased any goods or services on that day. See Figures A1–A4 in the Appendix for screenshots of the app. The app sent a push notification every day at 5 pm to remind respondents to report their expenses or indicate that they did not have any expenses on that day.
The Spending Study fieldwork was conducted between October and December 2016. Of 2112 invited sample members, 13% used the app at least once (n = 270) and among those, 82% used the app for at least 29 days (Jäckle et al., 2019). In this paper, we use data from 262 app users who reported at least one purchase, either by receipt scanning or direct entry. Due to the different data collection periods of the Spending Study (one month) and the benchmark data (two weeks), we restrict the Spending Study data to expenditures reported by respondents in their first two weeks of study participation.
The information contained in the receipt images was manually transcribed by Kantar Worldpanel sub-contractors. The data captured include receipt-level information, such as the store name, the purchase date, and the total purchase amount, as well as item-level information, such as the description and price of each item purchased. The scanned receipts also contain information on promotions and price reductions, but these are not structured in a consistent way across receipts and cannot always be attributed to the relevant items: while some receipts record the reductions in the line below the reduced item, others list all reductions at the bottom of the receipt. In the analyses reported in this paper, reductions were, therefore, subtracted from the total purchase amounts but not subtracted from the category-level expenditures.
The items transcribed from the scanned receipts were coded into expenditure categories by matching the item description with the Volume D: Expenditure codes 2015–16 dataset provided by the Living Costs and Food Survey (Office for National Statistics, 2017). This dataset contains a comprehensive list of consumer items classified according to the Classification of Individual Consumption by Purpose (COICOP; United Nations, 2000). The coding was done by first collapsing COICOP codes into ten expenditure categories, following suggestions by d’Ardenne and Blake (2012): 1) Food and groceries, 2) Clothes and footwear, 3) Transport, 4) Child costs, 5) Home improvements and household goods, 6) Health, 7) Socialising and hobbies, 8) Other goods and services, 9) Holidays, and 10) Gifts. The items on the scanned receipts were then assigned to these categories by using exact string matching. The classifications were checked by manual coders to evaluate the quality of the matching procedure and recode items that were not successfully matched. Of the 13,366 items recorded in the scanned receipts in the respondents’ first two weeks of study participation, 74% were matched correctly, 21% were matched incorrectly or not at all and were coded manually, 3% could not be assigned to a category by either the matching procedure or the manual coders, and 2% were on a receipt image that was not readable. More details about the matching procedure can be found in Read (2023).
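As an illustration of the exact string matching step, a minimal Stata sketch could look as follows; the dataset and variable names are hypothetical and introduced only for exposition.

```stata
* Match transcribed receipt items to the Volume D expenditure-code lookup
* by exact string matching on the item description (names are hypothetical)
use receipt_items, clear
merge m:1 item_description using volume_d_codes, keep(master match) generate(match_flag)
* Items with match_flag == 1 (master only) remain unmatched and are passed to manual coders
```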
When reporting expenses through direct entry, respondents were asked to indicate the total amount of their purchase and select one or more expenditure categories included in that purchase. The expenditure categories were the same ten categories used to code the scanned receipts. For purchases where respondents selected a single expenditure category (98% of entries), the reported amount was assigned to the respective category. For purchases where respondents selected multiple categories (2% of entries), the reported amount was divided and assigned to the different categories. Rather than simply dividing the reported amount by the number of selected categories, the ratio of category-level expenditures was determined based on data from the scanned receipts and the direct entries with a single expenditure category selected. The reported amount was then distributed according to that ratio.
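One way to formalise the described allocation, in notation introduced here for exposition: for a direct entry of amount $a$ with selected category set $C$, the amount assigned to category $k \in C$ is

$$a_k = a \cdot \frac{r_k}{\sum_{j \in C} r_j},$$

where $r_k$ denotes the expenditure share of category $k$ observed in the scanned receipts and the single-category direct entries. For example, if a £20 entry covers two categories whose observed shares stand in a 3:1 ratio, £15 is assigned to the first category and £5 to the second.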
Descriptive statistics of the reported expenditure are shown in Table 1 for the analysis period (week 1 and 2) and in Appendix Table A1 for the full data collection period. In their first two weeks of study participation, app users submitted 2092 scanned receipts (64% of all submissions) and made 1200 direct entries (37% of all submissions).
Table 1 Expenditure reports in the Understanding Society Spending Study in week 1 and 2.

| | Scanned receipt | | Direct entry | |
| --- | --- | --- | --- | --- |
| | n | % | n | % |
| Total submissions (n = 3292) | 2092 | 64 | 1200 | 37 |
| Total items | 13,366 | 100 | 1200 | 100 |
| Food and groceries | 9425 | 71 | 507 | 42 |
| Clothes and footwear | 390 | 3 | 57 | 5 |
| Transport | 194 | 2 | 128 | 11 |
| Child costs | 74 | 1 | 22 | 2 |
| Home improvements and household goods | 914 | 7 | 43 | 4 |
| Health | 119 | 1 | 17 | 1 |
| Socialising and hobbies | 631 | 5 | 217 | 18 |
| Other goods and services | 1000 | 8 | 141 | 12 |
| Holidays | 0 | 0 | 7 | 1 |
| Gifts | 11 | 0 | 78 | 7 |
| Item cannot be assigned to category | 384 | 3 | – | – |
| Item not readable on receipt | 224 | 2 | – | – |

Note: The scanned receipts contain the following additional items: promotions or price reductions (n = 920), value-added tax (n = 31), and other items (n = 13). The percentages for direct entry items do not add up to 100% since respondents were able to select multiple expenditure categories per purchase.
The scanned receipts contain a total of 13,366 items, which are distributed very unevenly across expenditure categories. The large majority of items (71%) are Food and groceries, followed by Other goods and services (8%), and Home improvements and household goods (7%). The other categories make up a smaller share of less than 5% each. The 1200 direct entries, in turn, are more evenly distributed across expenditure categories. While Food and groceries also make up the largest share (42%), other frequently reported expenditure categories include Socialising and hobbies (18%), Other goods and services (12%), and Transport (11%). Since expenditures on Holidays and Gifts were rarely reported in the sample, we exclude these categories from the analysis.
We use benchmark data from the Living Costs and Food Survey (LCF) (Office for National Statistics. Department for Environment, Food and Rural Affairs, 2018), the national budget survey in the United Kingdom. The survey provides information on spending patterns for the Retail Price Index and is used for National and Regional Accounts to provide estimates of household consumption expenditure (Bulman, Davies, & Carrel, 2017). The LCF is based on a stratified, clustered sample of households in England, Scotland and Wales, and a systematic random sample of private addresses in Northern Ireland. The study has two main components: a questionnaire and a spending diary.
The questionnaire is administered in a face-to-face interview and consists of a household and an individual section. The household section is completed by the household reference person and covers information about the sociodemographic characteristics of household members and regular items of household expenditure. These expenditures include regular payments, such as mortgage or rent payments, utility bills, insurances, education fees, subscriptions of magazines and newspapers, and gym memberships, as well as large but infrequently purchased goods and services, such as vehicle purchases, vehicle service, season tickets, package holidays, and furniture and other home improvements. The individual section is completed by each adult within the household and collects information about income.
All adult household members aged 16 and older are then invited to record their daily expenditure in a paper spending diary for two weeks. Children aged between 7 and 15 receive a simplified version of the diary. The diary is organised into ten sections for different types of expenditures. Six sections cover daily expenditures, including 1) Food and drink brought home, 2) Takeaway meals and snacks eaten at home, 3) Meals, snacks and drinks consumed away from the home, 4) Clothing and footwear, 5) Other payments and purchases, and 6) Winnings from lottery, bingo, betting shops, football pools, raffles. The other four sections cover larger but rare purchases that are not likely to occur on a daily basis, including 7) Home-grown and wild food, 8) Holidays expenditure abroad, 9) Business refunds, and 10) Expenditure due to special circumstances. Respondents are asked to record each item and the amount paid in the appropriate section. In the analysis, we only focus on expenditures reported in the spending diary.
As an incentive for study participation, sampled households receive a booklet of stamps with the advance letter. Adults within responding households also receive an incentive of £20 for completing the questionnaire and spending diary.
In this paper, we use data from the 2016–2017 survey which was conducted between April 2016 and March 2017. Among eligible households, 45% completed the interview and returned at least one spending diary (n = 4641 households; Williams, 2019). Of these households, 125 provided a partial response, where one or more adults declined to keep the diary, but the diary of the person who does most of the shopping for the household was present. Missing diaries were imputed by the Office for National Statistics with data from a respondent in another household with matching characteristics of age, employment status and relationship to the household reference person (Bulman, Davies, & Carrel, 2017). In total, spending diary data are available from 9272 respondents aged 16+ in the 2016–2017 survey. To match the time frame and geographic coverage of the Spending Study, we exclude 6876 respondents who were interviewed outside the period October-December 2016 and 183 respondents who were resident in Northern Ireland, i.e., outside Great Britain. We also exclude 4 respondents who did not report any expenditure, which leaves an analysis sample of n = 2209.
We use inverse probability weighting (IPW) to match the sample composition of the Spending Study to the LCF (Horvitz & Thompson, 1952). The weights are computed with a two-step approach: First, we fit a logistic regression to estimate the respondents’ probability of being in the LCF sample as opposed to the Spending Study sample based on a set of respondent characteristics X collected in both samples:
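$$\Pr(S_i = 1 \mid X_i) = \frac{\exp(X_i'\beta)}{1 + \exp(X_i'\beta)},$$

where, in notation introduced here for exposition, $S_i = 1$ if respondent $i$ belongs to the LCF sample and $S_i = 0$ if they belong to the Spending Study sample, and $\beta$ is the vector of coefficients on the respondent characteristics $X_i$.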
The following respondent characteristics are included in the model: age (in years), age-squared, gender (male, female), education (degree, no degree), personal monthly gross income (in £), household size, urbanicity (urban, rural), and the interaction of age and education. In the Spending Study, missing values on the respondent characteristics were imputed for 5 respondents with the values from previous waves and for 3 respondents with the median values. Table A2 in the Appendix shows the results of the logistic regression.
Second, we use the inverse of the estimated probability to calculate the weights:
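$$w_i = \frac{1}{\hat{p}_i},$$

where $w_i$ is the weight and $\hat{p}_i$ the probability estimated in the first step for respondent $i$ (notation introduced here for exposition).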
After examining the weight distribution in both samples, we winsorized outliers that are outside the range of mean weight ±3 × standard deviation of the weights (Valliant & Dever, 2018) to reduce the effect of excessive weights on the variance.
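Expressed as a rule, with $\bar{w}$ and $s_w$ denoting the mean and standard deviation of the weights (notation introduced here for exposition), this winsorization sets

$$w_i^{*} = \min\bigl\{\max\bigl(w_i,\; \bar{w} - 3 s_w\bigr),\; \bar{w} + 3 s_w\bigr\}.$$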
To assess whether the IPW has successfully matched the Spending Study sample composition to the LCF, we calculate the standardised differences in respondent characteristics between the two samples before and after weighting (Austin, 2009). Fig. 1 and Table A3 in the Appendix show that the standardised differences for all respondent characteristics fall between −10% and 10% after weighting; the remaining differences can, thus, be considered a negligible imbalance between the samples (Normand et al., 2001).
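For a continuous respondent characteristic, the standardised difference referred to here is commonly computed as (Austin, 2009)

$$d = \frac{\bar{x}_{\mathrm{SS}} - \bar{x}_{\mathrm{LCF}}}{\sqrt{\bigl(s^{2}_{\mathrm{SS}} + s^{2}_{\mathrm{LCF}}\bigr)/2}} \times 100\%,$$

where $\bar{x}$ and $s^{2}$ denote the mean and variance of the characteristic in the Spending Study (SS) and the LCF samples; the subscript notation is introduced here for exposition.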
To deal with outliers in total expenditure and category-level expenditure, we recode values greater than the 99th percentile to the 99th percentile separately for the LCF and the Spending Study. The data preparation and analysis were conducted in Stata version 15.1 (StataCorp, 2017).
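As an illustration of this top-coding step, a minimal Stata sketch could look as follows; the variable names and sample indicator are hypothetical and introduced only for exposition.

```stata
* Top-code total weekly expenditure at the 99th percentile,
* separately for the LCF (sample == 1) and the Spending Study (sample == 0);
* variable names are hypothetical
forvalues s = 0/1 {
    quietly summarize totalexp if sample == `s', detail
    local p99 = r(p99)
    replace totalexp = `p99' if totalexp > `p99' & sample == `s' & !missing(totalexp)
}
```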
To compare the expenditure recorded in the mobile receipt scanning app with that recorded in the national budget survey we proceed as follows. We compare both the data from the scanned receipts only (SR) and the data from the scanned receipts plus the direct entry (SR+DE) with the national budget survey (LCF). Because there are respondents in both samples who reported zero expenditure for some categories (Table A4 in the Appendix), we calculate weekly expenditure in two ways: (a) including non-zero and zero expenditure, that is, based on all respondents, and (b) including non-zero expenditure only, that is, based on respondents who reported purchases in the relevant category. For each of the four comparisons (SR or SR+DE; non-zero plus zero expenditure or non-zero expenditure only), we test for differences in the overall distribution, mean and median weekly expenditure. We provide the results of all statistical tests in Appendix Tables A5 and A6. The discussion in the text focuses on the distribution and median, unless the test of means leads to different conclusions. Similarly, the discussion focuses on weekly expenditure calculated for all respondents (zero plus non-zero expenditure), unless the findings for non-zero expenditure only show a different pattern.
For total expenditure (Fig. 2), the Spending Study distribution from SR+DE (dashed line) aligns closely with the LCF distribution (solid line). In contrast, the Spending Study distribution based on SR only (dotted line) suggests that the reported amount of expenditure is lower compared to the benchmark. Focusing on median expenditure similarly shows that SR+DE comes closer to the LCF than SR (Appendix Table A5): the estimated total expenditure for the LCF benchmark is £122.80, compared with £101.30 for SR+DE and £70.10 for SR. Both differences are statistically significant (p = 0.048 and p < 0.001, respectively). Looking at mean total expenditure, the estimate from SR+DE does not differ significantly from the LCF estimate (£149.50 vs. £156.50, p = 0.258). A Kolmogorov-Smirnov (KS) test for equality of distributions, however, shows that both Spending Study distributions differ significantly from the benchmark (p = 0.007 for SR+DE; p < 0.001 for SR only).
We next examine the extent to which category-level expenditure aligns between the app and benchmark data. Across all expenditure categories, the percentage of respondents with zero expenditure in the two-week period is highest when only the scanned receipts are considered. When direct entries are included, this proportion is generally lower and closer to the LCF benchmark (see Table A4). For example, 32% reported zero expenditures for Transport in the LCF, compared with 41% for SR+DE and 59% for SR. For two categories (Food and groceries and Clothes and footwear), the percentage with zero expenditure is actually lower for SR+DE than for the LCF.
For most categories, the reported expenditure amount from SR+DE aligns more closely with the benchmark than SR only. For example, from Appendix Table A6, the LCF median for Socialising and hobbies is £27.60, compared with £12.40 for SR+DE and £7.50 for SR. However, the differences in the expenditure distribution and median expenditure between the Spending Study and LCF are still significant for most of the categories (Fig. 3 and 4; Table A5 and Table A6 in the Appendix). Two exceptions are Clothes and footwear and Transport; in both categories, the median expenditure does not differ significantly from the benchmark, either for SR or for SR+DE (see Appendix Table A6).
The pattern is different for the category Food and groceries. Here the median expenditure for SR (£27.50, p = 0.402) is not significantly different from the benchmark (£24.80), whereas SR+DE is significantly higher than the benchmark expenditure (£35.30, p = 0.014). Similarly, the expenditure distribution from SR is not significantly different from the LCF distribution (p = 0.540) whereas that from SR+DE is (p < 0.001).
We next examine the extent to which the total expenditure recorded in the app aligns with the benchmark data for different population subgroups. Our analysis focuses on the respondent characteristics that were used in the IPW, including age (recoded into 16–50, 51–82), gender (male, female), education (no degree, degree), personal monthly gross income (recoded into below median, above median), household size (recoded into single, non-single), and urbanicity (rural, urban). Fig. 5, 6, and Appendix Table A7 present results for both SR and SR+DE from the Spending Study, along with LCF estimates. Given the earlier finding that the SR+DE estimates are generally closer to the LCF benchmark, our discussion focuses on these estimates.
For respondents aged 16–50, the expenditure distribution from the app (SR+DE) is significantly different from the LCF expenditure distribution (p = 0.009), whereas the respective distributions for respondents aged 51–82 are not significantly different (p = 0.245). Examining median expenditure similarly shows that for those aged 16–50 the app data (£89.90, p = 0.005) provide estimates of expenditure that are significantly lower than those in the LCF (£118.00), whereas for those aged 51–82 the app data (£118.30, p = 0.632) are not significantly different from the benchmark (£127.30).
We also find gender differences for the alignment between the mobile app and the benchmark data. The expenditure distribution from the app is significantly different from the LCF distribution for women (p < 0.001) but not for men (p = 0.458). Comparing median expenditure also shows that the expenditure reported in the app is significantly lower than the LCF estimate for women (£93.70 vs. £139.90, p < 0.001) but not for men (£118.20 vs. £103.70, p = 0.426).
We do not find differences by educational attainment. The expenditure distributions for both respondents with a degree (p = 0.039) and those without a degree (p = 0.044) differ significantly from the respective LCF distributions. The median expenditure estimates are not significantly different from the LCF for respondents with a degree (£129.60 vs. £157.70, p = 0.108), but are for respondents without a degree (£82.40 vs. £109.40, p = 0.039).
The alignment between the mobile app and benchmark data differs by personal monthly gross income (Fig. 6). For respondents with a below-median income, the expenditure distribution is significantly different from the LCF expenditure distribution (p = 0.005), whereas the distribution for respondents with an above-median income is not significantly different (p = 0.267). Comparing median expenditure similarly shows that for those with a below-median income, the expenditure reported in the app is significantly lower than the LCF estimate (£74.00 vs. £92.40, p = 0.004) whereas for those with an above-median income, the estimate is not significantly different from the benchmark (£134.70 vs. £153.20, p = 0.216). Comparing the means presents a different picture: the two estimates do not differ for those with lower income (£127.20 vs. £124.00, p = 0.735), whereas for those with higher income, the app estimate is significantly lower than the LCF (£168.50 vs. £187.50, p = 0.015).
We do not find differences by household size. Although a KS test indicates significant differences in the expenditure distributions for respondents in single households (p = 0.010) and non-significant differences for those in multi-person households (p = 0.453), a comparison of median and mean expenditure suggests that the app data from both population subgroups similarly align with the respective benchmark data.
Finally, the alignment between the app and benchmark data differs by urbanicity. For respondents from urban areas, the expenditure distribution is significantly different (p = 0.002) whereas the distribution for respondents from rural areas is not significantly different (p = 0.816). For those living in urban areas, median expenditure reported in the app is significantly lower than the LCF estimate (£89.50 vs. £117.50, p = 0.004), whereas for those in rural areas, the median expenditures do not differ (£123.00 vs. £141.40, p = 0.491). Neither of the mean expenditure estimates is significantly different from the benchmark (p = 0.355 for those in urban areas, p = 0.429 for rural areas).
To examine whether the differences between the app and national budget data lead to differences in conclusions about economic relationships, we compare budget shares calculated from the two data sets. Budget shares are indicators of consumer behaviour that are frequently used in the economics literature (e.g., Ahmed et al., 2006; Leicester, 2015). They denote the ratio of expenditure on category k to total expenditure:
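$$s_{ik} = \frac{e_{ik}}{\sum_{k'} e_{ik'}},$$

where, in notation introduced here for exposition, $e_{ik}$ is respondent $i$’s expenditure on category $k$ and the denominator is the respondent’s total expenditure.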
In line with the general pattern observed in RQ1, the mean budget shares of SR+DE are closer to the national budget survey than the mean budget shares of SR only. For example, the budget share of Food and groceries is higher by 19.9 percentage points with SR and by 14.8 percentage points with SR+DE (Table 2). The categories, however, differ substantially in the extent to which expenditure is higher or lower in the Spending Study compared to the benchmark.
Table 2 Mean budget shares.

| | LCF | Spending Study: Scan + Direct Entry | | Spending Study: Scan Only | |
| --- | --- | --- | --- | --- | --- |
| | % | % | ∆ | % | ∆ |
| Food and groceries | 25 | 40 | +14.8 | 45 | +19.9 |
| Clothes and footwear | 7 | 9 | +2.2 | 10 | +2.5 |
| Transport | 16 | 12 | −4.2 | 10 | −5.4 |
| Child costs | 2 | 1 | −1.2 | 1 | −1.4 |
| Home improvements and household goods | 7 | 10 | +2.7 | 11 | +3.8 |
| Health | 2 | 2 | +0.2 | 2 | −0.3 |
| Socialising and hobbies | 27 | 14 | −12.2 | 11 | −15.4 |
| Other goods and services | 14 | 12 | −2.3 | 10 | −3.7 |

Note: ∆ = difference from the LCF in percentage points (Spending Study minus LCF).
Expenditure on Food and groceries is substantially higher in the Spending Study compared to the LCF, which is reflected by a larger budget share (SR+DE: +14.8pp). Similarly, the budget share is larger in the Spending Study than in the LCF for Clothes and footwear (SR+DE: +2.2pp) and Home improvements and household goods (SR+DE: +2.7pp). In turn, the budget share is smaller in the Spending Study for Socialising and hobbies (SR+DE: −12.2pp), Transport (SR+DE: −4.2pp), and Other goods and services (SR+DE: −2.3pp). Finally, the differences in budget shares are rather small for Health (SR+DE: +0.2pp) and Child costs (SR+DE: −1.2pp).
We report on a novel approach using smartphone technology to collect expenditure data in a probability household panel of the general population in Great Britain. Respondents were asked to report their purchases of goods and services for one month, using the built-in camera of their device to photograph all paper receipts. In a separate diary section of the app, they could manually enter expenditures such as non-receipted payments. In this paper we compare the data collected with the app with benchmark data from the Living Costs and Food Survey, the national budget survey in the United Kingdom. The results suggest that the level of total expenditure reported in the app is comparable with the benchmark data. The option to manually enter purchases in the app in addition to the receipt scanning turned out to be crucial: the scanned receipts on their own provide estimates of expenditure that are considerably lower than the benchmark.
The category-level expenditure reported in the app is also comparable with the national budget survey, although the expenditure categories vary in their alignment with the benchmark data. The app-based expenditure data on Clothes and footwear and Transport, for example, align closely with the Living Costs and Food Survey whereas greater differences are found for categories such as Socialising and hobbies and Child costs. Similarly, the percentage of zero expenditure reported also varies by category. There are different potential reasons for these differences. First, the expenditure categories have varying likelihoods of generating paper receipts that respondents could scan with the app. Expenditure in the category Socialising and hobbies, for example, might be more likely to consist of regular payments made by standing order or direct debit than expenditure on Clothes and footwear. In a follow-up study, one option would be to add an explicit question in the app to confirm zero expenditures, especially for frequent items like food and groceries. Second, the overreporting of expenditure on Food and groceries and underreporting of expenditure on Socialising and hobbies in the app could be due to a lack of guidance on where to report food eaten outside the home, for example in restaurants (see National Household Food Acquisition and Purchase Survey. FoodAPS, 2016). Third, the benchmark data are of course themselves collected with a survey, and therefore also not entirely error free (see Eckman, 2022 for an analysis of underreporting in the equivalent US Consumer Expenditure Survey). We cannot rule out that the benchmark data are also potentially subject to recall biases, which might be due to different mechanisms compared to the app study. For example, it is possible that survey respondents in the LCF tend to report larger expenses but forget smaller expenses whereas the opposite might be the case for app participants.
We find that the alignment between the app-based data collection on consumer expenditure and the national budget survey varies across subgroups of the population. For respondents who are older, male, have an above-median income, or live in rural areas, we find that the app data align more closely with the national budget survey. These patterns might relate to differences in willingness to adhere to the app study protocol, suggesting that certain subgroups might have different participation patterns when using a receipt scanning app as opposed to responding to an expenditure diary. However, these findings are based on bivariate analyses with relatively small sample sizes; further analyses with larger samples are needed to tease out the mechanisms underlying these subgroup differences. For example, older respondents might be less likely to use the app, but more likely to keep paper receipts and (if they use the app) to diligently report their expenditure compared to younger respondents.
Finally, the implications of measurement differences for economic estimates are likely to vary, depending on the estimates. Our examination of budget shares, for example, suggests that the estimated shares for some categories are closer to the benchmark data than others. Future research could examine the implications for other economic applications.
Overall, the receipt scanning app seems to be a promising method for collecting population-representative consumer expenditure data in a probability sample alongside individual- and household-level characteristics. The scanned receipts can provide information on people’s expenditure coded to the full COICOP expenditure classifications, complemented by expenditure reports coded into higher-level expenditure categories. However, a limitation of our study is that the combined app data from scanned receipts and direct entries only allow capturing higher-level expenditure categories. While this level of detail might be sufficient for some users of consumer expenditure data, others might require classifications at a more detailed level. To capture expenditures at a more detailed COICOP level, future research might consider involving participants more actively in the processing of receipt images, for example, by asking participants to code items to the COICOP categories when submitting the images. Furthermore, participants would need to select among more detailed expenditure categories in the direct entry section of the app. An additional limitation of the app study is the relatively low participation rate. Non-participation biases can be introduced if participants in the app study differ from non-participants on the outcomes of interest. In a previous paper examining participation in the Spending Study, for example, we found non-participation biases in sociodemographic characteristics and financial behaviours but no biases in correlates of spending (Jäckle et al., 2019). Further research is needed on how to increase participation rates as well as how to reduce any non-participation biases in smartphone-based data collection.
This research was funded by the Economic and Social Research Council (ESRC) and the National Centre for Research Methods (NCRM) (ES/N006534/1). We are grateful for the in-kind contributions of our project partner Kantar Worldpanel who implemented Spending Study 1. The Understanding Society Innovation Panel is funded by the Economic and Social Research Council (ES/N00812X/1) and various Government Departments, with scientific leadership by the Institute for Social and Economic Research, University of Essex, and survey delivery by NatCen Social Research and Kantar Public. We thank Thomas F. Crossley, Joachim Winter, Paul Fisher, and Carli Lessof for comments on earlier versions of this paper.
Agarwal, S., Liu, C., & Souleles, N. S. (2007). The reaction of consumer spending and debt to tax rebates—Evidence from consumer credit data. Journal of Political Economy, 115(6), 986–1019.
Aguiar, M., & Hurst, E. (2007). Life-cycle prices and production. American Economic Review, 97(5), 1533–1559.
Ahmed, N., Brzozowski, M., & Crossley, T. F. (2006). Measurement errors in recall food consumption data. Institute for Fiscal Studies Working Paper, W06/21. London: Institute for Fiscal Studies.
Alatrista-Salas, H., Gauthier, V., Nunez-del-Prado, M., & Becker, M. (2021). Impact of natural disasters on consumer behavior: case of the 2017 El Niño phenomenon in Peru. PLoS ONE, 16(1), e0244409.
Andreyeva, T., Luedicke, J., Henderson, K. E., & Tripp, A. S. (2012). Grocery store beverage choices by participants in federal food assistance and nutrition programs. American Journal of Preventive Medicine, 43(4), 411–418.
Angrisani, M., Kapteyn, A., & Samek, S. (2018). Real time measurement of household electronic financial transactions in a population representative panel. Paper presented at the 35th IARIW General Conference, Copenhagen. http://old.iariw.org/copenhagen/angrisani.pdf
Austin, P. C. (2009). Balance diagnostics for comparing the distribution of baseline covariates between treatment groups in propensity-score matched samples. Statistics in Medicine, 28, 3083–3107.
Baker, S. R. (2018). Debt and the response to household income shocks: validation and application of linked financial account data. Journal of Political Economy, 126(4), 1504–1557.
Barrett, G., Levell, P., & Milligan, K. (2015). A comparison of micro and macro expenditure measures across countries using differing survey methods. In C. D. Carroll, T. F. Crossley & J. Sabelhaus (Eds.), Improving the measurement of consumer expenditures (pp. 263–286). Chicago: University of Chicago Press.
Benedikt, L., Joshi, C., Nolan, L., de Wolf, N., & Schouten, B. (2020). @HBS: An App-Assisted Approach for the Household Budget Survey. Optical Character Recognition and Machine Learning Classification of Shopping Receipts.
Bosch, O. J., Revilla, M., & Paura, E. (2019). Answering mobile surveys with images: An exploration using a computer vision API. Social Science Computer Review, 37(5), 669–683.
Broda, C., Leibtag, E., & Weinstein, D. E. (2009). The role of prices in measuring the poor’s living standards. Journal of Economic Perspectives, 23(2), 77–97.
Browning, M., Crossley, T. F., & Winter, J. (2014). The measurement of household consumption expenditures. Annual Review of Economics, 6, 475–501.
Brzozowski, M., Crossley, T. F., & Winter, J. K. (2017). A comparison of recall and diary food expenditure data. Food Policy, 72, 53–61.
Bucher, H., Keusch, F., de Vitiis, C., de Fausti, F., Inglese, F., van Tienoven, T. P., McCool, D., Lugtig, P., & Struminskaya, B. (2023). Smart survey implementation. Workpackage 2: research methodology. Deliverable M6: review stage.
Bulman, J., Davies, R., & Carrel, O. (2017). Living Costs and Food Survey. Technical report for survey year: April 2015 to March 2016. Great Britain and Northern Ireland. Newport: Office for National Statistics.
Carroll, C. D., Crossley, T. F., & Sabelhaus, J. (2015). Improving the measurement of consumer expenditures. Chicago: University of Chicago Press.
Cullen, K., Baranowski, T., Watson, K., Nicklas, T., Fisher, J., O’Donnell, S., Baranowski, J., Islam, N., & Missaghian, M. (2007). Food category purchases vary by household education and race/ethnicity: results from grocery receipts. Journal of the American Dietetic Association, 107(10), 1747–1752.
d’Ardenne, J., & Blake, M. (2012). Developing expenditure questions: Findings from focus groups. Institute for Fiscal Studies Working Paper, W12/18. London: Institute for Fiscal Studies.
DeWalt, K. M., D’Angelo, S., McFadden, M., Danner, F. W., Noland, M., & Kotchen, J. M. (1990). The use of itemized register tapes for analysis of household food acquisition patterns prompted by children. Journal of the American Dietetic Association, 90(4), 559–562.
Eckman, S. (2022). Underreporting of purchases in the US Consumer Expenditure Survey. Journal of Survey Statistics and Methodology, 10(5), 1148–1171.
Felgate, M., Fearne, A., DiFalco, S., & Garcia Martinez, M. (2012). Using supermarket loyalty card data to analyse the impact of promotions. International Journal of Market Research, 54(2), 221–240.
French, S. A., Wall, M., Mitchell, N. R., Shimotsu, S. T., & Welsh, E. (2009). Annotated receipts capture household food purchases from a broad range of sources. International Journal of Behavioral Nutrition and Physical Activity, 6, 37.
Fricker, S., Kopp, B., Tan, L., & Tourangeau, R. (2015). A review of measurement error assessment in a U.S. household consumer expenditure survey. Journal of Survey Statistics and Methodology, 3(1), 67–88.
Geisen, E., Richards, A., Strohm, C., & Wang, J. (2011). U.S. Consumer expenditure records study.
Gelman, M., Kariv, S., Shapiro, M. D., Silverman, D., & Tadelis, S. (2014). Harnessing naturally occurring data to measure the response of spending to income. Science, 345(6193), 212–215.
Griffith, R., & O’Connell, M. (2009). The use of scanner data for research into nutrition. Fiscal Studies, 30(3–4), 339–365.
Griffith, R., Leibtag, E., Leicester, A., & Nevo, A. (2009). Consumer shopping behavior: how much do consumers save? Journal of Economic Perspectives, 23(2), 99–120.
Gross, D. B., & Souleles, N. S. (2002). Do liquidity constraints and interest rates matter for consumer behavior? Evidence from credit card data. Quarterly Journal of Economics, 117(1), 149–185.
Horvitz, D. G., & Thompson, D. J. (1952). A generalization of sampling without replacement from a finite universe. Journal of the American Statistical Association, 47, 663–685.
Hoseini, M., & Valizadeh, A. (2021). The effect of COVID-19 lockdown and the subsequent reopening on consumption in Iran. Review of Economics of the Household, 19(2), 373–397.
Iglesias, P. A. (2024). Beyond boundaries: leveraging visual data for advancements in social science research. Doctoral dissertation. Barcelona: Universitat Pompeu Fabra.
Ilic, G., Lugtig, P., Schouten, B., Streefkerk, M., Mulder, J., Kumar, P., & Höcük, S. (2022). Pictures instead of survey questions: an experimental investigation of the feasibility of using pictures in a housing survey. Journal of the Royal Statistical Society Series A: Statistics in Society, 185(S2), S437–S469.
Institute for Social and Economic Research (2021). Understanding Society: The UK Household Longitudinal Study. Innovation Panel, Waves 1–13, User guide. Colchester: University of Essex.
Jäckle, A., Burton, J., Wenz, A., & Read, B. (2018). Understanding Society: The UK Household Longitudinal Study. Spending Study 1, User guide. Colchester: Institute for Social and Economic Research, University of Essex. https://beta.ukdataservice.ac.uk/datacatalogue/studies/study?id=8749
Jäckle, A., Burton, J., Couper, M. P., & Lessof, C. (2019). Participation in a mobile app survey to collect expenditure data as part of a large-scale probability household panel: Coverage and participation rates and biases. Survey Research Methods, 13(1), 23–44.
Jäckle, A., Couper, M. P., Gaia, A., & Lessof, C. (2021). Improving survey measurement of household finances: A review of new data sources and technologies. In P. Lynn (Ed.), Advances in longitudinal survey methodology (pp. 337–367). Hoboken: Wiley.
Keusch, F., Wenz, A., & Conrad, F. (2022). Do you have your smartphone with you? Behavioral barriers for measuring everyday activities with smartphone sensors. Computers in Human Behavior, 127, 107054.
Kuchler, T., & Pagel, M. (2021). Sticking to your plan: the role of present bias for credit card paydown. Journal of Financial Economics, 139(2), 359–388.
Leicester, A. (2015). The potential use of in-home scanner technology for budget surveys. In C. D. Carroll, T. F. Crossley & J. Sabelhaus (Eds.), Improving the measurement of consumer expenditures (pp. 441–491). Chicago: University of Chicago Press.
Leicester, A., & Oldfield, Z. (2009). Using scanner technology to collect expenditure data. Fiscal Studies, 30(3–4), 309–337.
Lessof, C. (2022). Investigating the impact of technologies on the quality of data collected through surveys. Doctoral dissertation. Southampton: University of Southampton.
Luiten, A., Hox, J., & de Leeuw, E. (2020). Survey nonresponse trends and fieldwork effort in the 21st century: results of an international study across countries and surveys. Journal of Official Statistics, 36(3), 469–487.
Lusk, J. L., & Brooks, K. (2011). Who participates in household scanning panels? American Journal of Agricultural Economics, 93(1), 226–240.
Lynn, P. (2009). Sample design for Understanding Society. Understanding Society Working Paper, 2009–01. Colchester: Institute for Social and Economic Research, University of Essex.
Maki, A., & Garner, T. (2010). Estimation of misreporting models using micro-data sets derived from the Consumer Expenditure Survey: The gap between macro and micro economic statistics on consumer durables. Journal of Mathematical Sciences: Advances and Applications, 4(1), 123–152.
Martin, S. L., Howell, T., Duan, Y., & Walters, M. (2006). The feasibility and utility of grocery receipt analyses for dietary assessment. Nutrition Journal, 5, 6–12.
McWhinney, I., & Champion, H. E. (1974). The Canadian experience with recall and diary methods in consumer expenditure surveys. Annals of Economic and Social Measurement, 3(2), 411–437.
National Household Food Acquisition and Purchase Survey. FoodAPS (2016). User’s guide to survey design, data collection, and overview of datasets. Washington DC: U.S. Department of Agriculture, Economic Research Service.
National Research Council (2013). Measuring what we spend: toward a new consumer expenditure survey. Washington DC: National Academy Press.
Neter, J., & Waksberg, J. (1964). A study of response errors in expenditures data from household interviews. Journal of the American Statistical Association, 59(305), 18–55.
Newing, A., Clarke, G., & Clarke, M. (2014). Exploring small area demand for grocery retailers in tourist areas. Tourism Economics, 20(2), 407–427.
Normand, S.-L. T., Landrum, M. B., Guadagnoli, E., Ayanian, J. Z., Ryan, T. J., Cleary, P. D., & McNeil, B. J. (2001). Validating recommendations for coronary angiography following acute myocardial infarction in the elderly: a matched analysis using propensity scores. Journal of Clinical Epidemiology, 54(4), 387–398.
Office for National Statistics (2017). Living Costs and Food Survey, 2015–2016. User guide. Volume D: Expenditure codes. Newport: Office for National Statistics. https://beta.ukdataservice.ac.uk/datacatalogue/studies/study?id=8210
Office for National Statistics. Department for Environment, Food and Rural Affairs (2018). Living Costs and Food Survey, 2016–2017. [data collection]. UK Data Service. SN: 8351. https://doi.org/10.5255/UKDA-SN-8351-1.
Ohme, J., Araujo, T., de Vreese, C. H., & Piotrowski, J. T. (2021). Mobile data donations: assessing self-report accuracy and sample biases with the iOS Screen Time function. Mobile Media & Communication, 9(2), 293–313.
Panzone, L., Hilton, D., Sale, L., & Cohen, D. (2016). Socio-demographics, implicit attitudes, explicit attitudes, and sustainable consumption in supermarket shopping. Journal of Economic Psychology, 55, 77–95.
Rankin, J. W., Winett, R. A., Anderson, E. S., Bickley, P. G., Moore, J. F., Leahy, M., Harris, C. E., & Gerkin, R. E. (1998). Food purchase patterns at the supermarket and their relationship to family characteristics. Journal of Nutrition Education and Behavior, 30(2), 81–88.
Ransley, J. K., Donnelly, J. K., Khara, T. N., Botham, H., Arnot, H., Greenwood, D. C., & Cade, J. E. (2001). The use of supermarket till receipts to determine the fat and energy intake in a UK population. Public Health Nutrition, 4(6), 1279–1286.
Read, B. (2023). Automated coding of data from shopping receipts for survey research. Unpublished manuscript.
Schouten, B., Bulman, J., Järvensivu, M., Plate, M., & Vrabic-Kek, B. (2020). @HBS: An App-Assisted Approach for the Household Budget Survey. Report on the Action @HBS.
Silberstein, A. R., & Scott, S. (1991). Expenditure diary surveys and their associated errors. In P. P. Biemer, R. M. Groves, L. E. Lyberg, N. A. Mathiowetz & S. Sudman (Eds.), Measurement errors in surveys (pp. 303–326). Wiley.
Smith, C., Parnell, W. R., Brown, R. C., & Gray, A. R. (2013a). Providing additional money to food-insecure households and its effect on food expenditure: A randomized controlled trial. Public Health Nutrition, 16(8), 1507–1515.
Smith, C., Parnell, W. R., Brown, R. C., & Gray, A. R. (2013b). Balancing the diet and the budget: food purchasing practices of food-insecure families in New Zealand. Nutrition and Dietetics, 70(4), 278–285.
StataCorp (2017). Stata Statistical Software: Release 15. College Station: StataCorp LLC.
Stephens, M. (2003). ‘3rd of tha month’: do social security recipients smooth consumption between checks? American Economic Review, 93(1), 406–422.
Tang, W., Aggarwal, A., Liu, Z., Acheson, M., Rehm, C. D., Moudon, A. V., & Drewnowski, A. (2016). Validating self-reported food expenditures against food store and eating-out receipts. European Journal of Clinical Nutrition, 70(3), 352–357.
The American Association for Public Opinion Research (2016). Standard definitions: Final dispositions of case codes and outcome rates for surveys (9th edn.).
Tin, S. T., Mhurchu, C. N., & Bullen, C. (2007). Supermarket sales data: feasibility and applicability in population food and nutrition monitoring. Nutrition Reviews, 65(1), 20–30.
Turner, R. (1961). Inter-week variations in expenditure recorded during a two-week survey of family expenditure. Journal of the Royal Statistical Society. Series C: Applied Statistics, 10(3), 136–146.
United Nations (2000). Classification of expenditure according to purpose: COFOG, COICOP, COPNI, COPP, series M: miscellaneous. Statistical Papers, No. 84. New York.
University of Essex. Institute for Social and Economic Research (2022). Understanding Society: Spending Study 1, 2016–2017. [data collection]. UK Data Service. SN: 8749. https://doi.org/10.5255/UKDA-SN-8749-1.
Valliant, R., & Dever, J. A. (2018). Survey weights: a step-by-step guide to calculation. College Station: Stata Press.
Weerts, S. E., & Amoran, A. (2011). Pass the fruits and vegetables! A community-university-industry partnership promotes weight loss in African American women. Health Promotion Practice, 12(2), 252–260.
Williams, T. (2019). Living Costs and Food Survey Technical report: Financial years ending March 2017 and March 2018. Newport: Office for National Statistics.
Williams, D., & Brick, J. M. (2018). Trends in U.S. face-to-face household survey nonresponse and level of effort. Journal of Survey Statistics and Methodology, 6(2), 186–211.
Zhen, C., Taylor, J. L., Muth, M. K., & Leibtag, E. (2009). Understanding differences in self-reported expenditures between household scanner data and diary survey data: a comparison of Homescan and Consumer Expenditure Survey. Review of Agricultural Economics, 31(3), 470–492.