Public Confidence in Official Statistics in the UK: Characteristics of Respondents and Changes Over Time

Survey Research Methods
ISSN 1864-3361
DOI: 10.18148/srm/2025.v19i3.8284
Olga Maslovskaya, om206@soton.ac.uk, University of Southampton, Department of Social Statistics and Demography, Southampton, UK
Annamaria Bianchi, annamaria.bianchi@unibg.it, University of Bergamo, Department of Economics, Bergamo, Italy
© 2025 European Survey Research Association

The survey climate is a complex phenomenon and determines the amount of effort needed to implement surveys in an efficient way and to obtain high-quality data. In this paper we assess specific aspects of the survey climate in the UK, with reference to public confidence in Official Statistics (OS): general confidence in OS, trust in the statistics produced specifically by the Office for National Statistics (ONS), and confidence that respondents’ data will be kept confidential. We identify characteristics of respondents with lower confidence in OS in the UK in 2021 and then investigate changes in confidence in OS between 2014 and 2021. To address the research questions, we use data collected in the UK about public confidence in OS in 2014, 2016, 2018, and 2021 and employ binary logistic regression models. The results suggest that public confidence in Official Statistics in the UK increased between 2014 and 2021. It seems that the Covid-19 pandemic period, associated with higher public exposure to OS, may have had a positive impact on public confidence in OS in the UK. Additionally, we find that those who are older and less educated are more likely to have negative views about official statistics in general, a lower level of trust specifically in the statistics produced by the ONS, and a lower level of confidence that their data will be kept confidential. For some outcomes, participation in the census is important, and respondents who took part in the census are more likely to express positive attitudes towards the outcomes of interest. The level of awareness of the ONS is another significant factor for all three key outcomes, with lower awareness associated with a higher probability of negative attitudes.

Supplementary Information

The online version of this article (https://doi.org/10.18148/srm/2025.v19i3.8284) contains supplementary information.

1 Introduction

According to Loosveldt and Joye (2016), “survey climate determines the design and efforts needed to implement surveys in an adequate and efficient way and to obtain high-quality data” (p. 67); as such, it represents a crucial concept to study. A decline in the survey climate can have dramatic consequences for survey research. If large numbers of people are reluctant to share personal information, this signals problems with the survey climate. Since policy and financial decisions are largely reliant on survey data—especially in contexts where population registers are not available and administrative data linkages are not easily implemented—there is a pressing need to implement additional efforts to collect survey data effectively. Survey climate is a complex phenomenon, and it can be conceptualised within three key dimensions as proposed by Loosveldt and Joye (2016): 1) the willingness of respondents to participate in surveys, 2) public opinions about surveys, and 3) the way surveys are organised and reported in the media. The concept of survey climate is generally quite broad and refers to both official and non-official statistics. In this paper, the focus is specifically on Official Statistics.

Official Statistics (OS) are statistics produced within a national statistical system that are collected, processed, and disseminated on behalf of the national government and within a legal framework (Eurostat, 2017). These data are produced by conducting censuses and surveys or by utilising existing administrative records. OS play a crucial role in societies worldwide, as they are used as a basis for developing policies and making important decisions (Radermacher, 2020). OS are collected not only for use by experts but also for utilisation by the public (Radermacher, 2020). Consequently, it is important to understand how the public perceives the available OS and whether they have confidence in how the OS are produced and reported, as well as in how the data are stored and utilised.

The second dimension of the survey climate measurement framework (public opinion about surveys), introduced above, includes public confidence in OS. Understanding the various related factors and changes in public confidence in OS over time is essential for maintaining a solid grasp of the present survey climate. A high level of confidence in OS is a necessary but not sufficient condition for a positive survey climate, which is conducive to the collection of high-quality survey data.

To the best of our knowledge, the existing literature contains little work with specific reference to OS, and most of the existing research has been conducted using United States (US) data. In their study, Kim et al. (2011) reported declining trust in the US Census Bureau. Between 1979 and 2000, approximately two-thirds of Americans believed that it was very important to be counted in the census. From 1990 to 2000, they observed a decline in the proportions of Americans believing that the Census Bureau kept personal information confidential, and that the Bureau asked only what it had a legitimate reason to know.

Brackfield (2011) and Fellegi (1996, 2004, 2011) developed a conceptual model of trust in OS that identifies the factors influencing public trust in statistical data and includes three main components: the quality of statistics, the trustworthiness of the producer, and contextual factors. By understanding these factors, statistical agencies can develop effective strategies to increase public trust in their data. In a relatively recent study, Childs et al. (2019) examined the impact of various factors on trust in the US Federal Statistical System, including attitudes (i.e., belief in the credibility and transparency of federal statistics) and behaviours (i.e., the actual use of federal statistics). This study supports the model of trust in OS proposed by Brackfield and Fellegi mentioned above by providing evidence of a significant relationship between the credibility of statistical products and trust in statistics more generally. Childs et al. (2019) also suggest that promoting trust in statistical products could help increase trust in the agencies that produce them. Against this background, Pullinger (2020) provided a historical perspective on trust in OS and addressed the current challenges in this context, especially in relation to capacity building, thus making an urgent call for action for all those stakeholders who rely on evidence in the public sector for decision-making, including both providers and users of statistical services. More research is needed to address issues associated with public confidence in OS, as both policy-makers and the public, who are the recipients of the policy and financial decisions, must be confident in the reliability and accuracy of data.

This paper focuses on understanding some aspects of public confidence in OS in a specific country, the United Kingdom (UK). The UK is home to many high-quality OS and non-OS data sources used not only by UK researchers, but also by the international research community to understand different social, economic, and health-related phenomena. However, little is known about the current state of the UK’s survey climate beyond the observed reduction of response rates and general decline in survey participation—a trend that is not unique to the UK (de Leeuw et al., 2019). The Office for National Statistics (ONS) is the main provider of OS in England and Wales. It collects census data and conducts many other important social and business surveys, including the Labour Force Survey (LFS), Covid-19 Infection Survey (CIS), Crime Survey for England and Wales (CSEW), Living Costs and Food Survey (LCF), and International Passenger Survey (IPS). In Scotland, the National Records of Scotland (NRS) conducts the census and collects other OS, whereas OS data collection is managed by the Northern Ireland Statistics and Research Agency (NISRA) in Northern Ireland. This paper focuses only on England, Wales, and Scotland, and not on Northern Ireland due to data limitations.

To our knowledge, the only analysis of public confidence in OS available in the UK was conducted by Butt et al. (2022a), who performed a descriptive analysis of the 2021 Public Confidence in Official Statistics (PCOS) data, which was presented as a report accompanying the data source. This paper builds on this descriptive analysis and aims to shed more light on some aspects of the survey climate in the UK by investigating different aspects of public confidence in OS. We employ logistic regression to model data from four surveys in the UK that collected data about public confidence in OS in 2014, 2016, 2018, and 2021 (NatCen, 2016, 2017, 2021, 2023). The results of the analyses will improve our understanding of some essential aspects of the current survey climate in the UK, as well as changes in public confidence in OS over time. The findings will have important survey practice implications and could help improve certain aspects of the survey climate in the UK, potentially reducing the effort required in survey implementation to obtain high-quality data.

The remainder of this paper is structured as follows. The next section will provide further background to the survey climate framework and will discuss the aspects of survey climate we address in this paper. The methods section will first describe the data sources used for the analysis, followed by a discussion of the groups of variables and the statistical approach employed in this study. This will be followed by a presentation of the results of the analysis. The paper concludes with a summary of the main findings, limitations, avenues for future work, and implications of the results for survey practice.

2 Background and Research Questions

The first reference to the concept of survey climate can be found in a paper about non-response research in Statistics Sweden (Lyberg & Lyberg, 1991). This paper refers to the observed declining response rates in data collection that started during the 1970 census of population and housing. To monitor the survey climate, a non-response barometer entailing a time series of non-response rates was proposed.

A few years later, the concept of survey climate was also present in the conceptual framework for survey cooperation developed by Groves and Couper (1998). Within this framework, several factors influencing respondents’ decisions to participate in surveys were identified. Survey climate was considered a characteristic of the social environment that was beyond the researcher’s control. It referred to the number of surveys conducted in a society, the perceived legitimacy of surveys, trends in survey participation, and discussions in newspapers about the National Statistical Institutes (NSIs), censuses, and the results of various surveys.

In a more recent study, Loosveldt and Joye (2016) systematically identified the relevant aspects and dimensions of survey climate. They characterised survey climate as encompassing three key dimensions: the willingness of respondents to participate in surveys, public opinion about surveys, and the way surveys are organised and reported in the media. The first two dimensions are individual-level characteristics, whereas the third is a society-level characteristic of survey climate.

Various measures can be used to assess the willingness to participate in surveys. These include the response rates and paradata capturing the “reasons for refusals”. Additionally, a short non-response survey asking a few questions about surveys could be conducted, or some questions about surveys could be integrated into the main questionnaire and asked of all respondents (Loosveldt & Joye, 2016).

Surveys focusing on public opinions about surveys can be used as a tool to collect the data for the second dimension (public opinion about surveys) (Loosveldt & Joye, 2016). According to Goldman (1944), perceptions about polls provide relevant information that can help in understanding the accuracy and fairness of the results of public opinion polls. However, surveys on surveys suffer from a main methodological problem, as they use the survey instrument itself to measure its own performance (Goyder, 1986). Within the Total Survey Error framework (Groves, 1989), this methodological problem translates into non-response bias and measurement error.

The third dimension (the way surveys are organised and reported in the media) refers to the societal characteristics of opinions about data collection and the way the results of polls and surveys are reported in the media (Loosveldt & Joye, 2016). In this respect, information about the number and characteristics of surveys organised in a country or region, as well as the way results are reported and discussed in the media, is important for understanding the survey context.

Kim et al. (2011) reported on trends in surveys on surveys and identified four broader topics that emerged in the available data: 1) the usefulness of the survey research industry and the value of polls to the public, as well as trust in survey organisations, 2) the evaluation of public opinion polling and the importance and influence of polls, 3) knowledge about public opinion polling and the public’s belief in the accuracy of polls and surveys, and 4) experience with polling.

Regarding the measurement of respondents’ opinions about surveys, several attempts have been made to develop ad-hoc measurement instruments. In this respect, Loosveldt and Storms (2008) identified key dimensions relevant to individual decisions to participate in surveys, drawing on the leverage-salience theory for survey participation (Groves et al., 2000). They developed a measurement instrument encompassing five dimensions: survey enjoyment, survey value, survey reliability, survey cost, and survey privacy. The results indicated that respondents would participate in a survey when they consider it to be a pleasant activity (survey enjoyment) that produces useful (survey value) and reliable (survey reliability) results, and when the perceived cost (survey cost: time and cognitive effort) of participation and impact on privacy (survey privacy) are minimal. More recently, de Leeuw et al. (2019) developed a survey attitude scale to measure survey enjoyment, survey value, and survey burden.

With specific reference to OS, Lorenc et al. (2013) presented a framework for measuring the state of an NSI’s external survey environment. This framework specifies a simple mediation model in which the subjective experience of the survey climate mediates the general survey climate and the respondent’s decision to participate. The subjective experience manifests itself in the individual’s opinion about surveys and willingness to participate. The proposed framework builds on the work of Loosveldt and Storms (2008) and is consistent with Loosveldt and Joye (2016). Furthermore, Lorenc et al. (2013) emphasised the need for increased efforts by NSIs to positively influence the external environment in which they operate, and proposed several potentially effective strategies to improve the external survey environment of OS. They speculated that the external survey environment and survey climate may be out of a researcher’s control in a small- or medium-sized research organisation, while NSIs can be contributors to the survey-taking climate in the broader society and actively influence that climate as well.

Unfortunately, it is hard to find a single data source that would measure all the aspects of the survey climate as formulated above. Nonetheless, some sources do collect data on the individual aspects that comprise this definition. Therefore, we were able to conduct a partial assessment of the survey climate with the available data. Specifically, we assessed some aspects of survey climate in the UK in 2021 and demonstrated how these aspects of survey climate changed over time between 2014 and 2021. In our analysis, we focused on OS and certain aspects related to the second dimension proposed by Loosveldt and Joye (2016), while also addressing the first and third dimensions in the discussion of our results.

In this analysis, we addressed two of the aspects of the second dimension (public opinion about surveys) of survey climate: survey reliability and survey privacy (Loosveldt & Joye, 2016; Loosveldt & Storms, 2008).

The best way to assess willingness to participate in surveys is via the survey participation itself. To enable this analysis, access to the data on non-respondents is required, but is not always available for cross-sectional surveys, as is the case with the surveys used for this analysis.

When carrying out a survey on the survey climate, the target population comprises two main groups: survey respondents and survey non-respondents. This paper specifically focuses on the former group. The respondents can be further split into two important sub-groups: those with positive attitudes and opinions about OS and those with negative ones. This latter group is of particular interest, as it includes respondents who, despite participating, hold attitudes that may negatively impact future survey participation and the broader survey climate. Once we know more about this specific group, targeted measures or interventions could be designed and implemented to increase confidence in OS and improve general attitudes towards surveys. The aim would be to encourage some members of this group to reconsider their views, potentially contributing to an improved survey climate in a particular country. It is therefore important to understand the characteristics of this specific group of respondents. Therefore, the first research question (RQ) of this paper focuses on this issue:

RQ1: What are the characteristics of the respondents with different attitudes towards OS?

When investigating various aspects of public confidence in OS, an important aspect to consider is its evolution over time. Relevant changes in survey implementation procedures must be considered. In this respect, one prominent feature is the mode used to collect the data, especially when different modes are used over time, as these modes may introduce measurement errors. In particular, social desirability is expected to bias the results in a positive direction in face-to-face surveys, with respondents reporting more positive opinions in the presence of an interviewer (Coffey et al., 2024; Loosveldt & Joye, 2016; Vannieuwenhuyze et al., 2010, 2012). According to Butt et al. (2022a), changes in the mode of data collection for UK surveys may have negatively impacted the comparability of data on public confidence in OS over time. Our hypothesis is that the attitudes we are assessing will be more positive in surveys that use the face-to-face data collection mode, where social desirability bias is more likely to occur, compared to self-completion modes, such as online mode and paper questionnaires (de Leeuw, 2005; Tourangeau et al., 2000). The second research question was, therefore, formulated as follows:

RQ2: Are there differences in attitudes and opinions towards OS observed in the UK over time? Can any such differences be explained by changes in the mode of data collection?

To address these two research questions and to understand the level of confidence in OS in the UK as measured by survey reliability and survey privacy indicators, four datasets (2014, 2016, 2018, 2021) collected by the National Centre for Social Research (NatCen) were analysed (NatCen, 2016, 2017, 2021, 2023). Details about the surveys and methods used for the analysis are provided in the next section.

3 Methods

3.1 Data

To address our RQs, we used four high-quality probability-based surveys conducted in the UK, which asked various questions about public confidence in OS. The first three surveys contained a module on public confidence in OS, which was part of the British Social Attitudes (BSA) surveys (NatCen, 2016, 2017, 2021). The fourth survey was a stand-alone survey on Public Confidence in Official Statistics (PCOS), also conducted by NatCen in 2021 (NatCen, 2023).

For the BSA surveys, the target population comprised adults aged 18 and over, living in England, Wales, and Scotland. A multi-stage sampling design was used to select households from the list of addresses from the Postcode Address File (PAF), and one respondent was randomly selected to take part in the survey from each of the selected households using the Kish grid. Data were collected face-to-face (with a self-completion element) between July and November of each survey year. The response rates were 47%, 46%, and 42%, respectively.

For the PCOS 2021 survey, the target population was the same as for the BSA surveys. Multi-stage stratified random sampling was used to select households from the list of addresses from the PAF, and up to two adults per sampled address were able to complete the survey. As for the data collection mode, a push-to-web strategy was adopted, entailing web and postal modes, with paper questionnaires offered to non-responding households. The response rate was 24%. Data were collected between October and December 2021. Further details about the surveys can be found in the user guides (NatCen, 2014a, 2016a, 2018a, 2023).

The 2014 and 2016 BSA samples were randomly split into three equally sized groups, whereas the 2018 BSA was split into four equally sized groups, and each group was asked a different version of the questionnaire (Curtice et al., 2019; NatCen, 2014a, 2016a). The questions of interest for this paper were asked to a random two-thirds of the sample in 2014 and 2016 and a random half of the sample in 2018. Some explanatory variables were collected via the self-completion component, which had lower response rates (39%, 38%, and 33%, respectively). All response rates were calculated and reported by NatCen (Curtice et al., 2019; NatCen, 2014a, 2016a).

All datasets contained the three key outcome variables, explanatory variables and a mode indicator.

The first two outcome variables were used to assess one aspect of the second dimension of the survey climate: survey reliability. The first key outcome variable was measured by the question: “Official figures are generally accurate” (NatCen, 2014b, 2016b, 2018b) or “Official statistics are generally accurate” (Butt et al., 2022b). The following options were available to respondents: 1) strongly agree, 2) tend to agree, 3) tend to disagree, and 4) strongly disagree.

The second outcome variable was measured by the question: “Personally, how much trust do you have in statistics produced by ONS. For example, on unemployment, inflation, economic growth, or life expectancy?” (Butt et al., 2022b; NatCen, 2014b, 2016b, 2018b), with the following response options available: 1) trust them greatly, 2) tend to trust them, 3) tend not to trust them, 4) distrust them greatly.

The third outcome variable was used to assess another aspect of the second dimension of the survey climate, which is survey privacy, as formulated by Loosveldt and Joye (2016). The exact question wording is as follows: “I believe that the personal information I provide to ONS will be kept confidential”, with the following response options: 1) strongly agree, 2) tend to agree, 3) tend to disagree, 4) strongly disagree (Butt et al., 2022b; NatCen, 2014b, 2016b, 2018b).

Responses for these three variables were collected using four-point Likert scales with a “don’t know” option also available to respondents. We combined “strongly agree” with “tend to agree” and “trust them greatly” with “tend to trust them”. This allowed us to create categories for positive attitudes. Similarly, “tend to disagree” and “strongly disagree” as well as “tend not to trust them” and “distrust them greatly” were combined to represent negative attitudes. All outcome variables were thus coded as follows:

$$y_i = \begin{cases} 1 & \text{negative attitudes} \\ 0 & \text{positive attitudes,} \end{cases}$$

where $y_i$ denotes a binary response variable for individual $i$.

A consistent approach was used for recording “don’t know” answers across all surveys, regardless of the mode of data collection, with “don’t know” responses initially hidden (Butt et al., 2022b). In face-to-face surveys, no prompts were given for “don’t know” responses either through showcards or in the questionnaire, but respondents could skip questions or spontaneously give a “don’t know” response to move on. It was made clear at the start of both the online and paper questionnaires in 2021 that respondents could skip any question. If a respondent skipped a question, they could select either “don’t know” or “prefer not to say”. If a respondent skipped a question on the paper questionnaire, it was recorded as “not answered” and subsequently treated as “don’t know” in the dataset. It is important to note that, despite consistency in the approaches across different modes, the interpretation of this option may vary among respondents. This suggests that excluding this category may benefit the analysis due to the potential inconsistency in how different respondents interpret it. The “don’t know” category for the three original outcome variables in 2021 accounted for only 4%, 2%, and 1% of cases in the sample, respectively. In the PCOS report (Butt et al., 2022a), those who responded “don’t know” were excluded from the analysis. In our analysis, we decided to implement the same approach (Gilljam & Granberg, 1993). We conducted a sensitivity analysis in which we included “don’t know” responses in the negative attitude categories, and the results remained consistent with those reported below.
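A minimal pandas sketch of this recoding, assuming a hypothetical column accuracy holding the original four-point responses plus “don’t know” (all names and values below are illustrative, not taken from the original datasets):

```python
import pandas as pd

# Made-up responses on the four-point scale plus "don't know"
df = pd.DataFrame({"accuracy": [
    "strongly agree", "tend to agree", "tend to disagree",
    "strongly disagree", "don't know",
]})

# Exclude "don't know" responses, as in the main analysis
df = df[df["accuracy"] != "don't know"].copy()

# Code 1 = negative attitudes, 0 = positive attitudes
negative = {"tend to disagree", "strongly disagree"}
df["accuracy_neg"] = df["accuracy"].isin(negative).astype(int)
print(df)
```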

We used two groups of explanatory variables: 1) demographic and socio-economic characteristics of the respondents, such as age, gender, number of children, religion, ethnicity, economic activity, education, tenure, and country; and 2) OS-related variables, such as census participation and awareness of the ONS, as we believed these characteristics might be associated with survey reliability and survey privacy indicators. Census participation was captured via a general variable that included a list of different ONS surveys, including the census, with a “select all that apply” option: “Have you participated in any of the ONS surveys listed on this card: Census, Labour Force Survey, International Passenger Survey, Other surveys (carried out by ONS)?”. Responses were then recoded as separate variables to represent participation in each individual survey listed. The question regarding awareness of the ONS was asked in the following format: “I will give you the names of some organisations. Have you ever heard of them on radio, TV, newspapers, or somewhere else?”. If the respondents selected “yes” for the ONS, they were then asked the following: “The Office for National Statistics (ONS) is the organisation that produces official statistics on the state of our economy, society, and our environment. To what extent did you know ONS before the survey?”. The options available to the respondents were: 1) I knew it well, 2) I knew it somewhat, 3) I have only heard the name, and 4) not sure or “don’t know”.

We did not consider other ONS survey participation variables, as participation in ONS surveys, unlike the census, is not mandatory. Hence, some respondents may not have been invited and, therefore, did not take part for reasons outside of their own decisions. Such variables would therefore not truly reflect respondents’ survey participation behaviour. In contrast, census participation better reflects the behaviour of respondents in the target population.

To address the first RQ, we used the most recent 2021 dataset to ensure the relevance of the results. For the second RQ, we employed a pooled dataset containing the relevant observations from all four datasets available for the analysis. All four surveys were comparable, and the wording of the questions across the surveys was mainly identical, with some minor variations, as seen in the wording for the first key outcome variable above. However, we did not expect the slight modification in wording to represent a threat to the validity of the results of the analysis.

Details of the samples used for the analysis (2021 and pooled data) are provided in Table A1 in the Appendix.

3.2 Statistical Methods

We began with a univariate unweighted analysis to describe our analytical samples and obtained weighted estimates for the three variables of interest at four points in time. Next, we performed a bivariate analysis and investigated the associations between the three dependent variables and the explanatory variables for the 2021 and pooled datasets.
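As a small illustration of the weighted estimation step, a design-weighted proportion is simply a ratio of weighted sums; a minimal numpy sketch with made-up weights and outcomes (not the actual survey weights):

```python
import numpy as np

# Made-up survey weights and a binary outcome (1 = negative attitude)
weights = np.array([1.2, 0.8, 1.0, 1.5, 0.9])
y = np.array([0, 1, 0, 0, 1])

# Weighted estimate of the proportion holding negative attitudes
p_hat = np.sum(weights * y) / np.sum(weights)
print(f"Weighted proportion: {p_hat:.1%}")
```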

We then applied the Kuder-Richardson Formula 20 (KR-20) and conducted Multiple Correspondence Analysis (MCA), incorporating tetrachoric correlations, to gain further insights into public confidence in official statistics in the UK and to assess the association structure among the three outcome variables—specifically, whether it is reasonable to combine them. KR-20 (Kuder and Richardson, 1937) is a statistical measure used to assess the internal consistency or reliability of dichotomous items, evaluating how well they measure a common underlying construct. MCA is a dimension-reduction technique used to explore underlying relationships among categorical variables (Bartholomew et al., 2008). Tetrachoric correlation estimates the association between dichotomous variables and was used to examine the strength of the relationships among the outcome variables.
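For reference, KR-20 for $k$ dichotomous items has the standard textbook form (not reproduced from the paper itself):

$$r_{KR20} = \frac{k}{k-1}\left(1 - \frac{\sum_{j=1}^{k} p_j q_j}{\sigma_X^2}\right),$$

where $p_j$ is the proportion of respondents coded 1 on item $j$, $q_j = 1 - p_j$, and $\sigma_X^2$ is the variance of the total score $X = \sum_{j} y_j$ across the $k$ items.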

For the three dependent variables, we used binary logistic regression models, with response probabilities denoted by $\pi_i = \Pr(y_i = 1)$. These were related to the explanatory variables (see Agresti (2013)) as follows:

$$\operatorname{logit}(\pi_i) = \log\frac{\pi_i}{1-\pi_i} = \boldsymbol{\beta}^{T}\mathbf{x}_i,$$

where $\mathbf{x}_i$ is a vector of covariates, including the intercept, and $\boldsymbol{\beta}$ is a vector of coefficients. Respondents with missing values were excluded from the analysis, leading to small differences in the sample sizes across the three models for each context (see Table A1 for details).
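A minimal sketch of such a model in Python with statsmodels (the paper itself used SPSS and Stata; file, column, and variable names here are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analytical sample: a binary outcome (1 = negative attitude)
# plus categorical covariates
df = pd.read_csv("pcos_2021.csv")  # hypothetical file name

# Binary logistic regression; the formula API drops rows with missing
# values by default, mirroring the listwise deletion described above
model = smf.logit(
    "accuracy_neg ~ C(religion) + C(ethnicity) + C(education) + C(tenure)",
    data=df,
).fit()

print(model.summary())
print(np.exp(model.params))  # odds ratios, as reported in the tables
```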

These analyses were first performed on the 2021 data to address the first RQ and to understand the current situation in the UK in relation to several components of the second dimension of survey climate discussed above and characteristics of those with different attitudes. Due to changes in data collection modes between surveys, we also analysed the pooled dataset to address the second RQ and assess changes over time, as well as the impact of the mode change on the different attitudes of respondents related to confidence in OS.

We adopted a model-based approach to assess relationships between dependent and explanatory variables and did not use survey weights in modelling. However, we conducted a sensitivity analysis and applied weights for modelling the 2021 data, and the results were consistent with those obtained in the unweighted analysis (see Table A3 in the Appendix).

To decide on the final models, explanatory variables were added group by group, using a forward stepwise model selection procedure. When we modelled the three key variables of interest in both contexts, we first included the demographic and socio-economic variables, followed by the census behaviour and ONS awareness variables. We used an exploratory approach to the analysis, and therefore no hypotheses (apart from the one related to the mode of data collection) were formulated prior to the analysis. Only the final selected models are presented and discussed in this paper. All data analyses were conducted using the IBM SPSS Statistics software package (version 25) and Stata/SE (version 13.1).
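One way to implement such a group-by-group comparison is via likelihood-ratio tests between nested logit models; a sketch continuing the hypothetical variable names used above (this illustrates the general technique, not the authors' exact selection code):

```python
import statsmodels.formula.api as smf
from scipy import stats

# Step 1: demographic and socio-economic variables only
base = smf.logit(
    "accuracy_neg ~ C(religion) + C(education) + C(tenure)", data=df
).fit(disp=False)

# Step 2: add the OS-related group (census participation, ONS awareness);
# both models should be fitted on the same set of complete cases
full = smf.logit(
    "accuracy_neg ~ C(religion) + C(education) + C(tenure)"
    " + C(census) + C(ons_awareness)",
    data=df,
).fit(disp=False)

# Likelihood-ratio test for the contribution of the added group
lr_stat = 2 * (full.llf - base.llf)
df_diff = full.df_model - base.df_model
p_value = stats.chi2.sf(lr_stat, df_diff)
print(f"LR = {lr_stat:.2f}, df = {df_diff:.0f}, p = {p_value:.4f}")
```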

4 Results

Table A2 in the Appendix presents the characteristics of the respondents in the 2021 dataset and the pooled dataset used for the analysis. These two samples were similar, and both had high proportions of people with positive attitudes towards all three key variables of interest: these proportions were above 80% for all three variables and, for the confidentiality variable, as high as 92% in both contexts analysed. The distributions of the respondents were also similar in both samples, with a smaller proportion of people in the younger group and a larger proportion of older respondents. Both samples had higher proportions of females and very high proportions of people with no children. Around half of both samples reported being Christians, and around 90% of respondents were of White ethnicity. Nearly half of the respondents in both samples were employed, and over 30% were retired. Around half of both samples had a degree or higher qualifications, and over 70% of respondents lived in their own accommodation.

Interestingly, around 18% of the 2021 sample, but only around 9% of the pooled sample, had never heard of the ONS. This result may suggest social desirability bias and overclaiming of positive behaviour in the face-to-face surveys included in the pooled dataset. It is likely that when people were asked in a face-to-face context whether they had heard of a specific organisation, they were more inclined to answer “yes” (Parry & Crossley, 1950). Also, as expected, these values varied by country in the UK and were higher in Scotland, where the census was conducted by the NRS and not the ONS. The corresponding proportions in the 2021 sample are around 26% in Scotland and 19% and 17% in England and Wales, respectively.

Over one fifth of all respondents in both samples did not report participating in the census, which is not surprising given that, although participation is mandatory in the UK, only one individual per household could complete all individual questionnaires on behalf of all household members in the 2021 census, if they wished to do so (Census Act, 1920; UK Parliament, 2021). Further investigation suggests that in both study contexts, nearly half of all respondents from Scotland reported not taking part in the census, whereas these proportions in England and Wales were much lower. This is not surprising, as the 2021 census household response rate in England and Wales was 97%, whereas in Scotland it was only 88%, and this was achieved only after the introduction of an extension period for census completion (ONS, 2022; ONS, 2023).

Before modelling, we investigated the association structure among the three outcome variables to assess whether it was appropriate to combine them. The KR-20 test statistic was 0.667, which falls below the commonly accepted threshold of 0.750, suggesting that the three outcome variables—accuracy, trust, and confidentiality—do not represent a single, consistent construct.

To further explore the relationships among these variables, we conducted a Multiple Correspondence Analysis (MCA). The results indicate that 60% of the variance is explained by the first dimension, with an additional 24% explained by the second. Table A5 in the Appendix presents the tetrachoric correlations among the outcome variables: the correlation between accuracy and trust is high (r = 0.809), while the correlations between accuracy and confidentiality (r = 0.607) and between trust and confidentiality (r = 0.627) are notably lower.
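For readers who wish to reproduce such correlations, a tetrachoric correlation can be estimated by maximum likelihood from the 2×2 table of two binary items under a latent bivariate normal model; a self-contained sketch (the cell counts below are made up for illustration, and this is a generic implementation rather than the authors' code):

```python
import numpy as np
from scipy import optimize, stats

def tetrachoric(n00, n01, n10, n11):
    """ML estimate of the tetrachoric correlation from a 2x2 table.

    nxy = count of observations with item1 = x and item2 = y.
    """
    n = n00 + n01 + n10 + n11
    # Thresholds of the latent standard normals, set from the margins
    t1 = stats.norm.ppf((n00 + n01) / n)  # P(item1 = 0)
    t2 = stats.norm.ppf((n00 + n10) / n)  # P(item2 = 0)

    def neg_loglik(rho):
        cov = [[1.0, rho], [rho, 1.0]]
        # Cell probabilities implied by the bivariate normal with corr. rho
        p00 = stats.multivariate_normal.cdf([t1, t2], mean=[0, 0], cov=cov)
        p01 = stats.norm.cdf(t1) - p00
        p10 = stats.norm.cdf(t2) - p00
        p11 = 1.0 - p00 - p01 - p10
        probs = np.clip([p00, p01, p10, p11], 1e-12, 1.0)
        return -(np.array([n00, n01, n10, n11]) @ np.log(probs))

    res = optimize.minimize_scalar(neg_loglik, bounds=(-0.99, 0.99),
                                   method="bounded")
    return res.x

# Illustrative counts only
print(tetrachoric(n00=2200, n01=150, n10=180, n11=300))
```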

The Discrimination Measures Plot (Figure A1 in the Appendix) further demonstrates that the third outcome variable (confidentiality) aligns with the second dimension of the underlying construct, whereas accuracy and trust align with the first. This indicates that confidentiality should not be combined with the other two variables (accuracy and trust), as joint analysis could obscure important item-level insights. Overall, the KR-20 results, MCA findings, and tetrachoric correlations support the decision to analyse the three outcome variables separately rather than as a composite construct. While there is an argument for combining accuracy and trust, the exploratory nature of this analysis and the limited number of variables (only two) make it both manageable and preferable to analyse them separately, as this allows for a more nuanced interpretation.

When considering the 2021 data, the following variables were included in the models but were not found to be statistically significant: age, sex, number of children, economic activity, and country of residence. The results of the modelling presented in Table 1 suggest that the following four demographic and socio-economic variables were significant for at least one of the main outcomes of interest: religion, ethnicity, education, and tenure. Those who were Christians had a lower probability of having negative attitudes towards the accuracy of OS and the confidentiality of personal data when compared to those who reported having no religion. Those who belonged to other ethnicities were less likely to report negative attitudes regarding the accuracy of OS when compared to those who were White. Those who had lower education had a higher probability of reporting negative attitudes towards the accuracy of OS and trust in statistics produced by the ONS. Those who rented their accommodation were more likely to report a lower level of trust in ONS statistics when compared to those who owned their accommodation. Additionally, in the weighted analysis, those respondents who lived in Scotland were less likely to report negative attitudes about trust and confidentiality than those who lived in England (see Table A3 in the Appendix).

Table 1 Results of Binary Logistic Regression Modelling—2021 Data

| Variable and categories | Accuracy (N = 2970) β (SE) | OR | Trust (N = 3171) β (SE) | OR | Confidentiality (N = 3226) β (SE) | OR |
|---|---|---|---|---|---|---|
| Intercept | −1.832*** (0.244) | | −3.597*** (0.335) | | −2.312*** (0.305) | |
| Religion (ref: No religion) | | | | | | |
| Christian | −0.366** (0.115) | 0.694 | | | −0.373** (0.138) | 0.688 |
| Other religion | 0.233 (0.291) | 1.262 | | | −0.135 (0.268) | 0.874 |
| Ethnicity (ref: White) | | | | | | |
| Other ethnicity | −0.930** (0.271) | 0.394 | | | | |
| Education (ref: Degree or above) | | | | | | |
| A-levels | 0.007 (0.165) | 1.007 | 0.348 (0.132) | 1.400 | | |
| Below A-levels | 0.249 (0.132) | 1.283 | 0.510*** (0.143) | 1.665 | | |
| No qualification | 0.474** (0.177) | 1.606 | 0.623** (0.182) | 1.865 | | |
| Tenure (ref: Own) | | | | | | |
| Rent | | | 0.336* (0.132) | 1.400 | | |
| Other/no information | | | −0.140 (0.325) | 0.870 | | |
| Mode (ref: Paper) | | | | | | |
| Online | −0.453*** (0.115) | 0.636 | −0.578*** (0.122) | 0.561 | −0.836*** (0.133) | 0.433 |
| Participation in Census (ref: No) | | | | | | |
| Yes | −0.406** (0.118) | 0.667 | | | −0.301* (0.139) | 0.740 |
| How well knew ONS (ref: Knew it well) | | | | | | |
| Knew it somewhat | 0.653** (0.213) | 1.625 | 1.158** (0.339) | 3.184 | 0.494 (0.283) | 1.640 |
| Only heard the name | 1.130*** (0.219) | 3.096 | 1.903*** (0.337) | 6.705 | 1.241*** (0.279) | 3.458 |
| Not heard of it | 1.385*** (0.226) | 3.995 | 2.180*** (0.341) | 8.844 | 1.512*** (0.281) | 4.536 |

β (SE): coefficient (standard error); OR: odds ratio. Ref: reference category; blank cells indicate variables not retained in the final model for that outcome. *p < 0.05; **p < 0.01; ***p < 0.001

The mode variable was significant in all three models. Since respondents self-selected into the mode, the mode was confounded with the characteristics of the individuals. The results in all three models suggest that those who chose to use the online mode of survey completion had a lower probability of reporting negative attitudes on the three items when compared to those using the paper mode. This is not surprising, as those who used paper questionnaires joined the survey later, when non-responding households were offered paper questionnaires in an attempt to improve response rates, so they were potentially less willing to participate in the survey. For the accuracy and confidentiality outcomes, census participation was associated with less negative attitudes than non-participation. Awareness of the ONS was a significant variable in all three models, and lower awareness was associated with a higher probability of expressing negative attitudes or distrust across the three key outcomes.

To address the second RQ, weighted univariate descriptive estimates of the key outcome variables over time were obtained and are presented in Table 2. The results suggest that for the first two dependent variables (accuracy and trust), the positive attitudes of respondents increased over time, with stable estimates in 2016 and 2018. For the confidentiality variable, attitudes remained stable over time.

Table 2 Weighted Estimates for Three Key Dependent Variables

| | 2014 Frequency | % | 2016 Frequency | % | 2018 Frequency | % | 2021 Frequency | % |
|---|---|---|---|---|---|---|---|---|
| Accuracy of official statistics | | | | | | | | |
| Positive attitude | 1115 | 73 | 1271 | 78 | 1259 | 78 | 2658 | 83 |
| Negative attitude | 417 | 27 | 351 | 22 | 361 | 22 | 565 | 18 |
| Trust in ONS statistics | | | | | | | | |
| Positive attitude | 1260 | 81 | 1359 | 85 | 1374 | 85 | 2858 | 87 |
| Negative attitude | 299 | 19 | 245 | 15 | 241 | 15 | 423 | 13 |
| Confidentiality of personal information | | | | | | | | |
| Positive attitude | 1525 | 91 | 1542 | 92 | 1563 | 92 | 3021 | 91 |
| Negative attitude | 156 | 9 | 140 | 8 | 143 | 8 | 318 | 10 |

Respondents who selected the “don’t know” option were excluded from this analysis.

The main aim of modelling the pooled data was to investigate whether there were significant differences in attitudes over time when we controlled for mode, as well as for other characteristics. It was not possible to include both the year of the survey and the mode in the same model due to multicollinearity, as the online mode was introduced only in 2021.

The results of the modelling from the pooled dataset are presented in Table 3 in this section and in Table A4 in the Appendix. Table 3 reports the results when we controlled for mode, while Table A4 presents comparable models controlling instead for survey year. The results from the two final models presented in Tables 3 and A4 were very similar. In the pooled dataset, the following variables were not found to be significant in any of the three models: sex, number of children, ethnicity, and country. The results suggest that for all three key outcome variables, the likelihood of reporting negative attitudes or distrust was lower in all years compared to 2014, and this likelihood decreased over time (see Table A4). This contradicts our hypothesis that the introduction of an online mode and the subsequent reduction in potential social desirability bias would result in a higher likelihood of negative attitudes being expressed in 2021.

Table 3 Results of Binary Logistic Regression Modelling—Pooled Data (Mode)

| Variable and categories | Accuracy (N = 6729) β (SE) | OR | Trust (N = 6936) β (SE) | OR | Confidentiality (N = 7087) β (SE) | OR |
|---|---|---|---|---|---|---|
| Intercept | −1.863*** (0.134) | | −2.695*** (0.163) | | −2.612*** (0.125) | |
| Age (ref: 18–34) | | | | | | |
| 35–44 | 0.062 (0.125) | 1.064 | −0.007 (0.153) | 1.013 | | |
| 45–54 | 0.381** (0.119) | 1.464 | 0.359* (0.143) | 1.432 | | |
| 55–64 | 0.599*** (0.119) | 1.820 | 0.692*** (0.140) | 1.997 | | |
| 65+ | 0.702*** (0.117) | 2.018 | 0.798*** (0.137) | 2.222 | | |
| Religion (ref: No religion) | | | | | | |
| Christian | −0.242*** (0.068) | 0.785 | −0.293*** (0.079) | 0.746 | −0.214* (0.092) | 0.807 |
| Other religion | −0.316* (0.160) | 0.729 | −0.230 (0.185) | 0.795 | 0.078 (0.190) | 1.081 |
| Economic activity (ref: Employed) | | | | | | |
| Self-employed | | | | | 0.377* (0.162) | 1.458 |
| Retired | | | | | 0.262** (0.100) | 1.300 |
| Unemployed | | | | | −0.017 (0.222) | 0.983 |
| Other | | | | | −0.036 (0.174) | 0.965 |
| Education (ref: Degree or above) | | | | | | |
| A-levels | 0.078 (0.096) | 1.081 | 0.223 (0.114) | 1.250 | | |
| Below A-levels | 0.278** (0.080) | 1.320 | 0.406*** (0.093) | 1.584 | | |
| No qualification | 0.429*** (0.103) | 1.536 | 0.655*** (0.115) | 1.925 | | |
| Tenure (ref: Own) | | | | | | |
| Rent | 0.283*** (0.067) | 1.327 | 0.290** (0.087) | 1.337 | | |
| Other/no information | 0.513* (0.224) | 1.671 | 0.035 (0.287) | 1.035 | | |
| Participation in Census (ref: No) | | | | | | |
| Yes | −0.189** (0.071) | 0.828 | −0.178* (0.082) | 0.837 | | |
| How well knew ONS (ref: Knew it well) | | | | | | |
| Knew it somewhat | 0.190* (0.092) | 1.209 | 0.285* (0.115) | 1.330 | 0.116 (0.129) | 1.124 |
| Only heard the name | 0.519*** (0.099) | 1.681 | 0.720*** (0.120) | 2.054 | 0.442** (0.133) | 1.556 |
| Not heard of it | 0.820*** (0.139) | 2.271 | 1.281*** (0.158) | 3.601 | 0.923*** (0.168) | 2.518 |
| Mode (ref: Face-to-face) | | | | | | |
| Paper | −0.323** (0.102) | 0.724 | −0.195 (0.115) | 0.823 | 0.371** (0.124) | 1.449 |
| Online | −0.680*** (0.082) | 0.507 | −0.660*** (0.098) | 0.533 | −0.438*** (0.110) | 0.645 |

β (SE): coefficient (standard error); OR: odds ratio. Ref: reference category; blank cells indicate variables not retained in the final model for that outcome. *p < 0.05; **p < 0.01; ***p < 0.001

When we substituted the year variable with the mode variable, the results suggested that for both the paper and online modes of data collection, the probability of reporting negative attitudes or distrust was lower compared to the face-to-face mode of data collection after controlling for other individual characteristics (see Table 3). These results are reassuring and may suggest that there was indeed an increase in certain aspects of public confidence in OS over the years in the UK. This finding is discussed further in the next section.

As for the other characteristics, in the pooled data, we found a significant association between age and both accuracy and trust outcomes, with older respondents more likely to express negative attitudes or distrust compared to younger ones. The findings related to religion were consistent with the results for the 2021 dataset. Additionally, those who reported having a religion other than Christianity were less likely to report negative attitudes towards the accuracy of OS compared to those with no religion.

The results for education were consistent with those observed in the 2021 data. Respondents with lower levels of education were more likely to report negative attitudes or distrust, but only for the accuracy and trust variables, as education was not found to be significant when the confidentiality variable was analysed. As for tenure, the results were consistent with the 2021 results and relevant for the accuracy and trust in ONS statistics variables, with those renting their accommodation being more likely to report negative attitudes compared to those who owned their homes. An additional variable found to be significant in the context of confidentiality was economic activity, with people who were self-employed or retired having a higher probability of expressing negative attitudes about the confidentiality of personal information compared to those who were employed. Participation in the census was a significant variable for the accuracy and trust models, with those who reported taking part being more likely to express positive attitudes or trust compared to those who did not participate in the census. Awareness of the ONS was again a significant variable in all three contexts, and the higher the level of awareness, the lower the likelihood of reporting negative attitudes or distrust.

5 Discussion and Conclusions

In this paper, we examined the characteristics of UK respondents with varying levels of public confidence in OS and analysed the change in public confidence in OS over time between 2014 and 2021.

We found that certain demographic and socio-economic characteristics were associated with a higher likelihood of negative attitudes or distrust in 2021, at least with respect to one of the outcomes of interest. These included religion, ethnicity, education, and tenure. Respondents with lower levels of education were more likely to express negative attitudes or distrust. This was also the case for those with no religion compared to Christians, those of White ethnicity compared to respondents from other ethnic backgrounds, and those renting their accommodation compared to homeowners. For some outcomes of interest (accuracy and confidentiality), census participation was significantly associated with more positive attitudes. Awareness of the ONS was also a significant variable for all three outcomes, with lower awareness associated with a higher likelihood of negative attitudes.

Additional characteristics, such as age and economic activity, were associated with negative attitudes or distrust for the key outcome variables in the analysis of the pooled dataset. Older respondents and those who were retired or self-employed, compared to those who were employed, were more likely to report negative attitudes. Understanding these factors associated with the survey climate in relation to OS in the UK can inform survey practice strategies to improve the survey climate, so that survey implementation may require less effort to obtain high-quality data. Further exploration of negative attitudes towards surveys among specific sub-groups should be carried out with the aim of identifying effective interventions that might improve their confidence in OS and, as a result, potentially improve the survey climate. For example, qualitative research could be conducted targeting respondents with lower education levels to understand their attitudes towards OS and their needs regarding survey participation.

We also analysed whether the change in attitudes over time could be attributed to the shift in data collection mode in 2021. While some aspects of public confidence in OS in the UK—particularly regarding accuracy and trust—demonstrated improvement over time, attitudes towards confidentiality remained stable.

The results regarding changes in public confidence in OS in the UK, as reported above, cannot be explained by the change in the mode of data collection over time. We would expect respondents to have a higher likelihood of expressing negative attitudes in 2021 due to the reduction in social desirability bias associated with the introduction of the self-completion mode of data collection. Instead, we observed the opposite direction of the effect. The observed positive change may be attributable to the timing of the 2021 survey, as it took place in the same year as Census 2021 (Butt et al., 2022a; ONS, 2022). Also, the frequent reporting of results from the ONS Covid-19 Infection Survey (CIS) in the news during the Covid-19 pandemic in the UK might have influenced this positive change, as public exposure to OS had increased, which possibly helped to raise awareness about surveys generally and made OS more visible to the UK public (Butt et al., 2022a). These results are consistent with the third society-level dimension of survey climate identified by Loosveldt and Joye (2016) and further discussed by Lorenc et al. (2013). Specifically, increased efforts by NSIs to influence the survey environment through the media offer effective means of improving the external survey environment for Official Statistics. In contrast, during the years when the earlier surveys were conducted (2014–2018), public exposure to media coverage about the ONS, various surveys, and the census was comparatively limited.

However, it is important to note that response rates decreased between 2014 and 2021, falling from 47% to 24%. This may be partially attributed to changes in the mode of data collection and the absence of an individual-level sampling frame in the UK, but it also suggests a general decline in willingness to participate in surveys. Additionally, in the 2021 survey, a self-selection mechanism was introduced for the within-household selection of individuals as part of the self-completion design. While this reflects current best practice in the UK, it has its limitations (Nicolaas, 2022). Moreover, the questions used for the analysis shifted from being part of a larger survey covering a wide range of topics to becoming a stand-alone survey focused specifically on public confidence in OS, which may have attracted a different respondent profile.

Although, according to Butt et al. (2022b), the weighted PCOS 2021 sample broadly matched the composition of the previous BSA 2018 survey, as well as national population estimates across a wide range of demographic and socio-economic variables—including sex, age, number of adults in the household, ethnicity, region, tenure, education, and economic activity—it is important to note that unobserved characteristics could not be accounted for in weighting and statistical modelling. These unobserved characteristics may differ across different points in time, potentially influencing the observed differences in public confidence in OS. It is possible that negative opinions about surveys may lead to non-participation, a known limitation of surveys about surveys (Goldman, 1944). As a result, we report the observed increase in confidence in OS in the UK with caution. Since willingness to participate did not increase over the studied period, and we were unable to investigate all components of the survey climate, we cannot conclude that the overall survey climate in the UK has improved over time.

The results regarding the perceptions of data confidentiality contradicted the findings reported by Kim et al. (2011). However, it is important to note that their study reported changes between 1990 and 2000 in the US, and therefore, the findings are not directly comparable to those in our study.

In light of the third society-level dimension of survey climate identified by Loosveldt and Joye (2016) and further discussed by Lorenc et al. (2013), it is important to discuss factors associated with attitudes towards OS that can be effectively influenced or managed. First, previous census participation was found to be associated with positive attitudes with respect to the accuracy and confidentiality variables in the 2021 data, and with the accuracy and trust variables in the pooled data analysis. However, it is important to note that, given the observational nature of the data, it is impossible to establish the direction of causal effects. Two possible interpretations exist: 1) census participation increases confidence in OS, or 2) confidence in OS affects the likelihood of participation in the census. Given that participation in the census is strongly associated with positive attitudes towards OS, it is important to further investigate the direction of causal effects. If it is established that census participation improves some aspects of survey climate, stronger measures to enforce census participation—possibly requiring individuals to complete their questionnaires themselves, if they are able to—could potentially have a positive impact on the survey climate in the UK.

Awareness of the ONS was also positively associated with attitudes towards different dimensions of survey climate in the UK, that is, the higher the awareness, the more positive the attitudes. Therefore, measures to increase public awareness of the ONS, the UK Statistics Authority, and other organisations producing OS outside the exceptional context of the Covid-19 pandemic may be beneficial. Statistics Canada similarly identified key elements for improving its external survey environment, including strengthening its brand, enhancing interactions with the media, engaging directly with survey respondents, and developing other partnership initiatives (Lorenc et al., 2013).

The analysis presented here has several limitations. First, the results reported in this paper are exploratory, and the study was not pre-registered. More importantly, positive attitudes or trust may be overreported due to the nature of surveys on surveys and the associated non-response bias, as some potential respondents who do not hold positive attitudes about surveys may have opted out (Goldman, 1944). This bias may be particularly pronounced in 2021, as this survey was a stand-alone survey about surveys. Additionally, the outcome measures may be subject to acquiescence response bias, potentially leading to an overestimation of agreement with the statements relating to confidence in OS (Krosnick and Presser, 2010). Moreover, due to social desirability bias, the attitudes expressed in the face-to-face context in the presence of an interviewer (2014, 2016, and 2018) could also be overreported.

Another limitation is the inability to assess all aspects of the survey climate framework developed by Loosveldt and Joye (2016) with the available data. We could not directly assess the willingness of respondents to participate in surveys (although we did observe declining response rates, which suggest reduced willingness to participate in surveys in the UK), nor important components of the opinions-about-surveys dimension, such as survey enjoyment, survey cost, and survey value (Loosveldt & Storms, 2008). Despite these limitations, this paper contributes new evidence about public confidence in OS in the UK, particularly regarding the characteristics of respondents with different attitudes towards OS and other associated factors.

To gain a fuller understanding of the UK survey climate, it is essential to collect relevant data and conduct an in-depth analysis of all components of the framework. In this regard, the CROss-National Online Survey‑2 (CRONOS-2) employed a new survey attitude scale developed by de Leeuw et al. (2019) in an attempt to conduct a more detailed survey climate assessment in the UK (and other countries), and this scale would complement our analysis well as it addressed the aspects of public opinion about surveys, which we were not able to analyse in this paper. It is also important to note that attitudinal questions generally have limitations, as according to Converse (2006), responses may not always be the result of a rigorous mental process but could be just a function of a “mental coin flip”. Nevertheless, attitudinal questions remain useful as they allow researchers to obtain valuable information and can be helpful in many research contexts, including the study of the survey climate.

Finally, the study did not include data from Northern Ireland, so we cannot describe the situation across the entire UK. It would therefore be valuable to expand the geographical remit of the PCOS surveys to include Northern Ireland in future data collection, particularly if the survey climate and respondents’ attitudes in Northern Ireland differ from those in other parts of the UK. As expected, the proportion of respondents who had not heard of the ONS was higher in Scotland than in England and Wales in 2021. This finding highlights the need for a separate assessment of awareness of the National Records of Scotland (NRS), which conducts the census in Scotland, in future rounds of the PCOS and other relevant surveys conducted in Scotland.

Due to the inherent challenges associated with non-response in surveys about surveys, further research is needed to determine the most effective approach for conducting this specific type of survey—whether as a stand-alone instrument or as a module within a broader survey, and whether it should be interviewer-administered or self-administered.

Supplementary Information

The Appendix contains five tables and one graph. The first table presents the stages involved in creating the analytical samples used for the analysis. The second table summarises the characteristics of the respondents in the 2021 dataset and in the pooled dataset used for the analyses. The third table presents the results of the binary logistic regression models using weighted 2021 data. The fourth table shows the results of the binary logistic regression models using the pooled data, with the year variable in place of the mode variable used in the main text of the manuscript. The last table presents the tetrachoric correlations between the three main outcome variables used for the analyses. The graph shows the discrimination measures plot from the multiple correspondence analysis.
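To illustrate the type of analysis reported in these tables, the sketch below fits a weighted binary logistic regression and computes an approximate tetrachoric correlation. It is a minimal illustration, not the authors’ code: the data are simulated, the variable names (trust_ons, confidential, age_65plus, degree, weight) are hypothetical stand-ins for the PCOS outcomes and covariates, and the freq_weights argument simply weights the log-likelihood rather than providing design-based standard errors for a complex sample.

```python
# A minimal sketch, not the authors' code: weighted logistic regression and
# an approximate tetrachoric correlation on simulated data with hypothetical
# variable names (trust_ons, confidential, age_65plus, degree, weight).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "age_65plus": rng.integers(0, 2, n),  # 1 = aged 65 or over
    "degree": rng.integers(0, 2, n),      # 1 = holds a degree
    "weight": rng.uniform(0.5, 1.5, n),   # survey weight
})
# Simulate the binary outcomes so that trust is lower among older
# respondents, higher among graduates, and correlated with confidence
# that data are kept confidential.
p_trust = 1 / (1 + np.exp(-(0.8 - 0.6 * df["age_65plus"] + 0.5 * df["degree"])))
df["trust_ons"] = rng.binomial(1, p_trust)
df["confidential"] = rng.binomial(1, np.where(df["trust_ons"] == 1, 0.8, 0.4))

# Weighted binary logistic regression via a binomial GLM; freq_weights
# applies the survey weights to the log-likelihood.
fit = smf.glm(
    "trust_ons ~ age_65plus + degree",
    data=df,
    family=sm.families.Binomial(),
    freq_weights=np.asarray(df["weight"]),
).fit()
print(fit.summary())

def tetrachoric_approx(x, y):
    """Cosine approximation to the tetrachoric correlation of two
    binary variables, computed from their 2x2 contingency table."""
    t = pd.crosstab(x, y).to_numpy().astype(float)
    a, b, c, d = t[0, 0], t[0, 1], t[1, 0], t[1, 1]
    return np.cos(np.pi / (1 + np.sqrt((a * d) / (b * c))))

print(tetrachoric_approx(df["trust_ons"], df["confidential"]))
```

In practice, tetrachoric correlations are usually estimated by maximum likelihood, and design-based variance estimation for a complex sample would require dedicated survey-analysis routines; the cosine formula above is only a quick approximation.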

Acknowledgements

Dr Maslovskaya gratefully acknowledges the Department of Economics of the University of Bergamo for the opportunity to visit Dr Bianchi, which enabled collaboration and supported the development of this manuscript. Dr Bianchi acknowledges support from the University of Bergamo 60% funds (Bianchi grant). The authors also thank the guest editor and the reviewers for their valuable comments on the manuscript.

References

Agresti, A. (2013). Categorical data analysis (3rd edn.). Hoboken: Wiley.

Bartholomew, D. J., Steele, F., & Moustaki, I. (2008). Analysis of multivariate social science data. Boca Raton: CRC Press.

Brackfield, D. (2011). OECD work on measuring trust in official statistics. Proceedings of the 58th World Statistical Congress, Dublin, 2011, Session STS070. International Statistical Institute.

Butt, S., Swannell, B., & Pathania, A. (2022a). Public confidence in official statistics—2021. NatCen. Report prepared for the UK Statistics Authority. https://natcen.ac.uk/sites/default/files/2022-12/NatCen_Public-Confidence-in-official-statistics_2021.pdf (Created 25 Apr 2022).

Butt, S., Swannell, B., Pathania, A., Keyes, A., & Messling, J. (2022b). Public confidence in official statistics 2021—Technical report. NatCen. Report prepared for the UK Statistics Authority. https://natcen.ac.uk/sites/default/files/2023-07/P16329%20PCOS%202021%20Technical%20Report%20Final_Signed%20off%20by%20UKSA_Formatted%20amended.pdf (Created Apr 2022).

Census Act 1920 (1920). Chapter 41. https://www.legislation.gov.uk/ukpga/Geo5/10-11/41

Childs, J. H., Fobia, A. C., King, R., & Morales, G. (2019). Trust and credibility in the U.S. federal statistical system. Survey Methods: Insights from the Field. https://surveyinsights.org/?p=10663

Coffey, S., Maslovskaya, O., & McPhee, C. (2024). Recent innovations and advances in mixed-mode surveys. Journal of Survey Statistics and Methodology. https://doi.org/10.1093/jssam/smae025

Converse, P. E. (2006). The nature of belief systems in mass publics (1964). Critical Review, 18(1–3), 1–74.

Curtice, J., Clery, E., Perry, J., Phillips, M., & Rahim, N. (2019). British social attitudes: the 36th report. London: The National Centre for Social Research. https://www.bsa.natcen.ac.uk/media/39363/bsa_36.pdf

Eurostat (2017). Beginners: statistical concept—what are official statistics. https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Beginners:Statistical_concept_-_What_are_official_statistics?#:~:text=Official%20statistics%20are%20statistics%20produced,behalf%20of%20the%20national%20government

Fellegi, I. (1996). Characteristics of an effective statistical system. Canadian Public Administration, 39(1), 5–34.

Fellegi, I. (2004). Maintaining the credibility of official statistics. Statistical Journal of the United Nations ECE, 21, 191–198.

Fellegi, I. (2011). Report of the electronic working group on measuring trust in official statistics. OECD Meeting of the Committee on Statistics, Paris, June 2011.

Gilljam, M., & Granberg, D. (1993). Should we take don’t know for an answer? Public Opinion Quarterly, 57(3), 348–357.

Goldman, E. F. (1944). Poll on the polls. Public Opinion Quarterly, 8(4), 461–467.

Goyder, J. (1986). Surveys on surveys: limitations and potentialities. Public Opinion Quarterly, 50(1), 27–41.

Groves, R. (1989). Survey errors and survey costs. New York: Wiley.

Groves, R., & Couper, M. (1998). Nonresponse in household surveys. New York: Wiley.

Groves, R. M., Singer, E., & Corning, A. (2000). Leverage-saliency theory of survey participation: description and an illustration. Public Opinion Quarterly, 64(3), 299–308.

Kim, J., Gershenson, C., Glaser, P., & Smith, T. (2011). The polls-trends: trends in surveys on surveys. Public Opinion Quarterly, 75(1), 165–191.

Krosnick, J. A., & Presser, S. (2010). Question and questionnaire design. In J. D. Wright & P. V. Marsden (Eds.), Handbook of survey research (2nd edn., pp. 263–313). West Yorkshire: Emerald Group.

Kuder, G. F., & Richardson, M. W. (1937). The theory of the estimation of test reliability. Psychometrika, 2(3), 151–160.

de Leeuw, E. (2005). To mix or not to mix data collection modes in surveys. Journal of Official Statistics, 21, 233–255.

de Leeuw, E., Hox, J., Silber, H., Struminskaya, B., & Vis, C. (2019). Development of an international survey attitude scale: measurement equivalence, reliability, and predictive validity. Measurement Instruments for the Social Sciences, 1(1), 1–10. https://doi.org/10.1186/s42409-019-0012-x

Loosveldt, G., & Joye, D. (2016). Defining and assessing survey climate. In C. Wolf, D. Joye, T. W. Smith & Y. C. Fu (Eds.), The SAGE handbook of survey methodology (pp. 67–76). London: SAGE.

Loosveldt, G., & Storms, V. (2008). Measuring public opinions about surveys. International Journal of Public Opinion Research, 20, 74–89.

Lorenc, B., Loosveldt, G., Mulry, M. H., & Wrighte, D. (2013). Understanding and improving the external survey environment of official statistics. Survey Methods: Insights from the Field. https://doi.org/10.13094/SMIF-2013-00003

Lyberg, I., & Lyberg, L. (1991). Nonresponse research at Statistics Sweden. Proceedings of the Survey Research Methods Section. Alexandria: American Statistical Association. http://www.asasrms.org/Proceedings/y1991f.html

NatCen Social Research (2014a). British social attitudes 2014: user guide. http://doc.ukdataservice.ac.uk/doc/7809/mrdoc/pdf/bsa2014_userguide.pdf

NatCen Social Research (2014b). British social attitudes 2014: documentation of the Blaise questionnaire. https://www.bsa.natcen.ac.uk/media/39000/bsa-32-questionnaires-2014.pdf

NatCen Social Research (2016a). British social attitudes 2016: user guide. http://doc.ukdataservice.ac.uk/doc/8252/mrdoc/pdf/8252_bsa_2016_user_guide.pdf

NatCen Social Research (2016b). British social attitudes 2016: documentation of the questionnaire. https://www.bsa.natcen.ac.uk/media/39198/bsa34-questionnaire.pdf

NatCen Social Research (2018a). British social attitudes 2018: user guide. https://doc.ukdataservice.ac.uk/doc/8606/mrdoc/pdf/8606_bsa_2018_user_guide_final.pdf

NatCen Social Research (2018b). British social attitudes 2018: documentation of the questionnaire. https://www.bsa.natcen.ac.uk/media/39198/bsa34-questionnaire.pdf

NatCen Social Research (2016). British Social Attitudes Survey, 2014 (data collection) (2nd edn.). UK Data Service. SN: 7809. https://doi.org/10.5255/UKDA-SN-7809-2

NatCen Social Research (2017). British Social Attitudes Survey, 2016 (data collection). UK Data Service. SN: 8252. https://doi.org/10.5255/UKDA-SN-8252-1

NatCen Social Research (2021). British Social Attitudes Survey, 2018 (data collection) (2nd edn.). UK Data Service. SN: 8606. https://doi.org/10.5255/UKDA-SN-8606-2

NatCen Social Research (2023). Public Confidence in Official Statistics, 2021 (data collection). UK Data Service. SN: 9051. https://doi.org/10.5255/UKDA-SN-9051-1

Nicolaas, G. (2022). Within-household selection for push-to-web surveys. GenPopWeb2 Report. University of Southampton. https://www.ncrm.ac.uk/documents/Within%20household%20selection%20for%20push%20to%20web%20surveys.pdf

ONS (2022). Maximising the quality of Census 2021 population estimates. https://www.ons.gov.uk/peoplepopulationandcommunity/populationandmigration/populationestimates/methodologies/maximisingthequalityofcensus2021populationestimates#:~:text=Where%20our%20Census%202021%20response,rates%20when%20compared%20with%202011

ONS (2023). 2022 Census in Scotland. https://osr.statisticsauthority.gov.uk/publication/2022-census-in-scotland/pages/2/#:~:text=At%20the%20end%20of%20April,whose%20return%20rate%20was%2066%25

UK Parliament (2021). Preparing for the 2021 census. https://commonslibrary.parliament.uk/research-briefings/cbp-8531/#:~:text=Census%20legislation&text=In%20October%202019%2C%20the%20government,don’t%20answer%20these%20questions

Parry, H. J., & Crossley, H. M. (1950). Validity of responses to survey questions. Public Opinion Quarterly, 14(1), 61–80.

Pullinger, J. (2020). Trust in official statistics and why it matters. Statistical Journal of the IAOS, 36, 343–346. https://doi.org/10.3233/SJI-200632

Radermacher, W. J. (2020). Official statistics 4.0. Springer.

Tourangeau, R., Rips, L. J., & Rasinski, K. A. (2000). The psychology of survey response. Cambridge: Cambridge University Press.

Vannieuwenhuyze, J., Loosveldt, G., & Molenberghs, G. (2010). A method for evaluating mode effects in mixed-mode surveys. Public Opinion Quarterly, 74(5), 1027–1045.

Vannieuwenhuyze, J. T., Loosveldt, G., & Molenberghs, G. (2012). A method to evaluate mode effects on the mean and variance of a continuous variable in mixed-mode surveys. International Statistical Review, 80(2), 306–322.