Question Wording Matters in Measuring Frequency of Fear of Crime: A Survey Experiment of the Anchoring Effect

Survey Research Methods
ISSN 1864-3361
DOI: 10.18148/srm/2024.v18i1.8259
Aubrey L. Etopio (aubreyetopio@gmail.com), University of Maryland, Baltimore County, Baltimore, USA
Emily R. Berthelot (eberthelot@unr.edu), University of Nevada, Reno, Nevada, USA
© 2024 European Survey Research Association

For decades, fear of crime researchers have disagreed about how best to measure fear of crime. One approach proposed that measuring the frequency of fear of crime within the past year has the highest validity. We argue that a frequency approach is vulnerable to the anchoring effect, in which participants base their numerical estimate on an available anchor. We conducted a survey experiment to test the effect of question wording on reported frequency of fear of crime. Participants were randomly assigned to report the number of times they felt fearful of crime within either the past 12 months, a typical month, or a typical week; a fourth condition asked a forced-choice question with many response options. Participants also reported the intensity of their most recent instance of fear. We hypothesized that the year condition would yield lower frequency and higher intensity reports, followed by the month condition, and then the week condition. We did not find differences in intensity between conditions, but we found stark differences in frequency between the year, month, and week conditions in the hypothesized direction. This is consistent with the anchoring effect: the specified time period signaled an anchor to participants, and they adjusted their estimates from those anchors. We advise caution regarding frequency measures of fear of crime, because such questions may lead participants to anchor and adjust, and we extend this caution to researchers who wish to measure the frequency of other emotions, feelings, or behaviors. Lastly, we discuss the potential implications for policy.

1 Introduction

Crime has been a growing concern among Americans. Violent crime in the United States has been steadily declining since the early 1990s, but worry about crime has been increasing since 2001, presumably due to the lasting unease after 9/11 (Donohue, 2017). Crime and fear of crime have become increasingly salient issues in American politics, regardless of party. Fear of crime has been a motivating force in policy decisions, beginning in the 1960s with the “war on crime” and persisting for decades, with recent presidential races centered on reducing crime and violence (Simon, 2018). Most notably, the Trump Administration promised to “restore law and order” by curbing immigration and “standing up for our law enforcement community” (White House, 2017a, b). Perceptions of crime undoubtedly influence national resource allocation. In September 2018, just as Hurricane Florence was poised to make landfall, the Department of Homeland Security diverted $10 million away from the Federal Emergency Management Agency (FEMA) to fund immigration detention centers (Nixon, 2018). Rhetoric that fuels fear of crime clearly supports such an agenda. As Donohue (2017) put it: “Unscrupulous politicians and their supporters, the gun industry (using fear of crime to help sell guns and elect pro-gun legislators), and parts of the media constantly seek to scare the public with alarmist crime stories” (p. 1309). Clearly, fear of crime has far-reaching implications, and thus deserves research attention.

Fear of crime has been a popular topic of study since early research revealed that fear of crime affects far more people than crime itself (Fattah & Sacco, 1989). In fear of crime research, there is a long history of disagreement regarding how to best measure fear of crime (Etopio & Berthelot, 2022; Hale, 1996; Hart et al., 2022; Henson & Reyns, 2015). Inconsistency in measurement can lead to inconsistent estimates of prevalence of fear of crime. Collins’ (2016) meta-analysis of 114 studies found that age differences in fear of crime were largely impacted by study design, including the number of questions asking about fear of crime. She also found that, while gender was the strongest predictor of fear of crime, question phrasing accounted for 30% of the gender difference. Collins concluded that a study’s survey design “has profound impacts on the conclusions that study makes about which people/groups are most afraid” (2016, p. 25).

Decades earlier, Farrall and colleagues (1997) had similarly lamented that “the results of fear of crime surveys appear to be a function of the way the topic is researched, rather than the way it is” (p. 677, emphases in original). They concluded that the typical fear of crime measure overestimates the prevalence of fear of crime. To address their concerns, Farrall and Gadd (2004) departed from previous approaches and prioritized the frequency and intensity of fear of crime. To do so, they asked three questions (p. 128):

  1. “In the past year, have you ever felt fearful about the possibility of becoming a victim of crime? [yes, no, can’t remember]

  2. [if YES at Q1] How frequently have you felt like this in the last year? [N of times recorded]

  3. [if YES at Q1] On the last occasion, how fearful did you feel? [not very fearful, a little bit fearful, quite fearful, very fearful, cannot remember]”

Of the 365 participants who answered their frequency question, 19% said they felt fearful once, 17% twice, a combined 18% said 3 to 5 times, and the remaining 34% answered anywhere from 6 to 365 times (12% answered “don’t know”). As such, Farrall and Gadd (2004) concluded that the prevalence of fear of crime is low.

We agree with Farrall and colleagues (1997) that the prevalence of fear of crime depends on the measurement instrument. However, frequency measures are no exception. We argue that questions asking about the frequency of fear of crime are vulnerable to the anchoring effect, which occurs when participants base their numerical estimate on an available anchor (see Literature Review; Dillman et al., 2014; Tversky & Kahneman, 1974). We expect that if participants are asked how many times “in the past year” they felt fearful about becoming the victim of crime, they may use “once per year” as an anchor and adjust their estimates from there. Due to the “year” anchor, participants may assume that the researchers are asking about more serious, less frequent events. In contrast, if participants were asked about how many times per month or per week they felt fearful, they may use that reference period (“once per month” or “once per week”) as an anchor and assume the researcher is asking about less serious, more frequent events. We sought to test this with a survey experiment.

2 Literature Review

Tversky and Kahneman (1974) famously uncovered the cognitive heuristics that people rely on to make everyday judgments. One such heuristic is anchoring and adjustment, in which people tend to “make estimates by starting from an initial value that is adjusted to yield the final answer” (p. 1128). They reviewed many instances in which participants were asked to estimate a series of quantities and their reported responses were close to the numerical anchor provided. In one instance, participants were tasked with estimating the percentage of African countries in the United Nations. When they were given an initial anchor of 10, the median estimate was 25, but when the anchor was 65, the median estimate was 45. The anchoring effect prevailed even when participants were paid for accurate responses and even when they knew that the provided anchors were arbitrary (Tversky & Kahneman, 1974). Decades later, Kahneman (2011) referred to the anchoring effect as “one of the most reliable and robust results of experimental psychology” (p. 119).

Schwarz (2007) reviewed similar cognitive errors that threaten the validity of survey questions—particularly questions about frequency of behaviors or emotions. He warned that “frequency reports are highly context dependent, often shaped by the research instrument” (p. 282). There are two major ways that anchoring can occur in a survey. First, participants’ responses to earlier questions can serve as anchors for later questions (Dillman et al., 2014). Second, questions themselves can provide anchors for participants (Tversky & Kahneman, 1974), which is the focus of the current study.

Winkielman, Knäuper, and Schwarz (1998) conducted survey experiments to examine the influence of reference period in participants’ self-reported frequency of anger. They expected that asking people “how often were you angry last year?” (p. 720) would lead to reports of lower frequency and higher intensity anger compared to asking how often they were angry during a “typical week.” They explained (p. 720):

“A participant may say to himself or herself, ‘The researcher would not expect me to remember all the small experiences that happen over the whole year, so she must be asking about the serious ones.’ This reasoning suggests that the same question may be interpreted as referring to substantively different experiences, depending on the length of the reference period.”

Indeed, they found that the reference period (year vs. week) affected participants’ reported frequency of anger. They concluded that “different reference periods elicited different question interpretations” (p. 725). In other words, their explanation was that the reference period signaled to participants how to interpret what “anger” meant: those in the year condition likely inferred they were being asked about major anger episodes (e.g., rage), whereas those in the week condition likely inferred they were being asked about minor anger episodes (e.g., annoyance).

More recent research has continued to find compelling evidence for anchoring and adjustment in reported frequencies. A recent Danish survey experiment examined the impact of question anchors on reported frequency of COVID-19 prevention behaviors (Hansen et al., 2022). Participants were asked to report the number of times they washed their hands during the previous day and the number of people they had close contact with during the previous day. In the handwashing question, participants were asked whether their frequency was more than, equal to, or less than an anchor of either 3 or 30; in the close contact question, the anchor was either 3 or 15. The researchers found a significant but small effect for the close contact measure and a large effect for the handwashing measure (d = 0.76). Participants who received the low handwashing anchor reported washing their hands an average of 10.9 times, compared to 18.1 times in the high anchor condition. They, too, concluded that “self-reported data for this type of behavior is effectively shaped by the type of measure” (p. 41).

3 Hypotheses

Three major hypotheses guided our research:

  1. Fewer participants in the year condition, compared to the month and week conditions, will report “yes” to feeling fearful of crime during the specified time period.

  2. Participants in the year condition will report the lowest frequency of feeling fearful of crime, followed by the month condition, and then the week condition.

  3. Participants in the year condition will report feeling the greatest intensity of fear of crime, followed by the month condition, and then the week condition.

4 Method

4.1 Sample

We used Qualtrics Panels to recruit an online sample (N = 561) of US adults nationwide. Responses were collected between January 11th and January 23rd, 2023. The only eligibility criteria for the survey were that participants must be at least 18 years old and must speak fluent English. A correct answer to the attention check (“Select ‘untrue for me’ for this question”) was required; participants who did not pass this attention check were removed by Qualtrics before we received the data. We used quotas for gender and age because a pilot launch of the survey revealed that women and younger participants were overrepresented. The gender quota aimed for a roughly 50/50 split between men and women. The age quota aimed for a representative distribution into the age categories of age 18–34 (30%); age 35–54 (32%); age 55+ (38%).

The mean age of the sample was 47.44 (range = 18–89; SD = 18.19). The sample was 51% women and 49% men. The sample was majority White (63%), followed by Black (25%), Asian (5%), American Indian/Alaska Native (3%), Middle Eastern/East Indian (1%), and Native Hawaiian/other Pacific Islander (0%). Thirteen participants (2%) indicated some combination of these race options. Most participants were not Hispanic (82%) and 18% of participants identified themselves as Hispanic.

4.2 Procedure

We conducted an experiment within a survey to examine the effect of question wording on differences in frequency and intensity of fear of crime. Participants were randomly assigned to one of four conditions: (1) year, (2) month, (3) week, and (4) forced choice. Each condition asked participants about their frequency of fear of crime in different ways. For the year condition, we retained Farrall and colleagues’ (1997) original wording, except we changed “in the past year” to “in the past 12 months” (see below) so it was clear to participants that we meant a full 12 months—as opposed to 2022 only or from January 2023 to the date of survey completion (which would have been less than one month). For the month and week conditions (see below), we asked about a “typical” month/week as opposed to the “past” month/week to avoid responses about an unusual month/week, such as a vacation.

4.2.1 Experimental Conditions

Year.

Participants in the year condition were first asked, “In the past 12 months, have you ever felt fearful about the possibility of becoming a victim of crime?” Participants who selected yes were then asked, “How many times have you felt fearful about the possibility of becoming a victim of crime in the past 12 months? (Type a number)”

Month.

Participants in the month condition were first asked, “In a typical month, do you ever feel fearful about the possibility of becoming a victim of crime?” Participants who selected yes were then asked, “How many times do you feel fearful about the possibility of becoming a victim of crime in a typical month? (Type a number)”

Week.

Participants in the week condition were first asked, “In a typical week, do you ever feel fearful about the possibility of becoming a victim of crime?” Participants who selected yes were then asked, “How many times do you feel fearful about the possibility of becoming a victim of crime in a typical week? (Type a number)”

Forced Choice.

Participants in the forced-choice condition were asked “How often do you feel fearful about the possibility of becoming a victim of crime?” with response options “more than once a day,” “every day,” “a few times a week,” “once a week,” “a few times a month,” “once a month,” “several times per year (6–11 times),” “a few times per year (2–5 times),” “once a year,” “less than once a year,” and “never.” Unlike the year, month, and week conditions, this condition did not have a yes/no question before the frequency question, but the “never” response option gave participants the chance to indicate the equivalent of a “no” response.

4.2.2 Intensity

After participants answered their randomly assigned frequency question, all conditions then asked about intensity of fear of crime with the question, “The last time you were fearful about becoming the victim of a crime, how fearful did you feel?” with response options “not very fearful,” “a little bit fearful,” “quite fearful,” “very fearful,” and “cannot remember.”

5 Results

Random assignment led to an approximately equal number of respondents in each condition. There were 136 participants in the year condition (24% of the total sample), 133 participants in the month condition (24%), 151 participants in the week condition (27%), and 141 participants in the forced-choice condition (25%).

5.1 Hypothesis 1

To test Hypothesis 1, we examined the number of participants answering yes/no to whether they felt fearful of crime in the specified time period. We predicted that participants in the year condition would be the least likely to report they had been fearful of crime. Table 1 shows how many participants reported yes/no in each condition. Note that for the forced-choice condition, there was not a yes/no question before the frequency question was asked, so the “no” column in the forced-choice condition reflects the number of participants who chose “never” or “less than once a year,” whereas the “yes” column reflects the number of participants who chose anything other than “never” or “less than once a year.” We considered an answer of “less than once a year” in the forced-choice condition to be equivalent to an answer of “no” in any other condition because the other conditions did not have the option to report anything less frequent than once per year.

Table 1 Participant responses indicating whether or not they felt fearful of crime in the specified time period

Condition         No n (%)    Yes n (%)
Year              81 (60%)    55 (40%)
Month             77 (58%)    56 (42%)
Week              85 (56%)    66 (44%)
Forced choice a   55 (39%)    86 (61%)

a “No” reflects the participants who chose “never” or “less than once a year” in the forced-choice condition; “Yes” reflects participants who chose anything other than “never” or “less than once a year” in the forced-choice condition

Consistent with Hypothesis 1, the year condition had the smallest proportion of participants indicating “yes”—that they did feel fearful during that time period. A Chi-square test revealed that the proportion of participants answering yes/no differed by condition, χ²(3, N = 561) = 15.37, p = 0.002. Pairwise comparisons of the yes/no proportions, with a Bonferroni correction, revealed that the year, month, and week conditions were not significantly different from one another, but the forced-choice condition differed significantly from all other conditions. Ultimately, Hypothesis 1 was not supported.
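The omnibus test can be reproduced directly from the Table 1 counts. As a sketch (not the authors' code), the Pearson chi-square statistic computed by hand in Python:

```python
# Pearson chi-square test of independence on the Table 1 yes/no counts.
counts = {          # condition: (no, yes)
    "year": (81, 55),
    "month": (77, 56),
    "week": (85, 66),
    "forced choice": (55, 86),
}

rows = list(counts.values())
n = sum(no + yes for no, yes in rows)                       # N = 561
col_totals = [sum(row[j] for row in rows) for j in (0, 1)]  # [298, 263]

chi_sq = 0.0
for no, yes in rows:
    row_total = no + yes
    for j, observed in enumerate((no, yes)):
        # Expected count under independence of condition and yes/no.
        expected = row_total * col_totals[j] / n
        chi_sq += (observed - expected) ** 2 / expected

df = (len(rows) - 1) * (2 - 1)  # 3 degrees of freedom
print(f"chi2({df}, N = {n}) = {chi_sq:.2f}")  # chi2(3, N = 561) = 15.37
```

The hand computation recovers the reported statistic; in practice a library routine such as SciPy's contingency-table test would give the same value along with the p-value.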

5.2 Hypothesis 2

To test Hypothesis 2, we compared participants’ reported frequency of feeling fearful of crime across conditions. Table 2 shows reported frequencies in each condition. One participant typed “3–4,” which we changed to 3.5. Another typed “1–2,” which we changed to 1.5.

Table 2 Reported frequencies of feeling fearful of crime in the specified time period across conditions

Condition   Mean   Median   Mode   SD     Range
Year        10.1   3        2      29.5   1–200
Month        4.8   3        1       5.6   0a–30
Week         3.2   2        2       2.8   0a–15

a Despite reporting that they feel fearful of crime in a typical month/week, one participant in the month condition and one participant in the week condition reported a frequency of 0

To easily compare the frequencies across conditions, we converted responses in the month, week, and forced-choice conditions to represent their frequency per year. To adjust to frequency per year, we multiplied the month condition responses by 12 and multiplied the week responses by 52. To adjust the forced-choice condition responses to frequency per year, we converted each response to numerals in the following way: “never” was converted to 0 and was excluded from frequency counts; “less than once a year” = 0.5; “once a year” = 1; “a few times per year (2–5 times)” = 3.5 (average of 2 and 5); “several times per year (6–11 times)” = 8.5 (average of 6 and 11); “once a month” = 12; “a few times a month” = 36 (3 × 12); “once a week” = 52; “a few times a week” = 156 (3 × 52); “every day” = 365; “more than once a day” = 730 (2 × 365). Table 3 shows participants’ reported frequencies converted to frequency per year.
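The per-year conversion described above can be expressed as a small lookup. A minimal sketch in Python (the function names are ours, and the dictionary keys paraphrase the forced-choice response options using plain hyphens):

```python
# Convert raw responses in each condition to a frequency per year,
# following the conversion rules stated in the text.

def month_to_year(times_per_month):
    return times_per_month * 12

def week_to_year(times_per_week):
    return times_per_week * 52

# Forced-choice options mapped to numeric frequencies per year.
FORCED_CHOICE_TO_YEAR = {
    "never": 0,                                  # excluded from frequency counts
    "less than once a year": 0.5,
    "once a year": 1,
    "a few times per year (2-5 times)": 3.5,     # average of 2 and 5
    "several times per year (6-11 times)": 8.5,  # average of 6 and 11
    "once a month": 12,
    "a few times a month": 36,                   # 3 x 12
    "once a week": 52,
    "a few times a week": 156,                   # 3 x 52
    "every day": 365,
    "more than once a day": 730,                 # 2 x 365
}

print(month_to_year(3))  # a response of 3 times per month -> 36 per year
print(week_to_year(2))   # a response of 2 times per week -> 104 per year
```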

Table 3 Reported frequencies of feeling fearful of crime across conditions, converted to frequency per year

Condition         Mean    Median   Mode(s)    SD      Range
Year               10.1     3      2           29.5   1–200
Month a            57.1    36      12, 24      67.6   0b–360
Week c            166.6   104      104        143.7   0b–780
Forced choice d    94.2    12      0.5, 365   146.5   0.5–730

a Participant answers multiplied by 12
b Despite reporting that they feel fearful of crime in a typical month/week, one participant in the month condition and one participant in the week condition reported a frequency of 0
c Participant answers multiplied by 52
d Participant answers were converted to numerals: “never” was converted to 0 and was excluded from frequency counts; “less than once a year” = 0.5; “once a year” = 1; “a few times per year (2–5 times)” = 3.5 (average of 2 and 5); “several times per year (6–11 times)” = 8.5 (average of 6 and 11); “once a month” = 12; “a few times a month” = 36 (3 × 12); “once a week” = 52; “a few times a week” = 156 (3 × 52); “every day” = 365; “more than once a day” = 730 (2 × 365)

These descriptive statistics show stark differences between conditions. Consistent with Hypothesis 2, participants in the year condition reported the fewest instances of feeling fearful of crime (mean = 10.1, median = 3, mode = 2 times per year), compared to participants in the month condition (mean = 57.1, median = 36, modes = 12 and 24 times per year) and the week condition (mean = 166.6, median = 104, mode = 104 times per year). The mean frequency per year in the week condition was more than 10 times the mean frequency in the year condition, and the mode frequency per year in the week condition was more than 50 times the mode frequency in the year condition. These differences clearly support Hypothesis 2: the time period anchor in the question impacted participants’ reported frequency of feeling fearful of crime.
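The magnitude of these differences can be checked directly from the per-year figures in Table 3 with a line of arithmetic:

```python
# Verify the ratios quoted in the text using the Table 3 per-year figures.
year_mean, year_mode = 10.1, 2
week_mean, week_mode = 166.6, 104

mean_ratio = week_mean / year_mean  # roughly 16.5, i.e. more than 10x
mode_ratio = week_mode / year_mode  # 52.0, i.e. more than 50x

print(round(mean_ratio, 1), mode_ratio)
```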

5.3 Hypothesis 3

To test Hypothesis 3, we ran a one-way ANOVA to examine whether intensity of fear of crime varied significantly between conditions. We predicted that participants in the year condition would report the highest intensity fear, followed by month and then week. The ANOVA revealed no statistically significant difference in intensity between conditions, F(3, 259) = 2.22, p = 0.087.

6 Discussion

We anticipated that question wording would have a significant impact on participants’ reported frequency of fear of crime. Specifically, we anticipated that participants who were asked about their fear of crime “in the past 12 months” would report fewer instances of fear and higher intensity fear than participants who were asked about a “typical month” or a “typical week.”

Hypothesis 1 was not supported: participants in the year, month, and week conditions did not differ in their likelihood of indicating they felt fearful of crime at any point during the specified time period. The forced-choice condition was the only condition to significantly differ from the others. Hypothesis 2 was supported: participants in the year condition reported the lowest frequency of fear of crime, followed by month, and then week. However, Hypothesis 3 was not supported: there was no difference in the intensity of participants’ fear of crime between conditions. We expected that longer reference periods (year) would yield higher intensity than shorter reference periods (month and week).

We found no differences in reported intensity of fear of crime, which was unexpected since we did find such stark differences in frequency between conditions. We are not sure why there were no significant differences in intensity. As Schwarz (2007) explained, a reference period of “year” would likely signal to participants that researchers are interested in less frequent, and thus more intense, episodes of emotion, whereas a shorter reference period would likely signal to participants that researchers are interested in more frequent, and thus less intense, episodes. Our participants in the year condition reported much lower frequency, but not higher intensity, fear. This finding does not follow the logic that we hypothesized, so perhaps there is some other cognitive process or heuristic that can explain this outcome. Future research should further examine the relationships among question anchors, reported frequency, and reported intensity.

Our most impactful finding was the stark difference in frequency of fear of crime between conditions. This major finding has direct implications for measuring fear of crime. Farrall and colleagues (1997) asserted that previous measures of fear of crime tended to overestimate prevalence of fear of crime, and Farrall and Gadd (2004) concluded that prevalence is low because only about one-third of their sample reported experiencing “fear provoking episodes” in the past year (p. 130). However, we contend that asking about frequency of fear of crime within a year will inherently yield lower frequency estimates because of the question framing. Our results suggest that if Farrall and Gadd had asked about frequency within a shorter reference period, such as a month or a week, perhaps they would have instead concluded that prevalence of fear of crime is high. We recommend caution regarding frequency measures with a specified reference period (e.g., year, month, week) because they may yield responses close to the anchor of the reference period.

The current study illustrates the stark differences in frequency estimates of fear of crime as a function of question wording. Any bias in measurement has the potential to be exploited to promote a particular agenda. As explained in the introduction, anything that distorts public perceptions of crime has potential implications for policy decisions. For example, if constituents are polled about how many times in a typical week they feel fearful of crime, this may yield high frequency estimates and thus foster support for an anti-crime policy.

If researchers wish to measure frequency of fear of crime, perhaps a forced-choice question with response options ranging from “several times a day” to “less than once a year” or “never” is more advisable, with the specific response options depending on the research question. This type of question does not include an anchor because there is no reference period—only an exhaustive list of choices. Our forced-choice condition appears to have functioned as a control group. This condition had the widest range of responses, including a mean of 94.2 times per year, a median of once a month (12), and two modes: every day (365) and less than once a year (0.5). Perhaps this forced-choice question, without an anchor provided to participants, is the most valid way to measure frequency of fear of crime. However, Winkielman and colleagues (1998) illustrated that forced-choice response options can influence participants’ responses as well. They warned researchers that participants assume that the middle response option is the average and then adjust their answer based on the assumed average.

We also ask researchers to consider whether it is necessary to measure the frequency of fear of crime at all. Unless measuring frequency of fear of crime is crucially important for a particular hypothesis, we suggest alternative fear of crime measures. Gallup routinely asks American respondents about their perceptions of crime (Gallup, 2023). Since 1965, they have asked: “Is there any area near where you live—that is, within a mile—where you would be afraid to walk alone at night?” However, this one-item measure has been criticized for not asking specifically about crime (Etopio & Berthelot, 2022; Garofalo, 1979; Hale, 1996). Similarly, they have also asked: “How much do you personally worry about crime and violence?” (though it is important to note that this question is asked in the context of “problems facing the country”). If a researcher’s priority is to use the fewest items possible, these single-item measures may be preferable to a frequency measure.

Another alternative to frequency measures of fear of crime is our own 10-item fear of crime scale (Etopio & Berthelot, 2022). Example items include “Crime worries me in my day-to-day life” and “I feel vulnerable to becoming the victim of a crime.” We created the items systematically from participant statements during in-depth qualitative interviews about their feelings toward crime. After pretesting the items and conducting factor analyses, we found the resulting 10-item scale to demonstrate convergent validity, divergent validity, and internal consistency. To our knowledge, this is the first and only fear of crime scale that was created from qualitative reports of people’s feelings about crime.

Our findings also have implications beyond fear of crime. We echo Schwarz’s (2007) conclusion that frequency measures of emotions require participants to first interpret the researchers’ intentions, and that those interpretations influence their frequency estimates. Shorter reference periods signal to participants that researchers are interested in more frequent (and perhaps more benign) instances, whereas longer reference periods signal that researchers are interested in less frequent (and perhaps more intense) instances. We have no reason to suspect that these measurement issues would be specific to fear of crime.

These findings may generalize to measuring the frequency of other emotions, thoughts, feelings, and behaviors. For example, if a question asks how often someone has experienced pain in the past year, they may think of a sprained ankle from months ago and a recent case of appendicitis, reporting an answer of two instances of pain per year. In contrast, if a question asks how often someone has experienced pain in a typical week, they may think of their aching back each morning and evening, reporting an answer of 14 instances of pain per week, which would be 728 instances of pain per year. Of course, this example is hypothetical, but our findings suggest that researchers who wish to measure the frequency of some subjective experiential event within a given time period should be aware of this possibility. As Kahneman (2011) said, “it is not surprising that people who are asked difficult questions clutch at straws, and the anchor is a plausible straw” (p. 125).

Despite the possible threats to validity, some research questions may necessitate using frequency measures. For example, variables such as frequency of handwashing or frequency of urination are important to study because they may meaningfully predict health risks. These frequency measures (and certainly others) could not be replaced by measures of intensity—the frequency per se is important. At the very least, frequency questions should exclude any numerical anchors (e.g., “do you urinate more or less than 8 times per day?”). Further, researchers could use prospective measures such as the daily diary method. In daily diary studies, participants are asked to record their experiences in real time over a specified time period (Lischetzke & Könen, 2021). More specifically, “event-based sampling” asks participants to record whenever a particular event or behavior occurs. To measure frequency of handwashing, for example, a researcher could ask participants to record every time they wash their hands over the course of a day or a few days. For events or behaviors that are less frequent, the study period can be extended up to weeks or months. Because this approach asks participants to record prospectively, it is not vulnerable to biases in memory like retrospective measures are (Lischetzke & Könen, 2021).

Lastly, researchers may need to define the construct of interest for participants. For behaviors such as handwashing or urination, the measure may not require much interpretation. However, more abstract variables may be vulnerable to differences in interpretation, even for prospective measures. As explained above, Winkielman et al. (1998) warned that the length of the reference period can influence participant interpretations of the question. Even if using a daily diary method, perhaps defining exactly what it means to feel “anger” or “pain” or “fear of crime” could further minimize differences in interpretation and thus increase validity.

7 Conclusion

The current study found that participants’ estimates of how often they feel fearful of crime depend on the time period specified in the question. Asking participants how many times they felt fearful of crime in the past 12 months yielded markedly lower frequencies than asking how many times they feel fearful in a typical month or typical week. Our findings suggest that researchers measuring the frequency of fear of crime should do so with caution. We advise against including a temporal reference period when measuring frequency of fear of crime because of its impact on participants’ frequency estimates. Unless it is necessary for a particular research question, we also invite researchers to reconsider measuring frequency of fear of crime altogether; individual differences in fear of crime can be captured by measuring its intensity instead of its frequency.

Aside from fear of crime, our findings highlight the importance of considering human cognition in survey methodology in any discipline. Our research is further evidence that frequency measures can be biased by anchoring, so researchers should be cautious when measuring the frequency of subjective experiences. Cognitive heuristics, such as anchoring and adjustment, can lead participants to use cues from the question to infer what researchers expect from them. We echo other researchers’ conclusion that self-reported frequencies of emotions, thoughts, feelings, and behaviors vary greatly depending on the type of measurement used.

References

Collins, R. E. (2016). Addressing the inconsistencies in fear of crime research: a meta-analytic review. Journal of Criminal Justice, 47, 21–31. https://doi.org/10.1016/j.jcrimjus.2016.06.004

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: the tailored design method. Hoboken: Wiley.

Donohue, J. J. (2017). Comey, Trump, and the puzzling pattern of crime in 2015 and beyond. Columbia Law Review, 117(5), 1297–1354.

Etopio, A. L., & Berthelot, E. R. (2022). Defining and measuring fear of crime: a new validated scale created from emotion theory, qualitative interviews, and factor analyses. Criminology, Criminal Justice, Law, & Society, 23(1), 46–67. https://doi.org/10.54555/ccjls.4234.34104

Farrall, S., & Gadd, D. (2004). The frequency of the fear of crime. British Journal of Criminology, 44, 127–132.

Farrall, S., Bannister, J., Ditton, J., & Gilchrist, E. (1997). Questioning the measurement of the ‘fear of crime’: findings from a major methodological study. British Journal of Criminology, 37(4), 658–679.

Fattah, E. A., & Sacco, V. F. (1989). Crime and victimization of the elderly. New York: Springer.

Gallup (2023). Crime. https://news.gallup.com/poll/1603/crime.aspx

Garofalo, J. (1979). Victimization and the fear of crime. Journal of Research in Crime and Delinquency, 16, 80–97.

Hale, C. (1996). Fear of crime: a review of the literature. International Review of Victimology, 4, 79–150.

Hansen, P. G., Larsen, E. G., & Gundersen, C. D. (2022). Reporting on one’s behavior: a survey experiment on the nonvalidity of self-reported COVID-19 hygiene-relevant routine behaviors. Behavioural Public Policy, 6, 34–51. https://doi.org/10.1017/bpp.2021.13.

Hart, T. C., Chataway, M., & Mellberg, J. (2022). Measuring fear of crime during the past 25 years: a systematic quantitative literature review. Journal of Criminal Justice, 82, 1–11. https://doi.org/10.1016/j.jcrimjus.2022.101988.

Henson, B., & Reyns, B. W. (2015). The only thing we have to fear is fear itself. . . and crime: the current state of the fear of crime literature and where it should go next. Sociology Compass, 9(2), 91–103. https://doi.org/10.1111/soc4.12240.

Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus, and Giroux.

Lischetzke, T., & Könen, T. (2021). Daily diary methodology. In F. Maggino (Ed.), Encyclopedia of quality of life and well-being research. Springer. https://doi.org/10.1007/978-3-319-69909-7_657-2

Nixon, R. (2018, September 12). $10 million from FEMA diverted to pay for immigration detention centers, document shows. The New York Times. https://www.nytimes.com/2018/09/12/us/politics/fema-ice-immigration-detention.html

Schwarz, N. (2007). Cognitive aspects of survey methodology. Applied Cognitive Psychology, 21, 277–287. https://doi.org/10.1002/acp.1340

Simon, J. (2018). After the culture of fear: fear of crime in the United States half a century on. In M. Lee & G. Mythen (Eds.), The Routledge international handbook on fear of crime (pp. 82–92). New York: Routledge.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: heuristics and biases. Science, 185(4157), 1124–1131.

White House (2017a). Inside President Donald J. Trump’s first year of restoring law and order. https://trumpwhitehouse.archives.gov/briefings-statements/president-donald-j-trumps-first-year-restoring-law-order/

White House (2017b). Standing up for our law enforcement community. https://perma.cc/H4XS-MKRV

Winkielman, P., Knäuper, B., & Schwarz, N. (1998). Looking back at anger: reference periods change the interpretation of emotion frequency questions. Journal of Personality and Social Psychology, 75(3), 719–728.