TY  - JOUR
AU  - Schonlau, Matthias
AU  - Couper, Mick P.
PY  - 2016/08/15
Y2  - 2024/03/28
TI  - Semi-automated categorization of open-ended questions
JF  - Survey Research Methods
JA  - SRM
VL  - 10
IS  - 2
SE  - Articles
DO  - 10.18148/srm/2016.v10i2.6213
UR  - https://ojs.ub.uni-konstanz.de/srm/article/view/6213
SP  - 143-152
AB  - Text data from open-ended questions in surveys are difficult to analyze and are frequently ignored. Yet open-ended questions are important because they do not constrain respondents’ answer choices. Where open-ended questions are necessary, multiple human coders sometimes hand-code answers into one of several categories. At the same time, computer scientists have made impressive advances in text mining that may allow such coding to be automated. Automated algorithms do not achieve an overall accuracy high enough to replace humans entirely. We categorize open-ended questions soliciting narrative responses using text mining for easy-to-categorize answers and humans for the remainder, using expected accuracies to guide the choice of the threshold delineating “easy” from “hard” answers. Employing multinomial boosting avoids the common practice of converting machine learning “confidence scores” into pseudo-probabilities. This approach is illustrated with examples from open-ended questions related to respondents’ advice to a patient in a hypothetical dilemma, a follow-up probe related to respondents’ perception of disclosure/privacy risk, and a question on reasons for quitting smoking from a follow-up survey of the Ontario Smoker’s Helpline. Targeting 80% combined accuracy, we found that 54%-80% of the data in research surveys could be categorized automatically.
ER  - 