Fighting Health Misinformation: Building an Interpretable, Criteria-Driven System to Assist the Public in Assessing the Quality of Health News
Degree: Doctor of Philosophy
Committee: Ajay Sethi, Amanda Simanek, Maria Haigh
Keywords: Artificial Intelligence, Health Misinformation, Infodemic, Interpretable A.I., Machine Learning, Public Health Education
Machine learning techniques have been shown to be effective at identifying health misinformation. However, interpreting a classification model remains challenging due to the model's intricacy. The absence of a justification for the classification result, and of disclosure of the model's domain knowledge, may erode end-users' trust in such models. This diminished trust may in turn undermine the effectiveness of artificial intelligence-based initiatives to counteract health misinformation. This study addresses both the public's need for help in evaluating the quality of health news and the typical opacity of an AI approach. The study employs an interpretable, criteria-based approach for automatically assessing the quality of health news on the Internet. Nine well-established criteria were chosen for building the system. To automate the evaluation of each criterion, Logistic Regression, Naive Bayes, Support Vector Machine, and Random Forest algorithms were tested. Two approaches were used to develop interpretable representations of the results. In the first approach, (1) word feature weights are calculated, which explains how the classification models distill keywords relevant to the prediction; (2) using the Local Interpretable Model-agnostic Explanations (LIME) framework, keywords are then selected for visualization to show how the classification models identify positive news articles; (3) finally, the system highlights target sentences containing those keywords to justify the criterion evaluation result. In the second approach, (1) sentences providing evidence to support the evaluation result were extracted from 100 health news articles; (2) based on these results, a typology classification model is trained at the sentence level; (3) the system then highlights positive sentence instances to justify the result. The accuracy of both methods was measured using a small held-out test set.
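The first approach's pipeline (per-word weights, then highlighting sentences that contain high-weight keywords) can be sketched roughly as follows. This is a minimal, hypothetical illustration only: it substitutes Naive Bayes-style log-odds for the dissertation's actual model feature weights and LIME step, and the toy "Cost" examples are invented, not data from the study.

```python
# Hypothetical sketch: derive per-word weights from labeled articles
# (Naive Bayes-style log-odds stand in for classifier feature weights),
# then flag sentences containing top keywords as justification.
import math
import re
from collections import Counter

def word_weights(pos_texts, neg_texts):
    """Log-odds of each word in positive vs. negative articles
    (add-one smoothing); higher = more indicative of 'criterion met'."""
    pos = Counter(w for t in pos_texts for w in re.findall(r"[a-z']+", t.lower()))
    neg = Counter(w for t in neg_texts for w in re.findall(r"[a-z']+", t.lower()))
    vocab = set(pos) | set(neg)
    p_tot = sum(pos.values()) + len(vocab)
    n_tot = sum(neg.values()) + len(vocab)
    return {w: math.log((pos[w] + 1) / p_tot) - math.log((neg[w] + 1) / n_tot)
            for w in vocab}

def highlight_sentences(article, weights, k=3):
    """Return sentences containing any of the k highest-weight keywords
    (ties broken alphabetically for determinism)."""
    top = {w for w, _ in sorted(weights.items(), key=lambda x: (-x[1], x[0]))[:k]}
    sentences = re.split(r"(?<=[.!?])\s+", article)
    return [s for s in sentences
            if top & set(re.findall(r"[a-z']+", s.lower()))]

# Toy examples for a hypothetical "Cost" criterion.
pos_articles = ["The treatment costs $500 per dose.", "Insurance covers the price."]
neg_articles = ["The drug was tested in mice.", "Researchers announced the study."]
w = word_weights(pos_articles, neg_articles)
print(highlight_sentences(
    "A new drug was announced. The treatment costs $500 per month.", w))
# → ['The treatment costs $500 per month.']
```

In the real system, the weights would come from the trained per-criterion classifier, and LIME would select which keywords to visualize; the sentence-highlighting step is otherwise the same idea.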
A user study was conducted to understand how much users trust the proposed system's news evaluation results. The performance of the automatic evaluation across the nine criteria ranges from the highest (AUC, Precision) values of (0.89, 0.82) for Cost down to the lowest values of 0.61 for AUC (Novelty) and 0.60 for Precision (Alternative). Both interpretation approaches could visually interpret the given criteria effectively. When the number of sentences used for visualization is not taken into account, the best accuracy achieved for each criterion was 100% (Cost), 66.7% (Benefit), 100% (Harm), 95% (Quality), 95.45% (Mongering), 90% (Conflict), 65% (Alternative), 68.42% (Availability), and 66.67% (Novelty). The user study shows that participants place high trust in the health news quality evaluation results generated by the system; however, no statistically significant difference was observed between the study and control groups. The results suggest that an automatic criterion-based health news quality evaluation can be visually interpreted successfully using either approach. This work addresses the need for interpretation in computerized health information evaluation.
Liu, Xiaoyu, "Fighting Health Misinformation: Building an Interpretable, Criteria-Driven System to Assist the Public in Assessing the Quality of Health News" (2022). Theses and Dissertations. 3035.
Available for download on Monday, December 23, 2024
Disciplines: Artificial Intelligence and Robotics Commons, Databases and Information Systems Commons, Public Health Commons