Date of Award

December 2022

Degree Type

Dissertation

Degree Name

Doctor of Philosophy

Department

Atmospheric Science

First Advisor

Paul J Roebber

Committee Members

Clark Evans, Sergey V Kravtsov, Jon D Kahl

Abstract

Data scientists are using artificial intelligence and machine learning (ML) algorithms more widely today despite a general mistrust of them, stemming from the lack of contextual understanding of the domain within the algorithm. Of the many types of ML algorithms, those that use non-linear activation functions are regarded with particular suspicion because of the lack of transparency and intuitive understanding of what is occurring inside the black box of the algorithm. In this thesis, we set out to create a protocol for delving into the black box of an ML algorithm trained to predict synoptic severe weather patterns and to determine whether we can more closely observe what is occurring inside the algorithm. In doing so, we demonstrate that, despite the lack of domain context considered when creating the algorithm, it can recognize key synoptic features. This protocol is aided by the introduction of a novel visualization tool that peers inside the hidden nodes of an artificial neural network to better diagnose the black box. To show that this protocol and tool have merit, we also consider five generalized questions that should be answered to develop trust in ML algorithms.
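The abstract's idea of peering inside the hidden nodes of a neural network can be illustrated with a minimal sketch. The following is a hypothetical example (not the dissertation's actual tool or model): a toy feedforward network with randomly initialized weights, whose forward pass returns the hidden-node activations so they can be inspected and ranked. All layer sizes, weights, and the input "pattern" are illustrative placeholders.

```python
# Hypothetical sketch of inspecting hidden-node activations; not the thesis's tool.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a synoptic input pattern: a flattened 8x8 grid of one field.
x = rng.standard_normal(64)

# Randomly initialized weights standing in for a trained severe-weather model.
W1, b1 = rng.standard_normal((16, 64)) * 0.1, np.zeros(16)   # input -> hidden
W2, b2 = rng.standard_normal((1, 16)) * 0.1, np.zeros(1)     # hidden -> output

def forward(x):
    """Forward pass that also returns hidden activations for inspection."""
    hidden = np.tanh(W1 @ x + b1)                         # non-linear hidden layer
    output = 1.0 / (1.0 + np.exp(-(W2 @ hidden + b2)))    # probability-like score
    return output, hidden

score, hidden = forward(x)

# Rank hidden nodes by activation magnitude: a crude way to ask which internal
# features respond most strongly to this particular input pattern.
order = np.argsort(-np.abs(hidden))
print(f"predicted score: {score[0]:.3f}")
for i in order[:5]:
    print(f"hidden node {i:2d}: activation = {hidden[i]:+.3f}")
```

In a real diagnosis, the activations would come from a trained network and be mapped back to the input meteorological fields, but the same principle applies: expose the hidden-layer responses rather than only the final prediction.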
