Research

OVERVIEW | My research lies at the intersection of philosophy of science, epistemology, and ethics. In my dissertation, I describe methods of testing, confirming, and evaluating the adequacy of computational models for simulating climate change and informing decision-making. In recent publications, I explore how methodological shifts toward Big Data and artificial intelligence affect the reliability of climate model predictions and clinical diagnostics. In future work, I hope to elaborate the distinct challenges and standards needed to use data produced by artificial intelligence models as evidence to support medical decision-making and public policy.

SIMULATING CLIMATE | Computer models of the climate system are the primary tools by which scientists both understand the causes of climate change and predict how the climate will change, locally and globally, in the future. The reliability of climate models has been the subject of significant controversy: several philosophers and climate scientists have pointed out methodological challenges to inferring a model's adequacy for predicting future climate from its fit to past and present data. The question remains: Why should we trust computer model predictions of climate change?


Most of my research has involved examining the different sources of trust in climate model predictions. My main contribution is a philosophical account of dynamical adequacy, a metric by which scientists test and evaluate process-models. In a recent article (under review), I show that a major reason scientists trust computer model predictions of climate change is that those models accurately and dynamically simulate the climate processes behind the predictions. I argue that testing and evaluating the representative adequacy of dynamic process-models confirms a model's adequacy for making reliable predictions (and decisions) about future climate.

COMPUTING CLIMATE | My research has recently turned to analyzing the epistemic requirements for the ethical application of artificial intelligence (AI) in the Earth sciences. Specifically, I consider the epistemic implications of scientists’ growing adoption of AI and data-driven models, such as neural networks, to understand and predict climate change. Can AI models trained on vast amounts of data (with no process representation) reliably predict climate? How does their performance compare to that of traditional simulation models?


In an article recently published in Philosophy of Science, I highlight the recurring failure of AI models to accurately predict climate out of sample, that is, beyond the training data. Given current methods of training and evaluating machine learning models, such models are especially prone to overfitting. Moreover, the primary ways to guard against overfitting and improve the predictive accuracy of parameterizations (the components of climate models that represent processes too small or complex to simulate directly) have historically been to (1) evaluate the process representation and (2) improve the process representation. AI models rule out both strategies because they represent processes neither directly nor indirectly.
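
To make the out-of-sample worry concrete, the following minimal Python sketch is purely illustrative (it is not drawn from the article, and the exponential "process" and polynomial fit are assumptions made only for this example): a flexible data-driven fit with no process representation matches the training range closely yet fails badly once asked to extrapolate beyond it.

    # Illustrative only: a flexible data-driven fit can match the training range
    # well yet fail badly out of sample (beyond that range).
    import numpy as np

    rng = np.random.default_rng(0)

    def true_process(x):
        # Stand-in for the underlying physical process (assumed for illustration).
        return 1.0 - np.exp(-0.5 * x)

    # "Observations" are available only over a limited historical range.
    x_train = np.linspace(0.0, 5.0, 40)
    y_train = true_process(x_train) + rng.normal(scale=0.02, size=x_train.size)

    # Data-driven model: a high-degree polynomial with no process representation.
    coeffs = np.polyfit(x_train, y_train, deg=9)

    def rmse(x):
        # Root-mean-square error of the fitted model against the true process.
        return np.sqrt(np.mean((np.polyval(coeffs, x) - true_process(x)) ** 2))

    print(f"in-sample RMSE:     {rmse(np.linspace(0.0, 5.0, 200)):.3f}")
    print(f"out-of-sample RMSE: {rmse(np.linspace(5.0, 10.0, 200)):.3f}")

The point is not about polynomials per se: without the constraint supplied by a process representation, a close in-sample fit gives little warrant for trusting a model's behavior under conditions it has never seen.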


So, given the propensity of machine learning models to overfit and their lack of process representations, I argue that machine learning parameterizations undermine the reliability of climate model predictions. I conclude that the representation of climate processes adds significant and irreducible value to the reliability of climate model predictions. In doing so, I problematize the use of Big Data methods for predicting, not just understanding, climate change.

MACHINE MEDICINE | My ongoing and future research concerns similar questions about computational science and artificial intelligence in a different domain: clinical medicine. A central problem with machine learning models is that they are ‘black box’ models and thus epistemically opaque. This has led some to call for a halt to the adoption and use of highly accurate machine learning models, especially in safety-critical domains like healthcare, in favor of simpler, though perhaps less accurate, models. Others have argued that explainability is not ethically required for the deployment of such complex models, even for high-stakes decision-making in settings like healthcare.


Clearly, the ethical adoption of AI technologies in healthcare is hotly contested, and it turns significantly on the epistemological status and implications of AI models being ‘black box,’ or epistemically opaque, models.

I am currently working on two related problems that follow from the black box nature of such models; the first is epistemic and the second ethical. First, how should human experts weigh and combine existing knowledge with the outputs of a black box model when making decisions? Second, how will the black box nature of such algorithms affect, or compromise, a patient’s rights to informed consent, data privacy, and recourse in the case of medical injury and resulting damage?