Research

In recent years, the scientific method has been fundamentally transformed by the adoption of advanced computational methods, including artificial intelligence (AI) systems, across many policy-relevant sectors. AI is now used to detect and diagnose diseases from medical images, to recommend treatments based on individual patient data, and to predict from electronic health records which patients will require intensive care. In the environmental sciences, AI is used to predict extreme weather events, model the potential impacts of climate change on cities, and facilitate the discovery of novel climate patterns. These novel and complex computational methods produce scientific evidence that guides climate mitigation and adaptation policymaking.

However exciting, these innovations pose new social, political, and moral challenges. We often believe that decision makers have a moral duty to collect good evidence, to deliberate carefully about it, and to make decisions that are adequately informed and justified by the available evidence.

But what does it mean to have good scientific evidence when that evidence is produced by an AI system?

And is it ethical to make high-stakes decisions about medical treatment or climate adaptation strategies based on scientific evidence produced by AI systems?

These are some of the questions I address in my research and publications:

  1. "Confirming (climate) change: a dynamical account of model evaluation." Synthese (2022) (Thesis: If scientists can confirm the reliability of a climate model’s predictions over changes in space and time, then a model’s predictions are adequate for the purposes of public policy).

  2. "Can machines learn how clouds work? The epistemic implications of machine learning methods in climate science." Philosophy of Science (2021) (Thesis: Machine learning methods are not reliable for climate prediction and decision-making because they do not represent causal processes that produce climate phenomena.) 

  3. "Against explainability requirements for ethical artificial intelligence in health care." AI & Ethics (2022) (Thesis: Explainability does not confirm the reliability of AI methods for medical decision making and, thus, is not necessary for the ethical use of AI in clinical settings.) 

  4. "Algorithmic evidence in U.S. criminal sentencing." AI & Ethics (2024) (Thesis: The use of automated risk assessment tools to predict a defendant’s risk of recidivism is necessarily unfair. I provide a Bayesian account of algorithmic fairness that centers on equal treatment and requires the use of equally confirmatory algorithmic evidence.)

 

Future Research: Artificial Experts and Machine Evidence

Looking forward, I plan to write a series of articles exploring some of the following questions:

  1. Can we trust AI expert systems, or is trust a concept that applies only to human agents?

  2. Are we, as non-experts, obligated to epistemically defer to AI expert systems?

  3. What should we do when AI and human experts disagree?

  4. How should human experts incorporate the testimony or evidence of AI experts into their own reasoning and decision-making?

  5. How can reliance on AI experts cause injustice in human-machine interactions and human decision-making?
