
Research

My research examines the epistemic, organizational, and developmental conditions under which reliance on AI systems is justified in consequential settings. I work at the intersection of philosophy of artificial intelligence, trustworthy machine intelligence, and enterprise transformation, with a particular focus on agentic AI, multi-agent systems, organizational decision-making, and workforce development in industrial environments.

Current Research

My current research focuses on the design, governance, and practical use of agentic and multi-agent AI in enterprise settings. I study how such systems can be aligned with corporate values, how responsibility and oversight shift when cognition is distributed across human and machine actors, and how AI can transform workflows without weakening accountability, technical rigor, or the cultivation of human expertise. A central concern in this work is not only what AI systems can do, but what kinds of institutions, practices, and capabilities are required for their use to remain reliable, governable, and developmentally constructive.

Industrial Research Agenda

My industrial research is concerned with how enterprise AI reshapes the structure of expertise inside organizations. This includes the alignment of agentic systems to organizational values, the dynamics of coordination and control in multi-agent environments, and the design of workflows that strengthen rather than bypass expert judgment. I am especially interested in workforce development under conditions of increasing cognitive automation: how expertise is cultivated, how metacognitive skill is preserved, and how AI systems can be designed to support learning, calibration, and responsible decision-making rather than mere task compression.

Scholarship

My scholarship examines what justifies reliance on models, algorithms, and intelligent systems in high-stakes settings. Across work in climate science, medicine, criminal justice, and AI, I investigate the structure of evidential support, the conditions under which artificial systems can warrant epistemic deference, the limits of explainability as a proxy for trustworthiness, and the role of metacognition in responsible human reliance on automated systems. This work provides the conceptual foundation for my current research on agentic AI, multi-agent systems, organizational decision-making, and workforce development.


Selected Publications

“Robustness, Variety of Evidence, and Climate Model Agreement.” In Comprehensive Philosophy of Science, Chapter 5.16, Philosophy of Climate Science (2026).


Examines the distinct roles of robustness reasoning and variety-of-evidence reasoning in establishing trustworthy climate-model agreement.

“Algorithmic Evidence in U.S. Criminal Sentencing.” AI & Ethics (2024).


Argues that the use of automated risk assessment tools in sentencing is necessarily unfair and develops a Bayesian account of algorithmic fairness centered on equally confirmatory evidence.

“Confirming (Climate) Change: A Dynamical Account of Model Evaluation.” Synthese (2022).


Argues that climate-model predictions are adequate for public policy when their reliability can be confirmed across changes in space and time.

“Against Explainability Requirements for Ethical Artificial Intelligence in Health Care.” AI & Ethics (2022).


Argues that explainability is not a necessary condition for ethical AI in clinical settings because it does not itself establish reliability.

“Can Machines Learn How Clouds Work? The Epistemic Implications of Machine Learning Methods in Climate Science.” Philosophy of Science (2021).


Argues that machine learning methods are not reliable for climate prediction when they fail to represent the causal processes that produce climate phenomena.

Current Lines of Inquiry

Across this work, I return to a set of recurring questions: When is acting on AI output epistemically responsible, given the stakes, purpose, and decision context? What makes an artificial system count as an expert, and what are the limits of artificial expertise? How should metacognitive skill be preserved and cultivated when AI increasingly mediates reasoning and action? How should agentic AI systems be aligned with organizational values rather than narrow optimization targets? What new governance problems arise when agency is distributed across multiple models, tools, and human actors? And how can workflow transformation enhance capability, judgment, and institutional learning rather than merely accelerate execution?

skawamle[at]iu.edu

1033 E 3rd St, Bloomington, IN 47405

©2020 by Suzanne Kawamleh.
