Research

There is wide agreement that the physical basis of climate models, together with process-based reasoning, plays an important role in why we should trust climate models to make accurate and reliable predictions. But it is far less clear what form that reasoning takes, or why it should underwrite a model's adequacy for making good predictions. A large part of my project is clarifying the sources of model credibility.

My research shows how conceptual categories central to scientific investigation are constructed and transformed through the development of new technologies and the practices that support them. For example, with the advent of computer simulation, scientists can calculate not only, say, temperature at some future point in time, but also how temperature evolves over time. They can therefore evaluate models by comparing the evolution of variables like temperature against the evolution of their physical counterparts in observational data. In an article under review at Philosophy of Science, I argue that the historical advent of computer simulation, together with its associated model evaluation practices, constructed a fundamentally dynamic concept of representation. This dynamic concept of representation can be used to show the inadequacy of our present methods for confirming climate models and to point toward new methods of ensuring model reliability.

My analysis of the technological influence on modes of representation also reveals important lessons regarding the logics of big data as they are employed in the Earth sciences. I examine how the adoption of big data methods in climate science forces a reversion from a dynamic concept of representation back to a static one. In another article, forthcoming in Philosophy of Science, I argue that the development and use of neural networks to process big data reduces representation to the reproduction of simple descriptive statistics (like mean temperature) and reduces model evaluation to an assessment of how closely the neural network's prediction resembles the observational data. Whereas most scholars have criticized the adoption of machine learning methods for relying on “black boxes” and for its negative impact on scientific understanding, I raise the novel objection that neural networks, in virtue of their static notion of representation, rule out important forms of model evaluation. Those forms of evaluation safeguard against errors and secure the reliability of predictions about future climate change. Thus, I problematize the use of big data methods for predicting, not just understanding, climate change.

My future research asks what historicizing the knowledge structures through which we understand and predict climate change implies for addressing scientific skepticism. I consider how existing conceptual categories shape public assessment of scientific technologies and their products. Some skeptics conclude that climate simulations are unreliable: their predictions of past climate variables may resemble observational data merely because of compensating errors. In contrast to other accounts of scientific skepticism, I argue that the problem is not that climate models fail to provide good evidence of climate change, or that people—whether for social or epistemic reasons—fail to make appropriate judgments on the basis of that evidence. Rather, rational disagreement about climate science stems, in part, from applying static concepts and standards of representation to evaluate the reliability of scientific technologies that rely on dynamic concepts of representation. By locating the source of scientific skepticism in the historical transformation of conceptual categories, I explain the sources of disagreement about the central technologies and methods of climate science while maintaining a commitment to considering the social contexts of rational agents on both sides of the debate.



©2020 by Suzanne Kawamleh.