What are the right epistemic norms for science?
Should the conventional threshold for statistical significance in the social and biomedical sciences be redefined? Should pre-registration of studies be promoted or required? Should we move from classical to Bayesian methods in the analysis of studies? And what might a better future science look like? Investigating these questions constitutes a central strand of my research.
To this end, I use the tools of game and decision theory, Bayesian statistics, dynamical systems, and network theory to explore a range of questions about how we come to produce and justify collective knowledge (esp. with respect to the replication crisis in the social and life sciences). This is the formal social epistemology of science.
Lately, I have also become quite interested in and involved with research on AI alignment and global catastrophic risk, and have been using mathematical models to formalize and reason about theses concerning principal-agent problems, empowerment, and instrumental convergence.
Another strand of my research explores scientific models of cultural and biological evolution. I think about how we make inferences about complex, real-world systems from simple, idealized models, and about what such models can teach us regarding the evolution of cooperation, communication, and cognition.
I also have broader (more amateur) interests in Middle Eastern philosophy (esp. Al-Haytham and Khayyam), political philosophy (esp. Harsanyi and Sen), the history of analytic philosophy (esp. Wittgenstein and de Finetti), empiricism (esp. D. Hume and T. Reid), and American pragmatism (esp. C. S. Peirce and W. James).
You can find my peer-reviewed publications and works in progress below. You can also find the computational models that typically accompany this research (the open-source code for which is available at my GitHub page) as well as information graphics I’ve created on related topics.
Peer-Reviewed Contributions
Methods for Modelers of Science (2024) Methods in the Philosophy of Science: A User’s Guide
Abstract. Models can be used to understand and improve science. This chapter is written for philosophers of science and metascientists interested in modeling science: those for whom the targets of analysis are the social and epistemic structures and processes involved in the production and dissemination of scientific findings. For such folk, the chapter can serve as part guide to the hidden curriculum of modeling and part discussion piece for thinking about the challenges and best practices involved in this sort of modeling.
Recommended citation: Mohseni, A. (2024). “Methods for Modelers of Science,” in Veigl & Currie (Eds.), Methods in the Philosophy of Science: A User’s Guide. MIT Press.
Media Biases and the Public Understanding of Science (with Cailin O’Connor and James Weatherall, 2023) Philosophical Topics
Abstract. Scientific curation, where scientific evidence is selected and shared, is essential to public belief formation about science. Yet common curation practices can distort the body of evidence the public sees. Focusing on science journalism, we employ computational models to investigate how such distortions influence public belief. We consider these effects for agents with and without confirmation bias. We find that standard journalistic practices can lead to significant distortions in public belief; that pre-existing errors in public belief can drive further distortions in reporting; that practices that appear relatively unobjectionable can produce serious epistemic harm; and that, in some cases, common curation practices related to fairness and extreme reporting can lead to polarization.
Recommended citation: Mohseni, A., O’Connor, C., and Weatherall, J. (2023). “The Best Paper You’ll Read Today: Media Biases and the Public Understanding of Science.” Philosophical Topics.
[ Computational model ]
[ Open-source code ]
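To make the mechanism vivid, here is a toy version of the phenomenon (a sketch of my own, with assumed parameters, not the paper’s actual model): Bayesian readers update on published trial outcomes, but journalists report only trials whose results are sufficiently extreme.

```python
from math import comb
import random

# Toy curation model: readers are Bayesians, but only "newsworthy"
# (extreme) trial results ever reach them. All parameters are
# assumptions for illustration.

random.seed(1)

TRUE_P = 0.6      # true success rate of some intervention
N = 20            # subjects per trial
EXTREME = 16      # newsworthy iff successes >= 16 or <= 4

grid = [i / 100 for i in range(1, 100)]    # hypotheses about the success rate
posterior = [1.0 / len(grid)] * len(grid)  # uniform prior

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

reported = 0
for _ in range(10_000):
    successes = sum(random.random() < TRUE_P for _ in range(N))
    if not (successes >= EXTREME or successes <= N - EXTREME):
        continue  # unremarkable result: never reaches the public
    reported += 1
    # Readers update as if the trial were a random draw, blind to the filter.
    posterior = [w * binom_pmf(successes, N, p) for w, p in zip(posterior, grid)]
    z = sum(posterior)
    posterior = [w / z for w in posterior]

mean = sum(p * w for p, w in zip(grid, posterior))
print(f"reported {reported} of 10000 trials; true p = {TRUE_P}; "
      f"posterior mean = {mean:.3f}")
```

Every individual report here is perfectly accurate; the reader’s estimate nonetheless overshoots, because the selection filter is invisible to them.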
The Tragedy of the AI Commons (with Travis LaCroix, 2022) Synthese
Abstract. Policy and guideline proposals for ethical artificial-intelligence research have proliferated in recent years. These are supposed to guide the socially responsible development of AI for the common good. However, there typically exist incentives for non-cooperation (i.e., non-adherence to such policies and guidelines), and these proposals often lack effective mechanisms to enforce their own normative claims. The situation just described constitutes a social dilemma—namely, a situation where no one has an individual incentive to cooperate, though mutual cooperation would lead to the best outcome for all involved. In this paper, we use stochastic evolutionary game dynamics to model this social dilemma in the context of the ethical development of artificial intelligence. This formalism allows us to isolate variables that may be intervened upon, thus providing actionable suggestions for increased cooperation amongst numerous stakeholders in AI. Our results show how stochastic effects can help make cooperation viable in such a scenario. They suggest that coordination for a common good should be attempted in smaller groups in which the cost of cooperation is low and the perceived risk of failure is high. This provides insight into the conditions under which we should expect such ethics proposals to be successful with regard to their scope, scale, and content.
Recommended citation: LaCroix, T. and A. Mohseni (2022). “The Tragedy of the AI Commons.” Synthese, 200 (4):1-33.
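A bare-bones version of the kind of dynamics involved (my sketch, with assumed payoffs and parameters, not the paper’s exact specification): a finite population plays a collective-risk dilemma and revises strategies by a pairwise-comparison (Fermi) rule. Raising the perceived risk of collective failure makes cooperation markedly easier to sustain.

```python
import math
import random

# Collective-risk dilemma with Fermi (pairwise-comparison) revision.
# All parameters are assumptions for illustration.

random.seed(0)

Z = 50        # population size
GROUP = 6     # size of randomly assembled groups
THRESH = 3    # cooperators needed to avert collective failure
B, C = 1.0, 0.1   # endowment and cost of cooperating
RISK = 0.9    # chance the endowment is lost if the threshold is missed
BETA = 5.0    # selection intensity in the Fermi rule

def payoff(is_coop, coop_frac):
    # Realized payoff from one randomly assembled group.
    others = sum(random.random() < coop_frac for _ in range(GROUP - 1))
    n_coop = others + (1 if is_coop else 0)
    kept = B if (n_coop >= THRESH or random.random() > RISK) else 0.0
    return kept - (C if is_coop else 0.0)

coop = Z // 2
for _ in range(20_000):
    if coop in (0, Z):
        break  # absorbing states
    frac = coop / Z
    focal_coop = random.random() < frac   # focal individual's strategy
    model_coop = random.random() < frac   # role model's strategy
    if focal_coop == model_coop:
        continue  # same strategy: nothing to imitate
    pi_f, pi_m = payoff(focal_coop, frac), payoff(model_coop, frac)
    # Fermi rule: imitate with probability increasing in payoff difference.
    if random.random() < 1 / (1 + math.exp(-BETA * (pi_m - pi_f))):
        coop += 1 if model_coop else -1

print(f"final cooperators: {coop}/{Z} at risk = {RISK}, cost = {C}")
```

Lowering RISK toward zero in this sketch collapses cooperation, which is one way of seeing why unenforced ethics guidelines with diffuse, low-salience failure risks are fragile.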
On the Emergence of Inequity: Testing the Cultural Red King Hypothesis (with Cailin O’Connor and Hannah Rubin, 2020) Synthese
Abstract. The cultural red king hypothesis predicts that differentials in group size may lead to inequitable outcomes for minority groups even in the absence of explicit or implicit bias. We test this prediction in an experimental context where subjects divided into groups engage in repeated play of a simplified Nash demand game. We run 14 trials involving a total of 112 participants. The results of the experiments are significant and suggestive: individuals in minority groups do indeed end up making low demands more frequently than those in majority groups, and so receive lower payoffs.
Recommended citation: Mohseni, A., O’Connor, C., & H. Rubin. (2020). “On the Emergence of Inequity: Testing the Cultural Red King Hypothesis.” Synthese.
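For readers unfamiliar with the game: in a simplified Nash demand game, each player demands a share of a fixed resource, and both demands are met just in case they are compatible. With a resource of size 10 and demands restricted to three levels (illustrative numbers, not necessarily those used in the experiment):

```latex
u_i(d_i, d_j) =
\begin{cases}
d_i & \text{if } d_i + d_j \le 10,\\[2pt]
0   & \text{otherwise,}
\end{cases}
\qquad d_i, d_j \in \{4, 5, 6\}.
```

The inequitable equilibria pair a High demand on one side with a Low demand on the other; the cultural red king prediction is that minority players disproportionately end up on the Low side.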
Self-Assembling Networks (with Brian Skyrms and Jeffrey Barrett, 2019) British Journal for the Philosophy of Science
Abstract. We consider how an epistemic network might self-assemble from the ritualization of the decisions of individual inquirers with varying abilities. In such evolved social networks, the heterogeneous agents may be significantly more successful than they could be investigating nature on their own. The evolved networks may also dramatically lower the epistemic risk faced by even the most talented inquirers. We consider networks that self-assemble in the context of both perfect and imperfect communication and compare the evolved behavior of inquirers in each.
Recommended citation: Barrett, J.A., Skyrms, B. & A. Mohseni. (2019). “Self-Assembling Networks.” British Journal for the Philosophy of Science. 70:301–325.
[ Computational model ]
[ Open-source code ]
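The flavor of the self-assembly process can be conveyed with simple Roth–Erev reinforcement (a minimal sketch of my own; the paper studies richer variants): inquirers reinforce their propensities to consult those whose counsel has paid off, and connection patterns crystallize out of the history of play.

```python
import random

# Roth-Erev reinforcement on whom to consult. Abilities and parameters
# are assumptions for illustration.

random.seed(2)

N = 10
ability = [random.uniform(0.1, 0.9) for _ in range(N)]  # inquirers' success rates
# weights[i][j]: i's propensity to consult j (no self-consultation)
weights = [[0.0 if i == j else 1.0 for j in range(N)] for i in range(N)]

def choose_partner(i):
    # Sample a partner in proportion to accumulated weights.
    total = sum(weights[i])
    r = random.uniform(0, total)
    acc = 0.0
    for j, w in enumerate(weights[i]):
        acc += w
        if r <= acc:
            return j
    return N - 1

for _ in range(50_000):
    i = random.randrange(N)
    j = choose_partner(i)
    if random.random() < ability[j]:  # consulting j succeeds with j's ability
        weights[i][j] += 1.0          # reinforce the successful visit

best = max(range(N), key=lambda j: ability[j])
share = sum(row[best] for row in weights) / sum(map(sum, weights))
print(f"most able inquirer: {best} (ability {ability[best]:.2f}); "
      f"share of all propensity directed at them: {share:.2f}")
```

Even this crude rule tends to funnel consultation toward the most able inquirers, which is the seed of the epistemic-risk reduction described in the abstract.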
Truth and Conformity on Networks (with Cole Randall Williams, 2019) Erkenntnis
Abstract. Typically, public discussions of questions of social import exhibit two important properties: (1) they are influenced by conformity bias, and (2) the influence of conformity is expressed via social networks. We examine how social learning on networks proceeds under the influence of conformity bias. In our model, heterogeneous agents express public opinions, where those expressions are driven by the competing priorities of accuracy and of conformity to one’s peers. Agents learn, by Bayesian conditionalization, from private evidence from nature and from the public declarations of other agents. Our key findings are that networks sustaining a diversity of opinions empower honest communication and the reliable acquisition of true beliefs, and that the networks that do this best are those that are both less centralized and less connected.
Recommended citation: Mohseni, A., and C.R. Williams. (2019). “Truth and Conformity on Networks.” Erkenntnis.
[ Computational model ]
[ Open-source code ]
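The tension driving the model can be glossed with a simple utility decomposition (a paraphrase for exposition; the paper’s formulation differs in its details). An agent i choosing a public declaration d weighs accuracy against agreement with network neighbors N(i):

```latex
U_i(d) \;=\; \alpha \,\mathbb{P}_i(d \text{ is true})
\;+\; (1 - \alpha)\,\frac{\lvert \{\, j \in N(i) : d_j = d \,\} \rvert}{\lvert N(i) \rvert},
```

where P_i is i’s Bayesian credence and α the agent’s truth-orientation: α = 1 gives purely truth-seeking declaration, α = 0 pure conformism.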
Stochastic Stability and Disagreements Between Dynamics (2019) Philosophy of Science
Abstract. The replicator dynamics and Moran process are the main deterministic and stochastic models of evolutionary game theory. These models are connected by a mean-field relationship—the former describes the expected behavior of the latter. However, there are conditions under which their predictions diverge. I demonstrate that the divergence between their predictions is a function of standard techniques used in their analysis, and of differences in the idealizations involved in each. My analysis reveals problems for stochastic stability analysis in a broad class of games. I also demonstrate a novel domain of agreement between the dynamics, and draw a broader methodological moral for evolutionary modeling.
Recommended citation: Mohseni, A. (2019). “Stochastic Stability and Disagreements Between Dynamics.” Philosophy of Science. 86(3):497–521.
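For concreteness, the mean-field relationship at issue is the familiar one. In a two-strategy game with x the frequency of strategy A, the replicator dynamics

```latex
\dot{x} \;=\; x(1 - x)\,\big(f_A(x) - f_B(x)\big)
```

describes the expected per-step motion of the Moran process, a birth–death chain on i ∈ {0, 1, …, N}. The disagreements arise where finite-population fluctuations, and the small-mutation limits used in stochastic stability analysis, pull the chain’s long-run behavior away from the deterministic flow.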
Works in Progress
AI Alignment as a Principal-Agent Problem
Abstract. This paper explores AI alignment through the game-theoretic lens of a principal-agent problem, in which human operators (principals) aim to ensure that AI systems (agents) act in accordance with their objectives. We demonstrate that sycophantic agents, which optimize for the reward signal rather than the principal’s true intentions, outperform aligned agents, leading to a proliferation of misaligned AI. Our analysis highlights the inherent difficulties in achieving comprehensive and misinterpretation-proof alignment, and emphasizes the need for robust strategies to mitigate these risks as AI capabilities advance.
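The selection story can be compressed into a few lines (a sketch under assumed numbers, not the paper’s model): if retention of AI systems tracks measured reward rather than delivered value, sycophants take over even from a tiny initial share.

```python
# Discrete replicator step on *measured* reward. The payoff numbers are
# assumptions for illustration.

TRUE_VALUE = {"aligned": 1.0, "sycophant": 0.7}   # value actually delivered
MEASURED   = {"aligned": 1.0, "sycophant": 1.3}   # reward signal selected on

x = 0.99  # initial share of aligned agents
for _ in range(50):
    mean = x * MEASURED["aligned"] + (1 - x) * MEASURED["sycophant"]
    x = x * MEASURED["aligned"] / mean  # fitness-proportional update

welfare = x * TRUE_VALUE["aligned"] + (1 - x) * TRUE_VALUE["sycophant"]
print(f"share aligned after selection: {x:.3f}; value to principal: {welfare:.3f}")
```

The principal’s realized value falls even as measured reward climbs, which is the principal-agent wedge in miniature.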
Why are the Human Sciences Hard? Two New Hypotheses (with Daniel Herrmann & Gabe Avakian Aarona)
Abstract. We present two novel hypotheses for why the human sciences are hard: (1) we are pre-committed to a very specific domain of prediction tasks in the human sciences, which forecloses a powerful strategy for making scientific progress—changing the prediction task to something more tractable; and (2) due to evolutionary pressures, human baseline performance is relatively high for many of the ‘low-hanging fruit’ prediction tasks concerning human behavior, making progress beyond this baseline challenging. We provide a formal framework for reasoning about the difficulty of disciplines and the impressiveness of their achievements in terms of their success at particular prediction tasks.
Recommended citation: Herrmann, D., Mohseni, A., Aarona, G. A. (2024). “Why are the Human Sciences Hard? Two New Hypotheses.”
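One illustrative way to set up such a framework (my gloss; the paper develops its own): index a discipline d by the prediction tasks t it is committed to, and measure the impressiveness of an achievement as predictive success above the evolved human baseline,

```latex
I(d, t) \;=\; s_d(t) - b(t),
```

where s_d(t) is the discipline’s performance on task t and b(t) the baseline. Hypothesis (1) then says the human sciences cannot trade t for a more tractable t′; hypothesis (2) says b(t) is already high for the tasks t they are stuck with.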
HARKing: From Misdiagnosis to Misprescription
Abstract. In science, the practice of HARKing (hypothesizing after the results are known) is commonly maligned. In the literature, there are roughly three explanations for why HARKing is bad. I argue that these explanations are not quite right and that the correct analysis is a Bayesian one. HARKing is indeed bad, but the reason lies in the prior odds of hypotheses characteristically selected ex ante of observing the data versus those selected ex post. I also show that misdiagnosis of the problem of HARKing leads to misprescription for its solution, and that this has implications for proposed interventions in the replication crisis.
Recommended citation: Mohseni, A. (2022). “HARKing: From Misdiagnosis to Misprescription.” PhilSci-Archive.
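The Bayesian point can be put compactly. For hypothesis H and evidence E,

```latex
\underbrace{\frac{P(H \mid E)}{P(\lnot H \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(E \mid H)}{P(E \mid \lnot H)}}_{\text{Bayes factor}}
\times
\underbrace{\frac{P(H)}{P(\lnot H)}}_{\text{prior odds}}.
```

Two hypotheses can fit the data equally well (equal Bayes factors) and yet deserve very different posterior odds if one is characteristically drawn, ex post, from a pool with lower prior odds. On this diagnosis, the sin of HARKing lies in provenance, not in fit.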
Intervention and Backfire in the Replication Crisis
Abstract. Scientific studies vary in their methodological soundness. Interventions in evidentiary standards and research practices can differentially affect studies as a function of their soundness. The conjunction of these facts has unrecognized implications for proposed interventions in the replication crisis. I argue that we should expect these facts to obtain, and demonstrate that, when accounting for differential effects of interventions as a function of soundness, several of the proposed interventions—lowering the significance threshold, promoting preregistration, and sample splitting—will produce less improvement than estimates would suggest and, in some cases, actually increase false discovery rates, sign error rates, and magnitude exaggeration ratios.
Recommended citation: Mohseni, A. (2022). “Intervention and Backfire in the Replication Crisis.” PhilSci-Archive.
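To see why backfire is possible, recall the textbook expression for the false discovery rate when a fraction π of tested hypotheses are true, the significance threshold is α, and power is 1 − β:

```latex
\mathrm{FDR} \;=\; \frac{\alpha\,(1 - \pi)}{\alpha\,(1 - \pi) + (1 - \beta)\,\pi}.
```

This is the homogeneous case; the paper’s point turns on heterogeneity. An intervention that tightens α also bears on power, and when its effects differ across studies of varying soundness, the population-level FDR (and sign and magnitude errors) can move the wrong way.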
Reporting, Bias, and Belief Distortion (with Cailin O’Connor and James Weatherall)
Abstract. News reporting provides us with information essential to forming beliefs regarding matters of public import. These beliefs drive the ways we vote and act. Yet reporting is subject to characteristic distortions. Only sufficiently unusual or extreme events tend to be reported. Events that are reported tend to be exaggerated in intensity. Norms of ‘fair-and-balanced’ reporting may give equal weight to positions with asymmetric evidential support. These distortions interact with the individual tendency to accept news congenial to one’s prior beliefs and to reject news uncongenial to them. We examine how characteristic distortions in reporting, in tandem with confirmation bias, can lead to distortions in public belief.
[ Computational model ]
[ Open-source code ]
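One simple way to model the differential-acceptance side (an illustrative rule of my own; the paper’s model may differ): an agent mixes full conditionalization on a report E with no update at all, where the weight given the report declines the more strongly it cuts against current belief,

```latex
P_i^{\text{new}} \;=\; w\,P_i(\cdot \mid E) \;+\; (1 - w)\,P_i,
\qquad
w \;=\; \max\{0,\; 1 - \delta\,\Delta(E, P_i)\},
```

with Δ(E, P_i) a measure of how uncongenial E is to i’s current credences and δ the strength of the bias. At δ = 0 this is ordinary Bayesian updating; as δ grows, uncongenial reports are increasingly discounted.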
Cooperation and Folk Moral Relativism (with Kyle Stanford)
Abstract. Sarkissian et al. (2011) observe that individuals may take on relativistic attitudes toward moral questions as they consider perspectives increasingly distant from their own. We provide an explanation for this observation that draws on evolutionary theses as to the role of cooperation in moral cognition. To test this explanation, we replicate existing results and introduce a novel intervention wherein participants are impressed with the prospect of future cooperation. Our results are significant and suggestive: participants take on more objective attitudes in the face of future cooperative demands, yet this fails to obviate the observed effect of considering morally distant perspectives.
Are Nash Equilibria Rational? (with Simon Huttegger)
Abstract. A perennial open question in the theory of games concerns the extent to which, and the conditions under which, we can expect rational agents to play the Nash equilibria of games. In “Rational Learning Leads to Nash Equilibrium” (1993), Kalai and Lehrer show that Bayesian rational agents whose priors satisfy an ‘absolute continuity’ condition learn to play the Nash equilibria of games. In response to this result, Foster and Young, in “On the Impossibility of Predicting the Behavior of Rational Agents” (2001), provide an impossibility theorem demonstrating the ‘non-robustness’ of Bayesian learning: they show that, in games containing near-zero-sum subgames, agents cannot be simultaneously rational and learn to accurately predict the strategies of other players, and so cannot learn to play the Nash equilibria of these games. We explain the relation between the two results and demonstrate that, with natural extensions of Foster and Young’s assumptions, rational agents may once again learn to play Nash. We conclude that Bayesian learning provides a more robust justification of the rationality of Nash play than Foster and Young suggest.
[ Computational model ]
[ Open-source code ]
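The hinge between the two results is Kalai and Lehrer’s absolute-continuity condition. Roughly: if the true distribution μ over play paths is absolutely continuous with respect to each player i’s prior ρ_i,

```latex
\mu(A) > 0 \;\Longrightarrow\; \rho_i(A) > 0
\qquad \text{for every measurable set of play paths } A,
```

then Bayesian updating makes each player’s forecasts merge with the truth, and play converges to approximate Nash equilibrium. Foster and Young show the condition cannot be robustly satisfied in near-zero-sum games; our extensions restore a version of it.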
The Limitations of Equilibrium Concepts in Evolutionary Games
Abstract. In evolutionary games, equilibrium concepts adapted from classical game theory—typically, refinements of the Nash equilibrium—are employed to identify the probable outcomes of evolutionary processes. Over the years, various negative results have been produced demonstrating limitations of each proposed refinement. These negative results rely on an undefined notion of evolutionary significance. We propose an explicit and novel definition of evolutionary significance in line with what is assumed in these results. This definition enables a comprehensive analysis of the limitations of the proposed equilibrium concepts. Taken together, the results show that, even under favorable assumptions as to the underlying dynamics and stability concept—the replicator dynamics and asymptotic stability—every equilibrium concept makes errors of either omission or commission; typically both.
Recommended citation: Mohseni, A. (2019). “Limitations of Equilibrium Concepts in Evolutionary Games.” Unpublished manuscript.
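To fix terms (a compressed gloss of the definitions at work in the paper, where evolutionary significance is cashed out via the replicator dynamics and asymptotic stability): an equilibrium concept C errs by

```latex
\text{omission if } \exists\, x^{*} \text{ evolutionarily significant with } x^{*} \notin C;
\qquad
\text{commission if } \exists\, x^{*} \in C \text{ that is not evolutionarily significant.}
```

An omission means the concept can miss where evolution in fact leads; a commission means it can predict outcomes evolution does not sustain.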