Research and Publications

What are the right epistemic norms for science?

Should the conventional threshold for statistical significance in the social and biomedical sciences be redefined? Should pre-registration of studies be promoted or required? Should we move from classical to Bayesian methods in the analysis of studies? And what might a better future science look like? Investigating these questions constitutes a central strand of my research. 

To this end, I use the tools of game & decision theory, Bayesian statistics, dynamical systems, and network theory to explore a range of questions on how we come to produce and justify collective knowledge (esp. with respect to the replication crisis in social and life sciences).

You can find my peer-reviewed publications and works in progress below. You can also find the computational models that typically accompany this research (the open-source code for which is available at my GitHub page) as well as information graphics I’ve created on related topics.

My dissertation investigates how best to change the norms of science—against the backdrop of the replication crisis in the social and biomedical sciences—in order to make progress toward a better future science.

In Misunderstanding HARKing: From Misdiagnosis to Misprescription, I argue that standard accounts of one of the ‘horsemen of the replication apocalypse’, HARKing (hypothesizing after results are known), are incorrect. I show how a Bayesian analysis clarifies the issue and reveals how HARKing can actually be good. Further, I show how misdiagnosing HARKing leads to misprescribing its solution in the replication crisis. The moral is that addressing the replication crisis requires the right statistical theory: that of Bayesian inference.
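
The flavor of the Bayesian point can be conveyed with a minimal sketch (not the paper's model): for a fixed significance level and power, the probability that a ‘significant’ finding is true depends on the prior odds of the hypothesis under test, and those odds plausibly differ for hypotheses framed before versus after inspecting the data. All numbers below are illustrative assumptions.

```python
# Illustrative sketch: posterior probability that a hypothesis is true,
# given a statistically significant result, as a function of prior odds.
# All numbers are hypothetical; they are not estimates from the paper.

def posterior_prob_true(prior_odds, alpha=0.05, power=0.8):
    """P(H true | significant) via Bayes' theorem in odds form."""
    likelihood_ratio = power / alpha          # P(sig | H) / P(sig | not-H)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical prior odds for hypotheses specified before seeing the data
# (ex ante) versus hypotheses selected after inspecting it (ex post).
scenarios = {"ex ante (1:4)": 1 / 4, "ex post (1:50)": 1 / 50}

for label, odds in scenarios.items():
    print(f"{label:16s} -> P(true | significant) = {posterior_prob_true(odds):.2f}")
```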

In Intervention and Backfire in the Replication Crisis, I argue that proposals for norm change—e.g., lowering the conventional threshold for statistical significance or requiring the pre-registration of studies—must account for how these changes affect not merely individual studies but populations of studies that exhibit important forms of heterogeneity. I provide computational models demonstrating how failing to account for population heterogeneity can increase the very forms of unreliability (e.g., false discovery rates) that the changes are meant to decrease. The moral is that successful proposals for interventions in the replication crisis need to take a population-level perspective.
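
The population-level bookkeeping behind such models can be conveyed with a small, self-contained calculation. The sketch below computes the false discovery rate for a hypothetical mixture of well-powered and underpowered lines of research at two significance thresholds; the parameter values are invented for illustration, and the sketch omits the differential, practice-level effects that drive the backfire results in the paper.

```python
# Toy calculation: false discovery rate (FDR) for a heterogeneous population
# of studies under two significance thresholds. All parameters are invented
# for illustration; they are not the models or estimates from the paper.
from math import sqrt
from statistics import NormalDist

norm = NormalDist()

def power(effect, n, alpha):
    """Power of a one-sided z-test at significance level alpha."""
    z_crit = norm.inv_cdf(1 - alpha)
    return 1 - norm.cdf(z_crit - effect * sqrt(n))

def population_fdr(study_types, alpha):
    """FDR across a mixture of study types.

    Each type: (share of studies, prob. its hypothesis is true, effect size, n).
    """
    false_pos = true_pos = 0.0
    for share, p_true, effect, n in study_types:
        false_pos += share * (1 - p_true) * alpha
        true_pos += share * p_true * power(effect, n, alpha)
    return false_pos / (false_pos + true_pos)

# Hypothetical mixture: well-powered vs. speculative, underpowered research.
studies = [
    (0.5, 0.5, 0.5, 64),   # sound: 50% true hypotheses, d = 0.5, n = 64
    (0.5, 0.1, 0.2, 25),   # speculative: 10% true hypotheses, d = 0.2, n = 25
]

for alpha in (0.05, 0.005):
    print(f"alpha = {alpha:.3f}: FDR = {population_fdr(studies, alpha):.3f}")
```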

In Changing Scientific Norms: Epistemic Trade-Offs from Classical and Bayesian Perspectives, I take a step back to consider the challenge of epistemic trade-offs produced by candidate interventions. These trade-offs involve relationships among types of errors (false positives, false negatives, sign errors, and magnitude exaggeration ratios) as well as measures of fecundity such as the rate of discovery. I take the perspectives of both the frequentist and the Bayesian meta-analyst and demonstrate that each faces heretofore unrecognized trade-offs. The moral is that the norms for an ideal science must be tailored to domain-specific preferences and framed with a recognition of epistemic trade-offs.
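
One common way to make such trade-offs concrete is to estimate them by simulation for a simple model of a noisily measured effect. The sketch below, with invented parameters, reports power, the false negative rate, the sign-error rate, and the magnitude exaggeration ratio among ‘significant’ estimates at two candidate thresholds; it is an illustration in this spirit, not the paper's analysis.

```python
# Illustrative simulation of error trade-offs at a given significance
# threshold: power, sign-error rate, and magnitude exaggeration among
# "significant" estimates. Parameters are invented for illustration.
import random
from statistics import NormalDist

norm = NormalDist()

def error_profile(true_effect, se, alpha, n_sims=200_000, seed=0):
    rng = random.Random(seed)
    z_crit = norm.inv_cdf(1 - alpha / 2)          # two-sided test
    sig, sign_errors, exaggerations = 0, 0, []
    for _ in range(n_sims):
        est = rng.gauss(true_effect, se)
        if abs(est) / se > z_crit:                # statistically significant
            sig += 1
            sign_errors += (est * true_effect < 0)
            exaggerations.append(abs(est) / abs(true_effect))
    power = sig / n_sims
    return {
        "power": power,
        "false negative rate": 1 - power,
        "sign error rate": sign_errors / sig if sig else float("nan"),
        "exaggeration ratio": sum(exaggerations) / sig if sig else float("nan"),
    }

# A small true effect measured noisily, at two candidate thresholds.
for alpha in (0.05, 0.005):
    print(alpha, error_profile(true_effect=0.1, se=0.15, alpha=alpha))
```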

As a product of this research, I have developed computational tools that can help scientists and policymakers reason about the consequences of changing scientific norms and identify optimal policies that are sensitive to epistemic trade-offs and domain-specific preferences.

Another strand of my research explores scientific models of cultural and biological evolution. I think about how we make inferences from simple, idealized models to complex, real-world systems, and about what such models can teach us regarding the evolution of cooperation, communication, and cognition.

I also have broader interests in Middle Eastern philosophy (esp. Al-Haytham and Khayyam), political philosophy (esp. Harsanyi and Sen), history of analytic philosophy (esp. Wittgenstein and de Finetti), empiricism (esp. D. Hume and T. Reid), and American pragmatism (esp. C. S. Peirce and W. James).

Publications

Abstract. Policy and guideline proposals for ethical artificial-intelligence research have proliferated in recent years. These are supposed to guide the socially-responsible development of AI for the common good. However, there typically exist incentives for non-cooperation (i.e., non-adherence to such policies and guidelines); and, these proposals often lack effective mechanisms to enforce their own normative claims. The situation just described constitutes a social dilemma—namely, a situation where no one has an individual incentive to cooperate, though mutual cooperation would lead to the best outcome for all involved. In this paper, we use stochastic evolutionary game dynamics to model this social dilemma in the context of the ethical development of artificial intelligence. This formalism allows us to isolate variables that may be intervened upon, thus providing actionable suggestions for increased cooperation amongst numerous stakeholders in AI. Our results show how stochastic effects can help make cooperation viable in such a scenario. They suggest that coordination for a common good should be attempted in smaller groups in which the cost for cooperation is low, and the perceived risk of failure is high. This provides insight into the conditions under which we should expect such ethics proposals to be successful with regard to their scope, scale, and content.

[ Pre-print version ]

Recommended citation:  LaCroix, T. and A. Mohseni (2020). “The Tragedy of the AI Commons.” arXiv.

[ Computational model ]
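
For a rough sense of the kind of model involved, here is a toy agent-based sketch of a collective-risk dilemma with pairwise imitation: groups either reach a cooperation threshold or face a probabilistic collective loss, and agents copy better-performing peers. The group size, cost, risk, and update rule are illustrative stand-ins, not the stochastic dynamics analyzed in the paper.

```python
# Toy agent-based sketch of a collective-risk dilemma with imitation
# dynamics. Group size, cost, and risk are free parameters one can vary;
# all values here are illustrative and this is not the paper's model.
import math
import random

rng = random.Random(1)

Z = 100          # population size
N = 6            # group size
M = 3            # cooperators needed to avert a collective loss
b = 1.0          # endowment
c = 0.1          # cost of cooperating (fraction of the endowment)
risk = 0.9       # probability of losing the endowment if the group fails
beta = 5.0       # imitation strength (Fermi rule)

def play_group(strategies):
    """Payoffs for one group interaction (True = cooperate)."""
    n_coop = sum(strategies)
    success = n_coop >= M or rng.random() > risk
    return [(b if success else 0.0) - (c * b if s else 0.0) for s in strategies]

def avg_payoff(pop, i, samples=30):
    """Average payoff of agent i over randomly sampled groups containing i."""
    total = 0.0
    for _ in range(samples):
        others = rng.sample([k for k in range(Z) if k != i], N - 1)
        total += play_group([pop[i]] + [pop[k] for k in others])[0]
    return total / samples

def step(pop):
    """A random agent imitates a random peer with Fermi probability."""
    i, j = rng.sample(range(Z), 2)
    gain = avg_payoff(pop, j) - avg_payoff(pop, i)
    if rng.random() < 1 / (1 + math.exp(-beta * gain)):
        pop[i] = pop[j]

pop = [rng.random() < 0.5 for _ in range(Z)]   # True = cooperate
for _ in range(10_000):
    step(pop)
print("final cooperation frequency:", sum(pop) / Z)
```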

Abstract. The cultural red king hypothesis predicts that differentials in group size may lead to inequitable outcomes for minority groups even in the absence of explicit or implicit bias. We test this prediction in an experimental context where subjects divided into groups engage in repeated play of a simplified Nash demand game. We run 14 trials involving a total of 112 participants. The results of the experiments are significant and suggestive: individuals in minority groups do indeed end up making low demands more frequently than those in majority groups, and so receive lower payoffs.

[ Pre-print version ]

Recommended citation:  Mohseni, A., O’Connor, C., & H. Rubin. (forthcoming). “On the Emergence of Inequity: Testing the Red King Hypothesis.” Synthese.

[ Open-data repository ]

Abstract. We consider how an epistemic network might self-assemble from the ritualization of the decisions of individual inquirers with varying abilities. In such evolved social networks, the heterogeneous agents may be significantly more successful than they could be investigating nature on their own. The evolved networks may also dramatically lower the epistemic risk faced by even the most talented inquirers. We consider networks that self-assemble in the context of both perfect and imperfect communication and compare the evolved behavior of inquirers in each.

[ Pre-print version ]

Recommended citation:  Barrett, J.A., Skyrms, B. & A. Mohseni. (2019). “Self-Assembling Networks.” British Journal for the Philosophy of Science. 70:301–325.

[ Computational model ]
[ Open-source code ]
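
A minimal reinforcement-learning sketch conveys the self-assembly idea: inquirers of differing abilities keep weights over whom to consult and reinforce consultations that return correct answers, so consultation links concentrate on reliable sources. The setup and parameters below are illustrative, not the models from the paper.

```python
# Minimal reinforcement sketch of network self-assembly: inquirers with
# different abilities choose whom to consult, and successful consultations
# are reinforced. Parameters and the success criterion are illustrative.
import random

rng = random.Random(0)

n_agents = 8
abilities = [rng.uniform(0.55, 0.95) for _ in range(n_agents)]  # P(correct answer)
weights = [[1.0] * n_agents for _ in range(n_agents)]           # consultation urns
for i in range(n_agents):
    weights[i][i] = 0.0     # no self-consultation

def choose_partner(i):
    return rng.choices(range(n_agents), weights=weights[i])[0]

for _ in range(5_000):
    for i in range(n_agents):
        j = choose_partner(i)
        correct = rng.random() < abilities[j]   # j investigates nature
        if correct:
            weights[i][j] += 1.0                # reinforce a successful link

# Who does each inquirer mostly consult after learning?
for i in range(n_agents):
    favorite = max(range(n_agents), key=lambda j: weights[i][j])
    print(f"agent {i} (ability {abilities[i]:.2f}) -> consults {favorite} "
          f"(ability {abilities[favorite]:.2f})")
```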

Abstract. Typically, public discussions of questions of social import exhibit two important properties: (1) they are influenced by conformity bias, and (2) the influence of conformity is expressed via social networks. We examine how social learning on networks proceeds under the influence of conformity bias. In our model, heterogeneous agents express public opinions where those expressions are driven by the competing priorities of accuracy and of conformity to one’s peers. Agents learn, by Bayesian conditionalization, from private evidence from nature, and from the public declarations of other agents. Our key findings are that networks that produce configurations of social relationships that sustain a diversity of opinions empower honest communication and reliable acquisition of true beliefs, and that the networks that do this best turn out to be those which are both less centralized and less connected.

[ Pre-print version ]

Recommended citation:  Mohseni, A., and C.R. Williams. (2019). “Truth and Conformity on Networks.” Unpublished manuscript.

[ Computational model ]
[ Open-source code ]
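
The heart of the declaration rule can be sketched in a few lines: an agent's public declaration maximizes a weighted combination of expected accuracy (given the agent's credence) and conformity with neighbors' most recent declarations. The weights and the tiny example below are illustrative, not the paper's parameterization.

```python
# Sketch of a single agent's declaration rule: choose the public opinion
# that best trades off accuracy (per one's credence) against conformity
# with one's neighbors. The utility weights and example are illustrative.

def declare(credence_true, neighbor_declarations, accuracy_weight=0.7):
    """Return the declaration ('T' or 'F') maximizing a weighted utility.

    credence_true: the agent's probability that the proposition is true.
    neighbor_declarations: the neighbors' most recent public declarations.
    """
    conformity_weight = 1 - accuracy_weight
    n = len(neighbor_declarations)
    share_T = neighbor_declarations.count("T") / n if n else 0.5

    def utility(declaration):
        accuracy = credence_true if declaration == "T" else 1 - credence_true
        conformity = share_T if declaration == "T" else 1 - share_T
        return accuracy_weight * accuracy + conformity_weight * conformity

    return max(["T", "F"], key=utility)

# An agent who privately leans true but whose neighbors all declare false:
print(declare(0.6, ["F", "F", "F", "F"], accuracy_weight=0.9))  # -> 'T'
print(declare(0.6, ["F", "F", "F", "F"], accuracy_weight=0.7))  # -> 'F'
```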

Abstract. The replicator dynamics and Moran process are the main deterministic and stochastic models of evolutionary game theory. These models are connected by a mean-field relationship—the former describes the expected behavior of the latter. However, there are conditions under which their predictions diverge. I demonstrate that the divergence between their predictions is a function of standard techniques used in their analysis, and of differences in the idealizations involved in each. My analysis reveals problems for stochastic stability analysis in a broad class of games. I also demonstrate a novel domain of agreement between the dynamics, and draw a broader methodological moral for evolutionary modeling.

[ Pre-print version ]

Recommended citation:  Mohseni, A. (2019). “Stochastic Stability and Disagreements Between Dynamics.” Philosophy of Science. 3:497-521.

[ Computational model ]
[ Open-source code ]
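
For readers who want the two dynamics side by side, here is a bare-bones comparison for an illustrative 2x2 coordination game: an Euler-integrated replicator trajectory next to the average of simulated runs of a simplified frequency-dependent Moran process. The game, population size, and horizon are arbitrary choices, and the sketch shows only the basic setup, not the paper's analysis of where the predictions diverge.

```python
# Illustrative comparison of the replicator dynamics (deterministic ODE)
# with a frequency-dependent Moran process (stochastic) for a 2x2 game.
# The payoff matrix, population size, and horizon are illustrative.
import random

# payoff[i][j] = payoff to strategy i against strategy j (strategies A, B).
payoff = [[3.0, 1.0],
          [2.0, 2.0]]

def fitnesses(x):
    """Mean payoffs of A and B when A has frequency x."""
    fA = x * payoff[0][0] + (1 - x) * payoff[0][1]
    fB = x * payoff[1][0] + (1 - x) * payoff[1][1]
    return fA, fB

def replicator(x0, steps, dt=0.01):
    """Euler integration of dx/dt = x (1 - x) (fA - fB)."""
    x = x0
    for _ in range(steps):
        fA, fB = fitnesses(x)
        x += dt * x * (1 - x) * (fA - fB)
    return x

def moran(x0, N, steps, rng):
    """Moran process: fitness-proportional birth, uniformly random death."""
    i = round(x0 * N)                       # number of A-players
    for _ in range(steps):
        if i in (0, N):
            break                           # a strategy has fixed
        x = i / N
        fA, fB = fitnesses(x)
        birth_A = rng.random() < (i * fA) / (i * fA + (N - i) * fB)
        death_A = rng.random() < x
        i += (birth_A and not death_A) - (death_A and not birth_A)
    return i / N

rng = random.Random(0)
N, runs = 100, 200
avg_moran = sum(moran(0.3, N, steps=5_000, rng=rng) for _ in range(runs)) / runs
print("replicator prediction:", round(replicator(0.3, steps=5_000), 3))
print("mean of Moran runs:   ", round(avg_moran, 3))
```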

Works in Progress

(Drafts available upon request.)

Abstract. In science, the practice of HARKing is commonly maligned. In the literature, there are roughly three explanations for why HARKing is bad. I argue that these explanations are not quite right and that the correct analysis is a Bayesian one. HARKing is indeed bad, but the reason lies in the prior odds of hypotheses characteristically selected before observing the data (ex ante) versus those selected after (ex post). I also show that misdiagnosis of the problem of HARKing leads to misprescription of its solution, and that this has implications for proposed interventions in the replication crisis.

[ Pre-print version ]

Recommended citation:  Mohseni, A. (2022). “HARKing: From Misdiagnosis to Misprescription.” PhilSci-Archive.

[ Computational model ]

Abstract. Scientific studies vary in their methodological soundness. Interventions in evidentiary standards and research practices can differentially affect studies as a function of their soundness. The conjunction of these facts has unrecognized implications for proposed interventions in the replication crisis. I argue that we should expect these facts to obtain, and demonstrate that, when accounting for differential effects of interventions as a function of soundness, several of the proposed interventions—lowering the significance threshold, promoting preregistration, and sample splitting—will produce less improvement than estimates would suggest and, in some cases, actually increase false discovery rates, sign error rates, and magnitude exaggeration ratios.

Recommended citation:  Mohseni, A. (2022). “Intervention and Backfire in the Replication Crisis.” PhilSci-Archive.

Abstract. News reporting provides us with information essential in forming beliefs regarding matters of public import. These beliefs drive the ways we vote and act. Yet reporting is subject to characteristic distortions. Only sufficiently unusual or extreme events tend to be reported. Events that are reported tend to be exaggerated in intensity. Norms of ‘fair-and-balanced’ reporting may give equal weight to positions with asymmetric evidential support. These distortions interact with the individual tendency to differentially accept (reject) news congenial (uncongenial) to prior beliefs. We examine how characteristic distortions in reporting in tandem with confirmation bias can lead to distortions in public belief.

[ Computational model ]
[ Open-source code ]
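
A toy version of the mechanism can be put in a few lines: events are reported only when sufficiently extreme, reported intensities are inflated, and an agent accepts congenial reports more readily than uncongenial ones, so the agent's running estimate drifts away from the true average. All parameters are invented for illustration; this is not the paper's model.

```python
# Toy sketch combining three reporting distortions (extremity filtering,
# exaggeration, and sign-based acceptance in the spirit of confirmation
# bias) to see how an agent's running estimate can drift from the truth.
import random

rng = random.Random(3)

TRUE_MEAN = 0.0        # the actual average value of events in the world
REPORT_CUTOFF = 1.5    # only sufficiently extreme events get reported
EXAGGERATION = 1.3     # reported intensities are inflated by this factor
P_ACCEPT_CONGENIAL = 0.9
P_ACCEPT_UNCONGENIAL = 0.3

belief, n_accepted = 0.0, 0
for _ in range(100_000):
    event = rng.gauss(TRUE_MEAN, 1.0)
    if abs(event) < REPORT_CUTOFF:
        continue                           # never reported
    report = EXAGGERATION * event          # reported with exaggerated intensity
    congenial = (report >= 0) == (belief >= 0)
    p_accept = P_ACCEPT_CONGENIAL if congenial else P_ACCEPT_UNCONGENIAL
    if rng.random() < p_accept:
        n_accepted += 1
        belief += (report - belief) / n_accepted   # running mean of accepted reports

print(f"true mean of events: {TRUE_MEAN}, agent's estimate: {belief:.2f}")
```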

Abstract. Sarkissian et al. (2011) observe that individuals may take on relativistic attitudes toward moral questions as they consider perspectives increasingly distant from their own. We provide an explanation for this observation that draws on evolutionary theses as to the role of cooperation in moral cognition. To test this explanation, we replicate existing results and introduce a novel intervention wherein participants are impressed with the prospect of future cooperation. Our results are significant and suggestive: participants take on more objective attitudes in the face of future cooperative demands, yet this fails to obviate the observed effect of considering morally distant perspectives.

[ Open-data repository ]

Abstract. A perennial open question in the theory of games is the extent and conditions under which we can expect rational agents to play the Nash equilibria of games. In “Rational Agents Learn to Play Nash Equilibrium” [2001] Kalai and Lehrer show that Bayesian rational agents whose priors satisfy the ‘absolute continuity’ condition learn to play the Nash equilibria of games. In response to this result, Foster and Young, in “The Impossibility of Predicting the Behavior of Rational Agents,” [2001] provide an impossibility theorem demonstrating the ‘non-robustness’ of Bayesian learning—they show that, in games containing near-zero-sum subgames, agents cannot be simultaneously rational and also learn to accurately predict the strategies of other players, and so cannot play the Nash equilibria of these games. We explain the relation between the two results, and show that, with natural extensions of Foster & Young’s assumptions, rational agents may once again learn to play Nash. We demonstrate that Bayesian learning provides a more robust justification of the rationality of Nash than Foster & Young suggest.

[ Computational model ]
[ Open-source code ]
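
As a simple illustration of the learning-to-play-Nash theme (a stand-in, not the Bayesian learning framework discussed in the paper), classical fictitious play in matching pennies drives the players' empirical frequencies toward the mixed Nash equilibrium:

```python
# A simple illustration of "learning to play Nash": classical fictitious play
# in matching pennies. Each player best-responds to the opponent's empirical
# mixed strategy. This is a stand-in for the theme, not the Bayesian learning
# framework of Kalai and Lehrer or Foster and Young.

ROW_PAYOFF = [[1, -1],    # row player (wants to match); actions: Heads, Tails
              [-1, 1]]
COL_PAYOFF = [[-1, 1],    # column player (wants to mismatch)
              [1, -1]]

def best_response(opponent_counts, payoff):
    """Best reply to the opponent's empirical mixed strategy."""
    total = sum(opponent_counts)
    probs = [count / total for count in opponent_counts]
    expected = [sum(p * payoff[a][b] for b, p in enumerate(probs)) for a in range(2)]
    return max(range(2), key=lambda a: expected[a])

row_history = [1, 1]   # counts of the row player's past moves (Heads, Tails)
col_history = [1, 1]   # counts of the column player's past moves

for _ in range(100_000):
    row_move = best_response(col_history, ROW_PAYOFF)
    col_move = best_response(row_history, COL_PAYOFF)
    row_history[row_move] += 1
    col_history[col_move] += 1

print("row player's empirical frequency of Heads:", row_history[0] / sum(row_history))
print("column player's empirical frequency of Heads:", col_history[0] / sum(col_history))
```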

Abstract. In evolutionary games, equilibrium concepts adapted from classical game theory—typically, refinements of the Nash equilibrium—are employed to identify the probable outcomes of evolutionary processes. Over the years, various negative results have been produced demonstrating limitations to each proposed refinement. These negative results rely on an undefined notion of evolutionary significance. We propose an explicit and novel definition of the notion of evolutionary significance in line with what is assumed in these results. This definition enables a comprehensive analysis of the limitations of the proposed equilibrium concepts. Taken together, the results show that even under favorable assumptions as to the underlying dynamics and stability concept—the replicator dynamics and asymptotic stability—every equilibrium concept makes errors of either omission or commission, and typically both.

[ Pre-print version ]

Recommended citation:  Mohseni, A. (2019). “Limitations of Equilibrium Concepts in Evolutionary Games.” Unpublished manuscript.
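
A textbook example conveys the gap such results trade on: in standard rock-paper-scissors, the unique interior Nash equilibrium is not asymptotically stable under the replicator dynamics, since trajectories cycle around it rather than converge to it. The sketch below simply integrates the dynamics numerically; it is a standard illustration, not the analysis in the paper.

```python
# Textbook illustration of the gap between static equilibrium concepts and
# dynamic stability: in rock-paper-scissors, the interior Nash equilibrium
# (1/3, 1/3, 1/3) is not asymptotically stable under the replicator
# dynamics; trajectories circle it rather than converge to it.

# Payoff matrix: win = 1, lose = -1, tie = 0 (rock, paper, scissors).
A = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]

def replicator_step(x, dt=0.001):
    """One Euler step of the replicator dynamics dx_i/dt = x_i (f_i - avg f)."""
    f = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    avg = sum(x[i] * f[i] for i in range(3))
    return [x[i] + dt * x[i] * (f[i] - avg) for i in range(3)]

def distance_from_center(x):
    return sum((xi - 1 / 3) ** 2 for xi in x) ** 0.5

x = [0.5, 0.3, 0.2]                   # start away from the equilibrium
print("initial distance from (1/3, 1/3, 1/3):", round(distance_from_center(x), 4))
for _ in range(200_000):
    x = replicator_step(x)
print("distance after integration:           ", round(distance_from_center(x), 4))
# The distance does not decay toward zero: the trajectory keeps circling the
# equilibrium (small numerical drift aside), so the Nash point attracts nothing.
```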