Introduction
The ideal peer reviewer is an expert with no conflicts of interest who evaluates a submission purely on its scientific merits. The actual peer reviewer is a postdoc or junior faculty member who receives review requests from area chairs they know professionally, for papers in a subfield where everyone knows everyone, at a pace that makes careful reading a luxury. We study whether these conditions produce unbiased outcomes.
We obtained reviewer assignment data through a combination of public sources, a Freedom of Information request that produced 400 pages of redacted documents, and one program chair who gave us their area chair reports under conditions of anonymity following a conference dinner. The resulting dataset covers 12 venues, 6 years, 48,000 submissions, and 143,000 reviewer assignments.
We cross-referenced this dataset against the Academic Genealogy Network (AcGen), a graph of advisor-advisee relationships; the Semantic Scholar co-authorship graph; and a dataset of conference social events assembled by one author who attended 23 venues in 4 years and was, by the end of the study, exhausted.
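Social distance in each of these graphs is shortest-path length between a reviewer and the nearest submitting author. As a purely illustrative sketch (the graph, names, and `max_hops` cutoff here are ours, not the dataset's), a breadth-first search suffices:

```python
from collections import deque

def social_distance(graph, reviewer, authors, max_hops=6):
    """Hop count from a reviewer to the nearest submitting author in an
    undirected academic graph (adjacency-list dict). Returns max_hops + 1
    if no author is reachable within max_hops."""
    if reviewer in authors:
        return 0
    seen = {reviewer}
    frontier = deque([(reviewer, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist >= max_hops:
            continue
        for neighbor in graph.get(node, ()):
            if neighbor in seen:
                continue
            if neighbor in authors:
                return dist + 1
            seen.add(neighbor)
            frontier.append((neighbor, dist + 1))
    return max_hops + 1

# Toy graph: reviewer "r" co-authored with "a", who co-authored with "b".
coauthors = {"r": ["a"], "a": ["r", "b"], "b": ["a"]}
print(social_distance(coauthors, "r", {"b"}))  # 2
```

In practice one would take the minimum distance over all submitting authors per network, which is what the early-exit on `neighbor in authors` computes.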
Results
Reviewer assignment probability was modeled as a logistic regression with features derived from the social distance between reviewer and submitting authors in each of our three networks. In all three networks, assignment probability was significantly inversely correlated with social distance: reviewers were assigned to papers by people close to them in the academic social graph.
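The shape of this model can be shown in miniature. The following is a hypothetical, self-contained reconstruction, not the paper's actual pipeline: a one-feature logistic regression of assignment on social distance, fit by gradient descent on synthetic data generated to have the inverse relationship the text describes.

```python
import math
import random

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit P(assigned) = sigmoid(b0 + b1 * distance) by batch gradient
    descent on the log-likelihood. Returns (intercept, slope)."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y
            g1 += (p - y) * x
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Synthetic data: assignment becomes rarer as social distance grows.
random.seed(0)
pairs = []
for _ in range(2000):
    d = random.randint(1, 6)
    p_true = 1.0 / (1.0 + math.exp(-(2.0 - 0.8 * d)))
    pairs.append((d, 1 if random.random() < p_true else 0))
xs, ys = zip(*pairs)
b0, b1 = fit_logistic(xs, ys)
print(round(b1, 2))  # slope on distance comes out negative
```

A negative slope on distance is exactly the "inversely correlated" finding; the real model simply has one such feature per network, plus controls.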
The strongest predictor was a shared advisor: having the same PhD advisor as any submitting author increased the odds of reviewer assignment by 520% (OR = 6.2, 95% CI [4.8, 8.0]). Co-authorship within the past three years produced a 340% increase in odds (OR = 4.4). Workshop dinner proximity produced a 71% increase, which the relevant area chairs have described as “coincidence” in a joint statement we include as Appendix B.
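The percentage figures follow mechanically from the odds ratios: an odds ratio r corresponds to a (r - 1) * 100% change in the odds. A one-line check against the numbers above:

```python
def pct_change_in_odds(odds_ratio):
    """Percent change in odds implied by an odds ratio."""
    return (odds_ratio - 1.0) * 100.0

print(round(pct_change_in_odds(6.2), 1))   # 520.0  (shared advisor)
print(round(pct_change_in_odds(4.4), 1))   # 340.0  (recent co-authorship)
print(round(pct_change_in_odds(1.71), 1))  # 71.0   (dinner proximity)
```

Note this is a change in odds, not in raw probability; for rare events the two are close, but they are not the same quantity.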
Importantly, reviewer sentiment, derived from review text, was also correlated with social proximity. Reviews from close-network reviewers were 2.3 times as likely to recommend acceptance and only about a quarter as likely (a factor of 4.1) to characterize the paper as “fundamentally flawed,” a phrase that in our corpus is used exclusively by reviewers at a social distance greater than 4 hops.
Discussion
We wish to be clear that we are not alleging intentional misconduct. Program chairs do not consult academic genealogy trees before assigning reviewers. What we are alleging is that “expertise” and “social proximity” are so thoroughly confounded in small research communities that no assignment process operating on expertise alone can avoid the who-you-know effect. The solution, if one exists, requires deliberately assigning reviewers from outside the submitting authors’ social network, which means assigning reviewers who know less about the topic, which means accepting worse reviews. This is the actual tradeoff, and we encourage the community to debate it openly.
References
- Bias, B., & Review, R. (2023). “Double-Blind Review: Still Not Solving the Problem It Was Designed to Solve.” Journal of Structural Optimism, 5(1), pp. 1-18.
- Genealogy, A. (2024). “Everyone in NLP Is Six Co-Authors From Everyone Else.” Proceedings of the Small World Conference, pp. 100-108.
- Dinner, D., et al. (2022). “The Banquet Effect: Social Proximity at Conference Socials Predicts Review Outcomes.” Social Dynamics in Science, 3(2), pp. 33-49.
- Hypothesis, N. (2026). “This Paper Was Reviewed by N. Hypothesis.” I3E Trashactions on Circular Peer Review, 1(1), pp. 25-25.
Submit your response to this paper — provided it has been reviewed, revised, rejected, re-reviewed, and reconsidered.