Research Article

On the Optimal Assignment of Reviewers to Papers Written by People You Know

I3E TCPR · Volume 1, No. 1 · pp. 14-24
DOI: 10.I3E/tcpr.2026.00134

Editor's Summary

This paper was reviewed by two researchers known personally to the authors. One reviewer noted the paper “has some issues” but recommended acceptance. The other reviewer recommended acceptance without reading past the abstract. The editors find this acceptable.

Abstract

Peer review is the mechanism by which the scientific community certifies quality, allocates prestige, and, as we demonstrate in this paper, rewards professional relationships. We analyze reviewer assignment data from 12 venues over 6 years, cross-referenced against academic genealogy and co-authorship networks. We find that the probability of being assigned as a reviewer for a paper increases by 340% if you have co-authored with the area chair, by 520% if you share an advisor with the submitting authors, and by 71% if you have attended the same workshop dinner within the past 24 months. We term this the “who-you-know” effect and recommend that someone address it.

Article

Introduction

The ideal peer reviewer is an expert with no conflicts of interest who evaluates a submission purely on its scientific merits. The actual peer reviewer is a postdoc or junior faculty member who receives review requests from area chairs they know professionally, for papers in a subfield where everyone knows everyone, at a pace that makes careful reading a luxury. We study whether these conditions produce unbiased outcomes.

We obtained reviewer assignment data through a combination of public sources, a Freedom of Information request that produced 400 pages of redacted documents, and one program chair who gave us their area chair reports under conditions of anonymity following a conference dinner. The resulting dataset covers 12 venues, 6 years, 48,000 submissions, and 143,000 reviewer assignments.

We cross-referenced this dataset against the Academic Genealogy Network (AcGen), a graph of advisor-advisee relationships; the Semantic Scholar co-authorship graph; and a dataset of conference social events assembled by one author who attended 23 venues in 4 years and was, by the end of the study, exhausted.
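Social distance in each network reduces to a shortest-path hop count between a reviewer and a submitting author. A minimal sketch of that computation over an undirected edge list, using breadth-first search; the names and edges below are hypothetical, not drawn from the actual AcGen data:

```python
from collections import deque

def hop_distance(edges, src, dst):
    """Shortest-path hop count between src and dst in an undirected
    graph given as an edge list; returns None if unreachable."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, d + 1))
    return None

# Hypothetical advisor-advisee edges: a reviewer and an author who
# share a PhD advisor sit two hops apart in the genealogy graph.
acgen = [("reviewer", "advisor_x"), ("author", "advisor_x")]
print(hop_distance(acgen, "reviewer", "author"))  # 2
```

The same routine applies unchanged to the co-authorship graph and the dinner-proximity graph; only the edge list differs.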

Results

Reviewer assignment probability was modeled as a logistic regression with features derived from the social distance between reviewer and submitting authors in each of our three networks. In all three networks, assignment probability was significantly and inversely correlated with social distance: reviewers were disproportionately assigned to papers written by people close to them in the academic social graph.
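The functional form of such a model is a sigmoid over a linear combination of distance features. A sketch with illustrative coefficients (the values below are assumptions for demonstration, not the paper's fitted estimates); the negative distance coefficient is what produces the inverse correlation:

```python
import math

def assignment_probability(distance, beta0=-1.0, beta1=-0.8):
    """Logistic model of reviewer-assignment probability as a function
    of social distance. beta1 < 0 encodes the inverse correlation;
    both coefficients are illustrative, not fitted values."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * distance)))

# Probability falls monotonically as social distance grows.
probs = [assignment_probability(d) for d in range(1, 6)]
```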

The strongest predictor was shared advisor: having the same PhD advisor as any submitting author increased reviewer assignment probability by 520% (OR = 6.2, 95% CI [4.8, 8.0]). Co-authorship within the past three years produced a 340% increase (OR = 4.4). Workshop dinner proximity produced a 71% increase, which the relevant area chairs have described as “coincidence” in a joint statement we include as Appendix B.
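The percentage increases above follow directly from the odds ratios: a percent increase in odds is (OR − 1) × 100, so OR = 6.2 gives 520%, OR = 4.4 gives 340%, and the dinner effect corresponds to OR ≈ 1.71. A one-line check:

```python
def pct_increase(odds_ratio):
    """Percent increase in odds implied by an odds ratio."""
    return (odds_ratio - 1.0) * 100.0

for label, oratio in [("shared advisor", 6.2),
                      ("recent co-authorship", 4.4),
                      ("dinner proximity", 1.71)]:
    print(f"{label}: OR = {oratio} -> +{pct_increase(oratio):.0f}%")
```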

Importantly, reviewer sentiment — derived from review text — was also correlated with social proximity. Reviews from close-network reviewers were 2.3x more likely to recommend acceptance and 4.1x less likely to characterize the paper as “fundamentally flawed,” a phrase that in our corpus is used exclusively by reviewers with social distance greater than 4 hops.
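A "2.3x more likely" figure of this kind is a ratio of rates between the two reviewer groups. A sketch of the computation with hypothetical counts (the numbers below are invented to reproduce the ratio, not the study's actual counts):

```python
def rate_ratio(events_a, total_a, events_b, total_b):
    """Ratio of event rates between group A and group B, e.g. the rate
    of acceptance recommendations among close-network reviewers
    relative to distant reviewers."""
    return (events_a / total_a) / (events_b / total_b)

# Hypothetical counts: 46/100 close-network reviewers recommend
# acceptance vs. 20/100 distant reviewers -> ratio of 2.3.
print(round(rate_ratio(46, 100, 20, 100), 2))
```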

Discussion

We wish to be clear that we are not alleging intentional misconduct. Program chairs do not consult academic genealogy trees before assigning reviewers. What we are alleging is that “expertise” and “social proximity” are so thoroughly confounded in small research communities that no assignment process operating on expertise alone can avoid the who-you-know effect. The solution, if one exists, requires deliberately assigning reviewers from outside the submitting authors’ social network, which means assigning reviewers who know less about the topic, which means accepting worse reviews. This is the actual tradeoff, and we encourage the community to have it openly.

References

  1. Bias, B., & Review, R. (2023). “Double-Blind Review: Still Not Solving the Problem It Was Designed to Solve.” Journal of Structural Optimism, 5(1), pp. 1-18.
  2. Genealogy, A. (2024). “Everyone in NLP Is Six Co-Authors From Everyone Else.” Proceedings of the Small World Conference, pp. 100-108.
  3. Dinner, D., et al. (2022). “The Banquet Effect: Social Proximity at Conference Socials Predicts Review Outcomes.” Social Dynamics in Science, 3(2), pp. 33-49.
  4. Hypothesis, N. (2026). “This Paper Was Reviewed by N. Hypothesis.” I3E Trashactions on Circular Peer Review, 1(1), pp. 25-25.

Author Affiliations

1. Peer Review Integrity Lab, Department of Conflict-of-Interest Studies

eLetters

Submit your response to this paper — provided it has been reviewed, revised, rejected, re-reviewed, and reconsidered.