Referee finding: a role for AI?

I had to ask ten people just to find two willing to review a paper I am handling as editor. I wanted three reviewers but decided to go with two. Everyone knows how hard it is these days to get referees. Here are some thoughts on the problem and on how judicious use of AI can help.

The problem is that we all think of the same people, the top people in the field. It is as if I wrote a piece on Christianity and asked Jesus, Mark, Peter, Paul, and Mary to review it, if they were still around. Editors also depend on the lists of preferred reviewers that authors submit, and those, too, are top people in the field. I often look at such a list and know that every single person on it will say no. How do we fix this?

You may be surprised to hear that there are many people out there who would like to review more papers, people who have spent years in the field but have not achieved instant recognition for their work. These are the people who will provide thoughtful reviews: people up on the field, actively working in it, and eager to review more. They are simply newer and therefore harder to discover. If they are in a field you know well, a little thought will bring their names to mind. Choose them, not the old chestnuts (like me) who are likely to be overwhelmed with requests.

But we editors handle many papers that are not very close to our own expertise. How, then, can we discover younger reviewers? I think there is a place for AI in this effort. Let me share how I do it. I simply paste the abstract of the paper at hand, without title or authors, into the query box and ask something like: “Who has published in the area of this abstract in the last 5 years?” I use Anthropic’s Claude, and it gives me thoughtful output. First, it breaks down the abstract. Then it lists people who work on the various sub-topics it has identified and tells me where they work. These are often newer, younger people, and they give me a list to explore. I do not simply add these people to the list of possible referees. I look them up first. Generally they are appropriate.
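For the technically inclined, here is roughly what that step looks like if you automate it with Anthropic’s Python library instead of the chat window. This is a minimal sketch, not my exact workflow: the model name is illustrative (check Anthropic’s documentation for current options), the prompt wording is an assumption based on what I describe above, and you would paste in your own abstract.

```python
# Minimal sketch of the abstract-only reviewer query using Anthropic's
# Python SDK (pip install anthropic). Assumes ANTHROPIC_API_KEY is set
# in your environment; the model name below is illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Paste only the abstract, with no title or author names, to keep
# the query free of identifying material.
abstract = """Paste the abstract here."""

prompt = (
    "Who has published in the area of this abstract in the last 5 years? "
    "For each person, say where they work.\n\n"
    f"Abstract:\n{abstract}"
)

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative; substitute a current model
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)

# The reply typically breaks the abstract into sub-topics and lists people
# working on each; treat it as a starting list to vet by hand, not a verdict.
print(response.content[0].text)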
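```

Dropping the “last 5 years” phrase from the prompt gives you the test I describe next.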

If you want to test how good your system is at identifying the right people, simply drop the requirement that they have published in the last 5 years. Then you will get the greats of the field, people who are too busy, too retired, or even too dead to be useful.

There are probably other ways of getting lists of younger experts. You could look at the authors of the referenced papers, for example. But I find that using Claude is a great time saver. I do tweak the prompts and check the people, but getting that list saves a lot of time.

Oh, one more thing: I never feed in more than the abstract. Anthropic is not supposed to save one’s material, but I like to be safe, and I figure the abstract alone will not break any confidences. We can solve the referee shortage by remembering the new people.

About Joan E. Strassmann

Evolutionary biologist, studies social behavior in insects & microbes, interested in education, travel, birds, tropics, nature, food; biology professor at Washington University in St. Louis