Reviewer Finder

In-depth research, including extensive reading and interviews with researchers, reviewers, and editors representing some of the biggest publishers in the world, revealed clear signs of misalignment, a mix of unmet expectations, and numerous process gaps caused by legacy, half-baked solutions introduced over the years.

People in the Centre

Writing a manuscript, submitting it to a journal, having it screened by editors, accepted for review, and finally published or rejected are all processes with a person at the centre. As obvious as that may seem, we observed existing tools failing to grasp this simple fact. We also realised how easy it would have been to ignore it ourselves, had it not been for thorough research.

Manuscript Life-Cycle

Solution Mode Temptations

Naturally, given what we were capable of as a strong AI company, we were quickly tempted to try to solve everything at once, which is exactly why we had worked on the Manuscript Evaluation project. But by repeatedly confirming what the main problems were, we were able to focus on them more deeply.

Focusing On The Main Issue

The majority of editors consistently proved to have a huge problem finding experts to review the ever-increasing number of papers submitted to their journals. For a single paper, they would reach out to a few reviewers and have to follow up with each one. Waiting for a response from a reviewer could take weeks and still end in a declined request. The longer the wait, the longer the time to publication of a manuscript. The more manuscripts are waiting to be reviewed, the slower a journal's publication times become, not to mention the wait for authors and the delay of science as a whole.

So Why Are Reviewers So Unreliable?

A reviewer is a researcher who already does everything researchers do: running experiments, writing papers with different research groups, holding a professorship at a university, or even serving as an editor at a journal. Add to that the fact that one does not get paid to review; it is completely voluntary. So it turns out reviewers are unreliable because they are simply too busy. The more senior they are, the busier they get.

Why Were Existing Solutions Failing

Some of the popular tools for finding peer reviewers let researchers add keywords to their profiles in the hope that these describe them well. Editors would search using different keywords and edit researchers' keywords, unintentionally making it harder for other editors to get reliable results, not to mention the possibility of typos in these manual steps. The result is that editors reach out to the wrong people and only a few of them accept the review requests.

The Way We Addressed This

By utilising millions of papers, we were able to create a semantic fingerprint for each paper and then compare it to a semantic fingerprint of a reviewer based on their past work. This way, we generated a list of closely matching reviewers.
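To make the matching concrete, here is a minimal sketch of the idea under stated assumptions: papers are embedded into dense vectors by some model, a reviewer's fingerprint is the average of the vectors of their past papers, and candidates are ranked by cosine similarity to the manuscript. The function names and data shapes are illustrative, not the production pipeline.

```python
import numpy as np


def fingerprint(paper_vectors: np.ndarray) -> np.ndarray:
    """Collapse the embedding vectors of a reviewer's past papers into one fingerprint."""
    return paper_vectors.mean(axis=0)


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def rank_reviewers(manuscript_vec: np.ndarray,
                   reviewers: dict[str, np.ndarray],
                   top_k: int = 10) -> list[tuple[str, float]]:
    """Rank reviewers by how closely their fingerprint matches the manuscript."""
    scores = {name: cosine(manuscript_vec, fingerprint(past_paper_vecs))
              for name, past_paper_vecs in reviewers.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
```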

In addition to matching reviewers to manuscripts, we added multiple filters in the background that define what a good reviewer is. For example, if a reviewer has a conflict of interest with any of the manuscript's authors, they simply will not appear on the list.
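As an illustration of how one such background filter could look, the sketch below drops candidates who share a recent co-authorship or an affiliation with any of the manuscript's authors. The field names and data shapes are assumptions made for this example, not the actual checks used in the product.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    affiliation: str
    recent_coauthors: set[str]


def has_conflict(candidate: Candidate,
                 authors: list[str],
                 author_affiliations: set[str]) -> bool:
    """True if the candidate co-authored with, or shares an institution with, an author."""
    return (bool(candidate.recent_coauthors & set(authors))
            or candidate.affiliation in author_affiliations)


def apply_filters(candidates: list[Candidate],
                  authors: list[str],
                  author_affiliations: set[str]) -> list[Candidate]:
    """Keep only reviewers who pass the conflict-of-interest check."""
    return [c for c in candidates
            if not has_conflict(c, authors, author_affiliations)]
```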

We added email addresses and affiliations, since a name without contact details or context is unusable. We also added closely related past work of each reviewer as evidence of why they are on the list.
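A suggestion record along these lines might carry the fields below; the exact schema in the product may differ, and the names here are illustrative.

```python
from dataclasses import dataclass, field


@dataclass
class ReviewerSuggestion:
    name: str
    email: str               # contact details, so the entry is actionable
    affiliation: str         # context about who the person is
    similarity: float        # how closely their fingerprint matches the manuscript
    related_papers: list[str] = field(default_factory=list)  # evidence for the match
```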
