Reviewer Finder

In-depth research – including extensive reading and interviews with researchers, reviewers, and editors representing some of the biggest publishers in the world – revealed clear signs of misalignment, mismatched expectations, and numerous process gaps caused by half-baked legacy solutions introduced over the years.

It’s About People

Writing a manuscript, submitting it to a journal, having it screened by editors, accepted for review, and finally published or rejected – these are all processes with a person at the centre. As obvious as it may seem, we observed existing tools failing to grasp this simple fact. We also realised how easy it would have been to ignore it ourselves, had it not been for thorough research.

Manuscript Life-Cycle

Solution Mode Temptations

Naturally, given our capabilities as a strong AI company, we were quickly tempted to try to solve everything at once – build features and see what sticks. Instead, by continuously validating what the main problems actually were, we were able to keep our focus sharp.

Focusing On The Main Issue

The majority of editors turned out to struggle with finding experts to review the ever-increasing number of papers submitted to their journals. For a single paper, they would reach out to a few reviewers and have to follow up with each of them, often receiving no response at all, or one signalling that the match between manuscript and reviewer was poor.

Waiting for a response from a reviewer could take weeks and still end in a declined request. The longer the wait, the longer the time to publication for a manuscript. And the more manuscripts waiting to be reviewed, the slower a journal's publication cycle – not to mention the waiting authors and the delay of science as a whole.

So Why Are Reviewers So Unreliable?

A reviewer is a researcher who goes through all the activities a researcher would anyway – running experiments, doing research, writing papers with different research groups, being a professor at a university, or even an editor at a journal. Add to that the fact that one does not get paid to review – it is completely voluntary. So reviewers may be unreliable simply because they are too busy, especially later in their careers.

Why Were Existing Solutions Failing?

Some of the popular tools for finding peer reviewers let researchers add keywords to their profiles in the hope that these describe them well. Editors would search using different keywords, and would edit researchers' keywords themselves, unintentionally making it harder for other editors to get useful results. Not to mention the possibility of typos at every one of these manual steps.

The result: reaching out to the wrong people and getting a low acceptance rate.

The Way We Addressed This

By utilising millions of papers, we were able to create a semantic fingerprint for each paper and then compare it to the semantic fingerprint of each reviewer, built from their past work. In this way, we generated a list of closely matching reviewers.
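To illustrate the idea – a minimal sketch rather than our actual pipeline – one could build a reviewer's fingerprint by averaging the embeddings of their past papers and rank candidates by cosine similarity to the manuscript. The choice of embedding model, the averaging strategy, and the function names below are assumptions made for the example:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Any text-embedding model would do; "all-MiniLM-L6-v2" is just a common example.
model = SentenceTransformer("all-MiniLM-L6-v2")

def paper_fingerprint(abstract: str) -> np.ndarray:
    """Embed a paper's abstract into a dense semantic vector."""
    return model.encode(abstract)

def reviewer_fingerprint(past_abstracts: list[str]) -> np.ndarray:
    """One simple choice: average the fingerprints of a reviewer's past papers."""
    return np.mean([paper_fingerprint(a) for a in past_abstracts], axis=0)

def rank_reviewers(manuscript_abstract: str,
                   reviewers: dict[str, list[str]]) -> list[tuple[str, float]]:
    """Rank reviewers by cosine similarity between their fingerprint
    and the manuscript's fingerprint, highest first."""
    m = paper_fingerprint(manuscript_abstract)
    scores = {}
    for name, abstracts in reviewers.items():
        r = reviewer_fingerprint(abstracts)
        scores[name] = float(np.dot(m, r) / (np.linalg.norm(m) * np.linalg.norm(r)))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```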

In addition to matching reviewers to manuscripts, we added multiple filters that define what makes a good reviewer – for example, one who has no conflict of interest with any of the manuscript's authors.
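One way such a filter could work – purely illustrative, since the actual criteria are not detailed here – is to drop any candidate who has previously co-authored with, or shares an affiliation with, one of the manuscript's authors:

```python
from dataclasses import dataclass, field

@dataclass
class Author:
    name: str
    affiliation: str

@dataclass
class Candidate:
    name: str
    affiliation: str
    coauthors: set[str] = field(default_factory=set)

def has_conflict(candidate: Candidate, authors: list[Author]) -> bool:
    """Illustrative conflict-of-interest check: flag prior co-authorship
    or a shared affiliation. Real criteria involve many more signals."""
    names = {a.name for a in authors}
    affiliations = {a.affiliation for a in authors}
    return bool(names & candidate.coauthors) or candidate.affiliation in affiliations

def filter_candidates(candidates: list[Candidate],
                      authors: list[Author]) -> list[Candidate]:
    """Keep only candidates without an apparent conflict of interest."""
    return [c for c in candidates if not has_conflict(c, authors)]
```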

We added email addresses and affiliations, since a name without any contact details or context is unusable. We also added each reviewer's closely related past work as evidence of why they are on the list.
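Put together, each suggestion the tool returns could be thought of as a record along these lines (the field names are illustrative, not the actual schema):

```python
from dataclasses import dataclass

@dataclass
class ReviewerSuggestion:
    name: str
    email: str            # contact details, so editors can reach out directly
    affiliation: str      # context about who the reviewer is
    match_score: float    # similarity between manuscript and reviewer fingerprints
    evidence: list[str]   # titles of closely related past papers explaining the match
```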

Conclusion

Studying the industry, identifying its problems, recognising the biggest one, studying it in depth, and finally utilising our strengths as an AI company to tackle it – these were all essential pieces of the process that led us to a reliable, time-saving tool that will improve the work of editors, the quality of their journals, and the speed at which science gets out into the world.
