Editors Teaching AI

Hundreds of researchers submit scientific manuscripts every day, and journal editors can no longer keep up with the workload, especially if they stick to their old ways of working.

With the number of submissions growing, journals are forced to multiply their efforts, exhausting both the editorial staff and the already insufficient pool of reviewers willing to answer review requests.

Have a look at the process each manuscript has to go through to get an idea of the overwhelming workload that falls on editors' shoulders: Manuscript Life-Cycle

To Start Off…

To address part of this problem, we decided to create a tool that would reduce the time it takes an editor to evaluate a manuscript. We knew that even the smallest reduction in time spent per manuscript would be greatly beneficial in the long run.

We started with simple checks: a missing title or missing authors, a comparison of the author-provided keywords with the topics the manuscript actually covers, and spotting different types of conflict of interest, among other things.
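As a rough illustration only, not our production code, checks of this kind could be sketched like this, assuming the manuscript has already been parsed into a simple dictionary with hypothetical `title`, `authors`, `keywords` and `text` fields:

```python
import re
from collections import Counter

# A tiny stopword list for the sketch; a real check would use a proper one.
STOPWORDS = {"the", "and", "of", "a", "in", "to", "for", "with", "on", "is", "we", "this"}

def check_basic_metadata(manuscript):
    """Flag a missing title or missing authors in a parsed manuscript dict."""
    issues = []
    if not manuscript.get("title", "").strip():
        issues.append("Missing title")
    if not manuscript.get("authors"):
        issues.append("Missing authors")
    return issues

def check_keywords_match_text(manuscript, top_n=20):
    """Compare author-provided keywords with the most frequent terms in the text."""
    words = re.findall(r"[a-z]+", manuscript.get("text", "").lower())
    frequent = {w for w, _ in Counter(w for w in words if w not in STOPWORDS).most_common(top_n)}
    missing = [kw for kw in manuscript.get("keywords", []) if kw.lower() not in frequent]
    return [f"Keyword not prominent in text: {kw}" for kw in missing]
```

This sketch only handles single-word keywords and plain word frequency; the point is simply that these first checks were deterministic rules rather than anything learned.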

Machine Learning

It turned out that journals have additional, more specific requirements for manuscripts, which call for fuzzier checks to be run. Some journals want to spot the mention of an Ethical Statement, others want a Conflict of Interest statement, and so on. Editors then need to evaluate these statements and decide whether they are sufficient and whether the manuscript can move on to the next phase.

That is when we brought machine learning into the checks. To do that, we had to teach the system by feeding it good examples of the statements listed above, as well as a few other things editors are on the lookout for.
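As a minimal sketch of the idea, assuming a scikit-learn style text classifier rather than our actual model, and with made-up example sentences, a statement detector trained on good examples could look like this:

```python
# Illustrative sketch: train a classifier to recognise ethics statements
# from a handful of labelled example sentences. Not the production setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Positive examples: sentences confirmed to be ethics statements.
positive = [
    "This study was approved by the institutional ethics committee.",
    "All participants provided written informed consent.",
]
# Negative examples: ordinary manuscript sentences.
negative = [
    "The samples were incubated at 37 degrees for two hours.",
    "Figure 2 shows the distribution of responses across groups.",
]

texts = positive + negative
labels = [1] * len(positive) + [0] * len(negative)

# TF-IDF features plus logistic regression: a simple but serviceable text classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new candidate sentence found in a submitted manuscript.
candidate = "Ethical approval was obtained from the university review board."
print(model.predict_proba([candidate])[0][1])  # probability it is an ethics statement
```

The more confirmed examples the system is fed, the better such a classifier gets, which is exactly why the editors' input mattered.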

Being lucky enough to have some of the biggest publishers as clients, we decided it would be a good idea to involve editors and let them teach their own checks. This was beneficial not only because it trained the system, but also because it showed editors that AI is not just a magic trick, and that it definitely needs humans and their expertise to learn from.

The Interface

We needed to come up with a very simple user interface that would not scare editors off any further; after all, we had just asked them to train their own AI.

We added a section where editors list the statements they want a check for, and then let them choose keywords to make it easier to find passages that might be the statement in question. On top of that, we added a small leaderboard to keep them engaged and encourage them to do more training.

Whenever the system located an example of what a statement might look like, editors could say whether it really was what they were looking for. By doing that, they were training the system from the very beginning.
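Conceptually, the loop behind the interface looks something like the sketch below; the function and field names are illustrative, not the real implementation:

```python
# Simplified sketch of the editor feedback loop: keyword-matched candidates are
# shown to an editor, and their yes/no answers become labelled training data.

def find_candidates(paragraphs, keywords):
    """Return paragraphs that mention any of the editor-chosen keywords."""
    lowered = [kw.lower() for kw in keywords]
    return [p for p in paragraphs if any(kw in p.lower() for kw in lowered)]

def collect_labels(candidates, ask_editor):
    """Ask the editor to confirm or reject each candidate and keep the answers."""
    labelled = []
    for paragraph in candidates:
        is_statement = ask_editor(paragraph)  # True if the editor confirms it
        labelled.append((paragraph, 1 if is_statement else 0))
    return labelled
```

The labelled pairs can then be added to the training set and the statement classifier retrained, so every answer an editor gives makes the checks a little better.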

Conclusion

By developing this tool and involving editors in the process, we not only tackled the challenge of the increasing number of manuscript submissions, but also demonstrated how a machine learning model can be trained and how human expertise is an essential part of it.
