Editors Teaching AI
Hundreds of researchers submit scientific manuscripts every day, and journal editors can no longer keep up with the workload, at least not with their traditional workflows.
As submissions grow, journals are forced to redouble their efforts, exhausting both editorial staff and the already insufficient pool of reviewers willing to answer review requests.
To get a better idea of the overwhelming load falling on an editor's shoulders, have a look at the process each manuscript has to go through here.
To Start Off…
To address this problem, we decided to build a tool that would reduce the time it takes an editor to evaluate a manuscript. We knew that even the smallest reduction in time spent per manuscript would pay off in the long run.
We started with simple checks: a missing title or authors, a comparison of author-provided keywords against the topics the manuscript actually covers, spotting different types of conflicts of interest, and a handful of other things.
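To make the idea concrete, here is a minimal sketch of what such rule-based checks might look like. The manuscript fields and the abstract-based keyword comparison are illustrative assumptions, not the real system's logic.

```python
def run_basic_checks(manuscript):
    """Return a list of issues found in a manuscript's metadata.

    `manuscript` is assumed to be a dict with hypothetical fields:
    "title", "authors", "abstract", and "keywords".
    """
    issues = []
    if not manuscript.get("title", "").strip():
        issues.append("missing title")
    if not manuscript.get("authors"):
        issues.append("missing authors")
    # Compare author-provided keywords against terms that actually appear
    # in the abstract -- a crude stand-in for real topic extraction.
    abstract = manuscript.get("abstract", "").lower()
    stale = [kw for kw in manuscript.get("keywords", [])
             if kw.lower() not in abstract]
    if stale:
        issues.append("keywords not found in abstract: " + ", ".join(stale))
    return issues
```

Checks like these are cheap to run on every submission, which is exactly why they made a good starting point.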
It turned out that journals had additional, more specific requirements for manuscripts, which called for fuzzier checks to be run. Some journals want to spot the mention of an Ethical Statement, others a Conflict of Interest statement, and so on. Editors then need to evaluate these statements and decide whether they are sufficient and whether the manuscript can move forward to the next phase.
That's when we brought in machine learning. To do so, we had to teach the system by feeding it good examples of the statements listed above, as well as a few other things editors are on the lookout for.
Fortunate to have some of the biggest publishers as clients, we decided to involve their editors and let them train their own checks. This was beneficial not only because it trained the system, but also because it showed editors that AI is not just a magic trick: it genuinely needs humans and their expertise to learn from.
We needed a very simple user interface that wouldn't scare editors off any further. After all, we had just asked them to train their own AI.
We added a section for the statements editors wanted checked, then let them choose keywords to make it easier to find passages that might be the statement in question. On top of that, we added a small leaderboard to keep them engaged and encourage them to do more training.
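The keyword step can be sketched roughly as follows. The statement types and keyword lists below are made-up examples; in practice the editors supply their own.

```python
import re

# Hypothetical editor-chosen keywords per statement type.
STATEMENT_KEYWORDS = {
    "conflict_of_interest": ["conflict of interest", "competing interests"],
    "ethical_statement": ["ethics committee", "informed consent"],
}

def find_candidates(text, statement_type):
    """Return sentences that mention any keyword for the statement type."""
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    keywords = STATEMENT_KEYWORDS[statement_type]
    return [s for s in sentences
            if any(kw.lower() in s.lower() for kw in keywords)]
```

Surfacing only the matching sentences means an editor reviews a few short candidates rather than rereading the whole manuscript.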
By showing examples of what a statement might look like, we let editors say whether each one really is what they are looking for. With every verdict, they train the system a little further.
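A toy illustration of that feedback loop: each editor verdict becomes a labeled example, and new candidates are scored against what has been accepted and rejected so far. The token-overlap scoring here is a deliberately simple stand-in, not the actual model.

```python
from collections import Counter

class StatementLearner:
    """Accumulates editor verdicts and scores new candidate passages."""

    def __init__(self):
        self.positive = Counter()  # token counts from accepted examples
        self.negative = Counter()  # token counts from rejected examples

    def label(self, text, is_statement):
        """Record an editor's verdict on a candidate passage."""
        target = self.positive if is_statement else self.negative
        target.update(text.lower().split())

    def score(self, text):
        """Crude signal: higher means more like the accepted examples."""
        return sum(self.positive[tok] - self.negative[tok]
                   for tok in text.lower().split())
```

Even this crude scheme captures the key point of the section: the system only gets better because editors keep telling it which examples are right.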
By developing this tool and involving editors in the process, we not only tackled the challenge of rising manuscript submissions but also demonstrated how a machine learning model is trained, and how human expertise is an essential part of that.