Peer Review Quality on the Unified Scholars Platform



As I moved forward in the start-up journey, I had reasons to interact with professors, who would be my customers; and for me as a founder, those interactions offered feedback and insight from the customers' perspective, which is invaluable to any start-up.

I did receive positive feedback and comments, which were encouraging and which validated our start-up concept; but the most interesting feedback came in the form of critiques. In fact, one early critique of our peer review process was so sound and factual that it made me alter the peer review algorithm completely. Although it set us back three months, it led us to develop the "commit system", which enhanced our peer review process. And the professor who offered that critique now sits as a member of our Advisory Board. There is no point discussing that critique here because it has been satisfactorily addressed in the updated platform. What I want to discuss are the questions asked during two of my recent interactions with professors, and the concerns I received from members of my team who are also scholars. I feel some other scholars might have similar questions or concerns; hence the need to address them publicly in this article.

  • Questions were asked about peer review quality - specifically, since the platform is automated, how is peer review evaluated for quality? That is one question.
  • Platforms like ScholarOne and EditorOne were mentioned as possibly 'similar' platforms, and the next question was what differentiates the USP from those two, especially in terms of value.
  • Then, there was some talk about 'open peer review' and the unwillingness of some scholars to open their unpublished research to everyone. This is a concern echoed by only a few.
  • There was a concern that paying for peer review would corrupt the system, and that fake or greedy reviewers would likely take advantage of it.
  • There was a question about how we assess peer reviewers and editors beyond their credentials - how exactly we do our validation, and whether we would release our validation policy document.
  • There was also a question about why authors should use the USP rather than ScholarOne/EditorOne - or what makes the USP better for authors.
  • There was a question of how authors would know we have sufficient peer reviewers and editors on the platform.

Those are interesting questions and concerns that I will address here and now, but before I start, I should first clarify that peer review on the Unified Scholars Platform (USP) is not too different from the peer review currently in use by the big publishers. It is not! In fact, the USP's peer review process is largely the same, save for two differences, which I will describe next.

When a paper is submitted to the USP, the platform invites experts in that field to look at the paper. Those experts are professors and associate professors whose credentials have been validated/vetted after registration on the USP.

So, when a paper is submitted, the platform invites tens of experts, out of whom only three will evaluate the paper. The platform uses an automated commit system to allocate each paper to three experts (two reviewers and one editor), who must confirm their competence to evaluate that specific paper. And once a paper is fully committed, or taken, it becomes unavailable for others even to view, save for the abstract. Eventually, out of the tens of invitations sent out, only three experts in that research area will be allowed to evaluate the paper. This is not different from the existing peer review process; the difference here is that the commit system is used to allocate papers and control the first part of the peer review workflow. So, on the USP, the traditional role of the editor or associate editor (to allocate papers, evaluate reviews, make decisions, etc.) has been replaced by an automated algorithm. The editor is now free to work on improving the paper and getting it ready for publication, if other criteria are met. This is the first difference between the USP's peer review and the status quo.
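To make the allocation step concrete, here is a minimal sketch of how a commit-based allocation like this could look in code. It is an illustration only; the class, field and function names are my assumptions, not the USP's actual implementation. The abstract-only visibility rule and the two-reviewers-plus-one-editor split come from the description above.

```python
# Minimal sketch of the commit-based allocation (illustrative; not the USP's actual code).
from dataclasses import dataclass, field
from typing import Optional

REQUIRED_REVIEWERS = 2   # two reviewers per paper, as described above
REQUIRED_EDITORS = 1     # plus one editor

@dataclass
class Paper:
    paper_id: str
    abstract: str
    full_text: str
    reviewers: list = field(default_factory=list)
    editor: Optional[str] = None

    def is_fully_committed(self) -> bool:
        return len(self.reviewers) == REQUIRED_REVIEWERS and self.editor is not None

    def view_for(self, expert_id: str) -> str:
        # Once fully committed, everyone except the three committed experts
        # can see only the abstract.
        committed = self.reviewers + ([self.editor] if self.editor else [])
        if self.is_fully_committed() and expert_id not in committed:
            return self.abstract
        return self.full_text

def try_commit(paper: Paper, expert_id: str, role: str) -> bool:
    """Record a commitment; reject new commits once the paper has its three experts."""
    if paper.is_fully_committed():
        return False
    if role == "reviewer" and len(paper.reviewers) < REQUIRED_REVIEWERS:
        paper.reviewers.append(expert_id)
        return True
    if role == "editor" and paper.editor is None:
        paper.editor = expert_id
        return True
    return False
```

The point to note is simply that the gate is mechanical: once two reviewers and one editor have committed, the platform closes the paper without any human intervention.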

The second difference is in how peer reviewers are invited and/or how papers are allocated. On the USP, no one invites anyone or allocates papers to anyone. The invitations are automated; the experts check out the papers and commit to the ones they have the competence to evaluate. During commitment, they read and confirm a declaration that they are highly competent to evaluate that specific paper. Then, the paper is reserved for them and the twenty-five-day review window starts counting. This is the second difference, because in the existing peer review process, peer reviewers are chosen, identified or suggested before they are allocated papers to evaluate. On the USP, no one does that. The platform is programmed to allow scholars/experts to choose the papers they can evaluate. The reason behind this is discussed in the later part of this article. So, read on.
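Again purely as an illustration, the expert-side commit step could be sketched like this. The declaration wording and the twenty-five-day window come from the description above; the function and field names are hypothetical.

```python
# Sketch of the expert-side commit step (hypothetical helper and field names).
from datetime import datetime, timedelta, timezone

REVIEW_WINDOW_DAYS = 25  # the review window described above

DECLARATION = "I confirm that I am highly competent to evaluate this specific paper."

def commit_to_paper(expert_id: str, paper_id: str, declaration_accepted: bool) -> dict:
    """Reserve a paper for an expert once the competence declaration is confirmed."""
    if not declaration_accepted:
        raise ValueError("Commitment requires confirming the competence declaration.")
    committed_at = datetime.now(timezone.utc)
    return {
        "expert_id": expert_id,
        "paper_id": paper_id,
        "declaration": DECLARATION,
        "committed_at": committed_at.isoformat(),
        # The clock starts counting at the moment of commitment.
        "due_by": (committed_at + timedelta(days=REVIEW_WINDOW_DAYS)).isoformat(),
    }
```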

Without making those two important changes to the early process, we would never have been able to make peer review an app. It would have continued to be human-operated, just as associate editors operate ScholarOne and EditorOne.

Having explained the differences between the USP's peer review and the existing peer review, I will now proceed to answer the above questions and concerns.

The first question is how the automated platform checks for peer review quality.
(A) Peer reviewers and editors have been evaluating papers for free, and they have largely been doing good reviews. The USP is a paid service; editors and peer reviewers get paid to evaluate papers. If they did it well when it was free, there is no doubt in my mind that they will give their best when they are getting paid. And if, in any case, a review is of low quality, there is a report function on the feedback panel. Everyone is anonymous on the platform - both authors and reviewers. Authors can, and are encouraged to, click the report button if a feedback is of low quality or inappropriate. The reporting feature is very simple: with a click and one or two sentences, the feedback and other relevant details are sent to the admin. The platform then automatically restricts the reviewer/editor status of the expert whose feedback was reported, and (s)he is unable to continue in that role until the investigation is complete, usually within 72 hours. If the review is found to be of low quality, that expert loses reviewer/editor status permanently; if not, the status is reinstated. Remember also that there are other forms of conduct that could hinder the delivery of quality peer reviews, and those have also been taken into account.
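As a rough illustration of that report-and-restrict flow: the 72-hour window is from the description above, while the function names, status labels and record structure below are assumptions, not the USP's actual code.

```python
# Rough sketch of the report-and-restrict flow (status labels and structure are assumed).
from datetime import datetime, timedelta, timezone

INVESTIGATION_WINDOW_HOURS = 72  # the usual investigation turnaround described above

def report_feedback(expert_record: dict, report_note: str) -> dict:
    """File a report on a feedback and immediately restrict the expert's role."""
    expert_record["status"] = "restricted"        # cannot review or edit while restricted
    expert_record["report_note"] = report_note    # the author's one or two sentences
    expert_record["investigate_by"] = (
        datetime.now(timezone.utc) + timedelta(hours=INVESTIGATION_WINDOW_HOURS)
    ).isoformat()
    return expert_record

def resolve_report(expert_record: dict, low_quality_confirmed: bool) -> dict:
    """Permanently revoke the role if the review was low quality; otherwise reinstate it."""
    expert_record["status"] = "revoked" if low_quality_confirmed else "active"
    return expert_record
```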

Hence, the USP has features on the admin side to track reviewers/editors whose conduct on the platform is inappropriate, and the consequence is a permanent loss of reviewer/editor status. Submitting quality feedback and behaving appropriately are the keys to retaining your reviewer/editor status on the platform. To answer the question in brief, then: we do not need anyone to check reviews for quality. There are thousands of papers on the platform and thousands of reviews coming in on a daily basis, so we encourage anonymous authors or users who spot low-quality reviews to report them. And not only low-quality reviews; if anything is off, feel free to email the admin with the details (user id, script id, etc.). This way we can keep the platform secure, uphold academic standards and ensure that scholars get the best value from the service.

(B) The next question is the difference between the USP and ScholarOne/EditorOne. ScholarOne and EditorOne are workflow management systems, whilst the USP is primarily a SaaS with workflow management features. They are very different. As workflow management systems, ScholarOne and EditorOne are used to manage and coordinate scholarly publishing more efficiently. They optimize the publishing workflow, but they are not standalone apps. In contrast, the USP is not used to optimize the publishing workflow; it is a standalone app that runs its own peer review process independently, without human control. Unlike ScholarOne and EditorOne, it does not need to be operated by an editor or associate editor. Editors only submit their feedback; the platform operates itself.

So, whilst editors use EditorOne and ScholarOne to optimize workflow, the USP uses editors/reviewers to arrive at its decisions. It is a standalone self-operating web app. Nobody drives/operates it. That is the key difference between the USP and other workflow systems.

There are many other differences, but exposing how the USP differs from other platforms puts us at a disadvantage. Suffice it to say that the USP is very different from other scholarly publishing workflow platforms.

(C) The third question is actually a concern that open platforms might expose the discoveries of scholars before publication. Well, this is simply incorrect, and I dare say it is small-minded thinking. Research takes a very long time to complete and document in a manuscript, and it is unrealistic to think anyone can see your paper and steal your work or replicate it overnight. It just doesn't work like that.

Also, on the USP, a paper isn't open indefinitely to every scholar. It is open only until it gets its three evaluating experts. Once it gets those three, the paper becomes closed to other scholars. We cannot speculate on how long that may take; it could be one day or a few days. As a precaution, however, papers submitted to the USP are treated as part-published, and the submission is date/time-stamped and stored. Regardless of the peer review outcome, that paper and all of its contents are the intellectual property of the author, and any subsequent work that builds upon it must credit the originating author. That is the standard and has always been the standard. It is theft for any scholar to see your work on a platform and steal it. It doesn't happen; the author who submitted that paper is well ahead, and the USP keeps that submission record and declares it part-published. So, there is nothing to be worried about, really.
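For illustration, the date/time-stamped "part-published" record could be as simple as the sketch below. The field names and the content hash are my assumptions, added only to show how such a record can later prove what was submitted and when; "part-published" is the label used above.

```python
# Minimal sketch of a date/time-stamped submission record (field names and the
# content hash are assumptions; "part-published" is the label used above).
import hashlib
from datetime import datetime, timezone

def record_submission(author_id: str, manuscript_text: str) -> dict:
    """Store a timestamped, fingerprinted record of the submission as part-published."""
    return {
        "author_id": author_id,
        "submitted_at": datetime.now(timezone.utc).isoformat(),
        # A content hash lets the platform later show exactly what was submitted, and when.
        "content_sha256": hashlib.sha256(manuscript_text.encode("utf-8")).hexdigest(),
        "status": "part-published",
    }
```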

(D) The fourth one is a concern that paying peer reviewers would corrupt the system. Well, I disagree. You first have to understand where scholars are coming from to be able to say whether 'payment' is a positive or a negative introduction. There has been a lot of talk about how big publishers exploit scholars - take their research, use them for reviews, and still sell the product back to scholars - then declare huge profits (in the billions), "commend" scholars for their foolery, whilst charging them to access their own works. And this has been going on from generation to generation of scholars. So, you need to understand that we are coming from complaints of being cheated. When you take this into account, the concern that 'payment would corrupt the system' immediately evaporates.

So, payment would not corrupt the system; payment would correct the generational fraud in the system. Secondly, there is the concern about fake/greedy reviewers abusing the system. Well, I am not sure that Upwork, Toptal and Fiverr are used by fake/greedy people, because they get your job done to a great standard. Yet, the people there are not half as educated/upright as Ph.D. holders. In reality, there are similarities between the USP and those freelance platforms - Upwork and Fiverr. So, why hasn't money corrupted the freelancing system? People post bigger jobs than peer review, and random freelancers submit proposals and do the job. The completion rate on those platforms is impressive. In fact, big companies like Google and Facebook use experts from the Toptal platform. Why haven't fake/greedy freelancers abused that system? Now you see why scholarly publishing is still backward? It's because the big publishers are the custodians of an archaic system that has refused to yield to modernity.

I also mentioned earlier that on the USP, scholars/experts commit to papers by themselves and no one selects peer reviewers or allocates papers. This is in line with modern platforms like Upwork and Fiverr. Nobody allocates jobs to freelancers on those platforms. They check the jobs and make sure they can do them before submitting a proposal, and the owner checks the proposal before hiring the freelancer. That is the modern standard. But in scholarly publishing, we still need editors or associate editors to select peer reviewers and invite them, or to exclude peer reviewers. This backwardness only exists in scholarly publishing. People post big web projects worth thousands of dollars on Upwork, and freelancers check them, submit proposals and build the projects; but for ordinary peer review, we need an editor to sit down, identify peer reviewers, send them invitations and so on. Come on, we are in the 21st century. The USP has knocked off those archaic practices. Expert scholars should check and commit to evaluate papers that they are highly competent on, and do the peer review to a great standard; they are getting paid, after all.

(E) The next question is about how we assess peer reviewers and editors beyond their credentials. I don't think this is anything to worry about. We use publicly available information to validate and vet our peer reviewers and editors. The aim of the validation is to weed out fake or dishonest registrations, and to ensure that only the best scholars get reviewer and editor status. Ideally, we validate within 24 hours, but it could take days if we detect any mismatch; we might also call or email you to request more information, if necessary.
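Purely as a sketch, the validation outcome described above reduces to something like this. The 24-hour target and the follow-up by call or email come from the text; the check itself and its labels are assumptions.

```python
# Hypothetical sketch of the validation outcome (the 24-hour target and the
# follow-up contact are from the text; the check itself is an assumption).
def validation_outcome(credentials_match_public_record: bool) -> dict:
    if credentials_match_public_record:
        return {"status": "validated", "target": "within 24 hours"}
    # A mismatch triggers a manual follow-up by phone or email before any status is granted.
    return {"status": "pending_follow_up", "action": "request more information"}
```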

(F) Another question is why authors should use the USP rather than ScholarOne/EditorOne. Kindly refer to the "why publish with us" page of our website for answers to this question.

(G) The next question is how authors would know whether we have sufficient reviewers/editors. Well, we would not call for papers until we have sufficient peer reviewers and editors in that field. Once we call for papers, you can be sure we have done our homework and you have nothing to worry about.
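In code terms, that rule is simply a capacity gate per field. The thresholds below are made-up numbers for illustration, not published USP figures.

```python
# Illustrative capacity gate per field (thresholds are made-up numbers, not USP policy).
MIN_REVIEWERS_PER_FIELD = 50
MIN_EDITORS_PER_FIELD = 10

def can_call_for_papers(validated_reviewers: int, validated_editors: int) -> bool:
    """Open a field for submissions only when it has sufficient validated experts."""
    return (
        validated_reviewers >= MIN_REVIEWERS_PER_FIELD
        and validated_editors >= MIN_EDITORS_PER_FIELD
    )
```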

I hope I have successfully answered all the questions/concerns raised by some scholars. And I hope this article has shed more light on how the platform works.