September 28, 2023
Getting into the spirit of #PeerReviewWeek, I invite us all to spend some time thinking about peer review less as a monolithic “yes, this was peer reviewed” / “no, this was not peer reviewed” binary. Using ‘peer-reviewed versus not peer-reviewed’ as a distinguisher is a very blunt tool for separating the wheat from the chaff. And yet our scholarly infrastructure supports exactly such a yes/no perspective.
For example, students in lower-level writing courses are often assigned papers that must cite 3-5 peer-reviewed articles, without much explanation of why. Library search systems offer checkboxes to display only peer-reviewed results, filtering out everything else. And on annual reports, faculty are encouraged to itemize which of their products have undergone peer review.
Instead, what if we thought more deliberately about peer review as a dynamic raft of checks that might be conducted at any time, by any source? Every paper has different assessment needs, and crucial feedback can arrive before or after publication, through formal or less formal modes. To change how we conceptualize peer review, consider how we think about two other concepts in scholarly publishing: predatory publishing and authorship.
Attempts to define ‘predatory publishing’ have largely failed because the term can encompass a large set of behaviors, none of which are individually essential to earning the label.
Authorship is similar to predatory publishing in that, as a concept, it covers many different types of activity under a single label.
But unlike predatory publishing, authorship has found a method for overcoming the umbrella-definition problem. CRediT, the Contributor Roles Taxonomy, unbundles the concept of authorship into a descriptive set of 14 roles “typically played by contributors to scientific scholarly output” (https://casrai.org/credit/).
It makes sense to assume that if a paper has only one listed author, then that one author performed all of the writing. But when multiple authors are listed, it makes far less sense to believe that each individual wrote an equal share of each section. Yet we often treat it that way. CRediT solves this by specifically noting who did what, much like movie credits. And while there are currently 14 roles in the contributor taxonomy, that doesn’t mean every paper will have had all 14 functions performed in its writing.
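To make the contrast concrete, here is a minimal sketch of how a CRediT-style attribution separates “who is an author” from “who did what.” The role names below are real CRediT roles; the data structure, the placeholder author names, and the Python representation are purely illustrative, not any official CRediT format.

```python
# Illustrative only: a CRediT-style attribution for a hypothetical three-author paper.
# The role names are genuine CRediT roles; the structure is not an official format.
contributions = {
    "Author A": ["Conceptualization", "Methodology", "Writing - original draft"],
    "Author B": ["Software", "Formal analysis", "Visualization"],
    "Author C": ["Supervision", "Funding acquisition", "Writing - review & editing"],
}

# All three are "authors" in the binary sense, but only one drafted the manuscript.
for person, roles in contributions.items():
    print(f"{person}: {', '.join(roles)}")
```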
Bringing this detour back to peer review... now take stock of the types of assessments we understand peer reviewers to ordinarily conduct across journals. Add to this the different sorts of checks that editors perform. And don’t stop there: what checks do we wish were regularly performed? What assessments happen outside of the journal, post-publication? Imagine if we built from this a taxonomy of evaluative checks. And in addition to an altmetrics widget, picture every paper with a constantly updating, CRediT-style list of what specific reviews have been conducted, regardless of where or when the paper was published or deposited.
In the past, journals did not uniformly require a data availability statement, much less inspect the actual data. But maybe that’s a standard evaluation we want to take place going forward, and one that may be more feasible to achieve with A.I. If every paper listed which standard evaluations took place (and also, therefore, which evaluations did not), readers would know what areas to give extra scrutiny. And if we incorporated infrastructure for uniform post-publication commenting by reviewers (e.g., PREreview, Disqus), then reviewers with a knack for a particular type of evaluation (e.g., Elizabeth Bik spotting image duplications) could seek out papers that lack those checks and provide them for future readers. A paper-level ledger of such checks might look something like the sketch below.
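As a thought experiment only: the check names, field names, and structure in this sketch are invented for illustration and are not drawn from any existing standard, journal system, or tool.

```python
# Hypothetical sketch of a CRediT-style ledger of evaluative checks for one paper.
# Check names, fields, and values are invented for illustration only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EvaluationCheck:
    name: str                           # e.g., "data availability verified"
    performed: bool                     # has anyone conducted this check yet?
    performed_by: Optional[str] = None  # editor, peer reviewer, post-publication volunteer...
    stage: Optional[str] = None         # "pre-publication" or "post-publication"

@dataclass
class PaperReviewLedger:
    doi: str
    checks: List[EvaluationCheck] = field(default_factory=list)

    def missing_checks(self) -> List[str]:
        """Checks readers might scrutinize themselves, or volunteer to perform."""
        return [c.name for c in self.checks if not c.performed]

ledger = PaperReviewLedger(
    doi="10.xxxx/example",  # placeholder DOI
    checks=[
        EvaluationCheck("methods and conclusions reviewed", True, "peer reviewers", "pre-publication"),
        EvaluationCheck("data availability verified", True, "journal editor", "pre-publication"),
        EvaluationCheck("image duplication screen", False),
        EvaluationCheck("statistical reporting check", False),
    ],
)

print("Not yet performed:", ledger.missing_checks())
```

The point of the sketch is simply that such a record could keep accumulating entries after publication, just as the post imagines.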
So, again. We can continue to think of peer review in a binary paradigm of having a stamp of approval or not, or we can start thinking of peer review in a much more granular, flexible, and transparent manner. Happy Peer Review Week!
_________________________________________
This post appeared originally on LinkedIn at https://www.linkedin.com/pulse/peer-review-static-monolith-dynamic-raft-arthur-boston/
_________________________________________
Like this content?
Consider supporting: https://buymeacoffee.com/arthur.boston
Or leaving feedback: https://forms.gle/eDiiYfeEMNciYdJr5