Do we need to rethink peer reviewing?

This platform: About | OpenReview

It seems to allow one to post anonymous manuscripts, obtain commentary, and circulate a manuscript anonymously while it is still in the publication process.

1 Like

Interesting. Not much activity in the links I looked at there, though. Have you experience with how it works?

I think OpenReview is pretty widely used in the machine learning community; e.g., ICLR uses it (>2000 submissions, I think…). Here are the 2020 submissions:

You can see some of the lively discussion taking place, and that they use some anonymity in the process – reviewers stay anonymous, and authors perhaps start out anonymous but become visible later?
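
For anyone who wants to poke at the data: OpenReview also has a public API with an official Python client, so you can pull a venue’s submissions yourself. Below is a minimal sketch, assuming the openreview-py client and the ICLR 2020 invitation ID used in OpenReview’s own public examples (treat the exact IDs and field names as assumptions, not gospel):

```python
# A minimal sketch, assuming the public OpenReview API and the official
# openreview-py client (pip install openreview-py). The invitation ID is
# taken from OpenReview's public ICLR examples; other venues use other IDs.
import openreview

# Public, read-only queries need no credentials.
client = openreview.Client(baseurl='https://api.openreview.net')

# Fetch a page of the double-blind ICLR 2020 submissions.
submissions = client.get_notes(
    invitation='ICLR.cc/2020/Conference/-/Blind_Submission',
    limit=50)  # the API paginates; loop with `offset` to collect everything

for note in submissions:
    print(note.id, note.content.get('title'))
```

Each returned note also carries a forum ID that points to the paper’s full discussion thread.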

1 Like

Great link, thanks! Would it be an idea to have a two-step review process:

  1. Double-blind 3-reviewer style process like we have now, which leads to acceptance/rejection
  2. Open peer review of the accepted papers

In my thinking that would give us the best of both worlds?

@alexarje I just stumbled on it two days ago via a preprint I found via Google Scholar.
Yes, there is not much public commenting going on, but the interesting thing is that Google Scholar seems to scrape anonymized articles under review! The preprint in question is no longer anonymous, and it seems to retain the citations it picked up while anonymous.

1 Like

@alexarje Very interesting. I would describe your step 2 as ‘open peer commentary’.
And it would be a nice warm-up for the conference.

Yes, once the paper is accepted, an ‘open peer commentary’ can be an excellent warm-up/discussion starter, but the review process would already be finished, right?

Anyway, I feel that it’s hard to achieve real anonymity in our community. Sometimes I need to make an extra effort not to guess who wrote a paper I’m reviewing.

2 Likes

Yes, in my suggestion above, the first step (review process + acceptance/rejection) would already be finished. Only accepted papers would move on to step 2. I agree that it might be better to call this “peer commentary” to separate it from the “peer review” of step 1. The peer commentary could serve as a “warm-up” for more real-time interaction during the conference itself. Not sure if it could replace the function the Slack channel had this year, or whether it would just be an addition?

@alexarje @edumeneses @charlesmartin Re: peer commentary vs. Slack, I see one as a slower, more carefully considered process, indeed like peer reviewing, and the other as an informal chat which can contain multiple short comments by the same person. With peer commentary, perhaps we should restrict it to one carefully considered (editable?) post per person. This could keep the commentary succinct and manageable; a chat thread takes time to scroll through. Another idea is to use one for pre- and post-conference commentary and the other for commentary during the conference.

1 Like

@mjl Yes, I agree with the need to differentiate between more structured comments and lively Slack-type discussion. See my new thread about PubPub. That system allows comments and replies within the document itself, which may help in structuring comments and pointing to specific parts of the document. I find this better than restricting people to only one comment per article.

1 Like

I just spent some time going through parts of OpenReview. It seems like they have a very nice ecosystem for reviewing, both pre- and post-acceptance. From the FAQ I see that they also support double-blind peer review in the first stages of submission. It is definitely the most fine-grained reviewing system I have seen so far.
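
To give a flavour of how fine-grained it is: every review, public comment, and author reply lives as a note attached to the paper’s discussion forum, signed with a role rather than a name. Here is a sketch under the same assumptions as the earlier snippet (openreview-py client against the public API, the ICLR 2020 example invitation; venue-specific reply invitations may be named differently):

```python
# A sketch of reading one paper's public discussion thread. Assumptions:
# openreview-py v1 client against the public API; the example invitation ID
# is from OpenReview's ICLR examples and varies per venue.
import openreview

client = openreview.Client(baseurl='https://api.openreview.net')

# Take one submission as an example paper.
paper = client.get_notes(
    invitation='ICLR.cc/2020/Conference/-/Blind_Submission', limit=1)[0]

# Reviews, public comments, and author replies all share the paper's forum ID.
thread = client.get_notes(forum=paper.forum)

for note in thread:
    # Signatures expose the role but not the identity of anonymous reviewers,
    # e.g. 'ICLR.cc/2020/Conference/Paper1/AnonReviewer2'.
    print(note.signatures, '->', note.invitation)
```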

They seem to be “old-school” in relying on PDFs, though. So I guess OpenReview could possibly replace CMT, but would not help us towards media-rich content. Perhaps it could work in combination with PubPub? Then we would have the best of both worlds?

1 Like

I tend to think the quality of reviews is more important than the format of the process. Like others, I have found that review quality is uneven, and honestly I don’t think it’s getting any better. I have submitted several papers that got only one review of any substance, including the meta-review.

Changing the process toward a CHI-style rebuttal is attractive, but it doesn’t directly address this problem, and it adds extra time, which means the submission deadline would have to move back to before Christmas. I would suggest we focus on recruiting and training more reviewers: actively encourage PhD students to write reviews, and find (with author and reviewer permission) some past examples of outstanding reviews that can be shared as models.

The process for a NIME journal could be different, and could follow the Frontiers model.

Post-publication commentary is a nice idea. I wonder how much uptake it would get?

5 Likes

I totally agree with these suggestions. As a beginner in the reviewing process at my first NIME last year, I was very confused. If I had seen some past examples and received some instructions, I would probably have done a better job.

3 Likes

Really good point @Isabela, there really are no instructions for reviewers, particularly beginners!

Even a one-pager on the main website that we can improve each year could make a huge difference and help new people know that we are open to new reviewers joining.

So much of the procedure of academic conferences is secret knowledge; we should advertise expected behaviour as well as the opportunity to review!

BTW, my favourite article on “how to write a review” is this one by Ken Hinckley (not sure where I found this, maybe someone on this thread even showed it to me; if so, thanks!). It’s for a particular conference, but much of the advice is portable.

2 Likes

Good points @Isabela and @charlesmartin. I have added an issue in the website repository to keep track of this. If anyone wants to start drafting a reviewing document, please do!

1 Like

I came across this post about double-blind peer review, which also has many links to various papers arguing that double-blind policies prevent bias (of many kinds).

It also makes an interesting point about combining double-blind reviewing for submissions with open reviewing after acceptance. This is something that I think we should try at NIME.

There is so much research demonstrating that bias based on names definitely exists and impacts the perceived value of work, in a variety of fields:

https://med-fom-medicine.sites.olt.ubc.ca/files/2014/02/nature-nepotism-and-sexism-in-peer-review.pdf

https://repository.upenn.edu/cgi/viewcontent.cgi?article=1389&context=fnce_papers

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4552397/

https://academic.oup.com/jole/article/1/2/163/2281905

https://www.researchgate.net/publication/262104586_Does_double-blind_review_favor_female_authors_Reply (has the full article in the thread)

I agree that open review post-acceptance would be useful; ideally, a review process preserves the anonymity that is essential during review but also carries accountability after the fact.

2 Likes

But what about artistic submissions? Should we have double-blind also for music and installation submissions? This is particularly tricky if we request video documentation for submissions.

1 Like

It is an interesting thought experiment, and perhaps also an interesting real experiment. It would highlight the practical difficulties and the complexity of the ethical issues around mandatory double-blind submission policies.

Note that I am not opposed to voluntary double-blind reviewing.

Having taken a look at some of the links, @astrid, I admit the evidence suggests that double-blind reviewing reduces bias. From my cursory scan, the main effect seems to be on first-author status, especially with respect to gender bias. I’ll have to study these more carefully.

I know my opinion is in the minority, but for what it’s worth here are my reservations about double-blind reviewing:

  • It interferes with some of the referee’s duties, namely checking for plagiarism and proper citation of past work.

  • Anonymization creates ambiguity about the completeness of the literature review because works may have been left out to satisfy the requirement of anonymity, rather than through ignorance or neglect.

  • It can also make it more difficult to detect simultaneous submission to multiple conferences.

  • In my experience, meta-reviewers do not know authors’ identities, which eliminates the possibility for a knowledgeable human to check for referee conflicts of interest. In general, conflicts of interest are much more likely to go undetected under double-blind reviewing.

  • In a small community like NIME you very often know the author’s identity even when it has been anonymized, so you are left with the above disadvantages of anonymization without the target advantage (bias reduction).

  • Perhaps it is just me, but double-blind reviewing seems to attempt to remove the human element. It feels Kafkaesque. This is why I think it is interesting to think about running the performance submission reviews double-blind: surely it is not only the technology or concept one is reviewing, but also the human as a performer. Why is this any less true when reviewing the work of a researcher? I know there is a problem with bias, but are there no other solutions? Raising consciousness, for example?

1 Like