Do we need to rethink peer reviewing?

Peer review is a core feature of the quality control and acceptance process for papers/music at the NIME conference. Up until now, this has been done once a year (typically February-April). All submissions are reviewed by 2-3 community members, and in recent years meta-reviewers have also been involved in helping the chairs make their final decisions.

While the current peer review system has served us well for many years, it is worth considering whether we should think differently about the review process. Perhaps it would be better to have a rolling submission process, followed by open, continuous peer feedback? This could then form the basis for the final peer review and selection for the conference.

The experience with the Slack channels during NIME 2020 showed that the Q&A sessions were very useful. Many presenters probably felt that they could have improved their submission based on that feedback. So perhaps we should open up a two-step submission system, where those who want to can resubmit their papers after the conference?

Together, these two steps (pre-conference peer feedback and post-conference revision) could improve the final publications coming out of the conference series?


This is an important and timely discussion to have.

On the two-step submission system – I think it could work very well, but it is additional work that we would be asking authors to engage with. Why should they do that when our agendas are constantly packed? :slight_smile: Maybe it would be more meaningful if associated with a NIME journal? In that case, authors would be motivated to dedicate further time to improving their paper in light of the feedback received.

On the rolling submission process – a big thumbs up here, but possibly not for all sorts of submissions. We might identify which sorts of submission would be meaningful to publish on a rolling basis. I believe some submissions would not benefit from an outside-of-the-conference rolling publication, particularly those likely to generate intense discussion. This year’s conference (especially due to its hybrid format) proved once more that the conference setting can offer space for in-depth discussion. Examples of submissions that could work well on a rolling basis:

  • Updates on existing NIMEs presented at previous editions
  • Evaluations of commercial DMIs (being timely on this can be very important)
  • Technical “innovations” that other community members could benefit from
  • Debates on hot topics
  • Book reviews (not sure)

These are interesting ideas. Regarding the rolling submission process, it might be worth piloting it in a dedicated submission category to test the waters and see how authors and reviewers handle the process. I see Fabio’s point regarding giving incentives to motivate further writing.

Speaking of incentives, I believe the work of the reviewers should somehow be rewarded, especially when it’s good. I have had experience both as a reviewer and as a meta-reviewer. As a reviewer, writing a critical and constructive review can take considerable time, and as a meta-reviewer I have at times seen very superficial reviews, which is unhelpful. A mechanism that rewards good reviewers would, in my opinion, help increase the quality of the papers. ISMIR has a best reviewer award if I remember right, but I don’t think that would make things any better. An idea could be that the meta-reviewer assigns a simple 1-to-5 rating to each review, and the top 10 (or however many) reviewers overall get acknowledged or rewarded somehow.
If the rolling submission process is not applicable to the main scientific track, I would still consider adding a rebuttal phase. It’s more work on the shoulders of authors and reviewers, but it might boost the overall quality of the papers.
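The rating-and-reward idea above could be sketched very simply. This is a hypothetical illustration, not an existing tool: it assumes the meta-reviewers’ 1-to-5 ratings are collected as (reviewer, score) pairs, averages them per reviewer, and returns the top n for acknowledgment.

```python
from collections import defaultdict

def top_reviewers(ratings, n=10):
    """Hypothetical sketch: average the 1-to-5 ratings each reviewer
    received across all their reviews, then return the n reviewers
    with the highest averages.

    `ratings` is a list of (reviewer_name, score) pairs, one per review.
    """
    scores = defaultdict(list)
    for reviewer, score in ratings:
        scores[reviewer].append(score)
    averages = {r: sum(s) / len(s) for r, s in scores.items()}
    return sorted(averages, key=averages.get, reverse=True)[:n]
```

In practice one would probably also want a minimum number of reviews per person before ranking, so a single lucky review doesn’t top the list.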


I guess the challenge today is that we start over each year with a fresh install of the conference management system and then recruit reviewers anew (although lists are passed on from year to year). I have started to wonder whether it would make sense to use a journal management system for the conference instead. Then we could think of each conference as an “edition” of a journal, and we could keep the reviewers in one large pool to draw from. That might also make it easier to track reviewers.


When it comes to recognizing the work of reviewers, I spent some time earlier this year compiling lists of all reviewers for all NIMEs. These are currently on GitHub; we just need a smart way of making them visible on the web page.

This does not say anything about the quality of the reviews, of course; that will need to be handled separately. I have two ideas here:

  1. Publish the reviewer names on each accepted paper/music. They do this at Frontiers (see e.g. this paper), and I have found that having your name affiliated with a published paper makes you write much more careful reviews.
  2. Go for a completely open peer review process, where both names and all comments are visible throughout. This is a more drastic move, though, and would make it impossible to keep the double-blind policy we currently operate under.

Would be interesting to hear other thoughts on how to improve things.


It would also be good to have an easy way to be added to or removed from this reviewer list. I declined to review for NIME 2019 and was taken off the list for NIME 2020, after several years as a meta-reviewer. There might also be people who want to update their areas of expertise, or who aren’t focusing on this field anymore.

Perhaps letting authors see the names of the reviewers AFTER the final decisions have been made would encourage more careful reviewing? A completely open peer review system really does risk an unequal process, and I’m not sure that’s the solution.


Sounds good to me, although in that case there would have to be a couple of additional phases in which the authors address the reviewers’ comments and the reviewers check the changes to the manuscript and finally endorse the paper for publication (just like at Frontiers). I guess the reviewers would want that if their names are to appear on the accepted version…


Yes, this would be more time-consuming for everyone, but it may be worth it if the final quality is better. It may also be easier to accomplish if we were to move to a post-proceedings format, with the final archived version being submitted and accepted some time after the conference. (At this point, the idea of a journal may not even be necessary any longer…)


All good ideas, and I’m glad this topic is coming up here. I’m not sure that a more elaborate review process is necessarily a good idea. But I do agree with Federico that some of the reviews I’ve seen as a meta-reviewer in past years were appalling.

A very simple way of encouraging reviewers to provide meaningful reviews is to set a minimum length for the review. Say, the review cannot be shorter than 300 words. If it is, the reviewer cannot submit it.

It may seem banal, and it is certainly no insurance against poor reviews, but you may have noticed that 90% of poor reviews are two sentences long, while most good reviews are about 250-400 words on average. So I do believe this would be an easy step to take, one that requires no extra work from reviewers or submitters.
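The check described above is trivial to enforce in a submission form. A minimal sketch, assuming the suggested 300-word threshold and naive whitespace-based word counting (a real conference system would hook this into its review submission handler):

```python
MIN_WORDS = 300  # hypothetical threshold, taken from the suggestion above

def review_long_enough(review_text, min_words=MIN_WORDS):
    """Return True if the review meets the minimum word count.

    Words are counted naively by splitting on whitespace, which is
    good enough for a submit-button gate like the one proposed here.
    """
    return len(review_text.split()) >= min_words
```

A form would simply refuse the submit action while this returns False, perhaps showing the remaining word count as a nudge.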

I have also noticed that submitters often do not actually change their papers, or do not address all of the reviewers’ major concerns. And then, when the final version is submitted, reviewers tend not to double-check, and we end up with a useless review process. I’m not sure how to address this, but it is perhaps another thing to think about.

I would also like to stress the importance of keeping independent artists and researchers on the list of reviewers and meta-reviewers. Creating a longer review process that extends even after the conference may be very productive for certain types of submissions, I agree. But, from my personal experience as an independent artist who has been part of the community for a long time (like many others), properly reviewing or meta-reviewing every year is a large effort, one I take on without a fixed salary covering my back, as most researchers have.

Don’t get me wrong, this is totally ok with me :slight_smile: and I’m far from wanting to recriminate. I just want to say that independent artists and researchers are important to NIME, perhaps more important than to other communities, and it’s useful to expand our thinking to take on their perspectives – and possibly involve them even more! :stuck_out_tongue:


Yes, I totally agree. Thanks for many good suggestions!


Thank you all for providing a platform for discussion. This is a great, useful effort!


Re: rolling submission process, this can be started informally for those who want feedback at an early stage of the submission process.

Something like this has already been taking place for decades in certain research communities (math, physics, and certain branches of comp. sci.) where authors will upload an early draft of an article to (or the like) and initiate discussion and feedback.
This greatly alleviates journal publication lag, which can be critical in competitive fields.

Anyone interested can of course start to do this independently of the formal NIME submission procedure.

And, btw, I think NIME authors could benefit from making more use of to host pre-prints and post-prints.

But that would also mean giving up anonymity? How would that work in a double-blind peer review process?

@alexarje I’m skeptical about the value of double-blind reviewing. Referees should certainly be anonymous, but I don’t think it helps to force authors to maintain anonymity. I’m not opposed to allowing people to submit anonymously, but I think it should be optional. And imo we should not automatically reject authors who wish to sign their work.

Try to imagine making the performance proposals artist-anonymous. Why should research submissions be different? What is the essential difference between the performance program submissions and article submissions that should require one to be anonymous and the other not?

In the case that authors are forced into anonymity, they can still upload a pre-print to if and when it is accepted for publication. That is what a lot of people do, in fact.

I’m not sure a completely open review process would be feasible.

A small number of authors become very emotional and irrational about their beloved article being rejected. However, their impact can far outweigh their rarity.

Publishing the names of the reviewers for accepted papers would be an interesting experiment, but I think you have to protect anonymity for the reviewers of rejected ones.


This platform seems to allow one to post anonymous manuscripts, obtaining commentary and circulating a manuscript anonymously while it is still in the publication process.


Interesting. There is not much activity in the links I looked at, though. Do you have experience with how it works?

I think OpenReview is pretty widely used in the machine-learning community; e.g., ICLR uses it (>2000 submissions, I think…). Here are the 2020 submissions:

You can see some of the lively discussion taking place, and that they use some anonymity in the process – reviewers are still anonymous, and maybe authors start out anonymous but become visible later?


Great link, thanks! Would it be an idea to have a two-step review process:

  1. Double-blind 3-reviewer style process like we have now, which leads to acceptance/rejection
  2. Open peer review of the accepted papers

In my thinking that would give us the best of both worlds?

@alexarje I just stumbled on it two days ago via a preprint I found through Google Scholar.
Yes, there is not much public commenting going on, but the interesting thing is that Google Scholar seems to scrape anonymized articles under review! The preprint in question is no longer anonymous, and it seems to retain the citations it picked up while anonymous.
