Friday, October 21st, 2022
9:00-10:15am CDT (UTC-5)

Is This (Panel) Good Enough for IEEE VIS?

Panel held at IEEE VIS 2022

Organizers: Robert S Laramee, Petra Isenberg, Tobias Isenberg

“The academic review process is broken” is a statement one often reads or hears. After getting reviews back from the IEEE VIS conference, likely 75% (or so) of us agree. But is it really? The goal of this panel is to discuss the review process of the visualization community (broader than just IEEE VIS) and to brainstorm ways to improve it, or to come to the conclusion that everything is fine. There are several concrete reasons why we believe this discussion would be helpful:

It is difficult to find (qualified) reviewers.

What incentives could be put in place to encourage people to accept reviewing duties? Or how can the reviewing load on our community be lowered?

Disagreement on acceptable contributions.

  • Are null results acceptable, assuming the experiment was performed correctly?

  • Is it acceptable to reject a paper on the grounds that "the contributions aren't strong enough for IEEE VIS"?

  • Is X (e.g., a literature review) an acceptable contribution for VIS papers, and can and should reviewers, and in particular PC members, be held accountable to such a standard?

  • Should user study papers require institutional ethics approval and pre-registration or not?

Noise and randomness in what should be an objective assessment of research

In light of the NeurIPS experiment of 2014 [1], which highlighted the noisiness of the review process, and its repetition in 2021 [2]:

  • How noisy is the review process in our community, and what consequences does this have, especially for early-career researchers?

  • Since NeurIPS hasn't found a way to address the aforementioned noisiness, is the review process truly broken, or are there innovative ideas we can experiment with in the VIS community?

  • Would a journal-first or journal-only process for submissions be able to address the noise issue, or would it cause other changes that we may not want to see (more papers to present, different types of papers to present, even more concentration on IEEE VIS as a venue, even further growth of the conference)?

Changes in the review processes of other venues

Is it worthwhile to consider an OpenReview-style system (reviewers remain anonymous, but reviews are published on a platform such as https://openreview.net/), as is common in some other communities, in order to facilitate:

  • analysis: open up the reviews for data analysis

  • transparency: show the variety of the collected reviews

  • education: help future authors better understand the criteria and culture of the VIS community.

The visualization field is growing

How do we deal with the growth of the field and conference in general and ensure that we remain sustainable?

  • What can we learn from other fields that are growing similarly or even faster (e.g., NeurIPS)?

  • More tracks, even less time for presentations, a stronger focus on full papers, ...? Which of these strategies serves us best as a community?


We ask all these questions not only because we struggle with these points as authors, but also because we struggle with them as seasoned reviewers. We believe these are difficult questions that don't have simple answers. For that reason, it would be good to keep the conversation going in the open and to crowdsource solutions.

Speakers

Cody Dunne

Northeastern University

Alexander Lex

University of Utah

Melanie Tory

Northeastern University

Torsten Möller

University of Vienna

Alvitta Ottley

Washington University in St. Louis

Venue

Room TBD and online

The panel will be held in a hybrid format. It is part of IEEE VIS 2022, held in Oklahoma City, and will also be available to participants who registered for remote participation.