Workshop on INCENTIVES for peer-review

Co-located with ESWC 2019 - Portoroz - Slovenia

2nd or 3rd June 2019

Why incentives for peer-review?

Peer-reviewing is one of the most important tasks of the research cycle, as it determines which research works have enough merit to be disseminated (i.e. published) and which do not. However, reviewing is indisputably the least rewarded task of the cycle; reviews commonly end up buried in submission systems. Even where reviews are open, as for the Semantic Web Journal and for ESWC 2018 and 2019, they are seen as a mechanical part of the scientific process rather than as scientific outputs in their own right.

The lack of appropriate incentives for reviewing creates three problems:

  1. Availability of reviewers. If the incentive is so small, why should I agree to do a review? As a consequence, a research contribution might be blocked for months waiting for a reviewer, or assigned to someone less qualified to review it.
  2. Timeliness of reviews. If the incentive is so small, why should I give a review priority on my to-do list?
  3. Quality of reviews. If the incentive is so small, why should I devote more than X minutes to it? As a consequence, either flawed contributions are accepted, or contributions are rejected without proper feedback.

As a scientific community, the Semantic Web community is interested in improving the Availability, Timeliness and Quality (ATQ) of reviews. As heralds of Web technologies, we possess expertise in the standards, tools and technologies that could hold the key to realising such improvements. The aim of this session is therefore to foster discussion about how to improve the ATQ of reviews in the Semantic Web community, and potentially in other communities as well.

The subject has been discussed at conferences on Scholarly Communication, for example FORCE11 and the VIVO conference. Recently, a series of events organised by the OpenUp project and the DARIAH initiative have also touched upon the subject in the framework of open science.

What we want to discuss

All members of the community are invited. We would like to drive the discussion around the following questions:

What is the value of a review for a reviewer? To what real-world incentives should it map to improve ATQ?

  • Social ("review-metrics" as a complement to bibliometrics, badges, diplomas, etc.)
  • Concrete (registration discounts, reserved seats at keynotes, etc.)
  • Unit of effort (to get a review from the community, you first need to have done, or commit to doing, a certain number of reviews; a minimal sketch of this option follows the list)
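
To make the unit-of-effort option concrete, here is a minimal sketch in Python of how a venue might check an author's review balance before accepting a submission. All names, weights and thresholds are our own assumptions for the sake of discussion, not an agreed design.

```python
# Hypothetical sketch of a "unit of effort" rule: an author must have
# completed (or committed to) enough reviews before submitting.
from dataclasses import dataclass, field

@dataclass
class ReviewLedger:
    completed: dict = field(default_factory=dict)  # reviewer -> reviews done
    committed: dict = field(default_factory=dict)  # reviewer -> reviews promised

    def credit(self, reviewer: str, done: int = 0, promised: int = 0) -> None:
        self.completed[reviewer] = self.completed.get(reviewer, 0) + done
        self.committed[reviewer] = self.committed.get(reviewer, 0) + promised

    def balance(self, reviewer: str) -> int:
        # Assumption: a promised review counts at half the weight of a completed one.
        return self.completed.get(reviewer, 0) + self.committed.get(reviewer, 0) // 2

REVIEWS_PER_SUBMISSION = 3  # assumed policy: each submission "costs" three reviews

def may_submit(ledger: ReviewLedger, author: str) -> bool:
    return ledger.balance(author) >= REVIEWS_PER_SUBMISSION

ledger = ReviewLedger()
ledger.credit("alice", done=2, promised=2)
print(may_submit(ledger, "alice"))  # True: 2 completed + 2 promised // 2 = 3 credits
```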

Conversely, what is the value of a review for an editor/venue/publisher? What are they willing to invest to get availability, timeliness and quality of reviews?

What is the value of a review for the community and to funders/employers? Where do reviews fit into the scientific process, and what should their status be as scientific outputs?

Reviews express scientific content and argument, and the review history of a paper contributes to an understanding of the methods, issues and development of scientific ideas. How should the role of reviewing be treated within the research ecosystem and how can communities and funders ensure that this happens?

Depending on the chosen incentive, what are the required interactions?

  • For social incentives: metric computation, thresholds for earning badges, etc.
  • For concrete and unit-of-effort incentives: how are they redeemed? How are they exchanged among venues? Are there exchange rates among venues? (A sketch of such mechanics follows this list.)
  • For any incentive: how does it interact with closed or open review venues? How would cross-venue incentives work with mixtures of closed and open review?
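
As a strawman for the redemption and exchange questions above, the following Python sketch shows one way cross-venue exchange rates and badge thresholds could be expressed. The venue pairings, rates and cut-offs are invented purely for illustration.

```python
# Hypothetical cross-venue credit exchange and badge thresholds.
# All venue names, rates and cut-offs below are invented for illustration.
from typing import Optional

EXCHANGE_RATES = {                  # credits obtained when moving between venues
    ("ESWC", "ISWC"): 1.0,          # reviews valued equally between these venues
    ("ESWC", "SomeJournal"): 0.5,   # a conference review counts half at the journal
}

BADGE_THRESHOLDS = {"bronze": 5, "silver": 15, "gold": 40}  # assumed cut-offs

def convert(credits: float, src: str, dst: str) -> float:
    """Convert review credits earned at venue src into venue dst credits."""
    if src == dst:
        return credits
    if (src, dst) in EXCHANGE_RATES:
        return credits * EXCHANGE_RATES[(src, dst)]
    if (dst, src) in EXCHANGE_RATES:
        return credits / EXCHANGE_RATES[(dst, src)]
    raise ValueError(f"no exchange rate agreed between {src} and {dst}")

def badge(total_credits: float) -> Optional[str]:
    """Return the highest badge earned so far, if any."""
    earned = [b for b, t in BADGE_THRESHOLDS.items() if total_credits >= t]
    return max(earned, key=BADGE_THRESHOLDS.get, default=None)

print(convert(10, "ESWC", "SomeJournal"))  # 5.0
print(badge(16))                           # 'silver'
```

Even this toy version surfaces the governance questions above: who sets the rates, and who hosts and audits the ledger?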

Where to host and run such a system?

How to contribute

We invite the submission of 1-2 page position/vision statements on the following topics (or others, if you motivate their relevance):

  • How much is a review worth to you as a reviewer? What would help incentivise you to accept a review request?
  • How much is a review worth to you as a chair/editor?
  • How much is a review worth to you as a member of the community? What would encourage communities, funders and employers to value a review in this way?
  • What interactions does the incentive you have in mind require?
  • What technological, scientific or social hurdles need to be overcome to implement your idea?
  • If you are a company or startup in this space, we would love to hear how you solve these problems, and we would love to see a demo.

No format constraints. We love HTML submissions!

Send or submit by 03/05/2019 (so that we can plan the best discussion structure based on the contributions).

(SUBMISSION LINK TO BE ANNOUNCED SHORTLY)


Program

  • TBD

Organizers