Frequently Asked Questions

What are the similarities and differences between regular submissions and blue sky submissions?

When will regular papers be visible on OpenReview?

Before the submission deadline (18-June), no papers will be visible.

Between 18-June and 30-June, regular papers that do not comply with the guidelines and format will be desk rejected.

By 30-June, all submitted papers will become publicly visible. The papers will be anonymized.

During the review period, between 30-June and 16-Aug, the public is free to comment on the papers.

On 16-Aug, when the rebuttal period opens up, anonymized reviews will be made public.

During the rebuttal period, between 16-Aug and 30-Aug, the authors and reviewers should discuss the reviews. The authors are free to update the paper, and the reviewers are expected to review the new versions and update the reviews and scores.

After the author notification on 17-Sep, the accepted papers will be de-anonymized, and the rejected papers will be removed from the public domain.

Will I be able to update the paper during the response period?

Yes.


What are the criteria for desk rejecting regular papers based on scope?

A regular paper should demonstrate its relevance to robot learning through

  • Intent: Explicitly address a learning question for physical robots OR

  • Outcome: Test the proposed learning solution on physical robots.

Papers that do not meet these criteria may be desk rejected.

Rejection examples:

  • No learning: Manually designing and tuning the performance of a robot controller without the use of learning.

  • No learning: A search algorithm for model-based planning.

  • No robotics: A generic result on sample complexity.

  • No robotics: A generic RL algorithm.

  • Little robotics: Improved performance on a standard CV dataset, e.g., ImageNet recognition.

Gray area:

  • An RL algorithm that works only in simulator X. Does it transfer to real robot learning (sim2real, data efficiency, …)? Yes for CARLA for autonomous driving; no for Cheetah/Humanoid in MuJoCo. According to our stated principles, the submission satisfies the intent criterion. Whether it succeeds in demonstrating relevance will be determined during the review process.


What is the maximum size of the supplemental material? Can I add links to additional resources?

  • 100 MB

  • Links to additional resources are fine, but reviewers are under no obligation to look at them. Please make sure that the additional resources do not accidentally reveal your identity.


I can't register an account, and the deadline is approaching. Help.

  • If one of your co-authors cannot register with OpenReview, and the deadline is approaching, please contact Program Chairs (corl2021-pc@googlegroups.com) for assistance.

Can I change the author order after the submission deadline? How about adding a new author?

  • Changing the author order is permitted. Adding or removing authors after the paper submission deadline is not allowed. Under exceptional circumstances, please reach out to the program chairs.


What are the criteria for desk rejecting regular papers?

Process: ACs will identify desk-rejection candidates, using one of the criteria below as justification. PCs will examine the candidates and make the final decision. We will err on the side of caution and only desk reject papers when there is consensus among all PCs and the AC.

A paper can be desk rejected for one of three reasons: formatting issues, anonymity violation, or scope.

Formatting issues -- the paper is either too long or in an incorrect format.

Anonymity violation -- the main manuscript, supplemental materials, or a link provided in a paper identifies one or more of the authors.

Scope -- all CoRL submissions must demonstrate their relevance to robot learning through

  • Intent: Explicitly address a learning question for physical robots OR

  • Outcome: Test the proposed learning solution on physical robots.

  • Rejection examples:

    • No learning: Manually designing and tuning the performance of a robot controller without the use of learning.

    • No learning: A search algorithm for model-based planning.

    • No robotics: A generic result on sample complexity.

    • No robotics: A generic RL algorithm.

    • Little robotics: Improved performance on a standard CV dataset, e.g., ImageNet recognition.

  • Gray area:

    • An RL algorithm that works only in simulator X. Does it transfer to real robot learning (sim2real, data efficiency, …)? Yes for CARLA for autonomous driving; no for Cheetah/Humanoid in MuJoCo. According to our stated principles, the submission satisfies the intent criterion. Whether it succeeds in demonstrating relevance will be determined during the review process.

How do I communicate with the ACs / reviewers / authors?

The preferred mode of communication between ACs, reviewers, and authors is through comments on the papers. Note that comments can be assigned different visibility levels via the readers field.
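
For those who prefer scripting over the web interface, here is a minimal sketch of posting such a comment with the openreview-py Python client (API v1). The invitation and group IDs below are hypothetical placeholders, not the exact CoRL 2021 identifiers; the readers field is what determines who can see the comment.

    import openreview

    # Log in with your OpenReview credentials (placeholder account).
    client = openreview.Client(baseurl='https://api.openreview.net',
                               username='you@example.com', password='...')

    # The readers field controls visibility; here the comment is visible only
    # to the paper's area chairs and the program chairs.
    # Group and invitation IDs are illustrative placeholders.
    comment = openreview.Note(
        invitation='robot-learning.org/CoRL/2021/Conference/Paper123/-/Official_Comment',
        forum='FORUM_ID',       # id of the submission's forum page
        replyto='FORUM_ID',     # reply directly to the submission
        readers=['robot-learning.org/CoRL/2021/Conference/Paper123/Area_Chairs',
                 'robot-learning.org/CoRL/2021/Conference/Program_Chairs'],
        writers=['YOUR_ANONYMOUS_GROUP_ID'],     # your anonymized author/reviewer group
        signatures=['YOUR_ANONYMOUS_GROUP_ID'],
        content={'title': 'Comment', 'comment': 'Your message here.'})
    client.post_note(comment)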


I am an area chair and would like to assign reviewers. How do I do that?

Preliminary assignments are made from the AC-recruited pool of reviewers who agreed to review. We expect that the majority of the existing assignments will not need modification (roughly 90%). That said, there might be situations where a specific paper would benefit from a specific reviewer, or where the automated fit is not good; in these cases you are welcome to change the assignment to another reviewer from the pool or from outside the pool.


To do so, use the Edge Browser (Modify Reviewer Assignments) from the Area Chair Console.

I am a reviewer, and I am not qualified to review a submission. Who do I contact?

Contact the submission's area chair by posting a comment on the submission's page. Please make it readable to the area chair and the PCs. Minor adjustments to the reviewer assignments will be made until 30-June.

Should we wait for the final versions of the supplemental material before flagging papers for desk-reject consideration?

No, not necessarily.


The goal and spirit of desk-reject flagging is to enter the review phase with submissions relevant to robot learning and to enable a fair and unbiased review process. The goal is not to penalize authors for omissions and issues that can be mitigated before the review process starts. Likewise, we are not looking to reduce the number of submissions that enter the review phase by quota.


Content and formatting concerns do not apply to the supplementary material. The only way the supplementary material can make a paper a desk-reject candidate is a violation of anonymity.