We will begin with a short tutorial that organizes and summarizes the workshop submissions and discusses some state-of-the-art advances in conference organization and peer review. The submitted ideas will then be clustered to form working groups for focused discussion. Each group will produce a handful of actionable items for the IFAAMAS board, along with a summary of its discussions that may be of interest to conferences in general; we will post these summaries online as our informal workshop proceedings. Groups will have opportunities to present their progress and get feedback from the other groups. Participants may also switch between groups, and each group will have a designated leader to moderate the discussion.
Finalized Schedule:
8:30am-9:15am - General Introduction, Tutorial, and Introduction to Group Topics
9:15am-10:00am - Group Discussion 1
10:00am-10:30am - Coffee Break
10:30am-10:40am - Progress Report 1
10:40am-11:30am - Group Discussion 2
11:30am-11:45am - Progress Report 2
11:45am-12:30pm - Compile Feedback/Whitepaper per Group and Determine Next Steps
A preliminary virtual meeting was held on May 8, 2025 on Gather: https://app.gather.town/app/Q12lGEa1ZQhsCIuD/PiPER%20preliminary%20meeting
The Gather space still contains links to the note-taking/planning documents for each of the four discussion groups below, and will remain accessible (but password-protected) to PIPeR participants after the workshop.
Proposed Use of LLMs in Reviewing
Matthew E. Taylor, Joydeep Biswas
A Method to Detect LLM-Generated Peer Reviews, and its Evaluation
Vishisht Rao, Aounon Kumar, Himabindu Lakkaraju, Nihar B. Shah
Accounting for Truthful Self-Assessment in Peer Review
Weijie J. Su, Clayton Thomas, Jibang Wu, Haifeng Xu
Designing Rules to Pick a Rule: Aggregation by Consistency
Ratip Emin Berker, Ben Armstrong, Vincent Conitzer, Nihar B. Shah
Strengthening Peer Review: Two Practical Suggestions
Hadi Hosseini, Debmalya Mandal
PIPeR Questionnaire
Jérôme Lang
Review Allocation With Author Information
Yichi Zhang, Grant Schoenebeck
Simple Measures to Improve Accountability in Peer Review for Large AI Conferences
Ulle Endriss
Reviewer Data Storage and Sharing Agreement among AAMAS and Related Conferences
Justin Payan
Detecting Collusion in Peer Review with Game-Theoretic Approach
Rica Gonen, Asaf Samuel
Emphasize Correctness and Increase Acceptance Rates
Nihar B. Shah
Decomposing the Review Process: a Two-Part Submission System
Eugenio Bargiacchi, Roxana Rădulescu, Mehrdad Asadi, Paolo Speziali, Diederik M. Roijers
From Crowds to Codes: Can Adaptive Peer Review Help?
Fang-Yi Yu, Xingbo Wang, Yichi Zhang
Who Benefits the Most? Analyzing the Effects of Two-Stage Peer Evaluation
Roy Fairstein, Harper Lyon, Oshri Damty, Omer Lev, Nicholas Mattei, Kobi Gal