Open Space

“Open Space” is a technique for running meetings where the participants create and manage the agenda themselves. Participants propose ideas that address the open space topic; these are divided into sessions that all other participants can join to brainstorm. At the end of the CiML workshop we organized an open space meeting. It was great to brainstorm with the participants about the topics that they proposed.

Open Space Topic of the CiML Workshop

The Organization of Challenges for the Benefit of More Diverse Communities

Agenda

Time slot: 16:30 – 17:00

Session: A

Session Title: How can we organize challenges that encourage collaboration?

Convener: Adriënne Mendrik

Discussion Summary

We discussed whether we could use gamification to encourage collaboration, as is sometimes done in board games. Currently, the incentives for participating in challenges are often tailored to competition (the winner gets prize money). In the first phase of a competition people are still willing to share code and help each other, but towards the last phase of a challenge they become more protective of their code and tend to collaborate less, because they want to win the prize money. This raises the question of what other incentives could encourage collaboration and building upon each other's knowledge. Can we foster a "yes, and..." attitude, using each other's results and building upon them instead of competing? To make this possible, submissions need to be open (open source) and re-usable, so that others can gain knowledge from them and build on top of them.

In order to get high-quality open source submissions, people need to get credit for their work (a type of scientific contribution other than a paper). This would encourage people to spend more time on writing high-quality open source software or even on organizing challenges. Today it is mostly volunteer work, done largely in people's spare time. Open submissions also leave more room to assess submissions in different ways, for example by giving credit to the most original solution (using a jury). Maybe we can also learn from adventure games how to encourage exploration and creativity in challenges. Another option is to award extra points for an answer that nobody else has, or credit for using the resources of others (encouraging re-use), or, the other way around, extra credit when others re-use your software, method, or idea (measuring impact); a sketch of such a scoring rule follows below. We should be careful, however, that this does not become a goal in itself (like leaderboard climbing). We could perhaps also learn from the set-up of a marathon, where you get credit for finishing, even if you did not manage to be the fastest.
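To make the "answer that nobody else has" incentive concrete, here is a minimal, hypothetical sketch of a scoring rule that rewards rarity. The function name and the representation of submissions as per-task answer sets are our own assumptions, not part of any existing challenge platform.

```python
from collections import Counter

def uniqueness_bonus_scores(solved, base_point=1.0):
    """Score teams so that solving a task few others solved earns more.

    `solved` maps each team to the set of task ids it answered correctly.
    A task solved by k teams is worth base_point / k to each of them,
    so an answer nobody else has earns the full point.
    """
    # Count how many teams solved each task.
    counts = Counter(task for tasks in solved.values() for task in tasks)
    return {
        team: sum(base_point / counts[task] for task in tasks)
        for team, tasks in solved.items()
    }

# Example: team "b" is the only one to solve task 3 and is rewarded for it.
scores = uniqueness_bonus_scores({
    "a": {1, 2},
    "b": {1, 2, 3},
    "c": {1},
})
print(scores)  # {'a': ~0.83, 'b': ~1.83, 'c': ~0.33}
```

A re-use credit (extra points when others build on your submission) could be layered on top in the same way, as long as the platform tracks which submissions depend on which.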

Since there are different types of people, it might make sense to organize different types of challenges that address the same problem from different angles, for example one challenge with a competition format and one with a collaboration format. It could even be a scientific experiment to see which format yields the best results.

The question is also whether the prize money and the deadline set for a challenge result in the right motivation for people to participate. This set-up is also less healthy for the work-life balance: people tend to spend nights trying to get the best results, and one could wonder whether this moves science forward. The best ideas usually come when you are well rested and relaxed, and can sit back to think about the problem. Can we look at other fields for this? For example, the Beyond Budgeting method was set up by CFOs in companies who noticed that the bonus system (getting money for reaching a certain target, e.g. a number of sales) decreased intrinsic motivation and increased competition within the company, which was bad for the company. They therefore proposed to separate targets and rewards: setting long-term targets that were important for the company and that people wanted to work on (intrinsic motivation), and evaluating people every year based on how they performed in context (colleagues, economic situation, and so on). In challenges we would like to have people who are passionate about solving the problem, because this usually leads to better solutions and more collaboration than when people participate only to win prize money. So the question is: is there a way to set a long-term target and evaluate people after certain time slots in context, separating targets and metrics, to promote collaboration?

Time slot: 16:30 – 17:00

Session: C

Session Title: Automatic post-challenge ablation study

Convener: Zhen Xu

Discussion Summary

One of the key contributions of a challenge is to show how much progress we have made towards solving the problem. However, post-challenge ablation studies have been painful. Even in a code submission challenge (e.g. AutoDL), submissions arrive as one big monolithic file without much modularity, which makes analysis inconvenient.

As Frank Hutter proposed in his talk as well as in the open space discussion, submissions should be disentangled into modules, say N sequential modules in total. We could then run N competitions in parallel, such that participants can choose the parts they are interested in without worrying about the other modules. After these challenges finish, we have N disentangled modules that can be analyzed in an automatic way.
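A minimal sketch of how such modular submissions could enable automatic ablation, assuming a fixed per-module interface (here simply a function from data to data). The names and the interface are illustrative assumptions, not the AutoDL API or Frank Hutter's exact proposal.

```python
from typing import Callable, Dict, List

# A module is any transformation with the agreed-upon interface.
Module = Callable[[list], list]

def run_pipeline(modules: List[Module], data: list) -> list:
    """Apply the N sequential modules in order."""
    for module in modules:
        data = module(data)
    return data

def ablation_study(baseline: List[Module],
                   candidates: Dict[int, Module],
                   data: list,
                   score: Callable[[list], float]) -> Dict[int, float]:
    """Swap one candidate module into the baseline at a time and score it."""
    results = {}
    for position, candidate in candidates.items():
        pipeline = list(baseline)
        pipeline[position] = candidate
        results[position] = score(run_pipeline(pipeline, data))
    return results

# Toy usage: a three-stage pipeline (clean -> featurize -> predict).
baseline = [
    lambda xs: [x for x in xs if x is not None],  # module 0: cleaning
    lambda xs: [x * 2 for x in xs],               # module 1: featurizing
    lambda xs: [x + 1 for x in xs],               # module 2: prediction
]
candidates = {1: lambda xs: [x * 3 for x in xs]}  # alternative featurizer
print(ablation_study(baseline, candidates, [1, None, 2], score=sum))
```

The key design choice is the fixed interface per module: it is what allows modules from different participants to be recombined and ablated without manual surgery on monolithic submission files.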

What we also find interesting is that, if we run these N modules within one challenge in a discrete way, i.e. provide finitely many choices per module and ask participants to submit configurations of these module choices, we naturally get a meta-learning trajectory dataset at little extra cost. We believe this idea could be polished to collect meta datasets.
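A hedged sketch of how such a meta dataset could fall out of the evaluation loop, assuming finitely many choices per module. The module names and the `evaluate` placeholder are our own assumptions standing in for the challenge's real scoring pipeline.

```python
import itertools

# Finite choices per module, as discussed above (names are illustrative).
choices_per_module = {
    "preprocessing": ["standardize", "none"],
    "model": ["cnn", "tree", "linear"],
    "postprocessing": ["calibrate", "none"],
}

def evaluate(config):
    # Placeholder for running the composed pipeline on the challenge task.
    return hash(frozenset(config.items())) % 100 / 100.0

# Every evaluated configuration becomes a (configuration, score) record.
meta_dataset = []
keys = list(choices_per_module)
for combo in itertools.product(*(choices_per_module[k] for k in keys)):
    config = dict(zip(keys, combo))
    meta_dataset.append((config, evaluate(config)))

# Collected over participants and tasks, these records form the meta
# dataset for meta-learning, at no cost beyond the evaluations the
# challenge runs anyway.
print(len(meta_dataset), meta_dataset[0])
```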