Workshop held in conjunction with 
ICSE 2016                   
May 16, 2016             
Austin, Texas, USA

Keynotes

We are delighted to announce that CSI-SE 2016 will feature two excellent keynotes: Dave Messinger, VP of Product Architecture at TopCoder, and Anand Kulkarni, Founder and Chief Scientist at LeadGenius.

Dave Messinger (TopCoder)

Dave Messinger is VP of Product Architecture at TopCoder. More information coming soon!

Anand Kulkarni (LeadGenius)

Anand is founder and Chief Scientist of LeadGenius, a Y Combinator-, Sierra Ventures-, and Andreessen Horowitz-backed startup using human computation to automate sales at scale. Built on the MobileWorks crowd architecture, LeadGenius applies fair and ethical principles to help crowds of workers find work in the online economy while letting sales teams grow their businesses.

Anand was named to Forbes Magazine's "30 Under 30" list of top entrepreneurs. He has published over a dozen papers in ACM and IEEE magazines, journals, and conferences, and previously held a National Science Foundation graduate research fellowship in mathematics. He holds degrees in Industrial Engineering and Operations Research, Mathematics, and Physics from UC Berkeley.

Program

 Time         Description

 08:50-09:00  Welcome & opening

 09:00-10:00  Keynote: Dave Messinger, VP of Product Architecture at TopCoder

 10:00-10:30  Paper Session 1

              Predicting Questions' Scores on Stack Overflow
              Haifa Alharthi, Djedjiga Outioua and Olga Baysal

 10:30-11:00  Coffee Break

 11:00-12:00  Keynote: Anand Kulkarni, Founder and Chief Scientist at LeadGenius

              Crowds in the Compiler: Augmenting Developers with Programmers in the Crowd

              Crowd computing asks how software can be used to program crowds of people, and how crowds of people can amplify our ability to program. At LeadGenius, we've experimented with using code-savvy members of our international virtual crowd to accelerate the development of our own crowdsourcing platform, writing code virtually alongside our engineers. I'll share results and observations from experiments in multiple scenarios, ranging from embedded IDE enhancements to stub-function converters to direct collaborations between crowd members and engineers. Finally, we'll discuss how results from research into pair programming, agile, and other best practices in conventional software engineering inform and shape our results on using the crowd within programming environments.

 12:00-12:30  Paper Session 2

              Exploring Crowd Consistency in a Mechanical Turk Survey
              Peng Sun and Kathryn Stolee

 12:30-14:00  Lunch Break

 14:00-15:30  Paper Session 3

              Measuring User Influence in Github: The Million Follower Fallacy
              Ali Sajedi Badashian and Eleni Stroulia

              Linking Usage Tutorials into API Client Code
              Naihao Wu, Daqing Hou and Qingkun Liu

              EARec: Leveraging Expertise and Authority for Pull-Request Reviewer Recommendation in GitHub
              Haochao Ying, Liang Chen, Tingting Liang and Jian Wu

              Task Allocation for Crowdsourcing using AI Planning
              Leticia Machado, Felipe Meneguzzi, Rafael Prikladnicki, Cleidson De Souza and Erran Carmel

              Toward Microtask Crowdsourcing Software Design Work
              Edgar R.Q. Weidema, Consuelo López, Sahand Nayebaziz, Fernando Spanghero and André van der Hoek

 15:30-16:00  Tea Break

 16:00-17:30  Breakout Group Discussions


Call for papers

A number of trends under the broad banner of crowdsourcing are beginning to fundamentally disrupt the way in which software is engineered. Programmers increasingly rely on crowdsourced knowledge and code, looking to Q&A sites for answers or reusing publicly posted snippets. They play, compete, and learn with the crowd, engaging in programming competitions and puzzles alongside crowds of programmers. Online IDEs make possible radically new forms of collaboration, allowing developers to program synchronously with crowds of distributed programmers. Programmers' reputations are increasingly visible on Q&A sites and public code repositories, opening new possibilities in how developers find jobs and companies identify talent. Crowds of non-programmers increasingly participate in development, testing software usability or even constructing specifications while playing games. Crowdfunding democratizes choices about which software is built, broadening the range of software that can feasibly be constructed. Approaches for crowd development seek to microtask software development, dramatically increasing participation in open source by enabling software projects to be built through casual, transient work.

CSI-SE seeks to understand how crowdsourcing is shaping and disrupting software development, shedding light on the opportunities and challenges. We encourage submissions of studies, systems, and techniques relevant to the application of crowdsourcing (broadly construed) to software engineering.

Topics of interest

Topics of interest include, but are not limited to:

  • Techniques for performing software engineering activities using microtasks
  • Techniques and systems that enable non-programmers to contribute to software projects
  • Open communities and systems for sharing knowledge such as Q&A sites
  • Techniques for publicly sharing and collaborating with snippets of code
  • Web-based development environments
  • Systems that collect and publish information on reputation
  • Techniques for reducing the barriers to contribute to software projects
  • Crowd funding software development
  • Programming competitions and gamification of software development
  • Techniques for motivating contributions and ensuring quality in systems allowing open contribution

Workshop organization

CSI-SE is a one day workshop composed of four sessions. A morning session will be devoted to invited talks by leaders in crowdsourcing in software engineering. Two paper sessions will provide opportunities for authors to disseminate their work and interact with other researchers working in the area of crowdsourcing in software engineering. The workshop will close with a highly interactive discussion session.

Submissions

CSI-SE welcomes three types of paper submissions:

  • Full papers (max. 7 pages): in-depth studies, experience reports, or tools for crowdsourcing, including an evaluation. These submissions should describe new work relevant to crowdsourcing for software engineering.
  • Short papers (max. 4 pages): early ideas with appropriate justification, preliminary tool support, or short studies that highlight interesting initial findings.
  • Research notes (max. 2 pages): short contributions that may present more speculative ideas than the other two types. Sound reasoning is important, but no full justification or evaluation of ideas is necessary. This type of submission is intended to encourage novel and visionary contributions that have not yet been developed in depth.

Papers should follow the formatting guidelines for ICSE 2016 submissions. Note that the page limits include references. Papers should be submitted through the EasyChair submission system.

Each paper will be reviewed by three members of the program committee. Accepted papers will be published in the ICSE 2016 Workshop Proceedings in the ACM and IEEE Digital Libraries and presented at the workshop. Papers must present novel material and must not be under review elsewhere at the time of submission. The official publication date of the workshop proceedings is the date the proceedings become available in the ACM Digital Library; this date may be up to two weeks prior to the first day of ICSE 2016 and affects the deadline for any patent filings related to published work.

Important dates

Submissions due            January 29, 2016
Notification to authors    February 19, 2016
Camera-ready copies due    February 26, 2016
Workshop                   May 16, 2016

Previous workshops