Open source communities are a distinct form of organizing, and, in this paper, we investigate the presence of location-based agglomeration effects in these communities. While an open source community clearly spans multiple geographic locations, we do not yet know whether location-based agglomeration effects operate within it. We examine this question by investigating whether new and spin-off project initiations are more concentrated in geographic locations that already host many prior projects, even after accounting for those locations’ endowments. Using data from a popular open source community (GitHub) spanning 2008 through 2011, we find not only agglomeration effects but also agglomeration diseconomies beyond a certain level of concentration. Our findings have implications for research on open source communities and agglomeration effects, in addition to generating practical implications.
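The abstract does not report the authors’ estimation approach, but the pattern of agglomeration effects followed by diseconomies is the kind of inverted-U relationship that is often probed with a count model containing a quadratic term. Purely as an illustrative sketch, and not the paper’s actual specification, the following Python snippet fits a Poisson regression of new project initiations on the stock of prior projects, its square, and a hypothetical endowment control (all variable names and data here are invented):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic region-level data; all names and values are illustrative only.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "prior_projects": rng.integers(0, 200, size=n),  # stock of earlier initiations
    "endowment": rng.normal(size=n),                  # e.g. a developer-population proxy
})
# Simulated inverted-U outcome: rises with prior projects, then tails off.
rate = np.exp(0.5 + 0.02 * df.prior_projects
              - 0.00008 * df.prior_projects**2
              + 0.3 * df.endowment)
df["new_projects"] = rng.poisson(rate)

# Poisson model with a quadratic term; a negative squared coefficient would be
# consistent with agglomeration diseconomies beyond some level of concentration.
model = smf.poisson(
    "new_projects ~ prior_projects + I(prior_projects ** 2) + endowment",
    data=df,
).fit()
print(model.summary())

# Turning point of the inverted U (where the marginal effect changes sign).
b1 = model.params["prior_projects"]
b2 = model.params["I(prior_projects ** 2)"]
print("estimated turning point:", -b1 / (2 * b2))
```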
REFLECTIONS ON A ‘HUMAN-IN-THE-LOOP’ TOPIC MODELLING (HLTM) APPROACH TO SYSTEMATIC LITERATURE REVIEWS
Ever-increasing volumes of publication data threaten to exceed the limits of human cognition and manual processing. In light of these challenges, new techniques have been called for that extend human capabilities through machine memory and computational power. This paper reflects on a novel "human-in-the-loop" topic modelling approach to systematic literature reviews that combines Latent Dirichlet Allocation (LDA) and human coding to identify key constructs, relationships, and outcomes. Our approach begins by modelling hidden semantic structures through topic modelling to gain insight into embedded patterns and topic compositions in the research. Thereafter, we diverge from traditional LDA-only analyses and use human coding to extract contextual insights into the socio-technical components of DevOps. Finally, we re-run topic modelling against these socio-technical categories, affording us the opportunity to theorise further. We discuss the learnings that emerge at each of the three phases as the analysis moves between topic modelling and human coding.
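The abstract describes the three phases at a conceptual level only. As a minimal sketch of how such a pipeline can be wired together, assuming scikit-learn for the LDA step (the paper does not name its toolkit, and the corpus, topic count, and category labels below are hypothetical):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Phase 1: fit LDA on a (hypothetical) corpus of paper abstracts.
abstracts = [
    "continuous delivery pipelines automate build test and release stages",
    "team culture and shared responsibility shape devops adoption",
    "monitoring tools provide feedback on deployment performance",
]
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_terms = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {top_terms}")

# Phase 2: human coding (done manually in the paper); a hypothetical mapping of
# topics to socio-technical categories stands in for that judgement here.
topic_to_category = {0: "technical", 1: "social"}

# Phase 3: re-aggregate document-topic weights by coded category, which can then
# feed a further round of topic modelling and theorising.
doc_topics = lda.transform(X)
for d, weights in enumerate(doc_topics):
    scores = {}
    for t, w in enumerate(weights):
        scores[topic_to_category[t]] = scores.get(topic_to_category[t], 0.0) + w
    print(f"document {d}: {scores}")
```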
TOWARDS A THEORETICAL MODEL OF ARTIFICIAL INTELLIGENCE (AI) CAPABILITY AND TRUST
Artificial Intelligence (AI) has become an integral part of healthcare systems. Most healthcare systems depend on computer software, digital tools, and algorithms to automate, measure, and support clinical decision makers in diagnosing patients. However, it is unclear how healthcare practitioners develop trust in AI systems. To address this gap, we develop a theoretical model of how capabilities such as the speed and accuracy of AI-assisted diagnosis can foster trust between the practitioner and the AI system. Our results will have implications for AI designers and developers, healthcare professionals, and the healthcare system more broadly.
HOW MUCH DOES METHOD-IN-USE MATTER? A CASE STUDY OF AGILE AND WATERFALL SOFTWARE PROJECTS AND THEIR DESIGN ROUTINE VARIATION
Development methods are rarely followed to the letter, and, consequently, their effects are often in doubt. At the same time, information systems scholars know little about the extent to which a given method truly influences software design and its outcomes. In this paper, we address this gap by adopting a routine lens and a novel methodological approach. Theoretically, we treat methods as (organizational) ostensive routine specifications and use the routine construct as the unit of analysis for examining the effects of a method on actual, “performed” design routines. We formulate a research framework that identifies method, situation fitness, agency, and random noise as the main sources of software design routine variation. Empirically, we apply the framework to examine the extent to which waterfall and agile methods induce variation in software design routines. We trace enacted design activities in three software projects in a large IT organization that followed an object-oriented waterfall method and three that followed an agile method, and we analyze these traces using a mixed-methods approach involving gene sequencing techniques, Markov models, and qualitative content analysis. Our analysis shows that, in both cases, method-induced variation accounts for about 40% of all activities, while the remaining 60% can be explained by a designer’s personal habits, the project’s fitness conditions, and environmental noise. Overall, the effect of a method on software design activities is smaller than commonly assumed, and the impact of designer and project conditions on software processes and outcomes should thus not be underestimated.
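The abstract describes the trace analysis only at a high level. As a minimal sketch of the Markov-model side of such an analysis, assuming first-order transitions between coded design activities (the activity labels and traces below are hypothetical, not the study’s actual coding scheme):

```python
from collections import Counter, defaultdict

def transition_matrix(traces):
    """Estimate first-order Markov transition probabilities from activity traces."""
    counts = defaultdict(Counter)
    for trace in traces:
        for current, nxt in zip(trace, trace[1:]):
            counts[current][nxt] += 1
    states = sorted({activity for trace in traces for activity in trace})
    matrix = {}
    for a in states:
        total = sum(counts[a].values())
        if total:  # skip activities that never have an outgoing transition
            matrix[a] = {b: counts[a][b] / total for b in states}
    return matrix

# Hypothetical coded traces of enacted design activities per project.
waterfall_traces = [
    ["analyse", "design", "design", "implement", "test"],
    ["analyse", "design", "implement", "implement", "test"],
]
agile_traces = [
    ["design", "implement", "test", "design", "implement"],
    ["implement", "test", "design", "implement", "test"],
]

# Comparing the two matrices gives a view of method-induced differences in how
# designers actually move between activities.
for label, traces in [("waterfall", waterfall_traces), ("agile", agile_traces)]:
    print(label)
    for a, row in transition_matrix(traces).items():
        print(" ", a, "->", {b: round(p, 2) for b, p in row.items() if p > 0})
```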