Speakers

Aldeida Aleti 

Title: Automated Testing of AI-based systems

Abstract: Software Testing is a well-established area in software engineering, encompassing various techniques and methodologies to ensure the quality of software systems. However, with the arrival of artificial intelligence (AI) systems, new challenges arise in the testing domain. These systems introduce unique complexities that require novel testing approaches. In this talk, I aim to explore the challenges posed by AI systems and discuss potential opportunities for future research in the area of testing. I will touch on the specific characteristics of AI systems that make traditional testing techniques inadequate or insufficient. By addressing these challenges and pursuing further research, we can enhance our understanding of how to safeguard AI and pave the way for improved quality assurance in this rapidly evolving area.

Bio: Aldeida Aleti is an Associate Professor in Software Engineering, currently serving as an Associate Dean of Engagement and Impact at the Faculty of Information Technology, Monash University. A/Prof. Aleti completed her Ph.D. in 2012, specializing in the area of adaptive parameter control for AI algorithms, particularly focused on automated software architecture design. Her dedication to research and innovation has led to over 80 published papers in prestigious conferences and journals within the area of software engineering, including TSE, ICSE, FSE, ASE, TOSEM, JSS, EMSE, among others. 

Steve Blackburn

Title: Memory Safety and the Case for Modular VM Development

Abstract: The development of Lisp and the first garbage collection algorithms go back to the late 1950s and John McCarthy's quest to develop AI. Programming languages and memory management are at the very foundation of large companies like Google, and underpin the AI revolution. Performant, safe languages are therefore a high priority. Memory-safe languages are thus not merely a popular choice: for many software developers, the alternative is economically untenable, given the stakes associated with successful attacks and the cost of mitigating critical exploits in an unsafe language through testing, verification and defensive programming. Unfortunately, while memory management has become more important, it remains a major source of overhead for even the most highly optimized runtimes. In this talk I'll take a broad look at memory performance and the engineering of high-performance garbage collectors. I'll discuss why a modular approach to memory management will be important to the future of programming language implementation.

Bio: Steve Blackburn is a research scientist at Google DeepMind and a Professor at the ANU. His research interests include programming language implementation, computer architecture, software engineering and performance analysis. Steve leads the MMTk and DaCapo projects. He is a passionate educator and a Fellow of the ACM.


Kelly Blincoe

Title:  ENGclusion: setting up a longitudinal study

Abstract: The ENGclusion team is on a mission to bring knowledge to the engineering industry that can improve diversity, equity, and inclusion. We have launched a longitudinal study that will follow the careers of early-career practitioners for five years. This talk will give a behind-the-scenes view into setting up a longitudinal study, along with some initial findings from our first data collection round.

Bio: Kelly Blincoe is a Rutherford Discovery Fellow and a Senior Lecturer of Software Engineering at the University of Auckland, New Zealand in the Department of Electrical, Computer, and Software Engineering. 

Étienne Borde

Title: Low-code and Model-Driven Software Engineering

Abstract: Model-driven engineering is a software engineering approach that uses models of software to produce, document, verify, and/or optimize the artifacts of a software development process. In this talk, we will present the foundations, methods, and tools that aim to reduce the amount of code software developers must write by leveraging the principles of model-driven engineering. We will also discuss the strengths and weaknesses of these approaches, using practical examples to illustrate them.

Bio: Étienne Borde has been a Senior Lecturer in the Department of Computer Science and Software Engineering at the University of Canterbury, New Zealand, since April 2023. Étienne received his PhD from Telecom Paris – Institut Polytechnique de Paris (Paris, France). Prior to joining the University of Canterbury, Étienne was a “Maître de Conférences” (Lecturer/Senior Lecturer) at Telecom Paris – Institut Polytechnique de Paris, an invited researcher at the Software Engineering Institute (Pittsburgh, USA) and an invited professor at Nanyang Technological University (Singapore). Étienne’s research focuses on the development of methods and tools for software engineering in (real-time) critical applications, such as applications from the transportation domain. Model-driven engineering, real-time scheduling, model-checking, and discrete optimization are the types of techniques Étienne has studied, combined, or contributed to in his research work.

Tony Clear

Title: Assessing the Progression of Software Engineering Competencies

Abstract: In the recent report Computing Curricula 2020 (CC2020): Paradigms for Future Computing Curricula and related work, the notion of competency has been defined as comprising ‘knowledge + skills + dispositions + task’, based on a broad conception of competency as effective professional performance in a relevant setting. The curriculum report advocates a move toward competency-based curriculum models rather than traditional knowledge-based approaches. But in unpacking aspects of a competency in a software context, there seem to be some challenges in assessing how competencies develop in both educational and professional settings. In a Research Report co-supervised with Dr Ramesh Lal for our student Johnny He’s Master of Computer and Information Sciences, Johnny contrasted how “Skills Requirements” develop in the transition from Junior Software Developer to Senior Developer. The study reviewed job advertisements and contrasted stated junior and senior software developer expectations. In developing competency statements from this data source, and in subsequently allocating dispositions at different levels, a range of challenges became apparent. This presentation will review these challenges and their underlying implications for framing, developing and assessing professional competencies for software professionals.

Bio: Tony Clear is an Associate Professor in the Department of Computer Science and Software Engineering at Auckland University of Technology, and an ACM Distinguished Member. He is also Co-Director of the Software Engineering Centre (SERC, https://serc.aut.ac.nz/) with Assoc. Prof. Roopak Sinha and Prof. Jacqueline Whalley. He holds positions as an Associate Editor for ACM Transactions on Computing Education (TOCE), for the journal Computer Science Education, and for ACM Inroads, for which he is also a regular columnist and Editorial Board member. He is a former practitioner and is active in research within the global software engineering and computer science education communities. With Professor Daniela Damian of the University of Victoria, Canada, he has worked on a Royal Society of NZ International Leaders Fellowship Grant, "Leading the Way in Software Ecosystems for NZ". Tony has served on the steering committee for ICGSE and has chaired or served on the programme committee for conferences such as ITiCSE, ICER, ACE, FIE, ICGSE, EASE, CITRENZ, APRES, ECIS, SIESC, and reviewed for journals such as TSE, IST, JSS, JSEP, IEEE Software, IJEE, CLEIej. He supervises and has examined doctoral students in Global Software Engineering, CS Education and interdisciplinary topics, and has chaired or participated in several doctoral consortia.

Jens Dietrich

Title: The Limitations of Software Composition Analysis  

Abstract: Modern software is largely composed of existing components, a practice facilitated by automated build tools and software repositories. From the security point of view, this has created new challenges, as vulnerabilities are inherited from those components. The use of vulnerable and outdated components is now in the OWASP Top-10 list.

Software Composition Analysis (SCA) aims to address this challenge by reasoning about the propagation of known vulnerabilities through dependencies between software packages.   Numerous analyses are readily available – they tend to scale well and are well-integrated into the tools and workflows used by developers (GitHub, various IDEs, docker, etc). 
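To make the core idea concrete, here is a minimal sketch of how an SCA tool might propagate known vulnerabilities through a dependency graph. The package names and graph are made up for illustration; real tools work over ecosystem metadata and vulnerability databases such as GHSA.

```rust
use std::collections::{HashMap, HashSet};

// Sketch of the core of Software Composition Analysis: given a dependency
// graph and a set of packages with known vulnerabilities, find every
// package that directly or transitively depends on a vulnerable one.
fn affected(deps: &HashMap<&str, Vec<&str>>, vulnerable: &HashSet<&str>) -> HashSet<String> {
    let mut out = HashSet::new();
    for &pkg in deps.keys() {
        // Depth-first search from each package, looking for a vulnerable node.
        let mut stack = vec![pkg];
        let mut seen = HashSet::new();
        while let Some(p) = stack.pop() {
            if !seen.insert(p) {
                continue; // already visited
            }
            if vulnerable.contains(p) {
                out.insert(pkg.to_string());
                break;
            }
            if let Some(ds) = deps.get(p) {
                stack.extend(ds.iter().copied());
            }
        }
    }
    out
}

fn main() {
    // Hypothetical dependency graph: my-app -> web-framework -> log4j.
    let mut deps = HashMap::new();
    deps.insert("my-app", vec!["web-framework"]);
    deps.insert("web-framework", vec!["log4j"]);
    deps.insert("log4j", vec![]);

    let vulnerable: HashSet<&str> = ["log4j"].into_iter().collect();
    let hit = affected(&deps, &vulnerable);
    assert!(hit.contains("my-app"));        // transitively affected
    assert!(hit.contains("web-framework")); // directly affected
    println!("{} packages affected", hit.len());
}
```

The precision/recall trade-off mentioned above shows up even in this toy: reachability over package metadata alone over-approximates (the vulnerable code may never be called) and under-approximates when the dependency or vulnerability data is incomplete.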

Like most non-trivial program analyses, SCA must balance precision, recall and complexity (scalability). I will talk about some of those issues in general, and then more specifically about our work to improve the recall of SCA. This work has led to updates to vulnerability databases (GHSA)  for several vulnerabilities, including the infamous CVE-2021-44228 (aka log4shell). 

I will also briefly discuss ongoing work and research opportunities in the field of software supply chain security. 

Bio: Jens Dietrich is an Associate Professor in the School of Engineering and Computer Science at Victoria University of Wellington, New Zealand. He has a Master's in Mathematics and a PhD in Computer Science from the University of Leipzig. After graduating in 1996, Jens worked in consulting in Germany, Namibia, Switzerland and the UK before returning to academia in 2003, first at Massey University and, since 2018, at Victoria University. His research interests are in the areas of software componentry, evolution, testing, static analysis and vulnerability detection.

Matthias Galster

Title: The PhD Journey 

Abstract: In this talk we will walk through some of the stages of the PhD journey and how to plan it. Furthermore, we will discuss various good practices for a successful journey and what makes a good research project (and good research in general). Also, we will explore what can go wrong along the way.

Bio: Matthias Galster is a Professor of Software Engineering in the Department of Computer Science and Software Engineering at the University of Canterbury in Christchurch, New Zealand. His current work aims at improving the way we develop high quality software, with a focus on software architecture, development processes and practices, and empirical software engineering. He has published in leading journals and conferences. Also, he regularly serves on program and organizing committees of software engineering conferences and reviews for renowned software engineering journals.

Mengmeng Ge 

Title: Demystifying the Evolution of IoT Malware

Abstract: Malware, short for malicious software, is arguably one of the most significant threats to computer systems. In recent years, malware attacks against Internet of Things (IoT) networks have become increasingly prevalent due to the vulnerabilities inherent in IoT devices, such as weak passwords, outdated software and firmware, and lack of end-to-end encryption. A network of compromised IoT devices that have been infected with malware, often known as an IoT botnet, can be manipulated to carry out various cyberattacks, such as distributed denial-of-service (DDoS) attacks. This talk will introduce the concept of botnet attacks and their propagation cycle and discuss the evolution of tactics and techniques employed in malware targeting IoT devices. 

Bio: Mengmeng Ge is a Senior Lecturer of Cyber Security at the University of Canterbury. She completed her PhD degree at the University of Canterbury in 2018. In 2019, she joined Deakin University, where she worked as a Lecturer in Cyber and Networking for two years. She then joined RMIT University in 2021 and worked as a Lecturer in Cybersecurity for over a year. Her research mainly focuses on model-based cybersecurity analysis, security modelling for the Internet of Things, deep learning-based intrusion detection, and emerging proactive defences (e.g., moving target defence, cyber deception). She also has considerable industry experience in security risk management, vulnerability assessment, and security analytics.

Mike Godfrey (ACSW Keynote)

Title: Swimming with the data: Philately will get you everywhere

Abstract: Empirical research in software engineering, as in the natural sciences, involves building models of the problem domain — in our case, software and its development — and then evaluating those models against real-world evidence.  There is often pressure on researchers to "think big" to discover actionable truths that pertain broadly to software development.  In this talk, I discuss the value of doing empirical "deep dives" into the study of individual software systems to better understand their social and technical contexts.  Using example studies to illustrate, I argue that time spent better understanding individual systems can lead to deeper insights about the problem space and improve awareness of the holes, ambiguities, and naive mistakes in our models.


Bio: Mike W. Godfrey is a Professor in the David R. Cheriton School of Computer Science at the University of Waterloo (UW), which he joined in 1998. He is co-founder of the Software Analytics Group (SWAG), and is a Senior Member of both the ACM and IEEE. He has held an NSERC Industrial Research Associate Chair in telecommunications software engineering (2000-2005), and a UW David R. Cheriton Faculty Fellowship (2014-17). He has won three "best paper" awards, and one "most influential paper" award at various conferences. His research interests span many areas of empirical software engineering including software evolution, code review, reverse engineering, program comprehension, mining software repositories, and software clone detection and analysis. He has also contributed chapters to several books, including "Copy-Paste as a Principled Engineering Tool" ("Making Software: What Really Works and Why We Believe It", O'Reilly, 2010), "Why Provenance Matters" ("Perspectives on Data Science for Software Engineering", Morgan-Kaufmann, 2016), and "Sometimes, cloning is a sound design decision!" ("Code Clone Analysis: Researches, Tools, and Practices", Springer, 2021).


Rahul Gopinath

Title: How to Talk to Strange Programs and Find Bugs

Abstract: For effective testing of programs with complex structured inputs, an input specification or input grammar is practically mandatory. Given an input grammar, we can use it to produce inputs that reach program internals and induce interesting behaviours. We can also use such grammars for quickly generating inputs, for characterizing program behaviours, for repairing inputs, and for quick input validation. However, many programs that require complex structured inputs do not come with a precise input grammar. Even when such a grammar is available, it may be incomplete, obsolete, or incorrect. Hence, one of the long-standing questions in software engineering is the inference of such grammars. That is, given a parser, how do we invert it and recover the input specification? This talk will focus on how to extract such grammars from parsers themselves, and what we can do with such grammars.
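As a toy illustration of why a grammar is so useful for input generation, the sketch below expands a tiny arithmetic-expression grammar into syntactically valid inputs. The grammar, the depth bound, and the crude deterministic "random" choice are all illustrative assumptions, not material from the talk.

```rust
use std::collections::HashMap;

// Expand a nonterminal of a toy context-free grammar into a concrete input
// string, bounded by `depth` so recursion always terminates. When the depth
// budget runs out, the first (non-recursive) rule of each nonterminal is
// chosen; otherwise a linear-congruential step picks a rule pseudo-randomly.
fn expand(grammar: &HashMap<&str, Vec<Vec<&str>>>, symbol: &str, depth: usize, seed: &mut u64) -> String {
    match grammar.get(symbol) {
        None => symbol.to_string(), // terminal symbol: emit as-is
        Some(rules) => {
            *seed = seed.wrapping_mul(6364136223846793005).wrapping_add(1);
            let idx = if depth == 0 { 0 } else { (*seed >> 33) as usize % rules.len() };
            rules[idx]
                .iter()
                .map(|s| expand(grammar, s, depth.saturating_sub(1), seed))
                .collect()
        }
    }
}

fn main() {
    // <expr> ::= <digit> | <expr> "+" <expr>
    let mut g = HashMap::new();
    g.insert("<expr>", vec![vec!["<digit>"], vec!["<expr>", "+", "<expr>"]]);
    g.insert("<digit>", vec![vec!["1"], vec!["2"], vec!["3"]]);

    let mut seed = 11u64;
    for _ in 0..5 {
        // Every generated string parses as a valid expression by construction.
        println!("{}", expand(&g, "<expr>", 6, &mut seed));
    }
}
```

Grammar inference, the topic of the talk, asks the inverse question: given only a parser that accepts such strings, recover a grammar like the one hard-coded above.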

Bio: Rahul Gopinath is a lecturer at the University of Sydney, where he works in the broad fields of Software Engineering and Cybersecurity. He was previously a postdoctoral scholar at CISPA Helmholtz Center for Information Security in Germany. His focus is on inferring program input specifications and behavioral oracles under whitebox and blackbox conditions, which can then be leveraged for fuzzing.



John Grundy (ACSW Keynote)

Title: Learnings from Australia and New Zealand Grant Assessments

Abstract: I have served on a number of Australian and New Zealand funding panels over the past 20 or so years. These include the ARC College of Experts (2015-17, 2019-21), MBIE (Endeavour and Smart Ideas, most years since ~2014 or thereabouts), and several FRST (predecessor to MBIE) panels for industry grants (predecessor to Callaghan and MBIE) and post-docs (predecessor to Rutherford Fellowships). In this talk I will try to articulate some practical learnings from the many assessments and panel discussions I have been part of (no confidential info disclosed, of course!) that I hope may be useful for Australian and New Zealand researchers preparing MBIE, ARC and Fellowship proposals in 2024.

Bio: John Grundy is an Australian Laureate Fellow and a Professor of Software Engineering at Monash University, Australia. John holds a BSc(Hons), MSc and PhD degrees, all in Computer Science, from the University of Auckland. John is a Fellow of Automated Software Engineering, Fellow of Engineers Australia, Certified Professional Engineer, Engineering Executive, Member of the ACM and Senior Member of the IEEE. John was general chair for ICSE 2023.



Sherif Haggag

Title: Personalized Guidelines for Design, Implementation, and Evaluation of Anti-phishing Interventions

Abstract: Current anti-phishing interventions, which typically involve one-size-fits-all solutions, suffer from limitations such as inadequate usability and poor implementation. Human-centric challenges in anti-phishing technologies remain little understood. Research shows a deficiency in the comprehension of end-user preferences, mental states, and cognitive requirements by the developers and practitioners involved in the design, implementation, and evaluation of anti-phishing interventions. This research addresses the current lack of resources and guidelines for the design, implementation and evaluation of anti-phishing interventions by presenting personalised guidelines to developers and practitioners. Through an analysis of 53 academic studies and 16 grey literature items, we systematically identified the challenges and recommendations within anti-phishing interventions, across different practitioner groups and intervention types. We identified 22 dominant factors at the individual, technical, and organisational levels that affect the effectiveness of anti-phishing interventions and, accordingly, report 41 guidelines, based on the suggestions and recommendations provided in the studies, to improve the outcomes of anti-phishing interventions. Our dominant factors can help developers and practitioners enhance their understanding of human-centric, technical and organisational issues in anti-phishing interventions. Our customised guidelines can empower developers and practitioners to counteract phishing attacks.

Bio: Sherif Haggag is the Director of Teaching Operations, a postgraduate course advisor, and a Professor in the School of Computer Science at the University of Adelaide. He has worked at various universities, including Deakin University, where he completed his PhD. His research interests include human-centred software engineering; cybersecurity; understanding human-centric issues and designing apps with adaptive user interfaces; and human factors and social engineering in cybersecurity, including the persistence of the privacy paradox and cybersecurity behaviour. Dr Haggag strongly believes that software engineering is designed to solve human problems and support humans in different aspects of life, such as health, education, transport, and manufacturing. However, current software engineering techniques pay too little attention to the different humans/end users who use the same system.



Adrian Herrera

Title: The Hitchhiker's Guide to Fuzzer Coverage Metrics

Abstract: Fuzzers find bugs by hammering a target program with many malformed inputs. These inputs are generated to maximize exploration of the target’s state space. Intuitively, exploration of this state space correlates with bug discovery; after all, you can only trigger bugs in code that is executed. Fuzzers use lightweight instrumentation to track coverage of the target’s state space. Traditionally, this instrumentation has focused on the control-flow elements of a program (e.g., nodes and edges in the control-flow graph). However, this only offers a coarse-grained approximation of a program’s state space. This talk will explore the wonderful world of fuzzer coverage metrics. We will survey state-of-the-art techniques, focusing on how data-flow elements can be incorporated to provide a richer view of a program’s state space. Finally, we answer the most important question: do richer coverage metrics lead to better bug-finding results?
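The classic control-flow metric the abstract mentions can be sketched in a few lines. This is a simplified model for illustration only: real fuzzers such as AFL use a fixed-size byte array updated by compiler-inserted instrumentation rather than a hash set.

```rust
use std::collections::HashSet;

// A toy edge-coverage map. Each basic block reports itself via `hit`;
// the pair (previous block, current block) identifies a control-flow edge.
struct Coverage {
    edges: HashSet<(u32, u32)>,
    prev: u32, // id of the previously executed block (0 = program entry)
}

impl Coverage {
    fn new() -> Self {
        Coverage { edges: HashSet::new(), prev: 0 }
    }

    // Called at the entry of every basic block; in a real fuzzer this call
    // is injected by the compiler. Returns true if the edge is new, i.e.
    // the input just uncovered previously unexplored control flow.
    fn hit(&mut self, block: u32) -> bool {
        let new_edge = self.edges.insert((self.prev, block));
        self.prev = block;
        new_edge
    }
}

fn main() {
    let mut cov = Coverage::new();
    // Simulate one execution through blocks 1 -> 2 -> 3.
    assert!(cov.hit(1));
    assert!(cov.hit(2));
    assert!(cov.hit(3));
    // A second execution along 1 -> 2 yields no new edges: the fuzzer
    // would discard this input rather than keep it in the corpus.
    cov.prev = 0;
    assert!(!cov.hit(1));
    assert!(!cov.hit(2));
    println!("unique edges: {}", cov.edges.len());
}
```

The limitation the talk targets is visible here: two executions taking identical edges but computing very different data values look the same to this metric, which is what data-flow-aware coverage aims to fix.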

Bio: Adrian Herrera is a principal security researcher at Interrupt Labs, where he builds fuzzers and other bug-finding tools. Before joining Interrupt Labs, Adrian spent 10 years at the Defence Science and Technology Group as a security researcher. He recently completed his PhD at the Australian National University, which, unsurprisingly, also focused on software security and bug finding.



Martin Kropp

Title: Understanding Agile Software Development – Insights from Empirical Analysis of the Swiss Agile Studies

Abstract: Agile development has become the de-facto standard in software development. It has been so successful in many ways that it is now even expanding to domains outside of IT. However, companies and organisations are still struggling with its proper application and the transformation to it. This is very often because the impact on workstyle and culture is not clear; many improvements come late; or the correct practices are not applied at all, or not applied correctly. In my talk, I will present insights that we gained through our empirical studies in Switzerland about satisfaction, leadership, practices and improvements, and the influence of experience when working in an agile way.

Bio: Martin Kropp is a Professor of Software Engineering in the Institute of Mobile and Distributed Systems at the University of Applied Sciences and Arts Northwestern Switzerland. His main interests are everything that helps make software development more efficient, including build automation, testing, refactoring and software development methodologies. His current research interest is in agile methodologies and tools supporting agile teams in their daily work and their collaboration. He is a co-founder and organizer of the Swiss Agile Research Network and co-author of the bi-annual Swiss Agile Study.

Patrick Lam

Title: Hot Takes on Machine Learning for Program Analysis

Abstract: Unless you have been living under a rock, you have noticed the general popularity of AI/Machine Learning over the last few years.  These techniques have also made their way to program analysis research. Even though I see my research as focussing on classical static analysis techniques, it turns out that I've applied Machine Learning techniques in my own work as early as 2008. My students and I have recently done work on Rust bug classification; code representations for method name/return type prediction in WebAssembly; and formally verifying Copilot-generated code. I'll survey less-recent and more-recent applications of machine learning in program analysis, present overviews of my work, and tell you all about my opinions about what machine learning is good for in the domain of static analysis.

Bio: Patrick Lam is an Associate Professor in the School of Electrical and Computer Engineering at the University of Waterloo, Canada. Patrick is on sabbatical in Wellington, and is interested in software engineering applications of static analysis techniques.


Tim McNamara

Title: Making memory unsafety hard: Rust's unsafe keyword as a case study in software robustness

Abstract: This talk discusses the concept of memory safety and some of the approaches the software engineering community has established to address it. The Rust programming language is unique in its class, systems programming languages, in being able to provide memory safety without runtime overhead. The compiler tags all values with a logical lifetime for which references can be proven to be valid. However, many programs require access to the outside world; references provided by the operating system via syscalls, or by other libraries via linking, are two examples. For those cases, Rust offers the unsafe keyword. The bulk of the talk will be a discussion of some preliminary findings from research investigating the use of unsafe throughout the open source ecosystem, a corpus of over 120k packages.
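A minimal, generic sketch of the boundary the talk describes (not code from the study): creating a raw pointer is ordinary safe Rust, but dereferencing it must be wrapped in an unsafe block, because the compiler has no lifetime information to prove the pointer valid.

```rust
fn main() {
    let value: i32 = 42;

    // Creating a raw pointer is safe; the compiler tracks nothing about it.
    let ptr: *const i32 = &value;

    // Dereferencing it is not: no lifetime guarantees the target is still
    // valid, so the programmer must take responsibility via `unsafe`.
    let read_back = unsafe { *ptr };

    assert_eq!(read_back, 42);
    println!("read {} through a raw pointer", read_back);
}
```

The same `unsafe` escape hatch is what wraps syscalls and foreign-function calls, which is why its usage patterns across the package ecosystem are an interesting object of study.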

Bio: Tim McNamara is one of the world's leading Rust educators and runs a consultancy in the language called accelerant.dev. Previously the global head of Rust language education at AWS, he is the lead content creator for the official Rust training program offered by the Rust Foundation, author of the world-renowned textbook Rust in Action, and host of a popular YouTube channel offering tutorials in the language. He has held positions within the New Zealand eScience Infrastructure (NeSI), Canonical, AWS, and a number of data science consultancies. He holds a Masters in Public Policy from Victoria University of Wellington.


Xiaoyu Sun

Title: Demystifying security and compatibility issues in Android Apps


Abstract: Never before has any operating system been as popular as Android. Despite offering powerful communication and application execution capabilities, the Android OS is riddled with defects such as security risks and compatibility issues, which cost the economy tens of billions of dollars annually. To counteract these threats, researchers have proposed many approaches that rely on program analysis techniques to detect such defects. However, current cutting-edge techniques suffer from issues such as low precision, incompleteness, and unsoundness. In this talk, the speaker will introduce her latest work (published at ASE'22, TOSEM’22, TOSEM’21, ISSRE’21), which uses static and dynamic analysis techniques and software testing techniques to produce more sound and precise program analysis results. She will start by presenting real-world software vulnerabilities and then provide background knowledge on static and dynamic program analysis. She will then share a set of corresponding research projects and open-source tools based on years of effort. Finally, she will discuss some future research opportunities.


Bio: Xiaoyu Sun is a Lecturer of Software Engineering at the Australian National University. Prior to that, she obtained her PhD degree from Monash University. Her research interests mainly lie in the fields of Mobile Software Engineering (i.e., mobile security and quality assurance) and Intelligent Software Engineering (SE4AI, AI4SE). In particular, her research focuses on applying static code analysis, dynamic program testing, and natural language processing techniques to strengthen the security and reliability of software systems. Her current research projects include developing tools for Android defect detection, e.g., compatibility issues and privacy leaks. Xiaoyu's research has been published in top-tier conferences and journals including ICSE, ASE, TOSEM, ISSRE, MSR, and IST. She has also established extensive collaboration with industry, including ByteDance and Alibaba.

Liming Zhu

Title: The Pivotal Role of Software Engineering and DevOps in Responsible AI

Abstract: Responsible AI stands as a beacon in the research landscape, tasked with mitigating risks ranging from existential controversies to ethical transgressions in AI applications. The path to realizing Responsible AI is fraught with challenges, despite the existence of extensive ethical frameworks and advancements in algorithmic accountability. Concurrently, Software Engineering (SE) is undergoing a profound transformation, spurred by AI advancements, and is struggling to maintain system quality and coherence amid the complexities of AI models and auto-generated code. This paradigm shift has given rise to new challenges and opportunities for engineering AI systems responsibly. This talk will dissect SE's contributions to efforts in pioneering Responsible AI through SE and DevOps research. We will scrutinize industry challenges, such as compartmentalized risk management and the misalignment between ethical principles and algorithmic functions, along with the emergence of unforeseen behaviors. We posit that SE and DevOps are crucial in guiding Responsible AI, necessitating a fundamental reorientation—from building systems with specified functions to harnessing and orchestrating emergent capabilities within AI systems.

Bio: Liming Zhu is a Research Director at CSIRO’s Data61 and a conjoint full professor at the University of New South Wales (UNSW). He is the chairperson of Standards Australia’s blockchain committee and contributes to the AI trustworthiness committee. He is a member of the OECD.AI expert group on AI Risks and Accountability, as well as a member of the Responsible AI at Scale think tank at Australia’s National AI Centre. His research program innovates in the areas of AI/ML systems, responsible/ethical AI, software engineering, blockchain, regulation technology, quantum software, privacy and cybersecurity. He has published over 300 papers on software architecture, blockchain, governance and responsible AI. He delivered the keynote “Software Engineering as the Linchpin of Responsible AI” at the International Conference on Software Engineering (ICSE) 2023. His two upcoming books “Responsible AI: Best Practices for Creating Trustworthy AI Systems” and  “Engineering AI Systems: DevOps and Architecture Approaches”, will be published by Addison Wesley in 2024.

Didar Zowghi

Title: Requirements Engineering for Responsible AI

Abstract: As AI systems become increasingly prevalent in everyday life, ensuring they are engineered responsibly, in ways that respect and reflect the diversity of society, is of paramount importance. Failing to adhere to ethical principles can lead to AI solutions that perpetuate societal biases, inadvertently sidelining certain groups and prolonging existing inequalities. We propose a mechanism for AI developers engaged in responsible AI engineering to seamlessly integrate diversity and inclusion principles during system development, ensuring AI decisions and functionalities uphold ethical standards. Our proposal has the potential to directly influence future research on how AI systems are designed, developed, and deployed, and to ensure that they address the requirements of diverse users.

Bio: Didar Zowghi is a Senior Principal Research Scientist at CSIRO's Data61, leading a research team dedicated to diversity and inclusion in artificial intelligence (AI). Her expertise lies in requirements engineering and, more recently, its application in the context of Responsible AI development. Prof Zowghi has made significant contributions to the field of software engineering, earning her numerous accolades, including the prestigious IEEE Computer Society Distinguished Educator Award in 2022 and the IEEE Lifetime Service Award in 2019 for her leadership in the Requirements Engineering research community. She is Associate Editor of IEEE Software and Springer's Requirements Engineering Journal. She is also an Emeritus Professor at the University of Technology Sydney (UTS) and a Conjoint Professor at the University of New South Wales (UNSW). She has published over 220 research articles in prestigious conferences and journals and has co-authored papers with 90+ researchers from 30+ countries.