Dr. Antonio Vallecillo is Professor of Computer Science at the University of Málaga. His research interests include model-based software engineering (MBSE), open distributed processing (ODP), and software quality. Between 1986 and 1995 he worked in the computer industry, for Fujitsu and ICL. In 1996 he joined the University of Málaga, where he currently conducts research on software modeling and analysis. He is involved in several standardization activities within AENOR, ISO, ITU-T, and the OMG. He has organized several international conferences, including ECOOP 2002, TOOLS 2010, MODELS 2013, and ECOOP 2017, has served as PC Chair for conferences such as TOOLS, ICMT, ECMFA, QoSA, and ICSOC, and serves on the editorial boards of the SoSyM and JOT journals. Between 2014 and 2018 he was President of the Spanish Society on Software Engineering (SISTEDES), and since 2017 he has coordinated the Computer Science and Information Technologies (INF) area of the Spanish Research Agency (AEI). He is currently a member of the OMG "Precise Semantics for Uncertainty Modeling" initiative.
Modeling and Evaluating Quality in the Presence of Uncertainty
Uncertainty is the quality or state of lacking information or sufficient knowledge. It can arise for different reasons, including incomplete or inaccurate information, inexact data or measurements, imprecise human judgments, or approximate estimations. The explicit representation of uncertainty is gaining attention among software engineers as a way to provide more faithful system representations, more accurate design methods, and better estimations of development processes. However, incorporating uncertainty into our system models is not enough. Uncertainty also affects many aspects of the quality of systems, products, processes, and data, including how uncertainty is taken into account when designing our systems, measured when evaluating their quality, and perceived by customers and users. In fact, uncertainty – and, more specifically, the lack of knowledge about the system, our measuring tools, and our potential users – should be incorporated into our quality models, too. This talk identifies several kinds of uncertainty that have a direct impact on quality, and discusses challenges in how quality needs to be planned, modeled, designed, measured, and ensured in the presence of uncertainty.
Maria Teresa Baldassarre
Dr. Maria Teresa Baldassarre received a degree with honours in Informatics at the University of Bari, Italy, where she also received her PhD in software engineering. She is currently an assistant professor there. Her research interests focus mainly on empirical software engineering, software metrics, quality assurance, and improvement of software processes and products.
She collaborates on several research projects and carries out controlled experiments and field studies within small and medium-sized enterprises. She is a partner of SER&Practices, a spin-off company of the University of Bari, and is actively involved in research projects and collaborations with international partners. She is currently the representative of the University of Bari in the International Software Engineering Research Network (ISERN), and serves on the program committees of major software engineering and empirical software engineering international conferences.
Dr. Christof Ebert is managing director at Vector Consulting Services. He supports clients around the world in improving product strategy and product development and in managing organizational change. Prior to that, he held senior management positions for twelve years with a global IT market leader. A member of several industry advisory boards, he is a professor at the University of Stuttgart and at the Sorbonne in Paris. Many Fortune 100 companies have drawn on his expertise to improve their performance and competitiveness. His books on requirements engineering and global software engineering serve as industry references worldwide. Follow Christof on Twitter: @ChristofEbert
Scaling Agile for Critical Systems
Agile development is mandatory for remaining flexible and value-oriented at the same time. Although most projects claim to use agile methods, empirical research shows that a majority sees them rather as a move towards "anything goes". Agile practices must be scaled to specific environments such as team set-up, business risk, and quality requirements such as safety. Distributed teams in particular face severe difficulties with agile techniques due to distance in time and location. Our research provides results from an empirical field study on distributed development with agile practices. The benchmark covers some thirty companies from different regions and industries worldwide and examines how agile methods can be scaled for global teams and distributed development. We found that successful agile adoption needs tailoring, rather than predefined complex frameworks. Lessons learned are provided to facilitate transfer to other settings.