Modeling & Simulation (M&S) is widely used in industry to analyze process-based production and logistics problems across manufacturing and defense, and related problems in traffic planning and supply chains. It is also used to study clinical pathways and resource management in healthcare. Commercial-off-the-shelf Simulation Packages (CSPs) are visual interactive modeling environments that support the development, experimentation and visualization of simulation models. They support different M&S world views such as real-time simulation, discrete-event simulation, agent-based simulation and system dynamics. Of these, discrete-event simulation is arguably the most widely used in industry.
The contemporary practice of discrete-event simulation faces significant practical barriers of time and system boundary. Time limits the amount of experimentation that can be performed within a simulation project, so either model depth is sacrificed or the investigation is left incomplete. System boundary limits the scope of a simulation to what can be practically modeled within a single CSP and prevents M&S practitioners from addressing large-scale industrial problems. Distributed computing, and in particular Parallel and Distributed Simulation, can address both issues.
This presentation describes experiences of tackling time and system boundary problems in industrial M&S through the application of Grid computing and Distributed Simulation. It argues that effective collaboration between end-user practitioners, vendors and researchers is key, and that researchers play a vital role in the successful transfer of these techniques from academia to industry. The presentation sets out a roadmap of challenges that must be met to allow M&S practitioners to break through these barriers, reduce project costs, perform more effective investigations and study larger systems.
Dr Simon J E Taylor (firstname.lastname@example.org) is a Reader in Computing in the Department of Information Systems and Computing at Brunel University and leader of the ICT Innovation Group. He is Chair of the COTS Simulation Package Interoperability Standards Group under SISO and Editor-in-Chief of the Journal of Simulation. He leads the Tools and Training Theme of the Multidisciplinary Assessment of Technology Centre for Healthcare (MATCH) at Brunel. He was Chair of ACM's SIGSIM (2005-2008). He regularly consults with industry and has published widely in simulation modelling. His recent work has focused on the knowledge transfer of advanced ICT techniques into simulation modelling, improving healthcare services with the Cumberland Initiative, and the impact of advanced research infrastructures in Europe and Africa.
Distribution of Random Streams in Stochastic Models in the Age of Multi-core and Many-core Processors
Random number generators are necessary in every simulation that includes stochastic aspects. In High Performance Computing, there is increasing interest in the distribution of parallel random number streams. Even though statistically sound random number generators, validated by very demanding testing libraries, are now at our disposal, their parallelization can still be a delicate problem.
A set of recent publications shows that this problem has yet to be fully mastered by the scientific community. In this presentation, we discuss the different partitioning techniques currently in use to provide independent streams, together with their corresponding software. A state of the art in the parallelization of random numbers for High Performance Computing is given from the point of view of a simulation practitioner. With the arrival of multi-core and many-core processor architectures on the scientist's desktop, modelers who are not specialists in parallelizing stochastic simulations need help and advice in rigorously distributing their experimental plans and replications according to the state of the art in pseudo-random number partitioning techniques. New programming models and platforms are now available to a larger audience, particularly with the arrival of very powerful GP-GPUs (General Purpose Graphics Processing Units).
In addition to the classical approaches used to parallelize stochastic simulations on conventional processors, this talk will also present recent advances in pseudo-random number generation for GP-GPUs, together with their associated parallelization techniques.
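As a minimal illustration of the kind of stream partitioning discussed above, the sketch below uses NumPy's `SeedSequence.spawn` to derive independent child streams for parallel replications. This is one widely available technique chosen for illustration, not necessarily one of the specific methods surveyed in the talk; the seed and replication count are arbitrary.

```python
# Sketch: partitioning a root seed into independent per-replication
# streams (NumPy SeedSequence spawning; illustrative example only).
import numpy as np

root = np.random.SeedSequence(20240615)  # arbitrary experiment seed

# One child sequence per parallel replication; spawned sequences are
# designed to yield non-overlapping, statistically independent streams.
n_replications = 4
children = root.spawn(n_replications)
streams = [np.random.default_rng(child) for child in children]

# Each replication draws only from its own stream, e.g. exponential
# inter-arrival times for a discrete-event model.
samples = [rng.exponential(scale=1.0, size=5) for rng in streams]

for i, s in enumerate(samples):
    print(f"replication {i}: first draw = {s[0]:.4f}")
```

Because each replication owns a distinct generator, results are reproducible from the single root seed regardless of how the replications are scheduled across cores.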
Professor David Hill is currently Vice President of Blaise Pascal University in charge of ICT. He is also a past director of the Inter-University Computing Center (CRRI) of the Auvergne Region (2008-2010). From August 2005 to August 2007 he was deputy director of the ISIMA Computer Science & Modeling Institute (a French Grande Ecole d'Ingénieur), where he had managed various departments before 2005. Since 1990, David Hill has authored or co-authored more than 150 papers and has also published several textbooks. He has served as chair or program chair at various international conferences, and he is an associate editor and reviewer for simulation, ecological and environmental modeling journals.