OzViz 2011

We cordially invite you to participate in the OzViz 2011 workshop which will be held on 23-25th November 2011 in Sydney, NSW.

Started in 2001, OzViz is the major workshop for visualisation practitioners and researchers from across Australia and New Zealand. The workshop provides an occasion for participants to present research outcomes, share innovative ideas, publicise work and meet colleagues. It is highly multidisciplinary, with participants from fields such as mathematics, geoscience, architecture, biology, medicine and astronomy presenting alongside computer graphics and visualisation experts.

This year’s OzViz program will include an exciting Accelerated Computing Workshop on 23rd November with invited speakers and live demonstrations (see below), as well as a Visualisation Image Exhibition and a showcase of the newest demoscene productions.

OzViz 2011 + Accelerated Computing Workshop

OzViz 2011 Workshop #ozviz

A printable version of the initial announcement can be downloaded from here.

Program Committee:
CSIRO: Tomasz Bednarz, John Taylor, Justin Baker, Con Caris, Pascal Vallotton, Dadong Wang,
ANU: Drew Whitehouse,
UWA: Paul Bourke.

Important dates & locations:
24th October: deadline for presentation abstract submission,
1st November: acceptance notification,

23rd  November: Accelerated Computing Workshop (Auditorium at the CSIRO’s Riverside Life Sciences Centre, 11 Julius Avenue, North Ryde NSW),
24-25th November: OzViz 2011 workshop (University of Sydney, Civil Engineering Building, 1st level).

OzViz 2011 and Accelerated Computing workshops will be free for all participants.

Visualisation Image Exhibition:
Submitted works, click here.


Presentations on 24th November 2011 @ University of Sydney, Civil Engineering Building, 1st Level:

8:30 - 9:00

Registration, coffee / tea

9:00 - 9:10


9:10 - 9:50

Drew Berry (Walter and Eliza Hall Institute of Medical Research) KEYNOTE 1
“Revealing the molecular world to the mainstream public”

Drew Berry will present his animated visualisations, which combine accuracy with aesthetics to engage broad audiences and transform comprehension of biology. Visualisations have always been central to the thinking and discovery process of science, but they also hold huge appeal for the public, who are hungry to learn more about the inner workings of the body and expect to be entertained along the way.

SESSION 1 Chaired by Drew Whitehouse

9:50 - 10:30

Paul Bourke
(iVEC@University of Western Australia)
Projects 2011

Paul will present a number of projects completed in 2011. One is a pure science visualisation project with ICRAR (International Centre for Radio Astronomy Research) in collaboration with Dr Alan Duffy. It involved visualising data from three different particle simulations, which had enough in common that a single rendering pipeline could serve all three. The simulations model large-scale structure in the Universe, namely galaxies and galaxy distribution, and employ between 200 million and 1 billion particles.

Another project, in collaboration with Dr Peter Morse, is the Pausiris Mummy Exhibit created for the new MONA (Museum of Old and New Art) in Hobart. In addition to the volume visualisation and animation, the whole presentation system and software was developed, including a double HD projection system built within a very confined space. In the museum, the digital representation sits alongside the physical mummy casket, and the final animations reveal an interior of the mummy that has never been seen before.

10:30 - 11:00

Kit Devine
(Australian National University)
“History Rocks: Designing an engaging, immersive and interactive Virtual Heritage Resource”

Virtual heritage visualisations offer unique ways to present heritage objects and places. Virtual objects can be rotated and viewed at different angles and virtual places explored. Time-lapsed animations can be used to show changes over hundreds of years condensed to a few seconds. The Virtual Sydney Rocks is designed to be an engaging virtual space that allows users to explore the oldest part of Sydney over a 200 year period. Users set the time and date to determine the sun position, the weather and which buildings and objects are displayed. Buildings and other objects are linked to the Virtual Sydney Rocks Guidebook which displays associated information on a second screen. Users can take a tour, play a game or explore freely. The Virtual Sydney Rocks will be used to conduct research into user preference for engagement strategy in immersive, interactive worlds.
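The sun-position control described above can be approximated surprisingly simply. The sketch below uses the standard declination/hour-angle formula; the function name and its simplifications are illustrative and not taken from the Virtual Sydney Rocks implementation:

```python
import math

def solar_elevation(day_of_year, hour, latitude_deg):
    """Approximate solar elevation angle (degrees) for a given day,
    local solar hour and latitude. A simple textbook approximation,
    good enough for driving a virtual sun, not for astronomy."""
    # Solar declination (cosine approximation), in degrees.
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle: 0 at solar noon, 15 degrees per hour.
    hour_angle = 15.0 * (hour - 12.0)
    lat = math.radians(latitude_deg)
    d = math.radians(decl)
    h = math.radians(hour_angle)
    sin_elev = (math.sin(lat) * math.sin(d)
                + math.cos(lat) * math.cos(d) * math.cos(h))
    return math.degrees(math.asin(sin_elev))

# Sydney sits at roughly latitude -33.87; the midsummer noon sun is high.
print(round(solar_elevation(355, 12.0, -33.87), 1))  # about 79.6 degrees
```

A full implementation would also account for longitude, the equation of time and timezone offsets.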

11:00 - 11:15


SESSION 2 Chaired by Paul Bourke

11:15 - 11:45

Uwe Rosebrock, Tisham Dhar, Patrick Hogan and Varun Chandola
(CSIRO Marine & Atmospheric Research)
“Open Source, Interactive, Real-Time Visualization and Analysis Framework for Geospatial Data”

A collaborative attempt to use mostly existing open source components to provide a framework delivering the basic tools for n-dimensional, geospatial data presentation and interrogation in real time.

The objective is to build an open source 4D virtual globe application using NASA World Wind technology that integrates analysis of climate model outputs with remote sensing observations and demographic and environmental data sets. This will facilitate a better understanding of global and regional phenomena, and the impact analysis of extreme climate events. The critical aim is real-time interactive interrogation.

11:45 - 12:10

Con Caris,
Peter Reid,
Lance Munday
(CSIRO Earth Science and Resource Engineering)
“High-resolution Panoramic Imaging using an Unmanned Aerial Vehicle”

Systems and services for generating digital aerial images for geospatial mapping applications have been available for at least 15 years. The traditional platforms for low altitude image capture have mainly been aircraft, helicopters and blimps. As a result, capturing these images is expensive, and processing them can be very time-consuming depending on the available tools, expertise and the desired accuracy of the result. With the advent of low-cost remotely controlled quadcopters, and other multiple rotor designs, light-weight digital cameras can now be used to capture digital aerial images in a controlled and efficient manner.

In this presentation, we will discuss the workflow required to capture, correct and stitch images from a UAV to form high resolution georeferenced panoramic images that can be displayed in Google Earth or a Flash Player.

12:10 - 12:40

Andrew Leahy
(University of Western Sydney)
UWS Wonderama

I would like to introduce Wonderama, a panoramic tile-wall vis rig that we’ve been experimenting with at the University of Western Sydney.

Wonderama is an example of a panoramic vis-wall where flat displays are arranged in an arc. It was built for zero cost using spare equipment as a proof-of-concept and as a demonstration/development system for immersive data visualisation. The rig is used as a software development platform by UWS technical staff, B.CompSci and B.ICT undergrads and Google Summer of Code students. Wonderama has been highly popular amongst the students and has garnered interest from senior researchers for data vis and outreach.
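As a rough sketch of the geometry behind such an arc of flat panels: if each display is yawed by its own horizontal field of view relative to its neighbour, the wall tiles a seamless panorama. The numbers and function name below are illustrative, not taken from the Wonderama configuration:

```python
def panel_headings(num_panels, panel_fov_deg):
    """Yaw angle (degrees) for each display in an arc-arranged wall.
    Panels butt together, so each is rotated by one panel's horizontal
    field of view relative to its neighbour, centred on zero."""
    offset = (num_panels - 1) / 2.0
    return [(i - offset) * panel_fov_deg for i in range(num_panels)]

# Five 30-degree panels cover a 150-degree panorama:
print(panel_headings(5, 30.0))  # [-60.0, -30.0, 0.0, 30.0, 60.0]
```

Each heading would then drive a separate view frustum (e.g. in a ClusterGL or Liquid Galaxy style multi-instance setup).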

Additional Wonderama Links:
Liquid Galaxy http://code.google.com/p/liquid-galaxy/
Wonderama http://code.google.com/p/wonderama/ (public wiki, which is woefully out of date)
ClusterGL2 http://code.google.com/p/clustergl2/

12:40 - 12:45

Tomasz Bednarz (CSIRO Mathematics, Informatics and Statistics) Demoscene 2011 – part 1

A showcase of the best demoscene productions of 2011: real-time art visualisation and more.

12:45 - 13:15


SESSION 3 Chaired by Luke Domanski

13:15 - 13:45

Derek Gerstmann (International Centre for Radio Astronomy Research & University of Western Australia) A Practical Visualization Strategy for Large-Scale Supernovae CFD Simulations

In this talk, we describe a practical approach we’ve developed which enables explorative visualisation for studying large-scale time-series astrophysical CFD simulations. This is part of an ongoing data-intensive research project within our group to support the visualization of large-scale astrophysics datasets for the scientists at the International Centre for Radio Astronomy Research (ICRAR).

In particular, we discuss the application of progressive stochastic sampling and adjustable workloads to ensure a consistent response time and a fixed frame rate, guaranteeing interactivity. The user is permitted to adjust all rendering parameters while receiving continuous visual feedback, facilitating explorative visualisation of our complex volumetric time-series datasets.
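The fixed-frame-rate idea can be sketched as a simple feedback loop over the per-frame sample budget. This is a generic proportional controller with an illustrative stand-in workload, not the actual renderer described in the talk:

```python
import random
import time

def adaptive_frame_loop(render_samples, target_ms=33.3, frames=50):
    """Progressive rendering controller: after each frame, rescale the
    per-frame sample budget by (target time / measured time) so the
    frame rate stays near the target while image quality accumulates."""
    budget = 1000  # initial samples per frame (arbitrary starting point)
    history = []
    for _ in range(frames):
        t0 = time.perf_counter()
        render_samples(budget)          # draw `budget` stochastic samples
        elapsed_ms = (time.perf_counter() - t0) * 1000.0
        history.append(budget)
        # Proportional update, clamped to avoid wild oscillation.
        scale = target_ms / max(elapsed_ms, 1e-3)
        budget = max(100, min(1_000_000,
                              int(budget * min(max(scale, 0.5), 2.0))))
    return history

def fake_renderer(n):
    # Stand-in workload: cost grows roughly linearly with sample count.
    acc = 0.0
    for _ in range(n):
        acc += random.random()
    return acc

budgets = adaptive_frame_loop(fake_renderer, target_ms=5.0, frames=30)
print(budgets[0], budgets[-1])
```

In a real renderer the samples would be stochastic ray or point samples accumulated into the framebuffer between frames.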

13:45 - 14:15

Pawel Lachowicz
“Adaptive Wavelet Analysis and Turbulent Flow around Black-Holes”

Hot gas spiralling onto black holes undergoes dynamic changes in the strong gravitational field, which manifest as rapid time variability in the emitted radiation. First, we will explain the accretion process, the geometry and physics of the hot flow, and the observables. Next, we will use the variability of the X-ray emission as a diagnostic tool and apply digital signal processing techniques to visualise the dramatic course of events in the flow. Finally, we will show how to map and interpret the signatures of accretion using adaptive wavelet methods.
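To give a flavour of wavelet-based variability analysis, here is a toy Haar decomposition that reports power per timescale; a rapidly flickering light curve concentrates its power at the finest scale. This is a minimal sketch, not the adaptive wavelet machinery of the talk:

```python
def haar_power_spectrum(signal):
    """Per-scale power from a Haar wavelet decomposition.
    Input length must be a power of two. Returns a list of mean
    squared detail coefficients, finest scale first."""
    powers = []
    approx = list(signal)
    while len(approx) > 1:
        # Detail = half-difference of neighbouring pairs, approx = mean.
        details = [(approx[i] - approx[i + 1]) / 2.0
                   for i in range(0, len(approx), 2)]
        approx = [(approx[i] + approx[i + 1]) / 2.0
                  for i in range(0, len(approx), 2)]
        powers.append(sum(d * d for d in details) / len(details))
    return powers

# A light curve flickering every sample puts all power at the finest scale:
flicker = [1.0 if i % 2 == 0 else -1.0 for i in range(16)]
print(haar_power_spectrum(flicker))  # [1.0, 0.0, 0.0, 0.0]
```

Adaptive schemes refine this idea by choosing the basis and scales according to the local structure of the signal.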

14:15 - 14:45

Justin Baker and Peter Tyson
(CSIRO Information Management & Technology)
“Remote Visualisation in CSIRO”

Remote visualisation is a means of providing a local client – usually a desktop PC - with efficient virtualised access to dedicated 3D graphics hardware. Like other virtualisation technologies, remote visualisation systems are based on commodity hardware and the virtualisation is performed through a software layer. Different remote visualisation libraries are generally tied to specific operating systems.

CSIRO researchers primarily use either desktop Windows or Linux based systems for their visualisation work. For this reason, the chosen remote visualisation solution needed to support native applications for these two operating systems. A variety of remote visualisation technologies were investigated before settling on an architecture based on the open source projects VirtualGL and VizStack.

14:45 - 15:00

SESSION 4 Chaired by Con Caris

15:00 - 15:30

Ajay Limaye
(Australian National University)
“mahaDrishti - Massive Volume Exploration and Presentation Tool”

"Maha" stands for gigantic/massive in Sanskrit. mahaDrishti will be able to handle multi-gigabyte datasets with progressive rendering, allowing users to interactively explore such datasets even on 32-bit systems with a reasonably good graphics card (~1GB texture memory). It offers a new way of exploring 4D datasets: users will be able to perform full volume rendering as well as isosurface and slice viewing. mahaDrishti will supplement the existing Drishti application with the ability to explore datasets that the current version cannot handle. I plan to demonstrate exploration of multi-gigabyte datasets on my 32-bit laptop with 2GB RAM and an Nvidia FX 3800 (1GB) card.
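The progressive idea can be illustrated with a coarse-to-fine loading plan that only refines while the subsampled volume still fits a memory budget. The function names and the stride-halving strategy below are our own sketch, not mahaDrishti's actual scheme:

```python
def subsampled_bytes(shape, stride, voxel_bytes):
    """Size of a volume after keeping every stride-th voxel per axis."""
    n = 1
    for dim in shape:
        n *= (dim + stride - 1) // stride
    return n * voxel_bytes

def progressive_plan(shape, voxel_bytes, budget_bytes, start_stride=16):
    """Coarse-to-fine loading plan: halve the stride each step and stop
    refining once the next level would exceed the memory budget."""
    plan = []
    stride = start_stride
    while stride >= 1 and subsampled_bytes(shape, stride,
                                           voxel_bytes) <= budget_bytes:
        plan.append(stride)
        stride //= 2
    return plan

# An 8 GB volume (2048^3 x 1 byte) against a ~1 GB texture budget:
print(progressive_plan((2048, 2048, 2048), 1, 1 << 30))  # [16, 8, 4, 2]
```

The user sees a coarse rendering almost immediately, and each refinement pass replaces it with a higher-resolution one.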

15:30 - 16:00

Drew Whitehouse
(Australian National University)
Voluminous - The Cloud Volume Visualisation System

Volume visualisation is a method for visualising three dimensional data-sets. The technique is applied in a wide variety of research disciplines, particularly for visualising computer simulations and the results of 3D imaging (e.g. X-ray tomography and magnetic resonance imaging). Modern high performance computing and sensor technologies have meant there is a veritable fire hose of this data to assimilate, and many new “cloud” technologies are ideal for the task.
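At the core of most volume renderers is front-to-back emission/absorption compositing along each viewing ray, which can be sketched in a few lines (an illustrative textbook sketch, not Voluminous code):

```python
def composite_ray(samples):
    """Front-to-back emission/absorption compositing along one ray.
    Each sample is (colour, opacity); accumulation stops early once
    the ray is effectively opaque (early ray termination)."""
    colour, alpha = 0.0, 0.0
    for c, a in samples:
        colour += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:
            break
    return colour, alpha

# An opaque white sample hides everything behind it:
print(composite_ray([(1.0, 1.0), (0.5, 1.0)]))  # (1.0, 1.0)
```

A cloud-hosted system repeats this per-ray loop for every pixel, which is why the workload parallelises so naturally across many machines.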

YouTube video of Voluminous.

16:00 - 16:30

Anna Ceguerra and Simon Ringer
(University of Sydney)
“Monte Carlo for Atom Probe Tomography data, using the GM-SRO”

The atom probe has the capacity to capture data for millions of atoms in minutes to hours. The captured data offers the highest combination of chemical and spatial resolution of any current microscopy technique, giving us the ability to visualise atoms and their spatial relationship to one another on a large scale. However, there are known issues with the atom probe in that the detector does not detect 43% of the atoms, and the spatial resolution is imperfect (Geiser et al. 2007). These are open issues that do not yet have a practical solution in instrumentation. In this project, the aim is to fill in the “missing pieces” of data, using the information that is already available from the atom probe experiment.
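The 57% detection efficiency implied above can be explored with a toy Monte Carlo simulation: because atoms are lost independently of species, the observed composition remains an unbiased estimate of the true one even though more than 40% of the data is missing. The code below is our own illustrative sketch, not the GM-SRO method of the talk:

```python
import random

def detected_fraction(atoms, efficiency=0.57, seed=1):
    """Simulate the atom probe detector: each atom is independently
    recorded with the given efficiency (43% of atoms are lost).
    Returns the observed composition as species fractions."""
    rng = random.Random(seed)
    counts = {}
    for species in atoms:
        if rng.random() < efficiency:
            counts[species] = counts.get(species, 0) + 1
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}

# A 90/10 alloy: the detector loses atoms, but being unbiased per
# species, the observed composition stays close to the true one.
alloy = ["Al"] * 90_000 + ["Cu"] * 10_000
obs = detected_fraction(alloy)
print(round(obs["Cu"], 3))
```

The hard part, which the talk addresses, is recovering spatial correlations (short-range order), not just bulk composition.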

16:30 - 17:00

Phillip Gough and Jonathan Mcewan “Encouraging Play with Interactive Visualisation of Live Data”

This presentation will introduce the interactive artwork City___, and discuss the research, methodology, conceptualisation and development behind interactive artistic data visualisation.
City___ is a modular, generative and interactive artwork designed to prompt play in the urban space. The artwork visualises data aggregated from online services as beautiful streams of colour in a fluid dynamic simulation.
City___ has three modes of interaction: passive, reactive and interactive. As each mode is discovered, more meaning can be extracted from the artwork. This allows users to engage with the artwork on multiple levels. Curious users are rewarded with a greater influence on the artwork and, through exploration, have more information presented to them.

And, as is tradition, we head out afterwards for the "unofficial dinner", from 1800 until late :-)

Presentations on 25th November 2011 @ University of Sydney, Civil Engineering Building, 1st Level:

8:30 - 9:00

Registration, coffee / tea

9:00 - 9:40

Olivier Salvado
(CSIRO ICT Centre)
"Analysis and visualization of medical imaging"

Medical imaging is a vital clinical and research tool that has made significant strides in the last few decades, with impressive advances in acquisition technology. However, image interpretation in the clinical setting is mostly performed manually, as very few automated methods have been approved by regulators. In recent years, major advances have been made that allow robust and reliable automatic image analysis, understanding, and interpretation. This talk will present the challenges faced by the research community in sharing analysis software and visualization tools for 3D images, and the desirable features of such tools. Examples will be given of open source software platforms that include advanced visualization and analysis, along with cloud computing facilities.

SESSION 5 Chaired by Pascal Vallotton

9:40 - 10:10

Seán O’Donoghue
(CSIRO CMIS, Garvan Institute)
Visualising Biological Data: Challenges & Perspectives

Experimental methods in biological research are delivering data of rapidly increasing volume and complexity. Unfortunately, many current methods and tools used to visualise and analyse these data are inadequate, and urgent improvements are needed if life scientists are to gain insight from this data deluge, rather than being overwhelmed by it.

I will present the current outcomes of two recent, international community initiatives to increase the prominence of data visualisation and usability in computational biology; these initiatives (http://vizbi.org/ and http://biovis.net/) seek to bring visualisation experts together with computational biologists, bioinformaticians, graphic designers, animators, and medical illustrators. I will also illustrate how the application of such visualisation and usability principles can have a significant impact on biological research, with examples from my own research on macromolecular structures, systems biology, and literature mining (http://odonoghuelab.org/).

10:10 - 10:40

Dirk Van Der Knijff, Bernard Meade,
Richard Collmann (University of Melbourne, VeRSI)
“HD3D Telemedicine”

The HD3D telemedicine project is a cross-disciplinary collaboration between the Centre for Informatics and Applied Optimisation at the University of Ballarat (UB), IBES, VeRSI, the Melbourne Dental School, ITS Research Services, and the Department of Psychiatry at the University of Melbourne (UoM), and 20 health care groups in Melbourne and Western Victoria.

The full project consists of four proof-of-concept projects to test and trial innovative ICT hardware/software to be used for the tele-assessment, -diagnosis and -follow-up of patients located at a distance from the relatively small number of highly-trained clinical specialists in aged care/geriatric services, dentistry, oncology, wound management, and psychiatry. The sub-projects are Home-Care, to trial the use of HD3D cameras in the patient’s home; Mind-Care, to trial the use of HD3D units to provide better access to specialised neuropsychiatric assessments; Aged-Care, to trial and model general and specialist healthcare support at the Heritage Lakes Aged Care centre; and Bush-Care, to trial provision of specialist cancer care to patients at the Nhill and Horsham Hospitals.

In this project, the University of Melbourne, VeRSI and IBES will be providing technical expertise to assemble and trial the equipment and assist in the software development. In this talk we will introduce the project and then describe the equipment and the results of some initial tests we have done.

10:40 - 11:00


SESSION 6 Chaired by Justin Baker

11:00 - 11:30

Chuong Nguyen, Paul Jackway, Ron Li, Changming Sun, David Lovell, Xavier Sirault, Scott Berry, Robert Furbank, Anthony Paproki, Jurgen Fripp and John La Salle
Image analysis for accelerated plant phenomics

Feeding the world and preserving its diversity of life are two incredibly challenging tasks where technology has a critical role to play. Current progress in understanding, discovering and identifying living species has to be accelerated significantly, but getting information from the physical world into the digital domain is a major impediment. However, computer vision and medical imaging provide more effective approaches to acquiring raw data from samples and processing and transforming the data into a more useful form of information. By adopting and adapting these approaches, larger volumes of data can be obtained more rapidly and more accurately. Accelerated Phenomics represents a new research activity that utilises contemporary computer vision, image analysis and pattern recognition technology to speed up the process of species discovery and classification. This talk will focus on the image processing and 3D reconstruction aspects of Accelerated Phenomics for plants, analyse current technical challenges and present initial results achieved through the collaborative efforts of our groups.

11:30 - 12:00

Stuart Ramsden, Stephen Hyde and Vanessa Robins
(Australian National University)
“EPINET: Euclidean Patterns in Non-Euclidean Tilings - Extending the Structure Zoo”


We present the latest work in an ongoing investigation into the links between 2D non-Euclidean hyperbolic tilings and nets in 3D Euclidean space. The algorithm for enumerating a simple subset of these structures was outlined in a previous paper, and our most recent results generalise the approach to a large family of related examples.

12:00 - 12:30

Rob Manson (MobLabs) The web can now see and hear and ...

New sensor-based Web Standards developments have punched a hole in the web that is letting the real world leak into the browser. The getUserMedia API now lets us access cameras and microphones, and JSARToolKit and the JavaScript-based Natural Feature Tracking from ICG at Graz University of Technology have shown that browsers can now be taught to perceive the world around them. Combining this with <canvas> and WebGL gives us a real working model for Web Standards based Augmented Reality. On top of this we also have the OGC's Sensor Web Enablement and new developments like the Sensor API and the rapid spread of networked sensors. Massively distributed and dynamic immersive visualisation is now the new structural form for the modern web.

12:30 - 12:40

Tomasz Bednarz (CSIRO Mathematics, Informatics and Statistics) Demoscene 2011 – part 2

A showcase of the best demoscene productions of 2011: real-time art visualisation.

12:40 - 13:10


SESSION 7 Chaired by Dadong Wang

13:10 - 13:40

Matt Adcock and Chris Gunn
(CSIRO Information & Communication Technologies)
“Annotating with ‘Sticky’ Light for Remote Guidance”

A worker performing a physical task may need to ask for advice and guidance from an expert. This can be a problem if the expert is in some distant location. We describe a system which allows the expert to see the workplace from the worker’s point of view, and to draw annotations directly into that workplace using a laser pico projector. Since the system is worn by the worker, these projected annotations would otherwise move with the worker’s movements. We describe a method for sticking these annotations to their original positions on the respective physical objects, thereby compensating for the movement of the worker.
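The "sticking" idea amounts to re-projecting a world-anchored point through the worker's current pose each frame. Here is a minimal 2-D sketch of that compensation; the real system of course involves full 3-D pose tracking and projector calibration, and the names here are illustrative:

```python
import math

def world_to_projector(point, pose):
    """Map a 2-D world point into the worker-mounted projector frame.
    pose = (x, y, heading): projector position and orientation in
    world coordinates. Applying the inverse rigid transform each frame
    keeps the annotation 'stuck' to the object as the worker moves."""
    px, py, heading = pose
    dx, dy = point[0] - px, point[1] - py
    c, s = math.cos(-heading), math.sin(-heading)
    return (c * dx - s * dy, s * dx + c * dy)

anchor = (2.0, 0.0)                                  # annotation on an object
print(world_to_projector(anchor, (0.0, 0.0, 0.0)))   # facing it head-on
print(world_to_projector(anchor, (1.0, 0.0, 0.0)))   # worker steps closer
```

As the pose changes, the annotation's projector-frame coordinates change in exactly the way needed to cancel the worker's motion.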

13:40 - 14:10

Phillip Gough and Adityo Pratomo
(University of Sydney)
Interactive Data Visualisation with a Tangible User Interface

This presentation will introduce an interactive data visualisation from Reefs on the Edge, a transmedia art installation. We will discuss how Reefs on the Edge presents an abstract data visualisation, approached from an art perspective, to successfully engage an audience with scientific data on how sea surface temperatures affect the survival of corals in the southern Great Barrier Reef.

Using a Tangible User Interface to control the simulation, users are able to explore and engage with the information. This talk also presents key concepts of developing a tangible user interface, using technologies such as Arduino and reacTIVision to build an interactive table.

14:40 - 17:00

Trip to the Powerhouse Museum (http://www.powerhousemuseum.com/)


Accelerated Computing Workshop at OzViz 2011

After the success of the 2010 OzViz OpenCL(TM)* Workshop, and based on helpful feedback from more than 60 attendees, we will expand the workshop this year to cover both OpenCL and CUDA, as well as invited talks. The 2011 OzViz Accelerated Computing Workshop aims to bring together practitioners and enthusiasts interested in heterogeneous computing and compute-assisted visualisation.

Last year's website and presentations:

Program Committee:
CSIRO: Tomasz Bednarz, John Taylor, Luke Domanski, Sam Moskwa, 
UWA/ICRAR: Derek Gerstmann,
NVIDIA: Mark Harris.

The Accelerated Computing Workshop will provide attendees with an intensive one-day forum covering both a general introduction and more advanced topics, showcasing CUDA and OpenCL as modern massively-parallel programming environments. Speakers from both industry and academia will discuss a range of subjects, including core fundamentals, hardware architectures, parallel programming, workload scheduling and device-specific optimizations.

Possible topics:
Introduction to CUDA/OpenCL and specification overview, parallel architectures, memory models, APIs and compute languages, high-level API bindings, graphics interoperability, visual computing applications, parallel primitives, numerical simulations, and various other examples.

The detailed workshop program will be announced soon.

The Accelerated Computing Workshop will be held on 23rd November 2011 at CSIRO's Riverside Life Sciences Centre (RLSC):

The CSIRO's RLSC is located at 11 Julius Avenue, North Ryde NSW 2113, Sydney, Australia, see here, GoogleMaps location see here.

Access from the CBD is very easy; depending on your location, check the train planner here or the 131500 Transport Infoline. Search for the Northern Line, City to Hornsby or Epping (via North Ryde, Macquarie University); North Ryde station is ~200m from the CSIRO's RLSC.

Presentations on 23rd November 2011 @ CSIRO RLSC, Lecture Theatre:

8:30 - 9:00

Registration, coffee / tea

9:00 - 9:15

Tomasz Bednarz

9:15 - 10:15

Prof. Takayuki Aoki KEYNOTE 1
Large-scale CFD applications on GPU-rich supercomputer TSUBAME2.0

GPUs offer high performance in both computation and memory bandwidth, making them well suited to CFD applications. The TSUBAME 2.0 supercomputer, equipped with 4224 NVIDIA Tesla M2050 GPUs, has been in operation since November 2010 at the Tokyo Institute of Technology.

For GPU computing, we have rewritten the entire code of ASUCA, a high resolution meso-scale atmosphere model being developed by the Japan Meteorological Agency for its next-generation weather forecasting service. Using 3996 GPUs on TSUBAME 2.0, we achieved extremely high performance of 145 TFLOPS in single precision on a 14368×14284×48 mesh. We will also show gas-liquid two-phase flows and results from the Lattice Boltzmann Method. Recently, we achieved 1.017 PFLOPS for a metal dendritic solidification simulation by solving the phase-field model on 4000 GPUs, and the paper has been nominated as one of five Gordon Bell Award finalists at the SC'11 conference in November 2011.
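A quick sanity check on these figures, assuming the work is spread evenly across GPUs and taking NVIDIA's published single-precision peak for the Tesla M2050 (~1.03 TFLOPS) as reference:

```python
# Per-GPU throughput implied by the ASUCA run (assumptions: even work
# distribution; ~1030 GFLOPS single-precision peak per Tesla M2050).
total_tflops = 145.0
gpus = 3996
peak_per_gpu_gflops = 1030.0

per_gpu_gflops = total_tflops * 1000.0 / gpus
efficiency = per_gpu_gflops / peak_per_gpu_gflops

print(f"{per_gpu_gflops:.1f} GFLOPS per GPU, "
      f"{100 * efficiency:.1f}% of single-precision peak")
```

Sustained rates of a few percent of peak are typical for memory-bandwidth-bound stencil codes such as atmospheric models.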

If possible, we also would like to demonstrate stereoscopic visualizations by NVIDIA 3D VISION.

10:15 - 10:35


SESSION 1 Chaired by Sam Moskwa

10:35 - 11:10

Luke Domanski “GPUs and CUDA Fundamentals”

11:10 - 11:45

Luke Domanski “High Performance Image Analysis Using GPUs”

11:45 - 12:55

Derek Gerstmann OpenCL by Example

OpenCL is an open, royalty-free, cross-platform standard specifically designed for general purpose parallel programming of heterogeneous systems, including modern desktop and workstation class multi-core processors (CPUs), graphics processing units (GPUs), and other accelerators such as Cell, ARM and digital signal processors (DSPs).

This talk will provide an example-driven presentation for a wide range of OpenCL-related topics, including the fundamental API + kernel language, the event model, graphics interoperability with OpenGL + DX, as well as an overview of the current state of the standard and what is being addressed by the Khronos organisation.
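The heart of the OpenCL model, one kernel invocation per work-item indexed by get_global_id, can be mimicked in plain Python for illustration. This is a sequential toy with none of OpenCL's parallelism, device buffers or events:

```python
def run_kernel(kernel, global_size, *buffers):
    """Toy emulation of the OpenCL execution model: the kernel body is
    invoked once per work-item, receiving its global id much as an
    OpenCL C kernel would call get_global_id(0). A real device runs
    these invocations in parallel; here they simply loop."""
    for gid in range(global_size):
        kernel(gid, *buffers)

def vec_add(gid, a, b, out):
    # OpenCL C equivalent: out[i] = a[i] + b[i]; i = get_global_id(0)
    out[gid] = a[gid] + b[gid]

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * 4
run_kernel(vec_add, 4, a, b, out)
print(out)  # [11.0, 22.0, 33.0, 44.0]
```

The talk's real examples cover what this toy omits: contexts, command queues, buffer transfers, work-groups and the event model.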

OpenCL Implementations:
- http://developer.apple.com/search/index.php?q=opencl
- http://developer.amd.com/zones/OpenCLZone/
- http://software.intel.com/en-us/articles/opencl-sdk
- http://developer.nvidia.com/opencl
- http://opencl.snu.ac.kr/

WebCL Prototypes:
- http://webcl.nokiaresearch.com
- http://code.google.com/p/webcl/

OpenCL Books
- http://www.amazon.com/OpenCL-Programming-Guide-Aaftab-Munshi/dp/0321749642
- http://www.amazon.com/OpenCL-Action-Accelerate-Graphics-Computations/dp/1617290173/
- http://www.amazon.com/Heterogeneous-Computing-with-OpenCL-ebook/dp/B005JRHYUS
- http://www.fixstars.com/en/opencl/book/

12:55 - 13:45


SESSION 2 Chaired by Derek Gerstmann

13:45 - 14:35

Suraj Pandey “Experience and Practice: Workflows for Scalable Executions of Scientific Applications”

A workflow model helps simplify the logical representation of the underlying complex data and control dependencies of most scientific applications. This inherently assists application scientists in managing and accelerating computations in a scalable manner. In this presentation, we will discuss two applications, as case studies, that are modelled as workflows and executed on Clouds for scalability: a) fMRI image registration (including visualisation), and b) gravitational wave search. We will present general ideas on transforming complex batch processes into workflow models; selecting standard representations and languages for them; using generic middleware for managing workflow execution; and finally using Cloud platforms for scalable execution. We will also discuss the underlying technologies used and their shortfalls, and conclude with a discussion of challenges and future work.
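The scheduling core of such a workflow engine is a topological traversal of the task dependency graph. Below is a minimal sequential sketch using Kahn's algorithm, with illustrative task names; a real engine would dispatch the ready set to cloud resources in parallel:

```python
from collections import deque

def run_workflow(tasks, deps):
    """Run a workflow given tasks: name -> callable and
    deps: name -> list of prerequisite task names.
    Tasks execute in topological order (Kahn's algorithm)."""
    indeg = {t: len(deps.get(t, [])) for t in tasks}
    children = {t: [] for t in tasks}
    for t, prereqs in deps.items():
        for p in prereqs:
            children[p].append(t)
    ready = deque(t for t, d in indeg.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        tasks[t]()                      # a real engine submits this remotely
        order.append(t)
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    if len(order) != len(tasks):
        raise ValueError("cycle in workflow")
    return order

log = []
tasks = {name: (lambda n=name: log.append(n))
         for name in ["fetch", "register", "stats", "render"]}
deps = {"register": ["fetch"], "stats": ["register"], "render": ["register"]}
print(run_workflow(tasks, deps))  # ['fetch', 'register', 'stats', 'render']
```

Everything in the ready queue at any moment is independent, which is exactly the parallelism a Cloud scheduler exploits.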

14:35 - 15:10

Sam Moskwa
“Australia's Road to Exa-scale”

Today the fastest supercomputers are known as Petascale systems, capable of one quadrillion floating point operations per second. The US Department of Energy aims to achieve Exascale systems, a thousand-fold increase, by 2018.

In July 2011 the Japanese K computer sat at the top of the list of the 500 fastest supercomputers worldwide, producing 64 times the compute of Australia's largest system, the "Vayu" cluster at the National Facility. The Japanese economy produces only 5-6 times the GDP of Australia, so we should ask whether it is a failure of ambition, a lack of expertise, or a lack of investment holding us back. However, the news is not all bad. In 2012 Australia will enter the Petascale era with a new system at the National Facility. The following year a second Petascale system will be installed in the Pawsey Centre as part of the Australian Square Kilometre Array Pathfinder project. Should Australia succeed in its bid to host the international Square Kilometre Array, it may become one of the first countries to host an Exascale facility.

15:10 - 15:30

SESSION 3 Chaired by Luke Domanski

15:30 - 16:15

Sam Moskwa “What I saw at SC11”

SC (formerly Supercomputing) is the International Conference for High Performance Computing, Networking, Storage, and Analysis. SC'11 will be held in Seattle on November 12-18, where the major hardware and software vendors and 10,000 attendees will converge to discuss all things HPC. With GPUs continuing their role as a disruptive technology, NVIDIA CEO Jen-Hsun Huang will deliver the opening keynote. In answer, Intel will be aggressively promoting their alternative to GPUs: the Many Integrated Core architecture.

Expected topics in "What I saw at SC11" will include vendor roadmaps, parallel programming models, and challenges facing the HPC community.

16:15 - 16:45

All Open Discussion

*OpenCL and the OpenCL logo are trademarks of Apple Inc., used by permission by Khronos.