Notification of paper acceptance has been extended to Nov. 7, 2016 due to the high number of submissions.
High dimensionality is inherent in applications involving text, audio, images, and video, as well as in many biomedical applications involving high-throughput data. Many applications involving relational or network data also produce massive high-dimensional data sets. A wide range of approaches are available to deal with the challenges of processing and analyzing such data sets, including methods for "large p, small n" settings, dimensionality reduction, clustering, manifold learning, and random projections. Such approaches are crucial for ensuring statistical reliability, revealing and visualizing structure hidden by high dimensionality and noise, and reducing the computational and storage burden.
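To make one of the approaches above concrete, the sketch below illustrates a Gaussian random projection, which reduces dimensionality while approximately preserving pairwise distances (the Johnson–Lindenstrauss lemma). This is a minimal, self-contained example using NumPy; the dimensions and seed are illustrative choices, not tied to any particular submission or system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-dimensional data: n = 100 samples, p = 10,000 features
# (a "large p, small n" setting); project down to k = 500 dimensions.
n, p, k = 100, 10_000, 500
X = rng.standard_normal((n, p))

# Gaussian random projection matrix with entries ~ N(0, 1/k), so that
# squared pairwise distances are preserved in expectation.
R = rng.standard_normal((p, k)) / np.sqrt(k)
X_reduced = X @ R  # shape (n, k)

def squared_pairwise_distances(A):
    """Matrix of squared Euclidean distances between all rows of A."""
    sq = (A * A).sum(axis=1)
    return sq[:, None] + sq[None, :] - 2 * A @ A.T

orig = squared_pairwise_distances(X)
proj = squared_pairwise_distances(X_reduced)

# Compare distances over all off-diagonal pairs: the ratio should
# concentrate around 1 when k is large enough.
mask = ~np.eye(n, dtype=bool)
ratio = proj[mask] / orig[mask]
print(X_reduced.shape, float(ratio.mean()))
```

Because the projection is data-independent, it is cheap to apply in streaming or distributed settings: each worker only needs the shared random matrix (or its seed) to project its partition of the data.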
The purpose of this workshop is two-fold: to highlight novel research addressing high dimensionality, and to bring together prominent researchers and practitioners in this aspect of big data analysis. Dual keynote talks, one from academia and one from industry, emphasize the importance of bridging the gap between state-of-the-art research and practical applications.
The workshop's interests range from applications involving high-dimensional data to the theoretical aspects of the problem. Of particular interest are techniques that exploit data-parallel/graph-parallel platforms to effectively handle truly large-scale real-world problems, as well as techniques that improve memory efficiency, which is at a premium in streaming and distributed environments.