Workshop in conjunction with the IEEE BigData Conference 2015

High dimensionality is inherent in applications involving text, audio, images, and video, as well as in many biomedical applications involving high-throughput data. Many applications involving relational or network data also produce massive high-dimensional data sets. A wide range of approaches is available to deal with such problems, including methods for "large p, small n" settings, dimensionality reduction, clustering, manifold learning, and random projections. Such approaches are crucial for ensuring statistical reliability, revealing and visualizing structure hidden by high dimensionality and noise, and reducing the computational and storage burden.

The purpose of this workshop is two-fold: first, to highlight novel research addressing high dimensionality; and second, to bring together prominent researchers and practitioners working on this aspect of big data analysis. The two keynote talks, one from academia and one from industry, emphasize the importance of building bridges between state-of-the-art research and practical applications.

The workshop's interests range from applications involving high-dimensional data to the theoretical aspects of the problem. There is particular interest in techniques that take advantage of parallel platforms to effectively handle truly large-scale real-world problems, and in techniques that improve memory efficiency, which is at a premium in streaming and distributed environments.

Oct 29 - Nov 1, 2015 @ Santa Clara, CA, USA