Co-located with ICME 2015, Torino, Italy, 3 July 2015

  • Workshop program published
  • Submission deadline extended
  • Workshop registration is now open
  • Keynote lecture announced
  • Workshop features a demo session to promote applied research and interactions between academia and industry

Important deadlines:

 Paper Submission:  10 April 2015 (extended from 30 March 2015; firm)
 Author Notification:  30 April 2015
 Camera-Ready Upload: 15 May 2015 
 Workshop Day:  3 July 2015

 Keynote Lecture: Feature-preserving image and video compression 

Eckehard Steinbach
Technical University of Munich


In many mobile visual analysis scenarios, compressed images are transmitted over a communication network for analysis at a server. Often, the processing at the server includes some form of feature extraction and matching. It has been shown that image or video compression has an adverse effect on the quality of the features that are extracted at the server and, as a consequence, also negatively impacts the performance of visual analysis algorithms. In this talk, we discuss various solutions that address this problem and lead to significantly improved feature quality even for low-bitrate image or video coding. We also compare the proposed schemes to alternative solution strategies that are based on explicit feature compression or patch-based feature communication.
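The effect the talk targets can be sketched with a toy example (our illustration, not material from the talk): coarse uniform quantization stands in for lossy compression, and "features" are simply points where the signal gradient exceeds a threshold. Weak edges that a detector finds in the original signal disappear after quantization, which is the kind of feature degradation the keynote addresses.

```python
# Toy sketch (assumption: simplified stand-ins, not the talk's actual methods).
# Uniform quantization models lossy coding; gradient peaks model keypoints.

def quantize(signal, step):
    """Uniform quantization: a crude stand-in for lossy compression."""
    return [round(x / step) * step for x in signal]

def keypoints(signal, threshold=0.5):
    """Indices where the absolute gradient exceeds a threshold."""
    return [i for i in range(1, len(signal))
            if abs(signal[i] - signal[i - 1]) > threshold]

# Synthetic "scanline" with both weak (0.6) and strong (>= 1.4) edges.
scanline = [0, 0, 0.6, 0.6, 0.6, 2.0, 2.0, 2.6, 2.6, 4.0, 4.0]

kp_original = keypoints(scanline)                   # finds all four edges
kp_coarse = keypoints(quantize(scanline, step=2.0)) # weak edges vanish

print(len(kp_original), len(kp_coarse))  # prints: 4 2
```

The coarse quantizer flattens the weak 0.6-amplitude edges to zero, so half of the detected keypoints are lost even though the strong structure survives. Feature-preserving compression aims to spend bits where such detector responses would otherwise be destroyed.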
Biography: Eckehard Steinbach is a Professor for Media Technology at the Technical University of Munich (TUM). His research interests are in the area of visual-haptic information processing and communication, 3D image analysis and synthesis, indoor localization, and networked and interactive multimedia systems.

Dr. Steinbach has served on various conference committees, e.g., as co-chair of VCIP 2001, VMV 2003, the 1st Int. Workshop on Mobile Video (in conjunction with ACM Multimedia 2007), Hot3D 2011 (in conjunction with ICME 2011), the IEEE Packet Video Workshop 2012, IEEE HAVE 2012, and Hot3D 2015 (in conjunction with ICME 2015). In addition, he has served as program co-chair for the systems track at ACM Multimedia 2009, the IEEE International Workshop on Multimedia Signal Processing 2010, and ICME 2014.

He has been a guest editor for the special issues on “Multimedia over IP and Wireless Networks” and “Advanced Video Technologies and Applications for H.264/AVC and Beyond” of the EURASIP Journal on Applied Signal Processing, the special issue on “Multimedia Applications in Mobile/Wireless Context” of the IEEE Transactions on Multimedia, and the special issue on “Wireless Video Transmission” of the IEEE Journal on Selected Areas in Communications. Since 2006 he has served as an Associate Editor of the IEEE Transactions on Circuits and Systems for Video Technology (CSVT), and since 2011 he has also been an Associate Editor of the IEEE Transactions on Multimedia. From 2008 to 2011 he was a member of the IEEE Multimedia Signal Processing (MMSP) Technical Committee, and until the end of 2010 he served as a member of the ICME Steering Committee. In March 2005 he was appointed as a guest professor at the Sino-German School for Postgraduate Studies (CDHK) at Tongji University in Shanghai.

Dr. Steinbach and his team have received several best paper, best student paper, and best poster awards for their work. He is the recipient of the 2011 “Research Award” of the Alcatel-Lucent Foundation, and he was elected Fellow of the IEEE in 2015 for his contributions to visual and haptic communications.

Workshop description:

Traditional visual analysis algorithms have mostly been studied in a centralized scenario where all visual data is processed at a central location. However, emerging applications such as mobile visual search, wireless camera networks, and mobile augmented reality face tight constraints on computational power, energy, and bandwidth that call for a radically different solution. To enable a paradigm shift from centralized to distributed visual processing, challenges in computational efficiency, feature representation, energy consumption, data compression, and object detection and tracking must be addressed. In addition, since the scale of visual data is massive, efficient representation methods and collaboration among the distributed entities are necessary to achieve rapid visual processing, large-scale storage, and flexible/scalable visual analysis.

GreenEyes project