Traditional research on intelligent video analytics has primarily focused on video analysis from fixed overhead cameras, where techniques such as background modeling are commonly used for moving object detection.
More recently, wearable visual sensors and cameras mounted on aerial and ground vehicles have become increasingly affordable and widely available, giving rise to new forms of visual sensing based on moving cameras. For example, dash cams mounted in police vehicles are used for license plate recognition; police officers are starting to use body-worn cameras in patrol operations; and drones are gaining significant popularity in a variety of applications, including law enforcement. These mobile devices are expanding the scope of video analytics well beyond traditional static cameras, providing quicker and more effective means of crime fighting, such as wide-area monitoring for civil security and crowd analytics for large gatherings and sports events. Combining stationary and moving cameras enables new capabilities in video analytics, at the intersection of Wearables, the Internet of Things, Smart Cities, and sensing.
The goal of this workshop is to bring together researchers working on intelligent video analytics from moving cameras (body cams, dash cams, drones, and other UAVs) to discuss emerging technologies at the intersection of these areas, as well as their societal implications.
We invite contributions on the topic of video analytics, with special emphasis on video analysis from moving cameras. More specifically, topics of interest include, but are not limited to:
University of Central Florida
University of California, Riverside
Stony Brook University / Google Brain
Kitware, Inc.