IEEE AVSS 2016 Workshop on
Surveillance for Location-aware Data Protection

Description

Location-aware data protection refers to controlling access to data by location -- for example, specifying that a file can only be opened if the user's device is inside a particular building, or that a medical terminal can only be used while an authorized staff member is standing in front of it. This workshop will bring together researchers in video- and signal-based surveillance and data privacy to address challenges such as the following (a minimal code sketch illustrating the first two items appears after the list):

  • Cross-validation of presence in location from multiple signal sources (e.g. GPS, cellular, WiFi, Bluetooth, and webcams/surveillance video).
  • Location accuracy/adjustment based on fusion/cross-validation of multiple signal sources.
  • Access control based on location.
  • Data protection based on location with verification, e.g. for HIPAA enforcement.
  • High-accuracy geo-fencing services, e.g. to improve accuracy in tracking animals or people within specified areas.
  • Privacy-preserving group member location service within an organization, e.g. enabling staff to locate a manager or students in an emergency situation.
  • Location tracking across wireless / camera networks (hand-off between beacons / cameras).
  • Comparison / fusion of inside-out approaches (radio / camera self-localization) and outside-in approaches (detection / localization in radio / camera networks).
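
As a concrete illustration of the first two items above, here is a minimal Python sketch. All coordinates, thresholds, and function names are hypothetical, and the logic is only a toy stand-in for real fusion methods: it takes independent position estimates (e.g. from GPS, WiFi, and Bluetooth) and grants access only when enough of them place the device, including its uncertainty, inside a circular geofence.

    import math

    SITE = (38.8897, -104.8414)  # geofence center (lat, lon); example values
    FENCE_RADIUS_M = 50.0        # geofence radius in meters

    def distance_m(a, b):
        # Approximate great-circle distance in meters (haversine formula).
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371000 * math.asin(math.sqrt(h))

    def inside_fence(estimate):
        # An estimate is (lat, lon, uncertainty_m); be conservative and
        # require the whole uncertainty disc to lie inside the fence.
        lat, lon, sigma = estimate
        return distance_m((lat, lon), SITE) + sigma <= FENCE_RADIUS_M

    def grant_access(estimates, min_agreeing=2):
        # Cross-validate: at least `min_agreeing` independent sources
        # must place the device inside the fence before access is granted.
        return sum(inside_fence(e) for e in estimates) >= min_agreeing

    # Example: GPS and WiFi agree; the BLE estimate is too coarse to count.
    readings = [(38.8898, -104.8415, 10.0),   # GPS fix
                (38.8896, -104.8413, 15.0),   # WiFi fingerprint
                (38.8890, -104.8400, 80.0)]   # BLE beacon, large uncertainty
    print(grant_access(readings))             # True: two of three sources agree

Requiring agreement among independent sources is what makes such a check harder to spoof than any single signal, which is the core of the cross-validation topic above.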

This workshop will be held on Tuesday, August 23, 2016, in conjunction with the 2016 IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS), August 24-26, in Colorado Springs, CO.

Workshop Program

Tuesday, August 23, 2016

  • 13:30 Introduction
  • 13:40 Invited Talk: François Brémond, INRIA
    People localization across wireless / camera networks
  • 14:25 Invited Talk: Alessia Saggese, University of Salerno
    Embedded Vision: video analytics moving from the server to the edge
    The growing number of cameras deployed across the territory has led, over the last decades, to increasing interest from the scientific community in solutions able to automatically analyze a scene and identify events of interest. In recent years, this interest has been accompanied by the introduction of very powerful, low-cost, energy-efficient processors, which have made it possible to port video analytics to embedded systems. This powerful combination has allowed a move from traditional "server-side" applications to new and more interesting "edge-side" ones, designed and optimized to run directly on board cameras. This talk will delineate the challenges and limitations that edge-side applications must face, together with techniques for overcoming them. Furthermore, two edge-side algorithms, devoted respectively to people counting and gender recognition, will be presented.
  • 15:10 Coffee break
  • 15:30 Invited Talk: Senem Velipasalar, Syracuse University
  • 16:15 Invited Talk: Andrea Cavallaro, Queen Mary University of London
    Autonomous robotic cameras for collaborative target localization
  • 17:00 Panel discussion
  • 17:30 End of workshop

Invited Speakers

François Brémond

INRIA Sophia Antipolis, France

François Brémond is a Research Director at INRIA Sophia Antipolis. He created the STARS team on 1 January 2012, having previously headed the PULSAR INRIA team since September 2009. He obtained his Master's degree from ENS Lyon in 1992 and has conducted research in video understanding since 1993, both at Sophia Antipolis and at the University of Southern California (USC), Los Angeles. In 1997 he obtained his PhD degree from INRIA in video understanding, and he pursued this research as a postdoctoral researcher at USC on the interpretation of videos taken from UAVs (Unmanned Airborne Vehicles) in the DARPA VSAM (Visual Surveillance and Activity Monitoring) project. In 2007 he obtained his HDR degree (Habilitation à Diriger des Recherches) from Nice University on scene understanding: perception, multi-sensor fusion, spatio-temporal reasoning and activity recognition. He also co-founded the CoBTek team at Nice University on 1 January 2012 with P. Robert of Nice Hospital, studying behavioral disorders in older adults suffering from dementia.

He designs and develops generic systems for dynamic scene interpretation. The targeted class of applications is the automatic interpretation of indoor and outdoor scenes observed by sensors, in particular by monocular colour cameras. These systems detect and track mobile objects, which can be either humans or vehicles, and recognize their behaviours. He is particularly interested in filling the gap between sensor information (pixel level) and behaviour recognition (semantic level). François Brémond is author or co-author of more than 140 scientific papers published in international journals or conferences on video understanding. He is a reviewer for several international journals (CVIU, IJPRAI, IJHCS, PAMI, AIJ, EURASIP JASP, ...) and conferences (CVPR, ICCV, AVSS, VS, ICVS, ...). He has (co-)supervised 13 PhD theses. He is an EC INFSO and French ANR expert for reviewing projects. He has taught numerical classification at Nice University and video understanding at Master's level in an engineering school.

He has participated in 12 European projects (Esprit, ITEA, FP6, FP7: PASSWORDS, ADVISOR, AVITRACK, SERKET, CARETAKER, CANTATA, COFRIEND, VICOMO, VANAHEIM, SUPPORT, DEM@CARE), one DARPA project, 12 French projects (ANR, DGE, Prédit, TechnoVision, PACA, CG06, ...), several industrial research contracts (Bull, Vigitec, SNCF, RATP, ALSTOM, STMicroelectronics, Thales, Keeneo, LinkCareServices, Neosensys, ...) and several international cooperations (USA, Taiwan, UK, Belgium) in video understanding. For instance, he has succeeded in recognizing a large variety of scenarios in different applications: fighting, abandoned luggage, graffiti, fraud, crowd behavior in metro stations, in streets and onboard trains, aircraft arrival, aircraft refueling, luggage loading/unloading on airport aprons, bank attacks in bank branches, access control in buildings, office behavior monitoring for ambient intelligence, older adult activity monitoring for homecare applications, and wasp monitoring for biological applications. He has also participated in a series of ARDA workshops to build an ontology of video events.

Andrea Cavallaro

Professor of Multimedia Signal Processing, School of Electronic Engineering and Computer Science

Queen Mary University of London

Andrea Cavallaro is Professor of Multimedia Signal Processing and Director of the Centre for Intelligent Sensing at Queen Mary University of London, UK. He received his Ph.D. in Electrical Engineering from the Swiss Federal Institute of Technology (EPFL), Lausanne, in 2002. He was a Research Fellow with British Telecommunications (BT) in 2004/2005, and was awarded the Royal Academy of Engineering Teaching Prize in 2007; three student paper awards on target tracking and perceptually sensitive coding at IEEE ICASSP in 2005, 2007 and 2009; and the best paper award at IEEE AVSS 2009. Prof. Cavallaro is Area Editor for the IEEE Signal Processing Magazine and Associate Editor for the IEEE Transactions on Image Processing. He is an elected member of the IEEE Signal Processing Society Image, Video, and Multidimensional Signal Processing Technical Committee, and chair of its Awards Committee. He served as an elected member of the IEEE Signal Processing Society Multimedia Signal Processing Technical Committee, as Associate Editor for the IEEE Transactions on Multimedia and the IEEE Transactions on Signal Processing, and as Guest Editor for seven international journals. He was General Chair of IEEE/ACM ICDSC 2009, BMVC 2009, M2SFA2 2008, SSPE 2007, and IEEE AVSS 2007, and Technical Program Chair of IEEE AVSS 2011, the European Signal Processing Conference (EUSIPCO 2008), and WIAMIS 2010. He has published more than 130 journal and conference papers, one monograph, Video Tracking (Wiley, 2011), and three edited books: Multi-Camera Networks (Elsevier, 2009); Analysis, Retrieval and Delivery of Multimedia Content (Springer, 2012); and Intelligent Multimedia Surveillance (Springer, 2013).

Alessia Saggese

Assistant Professor, Electronic and Computer Engineering

University of Salerno, Italy

Alessia Saggese received the Laurea degree (cum laude) in Computer Engineering from the University of Salerno, Italy, in 2010, with a thesis entitled “Un metodo per l’interpretazione automatica del comportamento di persone per applicazioni di videosorveglianza e relativa caratterizzazione sperimentale” (“A method for the automatic interpretation of people’s behavior for video surveillance applications, and its experimental characterization”). In July 2011 she obtained her professional license in computer engineering.
In February 2014 she received the Ph.D. degree in electronic and computer engineering from the University of Salerno, Italy, and from the École Nationale Supérieure d’Ingénieurs de Caen et Centre de Recherche (ENSICAEN), University of Caen Basse-Normandie, France, with a thesis entitled “Detecting and Indexing Moving Objects for Behavior Analysis by Video and Audio Interpretation”. The research project underlying her thesis received an award from the Università Italo Francese / Université Franco Italienne (UIF-UFI) within the Vinci Project framework, and in 2016 the thesis was recognized by GIRPR, the Italian chapter of the IAPR, as the best thesis of the 2014-2015 period.

She is currently an Assistant Professor at the University of Salerno. Since July 2012 she has been a member of the IAPR Technical Committee 15 (Graph-based Representations in Pattern Recognition).
Her research interests mainly concern computer vision and pattern recognition techniques for video and audio surveillance applications.

Senem Velipasalar

Associate Professor, Electrical Engineering and Computer Science

Syracuse University, New York, USA

Dr. Velipasalar's primary research areas are embedded computer vision, mobile camera applications, and wireless embedded smart cameras, which combine sensing, processing and communication on a single embedded platform. She has been working on resource-efficient algorithms suitable for embedded platforms; fall detection, step counting and activity classification with wearable cameras; traffic light detection and alert signal detection with vehicle-mounted cameras; distributed target detection and tracking across overlapping and non-overlapping cameras; and resource allocation strategies and detection of events of interest on embedded smart cameras. Potential applications include military surveillance, public transportation, health care and elder care, traffic systems, and industrial and retail settings.

Call for Papers

Download the official call for papers here.

Authors are invited to submit original, unpublished manuscripts in standard IEEE proceedings format, as PDF, with a maximum length of six pages. Both technical papers and position statements are welcome. Submitted papers should address at least one of the workshop topics listed above or another relevant topic. Accepted papers must be presented by one of the authors at the workshop. Submit your paper via the EasyChair submission system.

Organizers

  • Edward Chow, University of Colorado Colorado Springs, USA
  • Jonathan Ventura, University of Colorado Colorado Springs, USA