News / Upcoming Events

  • Accepted Papers Announced: the list of accepted papers for AISec 2013 has been released (click here for the full paper list)
  • Deadline Extension: the submission deadline has been extended until July 29, 2013 (23:59 PDT) [Closed]


The 2013 ACM Workshop on Artificial Intelligence and Security will be co-located with CCS - the premier computer security conference - in Berlin, Germany on November 4, 2013. As the 6th workshop in the series, AISec 2013 calls for papers on topics related to both AI/learning and security/privacy.

Artificial Intelligence (AI), and Machine Learning (ML) in particular, provide a set of useful analytic and decision-making techniques that are being leveraged by an ever-growing community of practitioners, including in applications with security-sensitive elements. However, while security researchers often utilize such techniques to address problems and AI/ML researchers develop techniques for big-data analytics applications, neither community devotes enough attention to the other. Within security research, AI/ML components are often regarded as black-box solvers. Conversely, the learning community seldom considers the security/privacy implications entailed in the application of their algorithms when designing them. Although the two communities generally pursue different directions, interesting problems appear where their fields meet. These have already raised many novel questions for both communities and created a new branch of research known as secure learning. Within this intersection, the AISec Workshop has become the primary venue for this unique fusion of research.

The past year in particular has seen increasing interest within the AISec / Secure Learning community -- first with a weeklong workshop at the Dagstuhl castle in Germany, followed by the highly successful fifth AISec workshop. There are several reasons for this surge. First, machine learning, data mining, and other artificial intelligence technologies play a key role in extracting knowledge, situational awareness, and security intelligence from Big Data. Second, companies like Google, Amazon, and Splunk are increasingly exploring and deploying learning technologies to address Big Data problems for their customers. Finally, these trends are increasingly exposing companies and their customers/users to intelligent technologies. As a result, learning technologies are being explored by researchers both as potential solutions to security/privacy problems and as a potential source of new privacy/security vulnerabilities that must be secured to prevent them from misbehaving or leaking information to an adversary. The AISec Workshop meets this need and serves as the sole long-running venue for this topic.

AISec serves as the primary meeting place for diverse researchers in security, privacy, AI, and machine learning, and as a venue to develop the fundamental theory and practical applications supporting the use of machine learning for security and privacy. The needs of this burgeoning community - especially those focused on (among other topics) learning in game-theoretic adversarial environments, privacy-preserving learning, and the use of sophisticated new learning algorithms in security - are not met elsewhere.

Suggested Emphasis for 2013

In the past year there has been a surge in the use of Big Data analytics for security. Machine learning, data mining, and other artificial intelligence technologies will play a key role in extracting knowledge, situational awareness, and security intelligence from Big Data, and in the establishment of Security Information and Event Management systems. Startups like Click Security, Splunk, and IPTrust, and established organizations like Q1Labs (now IBM), are already big players in this field, which is poised to continue growing significantly in the coming years.