Invited Talks


Cyber Security and Artificial Intelligence: From Fixing the Plumbing to Smart Water

Carl Landwehr, University of Maryland

Abstract

Computer security and artificial intelligence in their early days didn't seem to have much to say to each other. AI researchers were interested in making computers do things that only humans had been able to do, while security researchers aimed to fix the leaks in the plumbing of the computing infrastructure or to design infrastructures they deemed leakproof. Further, AI researchers were often most interested in building systems with behaviors that could change over time through learning or adaptation, and hence were to some degree unpredictable. From the standpoint of security, unpredictable system behavior generally seemed undesirable.

But the two fields have grown closer over the years, particularly where attacks have aimed to simulate legitimate behaviors, not only at the level of human users but also at lower system layers. This talk will cover a bit of the history of computer security and artificial intelligence, identify a few connections between them, and conclude with a bit of speculation about some directions the fields might take.


Bio

Carl Landwehr is Program Leader for National Intelligence Community Information Assurance Research at the Intelligence Advanced Research Projects Activity (IARPA), on assignment from his position as Senior Research Scientist at the University of Maryland’s Institute for Systems Research. His IARPA programs aim for dramatic improvements in the overall trustworthiness of National Intelligence Community systems by focusing on accountable information flow, including technologies for privacy protection, and large-scale system defense. He also serves as Editor-in-Chief of IEEE Security & Privacy Magazine. For many years he led a research group in computer security at the Naval Research Laboratory. Since then he has served as a Senior Fellow at Mitretek Systems (now Noblis) and as the first Program Director for the National Science Foundation's programs in Trusted Computing and Cyber Trust. He has been active internationally as the founding chair of IFIP WG 11.3 (Database and Application Security) and is also a member of IFIP WG 10.4 (Dependability and Fault Tolerance). Dr. Landwehr has received Best Paper awards from the IEEE Symposium on Security and Privacy and the Computer Security Applications Conference. IFIP has awarded him its Silver Core, and the IEEE Computer Society has awarded him its Golden Core. His research interests span many aspects of trustworthy computing, including high-assurance software development, understanding software flaws and vulnerabilities, token-based authentication, system evaluation and certification methods, multilevel security, and architectures for intrusion-tolerant systems.



Opportunities for Private and Secure Machine Learning

Christopher W. Clifton, Purdue University

Abstract

While the interplay of Artificial Intelligence and Security covers a wide variety of topics, the 2008 AISec program largely focuses on the use of artificial intelligence techniques to aid with traditional security concerns: intrusion detection, security policy management, malware detection, etc. This talk will address the flip side of the issue: using machine learning on sensitive data.


The privacy-preserving data mining literature provides numerous solutions to machine learning on sensitive data, while protecting the data from disclosure. Unfortunately, privacy has yet to provide the economic incentives for commercial development of this technology.


This talk will survey this work (and open challenges) in light of problems that may have greater incentives for development: collaborative machine learning by parties that do not fully trust each other. Opportunities include job brokerage (assigning jobs in ways that most efficiently utilize resources of competing companies), supply chain optimization, inter-agency data sharing, etc. Techniques similar to those in privacy-preserving data mining can enable such applications without the degree of information disclosure and trust currently required, providing a business model for development of the technology (and, as a by-product, reducing the number of trusted systems that need to be secured).


Bio

Dr. Clifton works on challenges posed by novel uses of data mining technology, including privacy-preserving data mining, data mining of text, and data mining techniques applied to interoperation of heterogeneous information sources. He has worked on AI security (both the impact of AI on security and the use of AI techniques to improve security) since the mid-1990s. He also works on database support for widely distributed and autonomously controlled information, particularly information administration issues such as supporting fine-grained access control.


Prior to joining Purdue, Dr. Clifton was a principal scientist in the Information Technology Division at the MITRE Corporation. Before joining MITRE in 1995, he was an assistant professor of computer science at Northwestern University. He obtained his Ph.D. in Computer Science from Princeton University in 1991, and his Bachelor's and Master's degrees from the Massachusetts Institute of Technology in 1986.