Understand the fundamentals of modern computer security, including:
Threat modeling, including identification of assets, adversaries, and attack methods
Stakeholder analyses
Classic design principles, including “build it in”, defense in depth, least privilege, minimizing the trusted computing base, separation of code and data, resilience and recovery
Classic elements of security, including prevention, detection, and deterrence
The existence of both direct and indirect attacks (e.g., information leakage and side channels; see the timing sketch below)
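As one way to make the indirect-attack item above concrete, here is a minimal, self-contained timing side-channel sketch (the secret value and all names are hypothetical, not drawn from the text above): a string comparison that exits early on the first mismatch leaks, through its running time, how much of a guess is correct.

```python
# Minimal, illustrative sketch (hypothetical secret, not from the source text):
# an early-exit comparison leaks how many leading characters of a guess are
# correct through its running time -- an indirect attack via a side channel.
import time

SECRET = "hunter2hunter2hunter2hunter2"  # hypothetical secret value

def naive_equals(a: str, b: str) -> bool:
    """Returns on the first mismatch, so timing depends on the matching prefix."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def timed(guess: str, trials: int = 20000) -> float:
    """Average wall-clock time of comparing `guess` against the secret."""
    start = time.perf_counter()
    for _ in range(trials):
        naive_equals(guess, SECRET)
    return (time.perf_counter() - start) / trials

if __name__ == "__main__":
    wrong_prefix = "X" * len(SECRET)   # mismatches at the first character
    right_prefix = SECRET[:-1] + "X"   # mismatches only at the last character
    print(f"wrong prefix: {timed(wrong_prefix):.3e} s per comparison")
    print(f"right prefix: {timed(right_prefix):.3e} s per comparison")
    # A constant-time comparison (e.g., hmac.compare_digest) removes this signal.
```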
Understand the fundamentals of applied ethics and moral philosophy, including:
Classical ethical frameworks, including consequentialist ethics, deontological ethics, and virtue ethics
The existence of both strengths and limitations in these ethical frameworks
How these ethical frameworks relate, where they converge, and where they diverge
Example applications of these frameworks to moral dilemmas in computer security
How different people, through sound reasoning, may come to different conclusions on what is morally acceptable
Understand the fundamentals of inclusive and society-aware computer security, including:
The embedding of biases and worldviews into computer systems
Identifying and countering biases in threat modeling and the design and analysis of computer security systems
Understand historical examples of ethical issues and discussions in the computer security field, including:
The relationship between governments and citizens and the “Crypto Wars”
The relationship between patents/commercialization and security
The evolution of coordinated disclosure of vulnerabilities and the selling/stockpiling of vulnerabilities
The Menlo Report
Identify and reason through common scenarios in computer security ethics (see below)
Finding vulnerabilities and developing attacks, including:
Finding vulnerabilities in and developing attacks against a single system/product (e.g., finding vulnerabilities in a car’s computer components)
Finding vulnerabilities in and developing attacks against a wide collection of systems/products (e.g., finding vulnerabilities in a fundamental architectural primitive)
Developing and applying tools to find vulnerabilities in systems/products (e.g., developing new fuzzing or automated analysis capabilities; see the fuzzing sketch after this list)
When vulnerabilities can’t (easily) be fixed; when vulnerabilities may be industry-wide and may affect unknown systems; when vendors are adversarial or potentially adversarial
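To make the fuzzing item above concrete, here is a minimal mutation-fuzzing sketch (the target parser and all names are hypothetical stand-ins, not from the text above): mutate a seed input at random, feed it to the code under test, and record any input that triggers a crash.

```python
# Minimal, illustrative mutation-fuzzing sketch (all names hypothetical, not from
# the source text): mutate a seed input at random, run it through a target parser,
# and keep any input that raises an unexpected exception ("crash").
import random

def target_parser(data: bytes) -> None:
    """Stand-in for the system under test; a real fuzzer would drive real code."""
    if len(data) > 3 and data[0] == 0x42 and data[3] == 0x0A:
        raise ValueError("simulated crash on a rare input shape")

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip, insert, or delete a few random bytes of the seed."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, 4)):
        choice = rng.random()
        pos = rng.randrange(len(data)) if data else 0
        if choice < 0.5 and data:
            data[pos] = rng.randrange(256)        # flip a byte
        elif choice < 0.8:
            data.insert(pos, rng.randrange(256))  # insert a byte
        elif data:
            del data[pos]                         # delete a byte
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 100_000, rng_seed: int = 0) -> list[bytes]:
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed, rng)
        try:
            target_parser(candidate)
        except ValueError:
            crashes.append(candidate)  # record the crashing input for triage
    return crashes

if __name__ == "__main__":
    found = fuzz(b"BAAA\n")
    print(f"found {len(found)} crashing inputs")
```

Real fuzzers such as AFL or libFuzzer add coverage feedback and corpus management, but the ethical questions listed above arise regardless of the tool’s sophistication.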
Developing new systems with strong security properties, including:
Systems with the potential to increase security for both “good” actors (e.g., people seeking privacy for personal activities) and “bad” actors (e.g., people seeking to commit a serious crime)
Systems where an adversary in the threat model is a government
When the security solution is proprietary
When the security solution is hard to use and/or its use might subject a user to increased or different risks
When the security solution is released to the public while it may still be buggy and/or lack a plan for continued maintenance
When the security solution decreases access, e.g., because of cost or accessibility
Measuring the properties of deployed systems, including:
Measuring the presence of vulnerabilities in deployed systems without exploiting those vulnerabilities (see the banner-check sketch after this list)
Evaluating deployed systems for vulnerabilities by exploiting those vulnerabilities
Measuring an attacker’s infrastructure in a way that could result in contributions to adversarial actions
Measuring an attacker’s infrastructure in a way that could result in financial or other gains to the attacker
Using an exploit as part of a measurement
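As a concrete illustration of measuring vulnerability presence without exploitation, the following sketch (the hostname, banner string, and advisory label are all hypothetical) checks only a server’s self-reported Server banner against a list of known-vulnerable versions; no exploit is ever sent.

```python
# Minimal, illustrative sketch (hostname, banner strings, and advisory label are
# hypothetical, not from the source text): infer whether a deployed web server
# *reports* a version known to be vulnerable, using only its Server banner --
# no exploit is sent.
from urllib.request import urlopen

# Hypothetical mapping of banner strings to advisories.
KNOWN_VULNERABLE = {
    "ExampleHTTPd/1.0": "hypothetical advisory for this version",
}

def banner_check(url: str) -> str | None:
    """Fetch only the response headers and look up the Server banner."""
    with urlopen(url, timeout=5) as response:
        server = response.headers.get("Server", "")
    return KNOWN_VULNERABLE.get(server)

if __name__ == "__main__":
    # Only measure hosts you are authorized to study; this target is a placeholder.
    try:
        advisory = banner_check("https://measurement-target.example")
        print(advisory or "banner not on the known-vulnerable list")
    except OSError as err:
        print(f"could not reach the placeholder target: {err}")
```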
Analyzing data, including:
Analyzing data obtained via a data breach
Analyzing data that is technically public but that people may have intended to be private
Analyzing data that was once public, was collected by researchers, and has subsequently been deleted from public repositories
Analyzing data that people are actively trying to keep private
Analyzing data obtained through participation in criminal ecosystems
Deanonymizing and studying “anonymized” datasets (see the linkage sketch below)
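To illustrate the deanonymization item above, here is a minimal linkage-attack sketch (all records are invented): an “anonymized” dataset with names removed is re-identified by joining quasi-identifiers against a public dataset that still carries names.

```python
# Minimal, illustrative linkage-attack sketch (all records are hypothetical, not
# from the source text): an "anonymized" dataset with names removed can often be
# re-identified by joining quasi-identifiers (ZIP code, birth date, sex) against
# a public dataset that still carries names.
ANONYMIZED_HEALTH_RECORDS = [
    {"zip": "98101", "dob": "1975-03-02", "sex": "F", "diagnosis": "condition A"},
    {"zip": "98105", "dob": "1980-07-19", "sex": "M", "diagnosis": "condition B"},
]

PUBLIC_VOTER_ROLL = [
    {"name": "Alice Example", "zip": "98101", "dob": "1975-03-02", "sex": "F"},
    {"name": "Bob Example",   "zip": "98105", "dob": "1980-07-19", "sex": "M"},
]

def quasi_id(record: dict) -> tuple:
    """The combination of attributes used to link the two datasets."""
    return (record["zip"], record["dob"], record["sex"])

def link(anonymized: list[dict], public: list[dict]) -> list[dict]:
    by_quasi_id = {quasi_id(r): r["name"] for r in public}
    reidentified = []
    for record in anonymized:
        name = by_quasi_id.get(quasi_id(record))
        if name is not None:
            reidentified.append({"name": name, **record})
    return reidentified

if __name__ == "__main__":
    for match in link(ANONYMIZED_HEALTH_RECORDS, PUBLIC_VOTER_ROLL):
        print(match)
```

Defenses such as k-anonymity and differential privacy target exactly this kind of join, which is one reason decisions about releasing “anonymized” data carry ethical weight.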
Working with human subjects, including:
When working with human subjects in general
When human subjects are at risk, marginalized, or vulnerable
When human subjects are part of the experiment
When the work may expose team members to harm, including:
The safety of team members when studying disturbing or problematic content
The safety of team members when studying criminal activity
The role of power differentials in decisions related to team member safety (and beyond)
Creating data or tools that others may wish to see, evaluate, use, and build upon, including:
When the data may contain private information
When the data may contain information that could be used by an adversary (e.g., sensitive information about vulnerabilities or infrastructure)
When the tool could be used by an adversary
Whether to release the data or tool to the entire public, to only selected other people, or to no one at all
When the work may interact with laws or policies, including:
The Federal Policy for the Protection of Human Subjects (the “Common Rule”)
The Digital Millennium Copyright Act (DMCA)
The Computer Fraud and Abuse Act (CFAA)
The Electronic Communications Privacy Act (ECPA)
The California Consumer Privacy Act (CCPA)
The General Data Protection Regulation (GDPR)
Terms of Service