M1: Adversarial input
M2: Data poisoning
M3: Model stealing
M4: Denial-of-service attack
M5: Arbitrary code execution attack
M6: Feedback weaponization attack
M7: Privacy attack
M8: Backdoor attack
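To make the first category (M1, adversarial input) concrete, the following is a minimal sketch of the Fast Gradient Sign Method, one common way such inputs are crafted. The model here is a hypothetical logistic-regression classifier (`w`, `b` are assumed parameters, not from the source); the input is nudged by `eps` in the direction that most increases the loss.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Craft an adversarial input for a logistic-regression model via FGSM.

    x: input vector, w/b: model weights and bias, y: true label (0 or 1),
    eps: perturbation budget (L-infinity norm of the change).
    """
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid probability of class 1
    grad_x = (p - y) * w              # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)  # step in the loss-increasing direction

# Usage: a point the model classifies correctly (score > 0 means class 1)
# is pushed across the decision boundary by a small perturbation.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])   # clean input: score = 2*0.5 - 0.2 = 0.8 > 0
y = 1
x_adv = fgsm_perturb(x, w, b, y, eps=0.6)
```

After the perturbation the score becomes 2·(−0.1) − 0.8 = −1.0, so the same model now predicts class 0; the other categories (poisoning, stealing, backdoors) attack the training pipeline or model confidentiality rather than a single inference.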