Poisoning is an attack in which an adversary deliberately tampers with an ML model's training data (or training process) in order to corrupt the model's behavior. Common poisoning techniques:
1. Label modification – changing the labels of existing training samples, e.g., flipping 0 to 1 (see the sketch after this list)
2. Data injection – adding new, attacker-crafted samples to the training set
3. Data modification – perturbing the features of existing training samples
4. Logic corruption – tampering with the learning algorithm or training code itself
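Below is a minimal sketch of label modification (technique 1), assuming a scikit-learn logistic regression on a synthetic dataset; the `flip_labels` helper and the poison fractions are illustrative choices, not part of any standard API.

```python
# Label-flipping sketch: train on progressively more corrupted labels
# and observe test accuracy degrade. Dataset and fractions are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean binary classification data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

def flip_labels(y, fraction, rng):
    """Label modification: flip the labels of a random fraction of samples."""
    y_poisoned = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip 0 <-> 1
    return y_poisoned

for fraction in (0.0, 0.1, 0.3):
    y_poisoned = flip_labels(y_train, fraction, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    print(f"poisoned fraction={fraction:.0%}  "
          f"test accuracy={model.score(X_test, y_test):.3f}")
```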
In a typical poisoning scenario, the attacker has no access to the model itself or to the full original dataset; they can only add new samples to the training data or modify a subset of existing ones. (Logic corruption is the exception: it assumes access to the training algorithm itself.)
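To illustrate this limited threat model, here is a sketch of data injection (technique 2) in which the attacker only appends crafted points to the training set. The choice of a k-nearest-neighbors classifier, the `target` point, and the number of injected samples are all assumptions made for the example.

```python
# Data-injection sketch: the attacker cannot touch the model, only append
# crafted samples. Here the (assumed) goal is to make points near `target`
# get classified as class 1 by clustering injected class-1 samples around it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=1000, n_features=2, n_informative=2,
                           n_redundant=0, random_state=1)

# Pick a target in the middle of class 0, then inject class-1 points near it.
target = X[y == 0].mean(axis=0)
X_inject = target + 0.1 * rng.standard_normal((50, 2))
y_inject = np.ones(50, dtype=y.dtype)

clean = KNeighborsClassifier().fit(X, y)
poisoned = KNeighborsClassifier().fit(np.vstack([X, X_inject]),
                                      np.concatenate([y, y_inject]))

print("clean prediction at target:   ", clean.predict(target.reshape(1, -1))[0])
print("poisoned prediction at target:", poisoned.predict(target.reshape(1, -1))[0])
```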