Welcome to the website for SHERPA Recommendations
Most safeguards for ethics and human rights in AI rely on the integrity and reliability of the underlying technical systems. However, these AI systems may be subject to novel attacks and security vulnerabilities. Machine learning, for example, can be subject to model poisoning attacks. Technical security of AI is therefore a necessary condition both for the robustness and reliability of these systems and for their ethical and human rights safeguards.
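To make the poisoning threat concrete, here is a minimal toy sketch (not an example from the SHERPA report) of a data poisoning attack: an attacker injects mislabeled training points near a chosen target input, flipping the prediction of a simple nearest-centroid classifier. The dataset, classifier, and attack budget are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy training set: two well-separated 2-D Gaussian classes.
X = np.vstack([rng.normal(-2.0, 0.3, size=(50, 2)),
               rng.normal(+2.0, 0.3, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

def centroid_predict(X_train, y_train, x):
    """Nearest-centroid classifier: assign x to the closer class mean."""
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    return int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

target = np.array([1.0, 1.0])  # a point the clean model labels as class 1
clean_pred = centroid_predict(X, y, target)

# Poisoning: inject 200 copies of the target, mislabeled as class 0,
# dragging the class-0 centroid towards the target point.
X_poison = np.vstack([X, np.tile(target, (200, 1))])
y_poison = np.concatenate([y, np.zeros(200, dtype=int)])
poisoned_pred = centroid_predict(X_poison, y_poison, target)
```

After poisoning, the class-0 centroid moves from roughly (-2, -2) to roughly (0.4, 0.4), close enough to the target to flip its predicted label from 1 to 0. Real models and attacks are far more subtle, but the mechanism, corrupting training data to change specific predictions, is the same.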
Developers of machine learning systems.
Undertake bespoke analysis of security risks for machine learning models.
This should include:
Careful and comprehensive threat enumeration and analysis is a crucial prerequisite for designing effective model protection methods; different models, and different attacks against them, often require different protection approaches.
Careful analysis of the training data distribution across clients, including what can realistically be assumed about it, is important.
Real-time monitoring and analysis of client inputs are often required when attacks against ML models can cause substantial harm; in many scenarios, it is hard to ensure operational resilience through design-time and implementation-time efforts alone.
A managed monitoring service should be considered in cases where the precision or confidence of an automated real-time monitoring system in identifying poisoned inputs is not sufficiently high.
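The real-time monitoring recommendation above could, in its simplest form, be a statistical filter over incoming client inputs. The following is a minimal sketch (an illustrative assumption, not a method prescribed by the SHERPA report) that assumes access to a trusted sample of clean inputs and flags inputs far outside that distribution:

```python
import numpy as np

def fit_monitor(clean_inputs):
    """Estimate per-feature mean and std from a trusted clean sample."""
    mu = clean_inputs.mean(axis=0)
    sigma = clean_inputs.std(axis=0) + 1e-8  # avoid division by zero
    return mu, sigma

def flag_suspicious(batch, mu, sigma, z_threshold=4.0):
    """Flag inputs whose maximum per-feature z-score exceeds the threshold."""
    z = np.abs((batch - mu) / sigma)
    return z.max(axis=1) > z_threshold

rng = np.random.default_rng(0)

# Trusted baseline of clean inputs (8 features each).
clean = rng.normal(0.0, 1.0, size=(1000, 8))
mu, sigma = fit_monitor(clean)

# Incoming batch: five in-distribution inputs plus one crude
# out-of-distribution input standing in for a poisoned sample.
normal_batch = rng.normal(0.0, 1.0, size=(5, 8))
poisoned = np.full((1, 8), 10.0)
flags = flag_suspicious(np.vstack([normal_batch, poisoned]), mu, sigma)
```

Here `flags` marks the out-of-distribution input while leaving the in-distribution inputs (almost always) unflagged. In practice, borderline or low-confidence flags are exactly the cases the report suggests escalating to a managed monitoring service for human review.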
SHERPA report on