A framework for assessing AI risk

by R Ali | X: @rali2100 | LinkedIn: R Ali

Created: 2024-01-11

According to this framework, AI use cases can be classified into three categories:

- Red-light use cases are those that are prohibited by law, such as using AI for surveillance that violates democratic rights, social scoring, or remote biometric monitoring. These use cases pose too much harm to individuals and society and should be avoided at all costs.

- Green-light use cases are those that are low-risk, such as using AI for chatbots, product recommendations, or video games. These use cases are generally acceptable and do not require much governance, as they have been used safely for several years.

- Yellow-light use cases are those that are high-risk, such as using AI in HR applications, family planning and care, surveillance, democratic processes, and manufacturing. These use cases require careful governance and precautions, as they can have significant impacts on people's lives, rights, and well-being.
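
To make the triage concrete, here is a minimal sketch of how an organization might encode the traffic-light classification in code. The register entries, names, and the default-to-review behaviour are illustrative assumptions, not part of the framework itself.

```python
from enum import Enum


class RiskLevel(Enum):
    RED = "prohibited"      # banned outright, e.g. social scoring
    YELLOW = "high-risk"    # allowed only with governance and precautions
    GREEN = "low-risk"      # generally acceptable with routine controls


# Illustrative register only; in practice a governance board would maintain
# and periodically review this mapping as regulation evolves.
USE_CASE_REGISTER = {
    "social scoring": RiskLevel.RED,
    "remote biometric monitoring": RiskLevel.RED,
    "hr screening": RiskLevel.YELLOW,
    "manufacturing quality control": RiskLevel.YELLOW,
    "product recommendations": RiskLevel.GREEN,
    "video game npc behaviour": RiskLevel.GREEN,
}


def classify_use_case(name: str) -> RiskLevel:
    """Look up a proposed use case; default to YELLOW so anything
    unknown gets reviewed rather than waved through."""
    return USE_CASE_REGISTER.get(name.lower(), RiskLevel.YELLOW)


print(classify_use_case("HR screening"))              # RiskLevel.YELLOW
print(classify_use_case("autonomous drone routing"))  # unknown -> RiskLevel.YELLOW
```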


For yellow-light use cases, the framework suggests four steps to ensure responsible AI:

- Ensure high-quality, accurate data. Data is the fuel of AI, and it needs to be relevant, reliable, and representative of the target population and context; a data-quality sketch appears after this list.

- Embrace continuous testing. AI systems need to be tested and monitored for algorithmic bias and accuracy, both before and after deployment, to ensure safety, prevent privacy or cybersecurity breaches, and maintain compliance; a monitoring sketch follows this list.

- Allow for human oversight. AI systems should not operate fully autonomously; humans should stay involved and able to intervene to correct errors or deviations from expectations (see the oversight-gate sketch below).

- Create fail-safes. AI systems should have mechanisms to stop or suspend their operation if they pose harm or risk to people or the environment (see the circuit-breaker sketch below).
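
For the first step, the sketch below shows, assuming the training data lives in a pandas DataFrame, the kind of automated check that can back up a data-quality requirement: flagging excessive missing values and group shares that drift from a reference population. The column names, reference shares, and thresholds are hypothetical.

```python
import pandas as pd


def check_data_quality(df: pd.DataFrame, group_col: str,
                       reference_shares: dict, max_missing: float = 0.02,
                       max_share_gap: float = 0.10) -> list:
    """Return a list of human-readable findings; an empty list means no flags."""
    findings = []

    # Completeness: flag columns with too many missing values.
    missing = df.isna().mean()
    for col, share in missing.items():
        if share > max_missing:
            findings.append(f"{col}: {share:.1%} missing (limit {max_missing:.0%})")

    # Representativeness: compare group shares to the target population.
    observed = df[group_col].value_counts(normalize=True)
    for group, expected in reference_shares.items():
        gap = abs(observed.get(group, 0.0) - expected)
        if gap > max_share_gap:
            findings.append(f"{group_col}={group}: share off by {gap:.1%}")

    return findings


# Hypothetical usage with census-style reference shares.
# df = pd.read_parquet("training_data.parquet")
# print(check_data_quality(df, "age_band", {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}))
```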
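
For continuous testing, the following sketch recomputes accuracy and a simple demographic-parity-style gap on each batch of labelled outcomes and flags the batch when either threshold is breached. The metrics and thresholds are illustrative assumptions, not prescribed by the framework.

```python
import numpy as np


def monitor_batch(y_true: np.ndarray, y_pred: np.ndarray,
                  groups: np.ndarray, min_accuracy: float = 0.90,
                  max_parity_gap: float = 0.05) -> dict:
    """Compute accuracy and the largest gap in positive-prediction rate
    between groups; flag the batch if either threshold is breached."""
    accuracy = float((y_true == y_pred).mean())

    positive_rates = {g: float(y_pred[groups == g].mean()) for g in np.unique(groups)}
    parity_gap = max(positive_rates.values()) - min(positive_rates.values())

    return {
        "accuracy": accuracy,
        "parity_gap": parity_gap,
        "flagged": accuracy < min_accuracy or parity_gap > max_parity_gap,
    }


# Example batch where one group receives positive predictions far more often.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 1, 1, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(monitor_batch(y_true, y_pred, groups))  # flagged: accuracy 0.875, gap 0.25
```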
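
For human oversight, this oversight-gate sketch lets only high-confidence predictions through automatically and holds everything else in a queue for a human reviewer. The confidence threshold and queue interface are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Decision:
    case_id: str
    prediction: str
    confidence: float


@dataclass
class OversightGate:
    auto_threshold: float = 0.95            # below this, a human decides
    review_queue: List[Decision] = field(default_factory=list)

    def route(self, decision: Decision) -> str:
        if decision.confidence >= self.auto_threshold:
            return f"auto-approved: {decision.prediction}"
        self.review_queue.append(decision)   # held for a human reviewer
        return "escalated to human review"


gate = OversightGate()
print(gate.route(Decision("c-101", "approve", 0.99)))  # auto-approved
print(gate.route(Decision("c-102", "reject", 0.71)))   # escalated
print(len(gate.review_queue))                          # 1
```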
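
For fail-safes, the final sketch is a simple circuit breaker that suspends automated decisions once the recent error (or harm-report) rate exceeds a limit, forcing a fallback to a manual process. The window size and trip condition are illustrative.

```python
from collections import deque


class CircuitBreaker:
    """Suspend automated operation when too many recent outcomes are bad."""

    def __init__(self, window: int = 100, max_error_rate: float = 0.05):
        self.recent = deque(maxlen=window)
        self.max_error_rate = max_error_rate
        self.tripped = False

    def record(self, was_error: bool) -> None:
        self.recent.append(was_error)
        error_rate = sum(self.recent) / len(self.recent)
        if len(self.recent) == self.recent.maxlen and error_rate > self.max_error_rate:
            self.tripped = True  # stop the system until humans investigate

    def allow_automated_decision(self) -> bool:
        return not self.tripped


breaker = CircuitBreaker(window=10, max_error_rate=0.2)
for outcome in [False] * 7 + [True] * 3:   # 30% of the last 10 decisions were errors
    breaker.record(outcome)
print(breaker.allow_automated_decision())  # False: fall back to a manual process
```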