Salesforce: Responsible AI Dev Lifecycle
Scope: Define what the model is intended to do.
Review: Identify who the model's audience will be.
Testing: Test for bias, including sessions with User Experience research groups.
Launching: Follow a standard alpha/beta release plan so model issues are fixed and biases or other harms are caught before general availability.
Data: Ensure your training data is diverse and representative of the population you aim to serve, and update the dataset regularly to reflect changes in that population.
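One simple way to act on this is to compare each group's share of the training data with its share of the target population. The sketch below uses hypothetical group labels and population shares, not real Salesforce data:

```python
from collections import Counter

def representation_gap(samples, population_shares):
    """Return, per group, its share of the training data minus its
    share of the target population (hypothetical figures)."""
    counts = Counter(samples)
    total = len(samples)
    return {
        group: counts[group] / total - expected
        for group, expected in population_shares.items()
    }

# Hypothetical training set: 80% group A, 20% group B.
training_groups = ["A"] * 80 + ["B"] * 20
# Hypothetical target population: 60% A, 40% B.
gaps = representation_gap(training_groups, {"A": 0.60, "B": 0.40})
print(gaps)  # group B is under-represented by about 20 percentage points
```

A large negative gap for a group flags it for additional data collection in the next dataset refresh.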
Testing: Conduct thorough testing to detect biases in your model's outputs, for example by consulting a UX testing group.
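Alongside qualitative UX sessions, output bias can be checked quantitatively. A minimal sketch, using hypothetical loan-approval predictions, is the demographic parity gap: the difference in positive-prediction rates between two groups. It is one signal, not a complete fairness audit:

```python
def demographic_parity_gap(predictions, groups):
    """Difference in positive-prediction rates between two groups.
    A gap near zero means the groups receive positive predictions
    at similar rates on this one metric."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    first, second = sorted(rates)  # deterministic group order
    return rates[first] - rates[second]

# Hypothetical predictions (1 = approved) for two groups of applicants.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)
print(gap)  # 0.75 approval rate for A vs 0.25 for B: a 0.5 gap
```

A gap this large would warrant investigation before the alpha/beta launch stages described above.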
Using Third-Party Tools:
IBM offers AI Fairness 360, an open-source, comprehensive toolkit for detecting and mitigating bias in machine learning models.
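AI Fairness 360 is distributed as a full Python package (`aif360`); to keep this note dependency-free, the sketch below reimplements in plain Python one of the metrics that kind of toolkit reports, the disparate impact ratio, with hypothetical hiring outcomes:

```python
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favorable-outcome rates, unprivileged / privileged.
    Values below the common 0.8 ('four-fifths rule') threshold are
    often treated as evidence of adverse impact."""
    def rate(g):
        rows = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(rows) / len(rows)
    return rate(unprivileged) / rate(privileged)

# Hypothetical hiring outcomes (1 = hired), five candidates per group.
outcomes = [1, 0, 1, 1, 1, 0, 1, 0, 1, 0]
groups   = ["priv"] * 5 + ["unpriv"] * 5
di = disparate_impact(outcomes, groups, "unpriv", "priv")
print(di)  # 0.4 / 0.8 = 0.5, well below the 0.8 threshold
```

The real toolkit adds many more metrics plus mitigation algorithms that rebalance data or adjust predictions; this only illustrates the measurement side.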
Design: Design models with transparency in mind, allowing for easier identification and rectification of biases.
Open Data and Open Source: Making data and source code openly accessible can allow for scrutiny and collaboration.
Explainability: AI algorithms should be designed so that their decisions can be interpreted and understood. Interpretable models and explainable-AI techniques, such as feature-attribution methods, can help achieve this.
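The simplest interpretable case is a linear model, where each feature's contribution to a prediction can be read off directly. This sketch uses hypothetical credit-scoring weights and features:

```python
def explain_linear(weights, bias, features):
    """Per-feature contributions for a linear model: each term
    weight * value sums (with the bias) to the final score, so the
    prediction decomposes exactly into its parts."""
    contributions = {name: w * features[name] for name, w in weights.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical weights and one applicant's feature values.
weights  = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
features = {"income": 4.0, "debt": 2.5, "years_employed": 3.0}
score, parts = explain_linear(weights, bias=0.1, features=features)
print(parts)  # e.g. debt pulls the score down by 2.0
```

For complex models the same idea, attributing a score to individual features, is what explainable-AI techniques approximate.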
Documentation: Document the model's decision-making process for accountability.
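Such documentation is often structured as a model card. A minimal sketch with hypothetical fields (the model name, data description, and figures are invented for illustration):

```python
# A minimal model-card-style record capturing how and why the
# model makes decisions. All field values here are hypothetical.
model_card = {
    "name": "loan-approval-v2",
    "intended_use": "pre-screening only; final decisions made by humans",
    "training_data": "2019-2023 applications, rebalanced by region",
    "decision_logic": "gradient-boosted trees over 24 features",
    "known_limitations": ["sparse data for applicants under 21"],
    "fairness_checks": {"demographic_parity_gap": 0.03},
}

def render_card(card):
    """Flatten the card into auditable 'key: value' lines."""
    return "\n".join(f"{k}: {v}" for k, v in card.items())

print(render_card(model_card))
```

Keeping this record versioned alongside the model makes the decision-making process reviewable at each release stage.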