Potential Hazards
This application will store the records of potentially millions of patients, and to ensure accurate and efficient data storage, algorithms will be used to sort and organize that data in a way health care providers can readily use. These algorithms and decision-making processes may introduce bias or perpetuate discrimination, with serious consequences for patient care and safety, especially for BIPOC and female patients. If the system reflects the healthcare experiences of the dominant culture or group, it may overlook the unique needs of minority groups, producing disparities in healthcare outcomes and patient satisfaction among marginalized communities.
This article explores algorithmic discrimination caused by AI-enabled employment and recruitment programs, demonstrating the risk of racial and cultural biases within modern technical solutions. While our program would only store information entered by real medical staff, the ramifications presented in this study still apply.
Solutions
To remediate this issue, the article above proposes technical measures such as unbiased dataset frameworks, improved algorithmic transparency, and external oversight, all of which can and will be implemented in our program. We will implement unbiased dataset frameworks, and for transparency and oversight we will assemble a focus group of medical professionals of every race, ethnicity, gender, and sexual orientation to review algorithmic decisions monthly, ensuring that racial or gender bias is not present. If discriminatory bias is reported at any point, we will act immediately to resolve it.
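One way such a review could be supported in code is a simple demographic-parity check that flags when decision rates diverge across groups. This is only a sketch: the record fields ("group", "flagged") and the escalation threshold are illustrative assumptions, not part of our actual design.

```python
# Sketch of a demographic-parity audit over algorithmic decisions.
# Field names ("group", "flagged") and the threshold are assumptions.
from collections import defaultdict

def decision_rates(records):
    """Return the positive-decision rate for each demographic group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for rec in records:
        totals[rec["group"]] += 1
        if rec["flagged"]:
            positives[rec["group"]] += 1
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(records):
    """Largest difference in decision rates between any two groups."""
    rates = decision_rates(records)
    return max(rates.values()) - min(rates.values())

# Toy data: group A is flagged half the time, group B every time.
records = [
    {"group": "A", "flagged": True},
    {"group": "A", "flagged": False},
    {"group": "B", "flagged": True},
    {"group": "B", "flagged": True},
]
gap = parity_gap(records)
print(gap)  # 0.5

# A gap above a chosen threshold (assumed 0.1 here) would be
# escalated to the monthly review group for investigation.
if gap > 0.1:
    print("Escalate: possible disparate impact detected")
```

A metric like this does not prove or disprove discrimination on its own, but it gives the focus group a concrete, recurring signal to review rather than relying solely on ad hoc reports.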