The main guidance I have received from the chief editor of the Artificial Intelligence section of a prestigious research journal, as well as from artificial intelligence researchers, is this: if humans do not understand a problem well enough to solve it themselves, how could artificial intelligence learn to solve it from us?
Consider autonomous driving: companies such as Waymo offer driverless cars guided by computer vision and robotics techniques. Ethics and human safety, however, are where the difficult issues arise. In an unavoidable accident involving two humans, the artificial intelligence must make a decision using information gathered through its computer networks and embedded systems:
Whose life is worth saving over the other's?
Take a moment to view the human eye as a camera. Through developments in imaging technology, humans have long worked to make photographs resemble human vision as closely as possible. Human memories are fallible, and the invention of the camera revolutionized how we share them: a photo can say more than a thousand words, spoken or written.
Computer vision can extract information that human vision cannot perceive, discerning details from images that can be leveraged to support human decisions. If the average person does nothing but use artificial intelligence (AI) to replace their work without understanding what that work is, they are truly replaceable, since anyone could use AI to do the work instead of hiring them. In my second year working in the MESA lab, I told Justus, a fellow MESA lab graduate student, that a friend of mine believed AI would never be able to replace data science jobs; he replied that researchers at MIT are already studying how to use AI to do exactly that. Exchanges like this reveal a critical lack of understanding of artificial intelligence in the world. Thus, I strive toward explainable artificial intelligence (XAI).