Congratulations! You have completed all the material in this module! Take some time to think about the following questions, and consider discussing them with your peers. Remember, if there is any topic you would like to revisit, simply rewatch the video or explore the links in the additional resources section!
In this module, we discussed only a select few artificial intelligence technologies that pose ethical problems for society, such as recommender systems, decision-making AI, and autonomous machines. What other emerging technologies give us cause for concern, and what ethical problems do they present?
While there is no doubt that AI presents ethical challenges, opinions vary on how concerned we should be about these technologies. For example, some argue that threats to privacy and the use of AI for surveillance could ultimately undermine democracy. Others feel more comfortable sharing their data online, viewing its use by companies as a means to build more powerful machine learning tools. How concerned do you think we should be, in this area and others?
If you could draw up an ethical code, or a set of moral rules, to help shape how we develop and use these powerful technologies in the future, what would it look like?