Abadi, M., Chu, A., Goodfellow, I., McMahan, H. B., Mironov, I., Talwar, K., & Zhang, L. (2016). Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (pp. 308-318).
Bartolo, A., et al. (2022). Limitations and challenges in language models. In Proceedings of the AAAI Conference on Artificial Intelligence.
Bender, E. M., & Friedman, B. (2018). Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6, 587-604.
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).
Bolukbasi, T., Chang, K. W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems (pp. 4349-4357).
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Bozdag, E. (2013). Privacy-enhancing technologies: A systematic review. Journal of Computer Security, 21(1), 1-42.
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., ... Amodei, D. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165. Retrieved from https://arxiv.org/abs/2005.14165
Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183-186.
Chen, Q., Zhu, X., Ling, Z., Wei, S., Jiang, H., & Inkpen, D. (2020). Unicoder: A universal language encoder by pre-training with multiple cross-lingual tasks. Transactions of the Association for Computational Linguistics, 8, 135-151.
Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (pp. 4171-4186).
Dignum, V. (2020). Responsible artificial intelligence: How to develop and use AI in a responsible way. AI & Society, 35(3), 1-6.
Dixon, L., Li, J., Sorensen, J., Thain, N., & Vasserman, L. (2018). Measuring and mitigating unintended bias in text classification. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 67-73).
Dwork, C. (2006). Differential privacy. In Proceedings of the 33rd International Colloquium on Automata, Languages and Programming (pp. 1-12).
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Luetge, C. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689-707.
Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2018). Datasheets for datasets. arXiv preprint arXiv:1803.09010.
Graves, A. (2013). Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850.
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press.
Gupta, D. (2021). Fairness in AI language models. arXiv preprint arXiv:2102.02766.
Gururangan, S., Swayamdipta, S., Levy, O., Schwartz, R., Bowman, S. R., & Smith, N. A. (2018). Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics (pp. 107-112).
Han, S. H., Kim, K. W., Kim, S., & Youn, Y. C. (2018). Artificial neural network: Understanding the basic concepts without mathematics. Dementia and Neurocognitive Disorders, 17(3), 83-89. https://doi.org/10.12779/dnd.2018.17.3.83
Hendrycks, D., & Gimpel, K. (2016). A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv preprint arXiv:1610.02136.
Holtzman, A., Buys, J., Du, L., Forbes, M., & Choi, Y. (2020). The curious case of neural text degeneration. In Proceedings of the 8th International Conference on Learning Representations (ICLR 2020).
Hovy, D., & Spruit, S. L. (2016). The social impact of natural language processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (pp. 591-598).
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399.
Johnson, I., & Khurana, M. (2020). Study on the presence of bias in commercial AI systems. AI Now Institute.
McMahan, H. B., Moore, E., Ramage, D., Hampson, S., & y Arcas, B. A. (2017). Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics (pp. 1273-1282).
Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., ... & Gebru, T. (2019). Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 220-229).
Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training. Retrieved from https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Retrieved from https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf
Rahali, A., & Akhloufi, M. A. (2023). End-to-end transformer-based models in textual-based NLP. AI, 4(1). https://www.mdpi.com/2673-2688/4/1/4
Rao, A., & Daumé III, H. (2020). Towards diverse and natural image descriptions via a conditional GAN. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 1572-1584).
Speicher, T., & Subramanian, S. (2020). A framework for understanding unintended consequences of machine learning. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NeurIPS 2017) (pp. 6000-6010). Retrieved from https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
Zhao, J., Wang, T., Yatskar, M., Cotterell, R., Ordonez, V., & Chang, K. W. (2019). Gender bias in contextualized word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 629-634).
What are neural networks? (n.d.). IBM. Retrieved 18 June 2023, from https://www.ibm.com/topics/neural-networks
How ChatGPT will impact recruiting and hiring. (n.d.). LinkedIn. Retrieved 20 June 2023, from https://www.linkedin.com/business/talent/blog/talent-acquisition/chatgpt-impact-on-recruiting
The dark side of using ChatGPT in the workplace: What you need to know! (n.d.). LinkedIn. Retrieved 20 June 2023, from https://www.linkedin.com/pulse/dark-side-using-chatgpt-workplace-what-you-need-know-nicholas-hill/
Ackerson, N. (2023, February 27). GPT is an unreliable information store. Towards Data Science. https://towardsdatascience.com/chatgpt-insists-i-am-dead-and-the-problem-with-language-models-db5a36c22f11
What is ChatGPT? Header: Photo by Mariia Shalabaieva on Unsplash
Learning Theory & Design Header: Photo by Glenn Carstens-Peters on Unsplash
Learning Context & Inclusive Design Header: Photo by Alina Grubnyak on Unsplash
Technology Rationale Header: Photo by Ales Nesetril on Unsplash
Module 1 Header: Photo by ilgmyzin on Unsplash
Module 2 Header: Photo by Google DeepMind on Unsplash
Module 3 Header: Photo by Google DeepMind on Unsplash
Module 4 Header: Photo by Luca Bravo on Unsplash
Module 5 Header: Photo by Google DeepMind on Unsplash
Module 6 Header: Photo from Canva
Module 7 Header: Photo by Google DeepMind on Unsplash