Faculty Development Specialist, Center for Teaching & Learning | Affiliate, UNC Charlotte School of Professional Studies
https://teaching.charlotte.edu/about-ctl/staff/jordan-register/
I am a former high school mathematics teacher with a PhD in Curriculum and Instruction in Mathematics Education and a master's certificate in data science and business analytics. My research has focused on developing high school and undergraduate students' critical and ethical reasoning in the STEM disciplines, namely data science, to promote: 1) ethical decision making in the data science industry, 2) diversity in the data science industry, and 3) the ability to critically evaluate the impacts of data science, Big Data Analytics, Machine Learning, AI, and related technologies on individuals and groups.
My doctoral studies centered on developing curricula and pedagogy that foster students' critical and ethical reasoning and decision making in data science. Naturally, the societal impacts of AI and Machine Learning are a major part of this work. Then, when generative AI hit higher education, it became my primary responsibility to (1) support faculty in developing AI literacy, (2) help faculty guide their students in understanding and using generative AI responsibly, and (3) understand how faculty and instructional designers can use generative AI to augment their work. Finally, my unit has played a major role in bringing people together for campus-level conversations on AI policy and practice at UNC Charlotte.
While my focus was previously on the impacts of AI and Machine Learning in society more generally, my current focus is on their effects on education and student learning. It is deeply troubling to me that technologies with such a tremendous impact on education and society are released without attention to, or consultation with, the sectors they affect.
Educators who adopt these tools outright are ahead of the curve in understanding them and their impact, but whether they adopt or abstain, they face an equity dilemma similar to the digital divide: some students (and their instructors) have access to and literacy with the tools, while others do not.
So how do we resolve this? How can we work with Big Tech to deliver the education students deserve, while also preserving the well-being of educators and continuing to engage students in liberal studies?
In my opinion, this is where the tech companies fall short: they fail to prioritize the well-being of humans over innovation and profit. At the same time, higher education (and the education system more generally) needs to innovate. I'd like to understand how we can work together to rectify this.
I approach AI from an educational perspective, so I will speak through that lens. My main concerns with regard to AI and education are:
1. Intellectual property: Students enter their institutions of higher education with an infantile understanding of plagiarism, academic integrity, and intellectual property. They may believe that citing their use of a tool equates to proper citation, which we know is not the case.
To be clear, I am not concerned with students cheating. Students have been "cheating" since the beginning of time. Cheating is also subjective and cultural: what counts as cheating, and what could instead be considered using your resources?
That said, intellectual property is something to be seriously considered, and it goes beyond giving credit to the originator of ideas (still important). The issue is that students may not understand where the information is coming from. They are unable to critically evaluate the content because they have no idea where it originated. They cannot assess the accuracy of, or biases present in, the original piece because they have never accessed it. My fear is the potential for feedback loops and extreme cycles of misinformation.
2. De-valuation of Higher/Liberal Education, Teacher Burnout, Educational/Social Inequity: OpenAI's release of ChatGPT without any conversation or collaboration with the education system triggered a reactionary scramble to modernize higher education and furthered the continued devaluation of higher education in the US. Now don't get me wrong, the education system has work to do in terms of innovating teaching and learning for current and future generations. However, Big Tech, being a powerhouse of innovation, could be working WITH universities and schools to help them upskill and modernize their curricula. Instead, they chose to blindside the education system while simultaneously promoting a narrative that higher education is outdated and overly expensive, leading to extreme instructor burnout as faculty had to completely redesign entire learning experiences to avoid cheating/plagiarism and learn the tools themselves. Again I ask, what would happen if Big Tech and the US education system worked together instead of being at odds?
Another concern here is that companies like OpenAI are shifting to for-profit business models, packaging their programs as educational solutions for which there is no evidence or test of accuracy. For-profit educational solutions have been shown time after time to increase educational and social inequities, and I fear that this "solution" may be detrimental to the well-being of many of our nation's students. Put differently, when we require payment to be educated, only those who can afford it will benefit. These students (who likely come from more privileged backgrounds to begin with) are then catapulted into higher-paying careers and decision-making roles that prioritize populations and characteristics like their own. That is, for-profit educational programs only work to widen the social and economic gaps that currently exist.
The lack of industry regulation is a major issue. Since it is not my area of expertise, I don't have a lot to say about how to enforce it and will leave that to my more knowledgeable colleagues. What I can say is that I feel the education system and Big Tech have a lot of work to do in terms of coming together to create a better future. Profit can't be the priority; upskilling our future generations should be. This means that educators have to be willing to modernize and learn, and Big Tech has to be willing to commit to the education of our nation's students.
Designing and testing tasks for ethical data science learning
Designing and facilitating faculty workshops and trainings to promote an AI-literate campus
Facilitating campus conversations on AI policy and practice
Automating Inequality by Virginia Eubanks and Data Feminism by D'Ignazio and Klein. While both were written before generative AI hit, they incite deep reflection on how badly even well-intentioned systems can impact human beings when their creators lack an understanding of different cultures and social realities. In particular, Data Feminism describes the privilege hazard that occurs when teams of data scientists create systems that reflect the needs and experiences of their privileged creators, while Automating Inequality discusses the detrimental impacts of automating access to social services.