This section covers three areas of CPD activity from 2021 to 2024.
One of the main reasons I originally applied for CMALT status was that I saw huge potential in using learning technology to improve students' learning experience. For this refresh, my evidence primarily showcases how I have been developing technical skills in order to benefit my learners. I see viable open-access AI as a fantastic emerging opportunity for enriching learning resources and programmes of study.
In my 2021 future plans, I specifically mentioned wanting to develop my programming skills, noting that I had taken Harvard University’s CS50 'Introduction to Computer Science' MOOC, which I later followed with their CS50P 'Introduction to Programming with Python'. These skills tie in directly with their CS50AI 'Introduction to Artificial Intelligence with Python', which I am currently working towards.
As a fully online university, UCEM's study material is all digital, which makes it well suited to potential enhancements and transformations.
UCEM's 2022 'HELP' project aimed to update programme and module curricula to help students make better use of their study time, while ensuring that there was still a good mixture of active learning types (which we have adapted from UCL's ABC framework). This was a huge task, affecting over 90 modules in a year, and it could not have been achieved without the ability to estimate study time accurately and to categorise learning types consistently.
Any tool to assist with this process had to be easy to use to encourage uptake, and it needed to ensure consistency in results.
After discussing with teammates how best to approach this, I took a range of online courses on advanced Excel functionality, focusing on the features that would let me create a calculator spreadsheet for use by learning designers and academics.
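The calculator itself is an Excel spreadsheet, but its core logic can be sketched in code. The Python sketch below is a hypothetical equivalent: per-activity study-time rates are summed for each unit of study, and any unit exceeding a weekly budget is flagged. All of the rates, activity types, and the threshold are invented for illustration and are not UCEM's actual figures.

```python
# Hypothetical sketch of the study-load calculator's core logic.
# Rates are minutes of study time assumed per unit of each activity
# (e.g. per 100 words read, per minute of video, per quiz question).
RATES = {
    "reading_100_words": 3,
    "video_minute": 1.5,
    "quiz_question": 2,
}

WEEKLY_BUDGET_MINUTES = 600  # illustrative 10-hour weekly budget


def estimate_unit_minutes(activities):
    """Sum estimated study minutes for a dict of {activity_type: quantity}."""
    return sum(RATES[kind] * qty for kind, qty in activities.items())


def flag_heavy_units(units):
    """Return the names of units whose estimate exceeds the weekly budget."""
    return [
        name for name, acts in units.items()
        if estimate_unit_minutes(acts) > WEEKLY_BUDGET_MINUTES
    ]


units = {
    "Week 1": {"reading_100_words": 180, "video_minute": 60, "quiz_question": 10},
    "Week 2": {"reading_100_words": 40, "video_minute": 30, "quiz_question": 5},
}
print(flag_heavy_units(units))  # ['Week 1'] (650 minutes > 600)
```

The real spreadsheet does the equivalent with Excel formulas and conditional formatting, which is what made it accessible to colleagues without programming experience.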
Here is a selection of certificates from relevant courses on LinkedIn Learning, and a screenshot of the overview tab from the calculator.
Calculator overview tab showing completed module data
This calculator played a central role in the 'HELP' project. It was received very well by colleagues, who particularly appreciated the automated warnings when a unit of study was flagged as too heavy. Of course, the purpose was to benefit students rather than to build something flashy; a blog post I co-wrote with my colleague Sue Lowe collates positive student feedback and key statistics, which show an increase in both satisfaction and pass rates. To quote the blog post:
'Module success rates (students passing their module) significantly improved across all modules as a result of the HELP project. Modules that began in Autumn 2022 saw a significant increase in success rates compared to their previous iteration.'
'Qualitative data from student end-of-module evaluations has shown a significant increase in student satisfaction regarding workload. Students were more satisfied overall, with comments including appreciation for how focused the material was.'
'The results we have seen fully vindicate the decision to develop the study load calculator. They demonstrate that students appreciate the reductions in study time across their modules. The impact overall on the UCEM student experience has been highly positive, with quantitative and qualitative feedback demonstrating that it has helped many students to balance their workload more effectively.'
I am still delighted with these results, and I feel they more than justify the team effort spent on the HELP project, including my time developing the calculator.
In October 2023 I began my Level 7 Artificial Intelligence (AI) Data Specialist apprenticeship with QA. This is very in-depth, making heavy use of Python to solve business problems using machine learning and other approaches. This gels well with my technical experience going back to Computing A-Level.
I am currently on my fourth module and have developed a range of skills, including:
building deep learning and machine learning prediction models using Python (including TensorFlow and scikit-learn), and also using Microsoft Azure;
expanding my Excel knowledge, including converting spreadsheets to Pandas dataframes (and back again) for advanced data manipulation and analysis;
manipulating and visualising data using Matplotlib and Seaborn, and creating dashboards in Power BI;
using APIs to work with data, including querying the OpenAI API.
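As an illustration of the Excel round trip mentioned above, here is a minimal pandas sketch. The file names, module codes, and figures are invented; in practice the DataFrame would come from pd.read_excel rather than being built inline.

```python
import pandas as pd

# In practice, a spreadsheet would be read into a DataFrame, e.g.:
# df = pd.read_excel("module_data.xlsx", sheet_name="Overview")
# For a self-contained example, build a small DataFrame directly
# (module codes and hours are invented):
df = pd.DataFrame({
    "module": ["BSC101", "BSC102"],
    "study_hours": [98, 112],
})

# Manipulate the data, e.g. derive a weekly figure for a 10-week module:
df["hours_per_week"] = df["study_hours"] / 10

# ...and write the result back out to Excel:
# df.to_excel("module_data_out.xlsx", index=False)
print(df)
```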
I will learn more in future modules, but I have already been able to create useful machine learning prediction models, and have written Python programs to help automate some of the more mundane aspects of the department's work, freeing up colleagues' time for more creative work.
One particularly useful example so far is a program to perform instant keyword searches across our full range of study material, including retrieval of the context the term has been used in. Our systems did not previously allow us to do this.
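A search of that kind can be sketched as follows, assuming the study material is available as plain text. The function, document names, and sample text here are hypothetical, not the actual program.

```python
# Hypothetical sketch of a keyword search with context retrieval:
# scan a collection of documents and return each match together with
# the surrounding text. The docs dict stands in for files read from
# the study material.
import re


def keyword_search(documents, term, context_chars=40):
    """Return (doc_name, context) pairs for each occurrence of term."""
    results = []
    pattern = re.compile(re.escape(term), re.IGNORECASE)
    for name, text in documents.items():
        for match in pattern.finditer(text):
            start = max(0, match.start() - context_chars)
            end = min(len(text), match.end() + context_chars)
            results.append((name, text[start:end].strip()))
    return results


docs = {
    "unit1.txt": "The landlord must comply with the Building Safety Act.",
    "unit2.txt": "No relevant legislation is discussed here.",
}
print(keyword_search(docs, "building safety"))
```

The case-insensitive match and the context window are what make the results useful for review work: a hit shows not just where a term appears, but how it is being used.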
I regularly share any relevant findings with colleagues and managers, and take part in discussions about how new AI-related tools and approaches may be used.
I have passed three modules so far, with scores ranging from 68% to 80%. The screenshots are from QA's Canvas platform.
Grade for first module (80%) for Data Science Principles
Most recent grade (68%) for Machine Learning Using Cloud Computing
I am pleased with the skills I have gained on the apprenticeship so far. As well as developing my general programming skills, I feel like it has greatly expanded the scope of what is possible in my line of work.
It may sound as if this apprenticeship is so technical that it is far removed from learning and teaching, but I believe it is already having an impact, and that this impact will only become more visible as I keep applying my findings. One tool I have built so far automatically identifies all hyperlinks contained in a module and tests whether they are working, which can save a significant amount of time otherwise spent checking each one manually.
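A simplified sketch of such a link checker, assuming module pages are available as HTML, might look like this. The class and function names are my own for illustration, and the HTTP check is defined but not run here, as it requires network access.

```python
# Sketch of a link checker: extract hyperlinks from HTML, then test
# each one with a HEAD request. Standard library only.
from html.parser import HTMLParser
import urllib.error
import urllib.request


class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.startswith("http"):
                    self.links.append(value)


def extract_links(html):
    """Return all absolute http(s) hyperlinks found in an HTML string."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links


def check_link(url, timeout=10):
    """Return True if the URL responds with a non-error status."""
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, ValueError):
        return False


page = '<p>See <a href="https://example.com/guide">the guide</a>.</p>'
print(extract_links(page))  # ['https://example.com/guide']
```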
Another use case is the ability to perform deep keyword searches across all study material. This changes the scope of what is possible when reviewing programmes and modules. Two examples:
I am taking part in a project to identify the use of unnecessary gender-specific language in our older study material; updates will be prioritised in order to increase inclusivity and better reflect UCEM's values.
If a key piece of information affecting learners needs to be updated, such as an update to legislation, a keyword search can help find all occurrences and enable a targeted rewrite.
As I progress, I would like to get to the point where my programs can be used by all relevant staff including learning designers, academics and our editorial team, to help them allocate their time where it will have the most impact.
Although I am doing a technical apprenticeship, the form of AI which has been in the news since early 2023 is generative AI, using tools like ChatGPT. In the first few months of 2023 I took about a dozen courses from different platforms on generative AI and its uses.
I have used my paid ChatGPT subscription to create a series of custom chatbots which will be shared with other staff. For example, I have created and trained a bot to automatically draft alt text for images which currently have none, following WCAG 2.2 standards. Naturally, this is following a strict internal QA process and subject expert review, but the intention is to roll it out across the department when we are satisfied.
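For illustration, a request of this kind to the OpenAI API might be structured as below. The model name, prompt wording, and function are hypothetical and are not the actual bot's configuration.

```python
# Hypothetical sketch of how an alt-text drafting request could be
# structured for the OpenAI chat completions API. The payload is built
# but not sent, since sending it requires an API key and network access.
def build_alt_text_request(image_url):
    """Build a chat-completion payload asking for WCAG-style alt text."""
    return {
        "model": "gpt-4o",  # illustrative model name
        "messages": [
            {
                "role": "system",
                "content": (
                    "Draft concise alt text for the supplied image, "
                    "following WCAG 2.2 guidance: describe the image's "
                    "content and function, not just its appearance."
                ),
            },
            {
                "role": "user",
                "content": [
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            },
        ],
    }


payload = build_alt_text_request("https://example.com/diagram.png")
# The payload would then be sent with the official client, e.g.:
# from openai import OpenAI
# response = OpenAI().chat.completions.create(**payload)
print(payload["model"])
```

Whatever the bot drafts is only ever a starting point; the human review step described above remains essential.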
After a few months of research and experimentation with AI, I was invited to present my findings to UCEM's senior leadership team, focusing on ways in which it can potentially assist learning designers and academics when developing modules. This 30-minute presentation and demo was well received and encouraged discussions at an executive level on how the institution should approach AI.
My presentation title slide is below.
I have also taken several online courses either directly or indirectly related to generative AI; a selection of certificates is below. In addition, I took Coursera's AI for Everyone, which I audited, so I do not have a certificate for it.
My SLT presentation title slide
As always, using ed tech responsibly means the focus should be on the 'ed', not the 'tech'. There are many potential issues with generative AI, including:
biases inherent in the model's training data;
inaccuracies caused by 'hallucinations';
an over-reliance on the technology, leading to the under-resourcing of real people.
The courses I have taken on prompt engineering have identified strategies to try to minimise the first two issues. I have found that the more specific the prompt, the better the results.
So far all the custom bots I have created have been used for internal processes only, and preliminary feedback from staff is good; however, after thorough rounds of testing and scrutiny, we would eventually like to roll out relevant chatbots to students for limited purposes – e.g. for advice on referencing.
I feel that human connection is a vital part of the learning experience, especially in online learning, where a learner's interaction with staff is already limited. We cannot and should not replace teaching staff with bots, however tempting the cost savings may be when the technology is cheap and readily available. On this note, Mustafa Suleyman's book The Coming Wave makes the point that the next few years will bring great upheaval for exactly this reason. We need to keep a sense of responsibility, and keep in mind that everything we do should benefit our learners.