AI is the development of computer systems that can perform tasks that normally require human intelligence—like recognising faces, understanding language, making decisions, or playing games.
Machine learning (ML): computers use algorithms and statistical models to "learn" patterns from data rather than following hand-written rules.
Neural networks, loosely inspired by the structure of the human brain, pass inputs through layers of connected "neurons" to make predictions (see the sketch below).
AI can improve over time by training on more data.
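To make the "neural network" idea concrete, here is a minimal sketch of a forward pass in plain NumPy. The weights here are invented for illustration; in real machine learning they would be learned from training data.

```python
import numpy as np

def sigmoid(x):
    # Squashes any number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

# Made-up weights for a tiny network: 3 inputs -> 2 hidden "neurons" -> 1 output.
# In real machine learning these numbers are *learned* from training data.
W_hidden = np.array([[0.5, -0.2, 0.1],
                     [0.3,  0.8, -0.5]])
b_hidden = np.array([0.0, 0.1])
W_output = np.array([1.2, -0.7])
b_output = 0.05

def predict(features):
    # Each layer multiplies inputs by weights, adds a bias, then applies sigmoid.
    hidden = sigmoid(W_hidden @ features + b_hidden)
    return sigmoid(W_output @ hidden + b_output)

print(predict(np.array([0.9, 0.1, 0.4])))  # a number between 0 and 1
```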
Virtual assistants (Siri, Alexa)
Chatbots and customer service tools
Recommendation engines (Netflix, YouTube)
Self-driving cars
AI in healthcare, law enforcement, and education
AI can increase efficiency, accuracy, and automation in many fields.
Raises ethical questions: bias in decision-making, job displacement, surveillance, and control.
Data privacy and how personal data is used.
Lack of transparency (the “black box” problem—how AI makes decisions isn’t always clear).
Bias in training data can lead to unfair or harmful results.
Who is responsible when AI causes harm?
Check out the Computer Science Field Guide: https://www.csfieldguide.org.nz/en/chapters/artificial-intelligence/
Watch the video and read through the chapters thoroughly.
You need to understand the big picture so you can make informed decisions and relate them to the topic.
This gives you background understanding of AI, including neural networks and generative AI.
What is 'bias' and what types are there?
There are many different types of bias with a variety of classification systems for them:
Confirmation bias – looking for information that supports our beliefs while rejecting information that doesn’t.
Stereotyping – using stereotypes to make assumptions about events or about why people behave the way they do.
Normalcy bias – thinking the situation we are currently in will always be the same. An example is thinking the climate crisis is overstated and things will be OK.
Motivated reasoning – believing arguments that favour our thinking are stronger than arguments that conclude the opposite to what we think. This is partially because arguments seem more plausible when they align with our existing beliefs and ideas.
Anchoring bias – when we rely heavily on one trait or piece of information, often the first we learned or viewed on a subject, when making a decision. This is also known as first impression bias.
Source: Science Learning Hub – Pokapū Akoranga Pūtaiao, The University of Waikato Te Whare Wānanga o Waikato, www.sciencelearn.org.nz
AI hallucinations refer to when an AI model, particularly a generative AI, produces outputs that are inaccurate, nonsensical, or fabricated, despite appearing confident and coherent. These errors can range from minor factual mistakes to entirely made-up information, and can occur in various contexts, such as text generation, image creation, or even legal research.
Several factors contribute to AI hallucinations (a short sketch after this list shows how a model can be confidently wrong):
Insufficient or incorrect training data: Models trained on flawed or limited datasets may struggle to accurately represent real-world information, leading to fabricated or inaccurate outputs.
Bias in training data: Biased data can cause AI models to perpetuate and amplify existing societal biases, resulting in skewed or unfair responses.
Over-reliance on patterns: AI models may overgeneralize patterns from their training data, leading them to make incorrect assumptions or generate nonsensical outputs when faced with novel or complex situations.
Lack of real-world understanding: AI models lack true understanding and consciousness, so they may struggle to differentiate between factual information and fabricated content, especially when prompted with ambiguous or complex questions.
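A toy illustration of the "over-reliance on patterns" point: the tiny classifier below, trained on hypothetical data (scikit-learn assumed available), answers with near-total confidence about an input far outside anything it has seen, much as a language model can sound confident while fabricating.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: the model only ever sees small values of x,
# where the rule "label = 1 when x > 2" happens to hold.
X_train = np.array([[0.5], [1.0], [1.5], [2.5], [3.0], [3.5]])
y_train = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X_train, y_train)

# Asked about a point far outside its training data, the model still answers
# with near-total confidence -- it is extrapolating a learned pattern,
# not consulting any real-world knowledge.
print(model.predict_proba([[100.0]]))  # roughly [0.00, 1.00]: confident, unverified
```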
Timestamped Contents
Time – Topic – Speaker/Role
0:00 – Introduction and housekeeping – Prof. Bryony James (Facilitator)
2:18 – Opening Speaker: “Bots in Agricultural Robotics” – Prof. Mike Duke (Dean of Engineering)
6:15 – Presentation from robot “Archie” on vine pruning using AI – Mike Duke + robot simulation
8:55 – AI and Automation: Replacing jobs with narrow AI – Mike Duke
9:27 – Second Speaker: “Three Cs of AI: Cuts, Creation, Collaboration” – Dr Amanda Williamson (Deloitte & Univ)
12:05 – Productivity study: AI assistants in call centres – Amanda Williamson
13:31 – Democratisation of AI tools: Prompt engineering and job shifts – Amanda Williamson
14:35 – Deepfakes and misinformation risk – Amanda Williamson
15:27 – Third Speaker: “Creative Fear of ChatGPT” – Prof. Nick Agar (Philosopher)
17:21 – Challenges in education: Students using AI, plagiarism and authorship – Nick Agar
20:04 – Philosophical questions on AI-generated vs self-generated content – Nick Agar
21:28 – Fourth Speaker: “Te Reo Māori, Data Sovereignty, and AI” – Assoc Prof. Te Taka Keegan (CompSci)
23:00 – Māori students using GPT for Te Reo and concerns about data ownership – Te Taka Keegan
25:59 – Opportunities for culturally aligned AI tools – Te Taka Keegan
26:51 – Panel summary – Prof. Bryony James
28:11 – Audience Q&A begins: What is a Prompt Engineer? – Amanda Williamson
29:53 – How will AI-driven disinformation affect social cohesion? – Nick + Amanda + panel
32:18 – Can legislation keep up with AI? – Panel discussion
36:55 – How do we retrain fast enough for new AI jobs? – Mike + Amanda
39:05 – What is the university doing to prepare students for the AI future? – Mike + Te Taka
41:02 – Should educators encourage AI use instead of banning it? – Panel discussion
45:30 – Guardrails: AI hallucinations, bias, and IP concerns – Amanda Williamson
47:09 – Is AI inherently biased and how can we fix it? – Panel discussion
50:48 – Could AI lead to a “Skynet” scenario? – Panel discussion (mostly optimistic)
56:22 – Who sets the guardrails on AI content and ethics? – Panel discussion
58:24 – Final reflections: What personally scares each speaker about AI? – All panelists
1:03:03 – Closing remarks: Humanity, hope, and responsibility – Prof. Bryony James
Speaker Bios
Professor Mike Duke is the Dean of Engineering and Dr John Gallagher Chair in Engineering at the University of Waikato. Mike is a founding member of Waikato Robotics Automation and Sensing (WaiRAS) research group. He is a principal investigator of the MaaraTech MBIE Optimising Horticulture project, where he leads a team developing hardware for horticultural robotics.
Dr Amanda Williamson is a Senior Lecturer in innovation and strategy at the University of Waikato, and a Manager in AI & Data Consultancy at Deloitte. Her research interests include the adoption of artificial intelligence in business and the health of entrepreneurs, using innovative research methods. Amanda brings extensive knowledge of data analytics, machine learning and innovation strategy to her teaching, and has experience leading high-performing research teams internationally. She is also a member of the governance board of the Artificial Intelligence Researchers' Association in New Zealand.
Professor Nick Agar is a Philosopher and Professor of Ethics at the University of Waikato. He has spent the past thirty years exploring the ethical implications of technological change, and the ways in which genetic and cybernetic technologies may alter us.
Associate Professor Te Taka Keegan (Waikato-Maniapoto, Ngāti Porou, Ngāti Whakaue) is an Associate Professor of Computer Science, the Associate Dean Māori for Te Wānanga Pūtaiao (Division of HECS) and a co-director of Te Ipu Mahara (University of Waikato's AI Institute). His career has focused on te reo Māori in technology; lately he has been concentrating on activating Māori Data Sovereignty, investigating Artificial Intelligence for te reo Māori and understanding what particular cyber-security risks there are for Māori.
AI algorithms are crucial in enhancing car safety by enabling advanced driver-assistance systems (ADAS) to detect hazards, predict potential collisions, and respond in real time. These systems use sensors, cameras, and AI-powered algorithms to analyze data and assist drivers, ultimately reducing accidents and improving overall safety. A simple time-to-collision check, sketched after this list, shows the kind of calculation such systems make.
1. Lane Keeping Assist and Lane Departure Warning:
AI algorithms analyze images from cameras to detect lane markings.
They can identify when a vehicle is drifting out of its lane and alert the driver or even automatically steer the car back into the lane.
This helps prevent accidents caused by unintentional lane changes or drifting.
2. Adaptive Cruise Control (ACC):
ACC uses AI to maintain a safe following distance from the vehicle ahead.
The system automatically adjusts the car's speed based on traffic conditions and the speed of surrounding vehicles.
This helps prevent rear-end collisions and provides a smoother driving experience.
3. Automatic Emergency Braking (AEB):
AEB relies on sensors and AI algorithms to detect potential collisions.
If a collision is imminent, the system can automatically apply the brakes to prevent or minimize the impact.
AEB has been shown to significantly reduce the severity of accidents.
4. Pedestrian and Cyclist Detection:
AI algorithms can analyze camera images to identify pedestrians and cyclists.
By recognizing these vulnerable road users, the system can alert the driver or even initiate emergency braking to avoid a collision.
5. Blind Spot Monitoring:
Sensors (often radar) are used to detect vehicles in the blind spots of the car.
AI algorithms analyze the sensor data and alert the driver to the presence of a vehicle in the blind spot, helping to prevent lane change accidents.
6. Driver Monitoring Systems (DMS):
AI-powered DMS systems use cameras and sensors to monitor the driver's alertness and focus.
They can detect signs of drowsiness, distraction, or impairment.
The system can then issue warnings or even intervene to prevent accidents caused by a distracted or fatigued driver.
7. Enhanced Stability and Traction Control:
AI algorithms analyze sensor data (speed, steering angle, etc.) to improve vehicle stability and traction control.
In slippery conditions, the system can brake individual wheels or reduce engine power to correct oversteer or understeer, enhancing control.
8. Predictive Maintenance:
AI algorithms can analyze vehicle data (engine performance, wear and tear on parts, etc.) to predict potential maintenance issues.
This allows for proactive maintenance, preventing unexpected breakdowns and ensuring the vehicle's safety.
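As a rough illustration of the AEB idea above, here is a minimal time-to-collision sketch. The speeds, gap, and thresholds are invented for illustration; production systems fuse radar and camera data and use far more sophisticated decision logic.

```python
def time_to_collision(gap_m, own_speed_ms, lead_speed_ms):
    """Seconds until impact if neither vehicle changes speed."""
    closing_speed = own_speed_ms - lead_speed_ms
    if closing_speed <= 0:
        return float("inf")  # not closing the gap, so no collision course
    return gap_m / closing_speed

def aeb_decision(gap_m, own_speed_ms, lead_speed_ms,
                 warn_at_s=2.5, brake_at_s=1.2):
    # Invented thresholds: warn the driver first, brake only if it gets critical.
    ttc = time_to_collision(gap_m, own_speed_ms, lead_speed_ms)
    if ttc < brake_at_s:
        return "APPLY BRAKES"
    if ttc < warn_at_s:
        return "WARN DRIVER"
    return "OK"

# Own car at 25 m/s (90 km/h), lead car at 15 m/s, 20 m ahead:
print(aeb_decision(20.0, 25.0, 15.0))  # TTC = 2.0 s -> "WARN DRIVER"
```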
AI algorithms are revolutionizing healthcare by assisting with diagnosis, treatment, and administrative tasks. Machine learning algorithms analyze medical images, patient records, and genetic data to identify patterns and predict outcomes with greater accuracy than humans in some cases. This enables earlier disease detection, personalized treatment plans, and more efficient resource allocation.
1. Diagnosis and Disease Detection:
Medical Imaging Analysis:
AI algorithms, particularly deep learning, excel at analyzing images like X-rays, MRIs, and CT scans to detect diseases like cancer, fractures, and cardiovascular issues. For example, AI can identify subtle signs of cancer in mammograms that might be missed by radiologists.
Early Disease Detection:
AI can analyze various data points, including patient history, genetic information, and even wearable sensor data, to predict the likelihood of disease development and identify individuals at high risk for proactive intervention.
Pathology:
AI algorithms can analyze tissue samples for diagnosis, aiding pathologists in identifying cancerous cells and other abnormalities.
2. Treatment Planning and Personalization:
Personalized Treatment:
AI analyzes patient data to create tailored treatment plans based on individual needs, genetic predispositions, and lifestyle factors.
Drug Discovery:
AI algorithms can analyze vast datasets of molecular information to identify potential drug candidates and predict their efficacy and safety, speeding up the drug development process.
Treatment Optimization:
AI can help doctors select the most effective treatment options by analyzing patient data and predicting treatment outcomes based on past cases.
3. Improving Healthcare Efficiency:
Administrative Tasks:
AI-powered tools can automate tasks like appointment scheduling, billing, and patient inquiries, freeing up healthcare staff to focus on patient care.
Remote Monitoring and Telemedicine:
Wearable sensors and AI-powered platforms enable remote patient monitoring and telemedicine consultations, expanding access to healthcare, particularly for those in remote areas.
Robotics in Healthcare:
AI-driven robots can assist with tasks such as surgery, rehabilitation, and medication dispensing, enhancing efficiency and precision.
4. Predicting Disease Outbreaks and Risks:
Predictive Analytics:
AI can analyze patient data and environmental factors to predict potential disease outbreaks, allowing for early intervention and resource allocation.
Risk Assessment:
AI algorithms can assess an individual's risk of developing certain diseases based on various factors, enabling preventative measures and personalized healthcare (a toy risk-score sketch follows this section).
5. Addressing Bias and Ensuring Ethical Implementation:
Data Quality and Bias:
Healthcare AI systems rely on data, and it's crucial to address potential biases in datasets to ensure fair and accurate results.
Ethical Considerations:
Ensuring data privacy, security, and transparency is paramount when implementing AI in healthcare.
Human-Centered Approach:
Integrating AI into healthcare requires a human-centered approach, focusing on how AI can augment human capabilities and improve patient care rather than replace clinicians.
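To make the risk-assessment idea concrete, here is a toy sketch that fits a logistic-regression risk score on entirely synthetic "patient" data (scikit-learn assumed available). Nothing here is medically meaningful; it only shows the shape of the technique.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Entirely synthetic data: 200 "patients" with two made-up features
# (age in years, resting blood pressure) and a 0/1 disease label.
rng = np.random.default_rng(0)
age = rng.uniform(20, 80, 200)
bp = rng.uniform(100, 180, 200)
# Pretend risk rises with age and blood pressure (a toy rule, not medicine).
risk = 1 / (1 + np.exp(-(0.05 * (age - 50) + 0.03 * (bp - 140))))
disease = rng.random(200) < risk

X = np.column_stack([age, bp])
model = LogisticRegression().fit(X, disease)

# Predicted probability of disease for a hypothetical 65-year-old with BP 160.
print(model.predict_proba([[65, 160]])[0, 1])
```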
0:05 Introduction – What is AI ethics and why it matters
0:32 The core question: “Just because we can build it, should we?”
0:49 Ethics in real-world AI decisions (hiring, crime prediction, loans)
1:07 Overview: Bias, Fairness, Transparency, Accountability
1:55 Bias in AI – Sources, real-world examples, and consequences
2:25 Example: Facial recognition underperforming on dark-skinned faces
2:55 Example: Hiring algorithms learning bias from historical data
3:08 Example: Predictive policing and feedback loops
3:17 Example: Healthcare tools underrepresenting women and minorities
3:24 Causes of bias: training data, lack of team diversity, poor auditing
3:57 What does fairness mean in AI?
4:33 Equality vs Equity explained
4:41 Types of fairness: Demographic parity, Equal opportunity, Individual fairness (a small demographic-parity sketch follows this list)
5:07 Accuracy ≠ fairness: group differences in performance
5:21 Fairness is a values problem, not just a math problem
The “Black Box” problem in AI
6:10 What does transparency look like?
6:30 Explainable AI (XAI): clarity in how decisions are made
6:56 Trade-off between model accuracy and interpretability
7:00 Why accountability matters (e.g. AI-caused car accident)
7:23 Legal responsibility in AI outcomes
7:36 International examples: The EU AI Act
8:00 How to build accountability: audit trails, human-in-the-loop, and clear ownership
8:21 Summary: Accountability = humans remain in charge of AI outcomes
8:29 Ethical AI needs to be designed in from the start, not patched in later
8:56 What can individuals do? Ask questions, demand transparency, support diversity, stay informed
9:47 Wrapping up: AI ≠ conscious — humans are, so values must guide AI use
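One of the fairness notions mentioned at 4:41, demographic parity, is easy to check in code. Below is a minimal sketch with made-up predictions and group labels; real fairness audits compare many metrics, not just this one.

```python
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Difference in positive-prediction rates between two groups.

    A gap near 0 means the model selects both groups at similar rates;
    this is one narrow notion of fairness, not the whole story.
    """
    predictions = np.asarray(predictions)
    groups = np.asarray(groups)
    rate_a = predictions[groups == "A"].mean()
    rate_b = predictions[groups == "B"].mean()
    return rate_a - rate_b

# Made-up predictions from some hiring model (1 = shortlisted):
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.8 - 0.2 = 0.6 -> large gap
```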
DTTA will provide one at the start of Term 3. This will be advertised on the DTTA Mobilise forum.
These are the DTTA Derived Grade Exam resources for 91898 provided in 2024.
Your teacher will provide this. Do your best and remember to give specific examples!