Comprehensively explain the key problems or issues related to Artificial Intelligence. (E)
To demonstrate Excellence, students should:
- Choose two challenges facing AI development or use
- Explain each challenge in depth
- Give detailed, real-world examples (e.g. healthcare, self-driving cars)
The first key challenge is bias. Bias in AI refers to unfair or skewed outcomes resulting from biased training data or algorithms, and it is a critical issue in both healthcare and autonomous vehicles.
In healthcare, AI diagnostic tools are trained on historical patient data. If that data over-represents certain ethnicities, genders, or age groups, the AI may perform worse for the under-represented groups.
Example: Studies have shown some AI tools for detecting skin cancer were less accurate for darker skin tones.
Impact: Leads to misdiagnoses, delayed treatment, and health inequity.
Autonomous vehicle vision systems are trained to recognise pedestrians, cyclists, and obstacles. If data sets don’t represent diverse conditions (e.g. dark-skinned pedestrians, people in wheelchairs, snowy conditions), the car may fail to detect them.
Example: In testing, some facial recognition and object detection systems performed worse for non-white individuals.
Impact: Increases the risk of accidents, raising serious safety and ethical concerns.
📌 Why this matters: Biased AI harms public trust, limits adoption, and can lead to legal action or reputational damage for companies. Addressing this requires diverse training data, human oversight, and transparent testing.
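To make "transparent testing" concrete, here is a minimal Python sketch of a per-group accuracy audit. The function name, group labels, and data are all hypothetical and deliberately tiny; the point is only that comparing a model's accuracy across demographic groups makes a bias gap visible before deployment.

```python
# Minimal sketch of a fairness audit: compare accuracy across demographic
# groups. All names and records below are illustrative, not real data.
from collections import defaultdict

def accuracy_by_group(records):
    """Return accuracy per group from (group, true_label, prediction) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, label, prediction in records:
        total[group] += 1
        if prediction == label:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Hypothetical results from a skin-lesion classifier (fabricated for illustration):
records = [
    ("lighter skin", "malignant", "malignant"),
    ("lighter skin", "benign", "benign"),
    ("lighter skin", "malignant", "malignant"),
    ("darker skin", "malignant", "benign"),  # missed diagnosis
    ("darker skin", "benign", "benign"),
]
print(accuracy_by_group(records))
# {'lighter skin': 1.0, 'darker skin': 0.5}
```

A large gap between groups (here 1.00 vs 0.50) is exactly the kind of disparity that testing on diverse data is meant to surface before a system is used on patients.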
The second key challenge is data privacy and security. AI systems require vast amounts of personal data, from health records to driving patterns, and managing that data responsibly is a major challenge.
AI-powered health tools (e.g., digital assistants, predictive diagnosis) collect sensitive medical history.
Issue: If this data is not properly protected, a breach could expose personal health information, violating privacy laws such as the NZ Health Information Privacy Code or international laws like the GDPR.
Ethical Concern: Patients may lose trust in digital health solutions.
Autonomous vehicles collect continuous data about location, speed, traffic, and even passenger conversations (e.g., voice assistants in Teslas).
Issue: Where is that data stored? Who has access? Could it be hacked or misused?
Legal Concern: Laws around data ownership and surveillance lag behind the technology.
📌 Why this matters: If AI systems don’t safeguard data, public adoption will be limited, especially in critical sectors like health and transport.
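One common safeguard is pseudonymisation: replacing direct identifiers with tokens before records are stored or used for training. The Python sketch below shows the idea under that assumption; the field names, key handling, and record are illustrative only, and real systems also need encryption, access controls, and legal compliance review, not just hashing.

```python
# Minimal pseudonymisation sketch: swap a direct identifier for a keyed hash
# so records can still be linked without revealing who the person is.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption: kept in a secure vault

def pseudonymise(record: dict) -> dict:
    """Return a copy of the record with the identifier replaced by a token."""
    token = hmac.new(SECRET_KEY, record["patient_id"].encode(), hashlib.sha256)
    safe = {key: value for key, value in record.items() if key != "patient_id"}
    safe["patient_token"] = token.hexdigest()
    return safe

# Hypothetical patient record (identifier format is made up for illustration):
record = {"patient_id": "NHI-1234567", "age": 54, "diagnosis": "melanoma"}
print(pseudonymise(record))
```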
To write an Excellence answer, use a structure like this:
"One key issue facing AI in healthcare is bias. This happens when training data doesn’t represent all groups fairly. For example, AI tools diagnosing skin cancer can be less accurate for people with darker skin, which causes health inequity and ethical concern. Similarly, self-driving cars may misidentify pedestrians from underrepresented groups. These issues reduce public trust and safety, and require careful testing and regulation to overcome."
A useful supporting comparison is Weak vs Strong AI. Weak AI is task-specific (e.g. Siri, ChatGPT, AI-assisted X-ray analysis).
Strong AI would match full human reasoning (not yet achieved).
Current AI in healthcare and self-driving is Weak AI, meaning it can’t explain itself, think abstractly, or understand context deeply.
This makes human oversight and ethical guardrails critical.