Artificial intelligence (AI) refers to the ability of a computer or machine to perform tasks that would normally require human intelligence, such as learning, problem-solving, decision-making, and more. There are several different approaches to building AI systems, including machine learning, where a system is trained on a dataset and can improve its performance over time, and rule-based systems, where the system follows a set of predetermined rules to make decisions. AI can be applied to a wide range of areas, including natural language processing, image and video recognition, robotics, and more. The goal of AI research is to create systems that can perform tasks at least as well as, or ideally better than, humans. AI has the potential to revolutionize many industries and transform the way we live and work, but it also raises important ethical and social questions.
From: https://ditchthattextbook.com/ai/
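For students who want to see the two approaches mentioned above side by side, here is a minimal sketch in Python. The "fruit" data, feature names, and thresholds are made up purely for illustration; the point is that in the rule-based version a person writes the decision rule, while in the machine-learning version the rule is learned from labeled examples.

```python
# --- Rule-based approach: a human writes the decision rules ---
def rule_based_classifier(weight_grams, is_red):
    """Classify a fruit using hand-written rules."""
    if is_red and weight_grams < 200:
        return "apple"
    return "melon"

# --- Machine-learning approach: the rule is learned from examples ---
# Instead of choosing a cutoff by hand, we pick the weight threshold
# that misclassifies the fewest labeled examples.
examples = [
    (150, "apple"), (170, "apple"), (180, "apple"),
    (900, "melon"), (1100, "melon"), (1300, "melon"),
]

def learn_threshold(data):
    """Try each weight as a cutoff and keep the one with the fewest errors."""
    best_cutoff, best_errors = None, len(data) + 1
    for cutoff, _ in data:
        errors = sum((w < cutoff) != (label == "apple") for w, label in data)
        if errors < best_errors:
            best_cutoff, best_errors = cutoff, errors
    return best_cutoff

cutoff = learn_threshold(examples)

def learned_classifier(weight_grams):
    return "apple" if weight_grams < cutoff else "melon"

print(rule_based_classifier(160, is_red=True))  # "apple" (rule written by a person)
print(learned_classifier(160))                  # "apple" (rule learned from data)
```

Notice that if the examples changed, the learned cutoff would change with them, while the hand-written rule stays fixed until a person edits it.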
Watch:
Activity #1:
Work with a partner
Pick 2 topics below.
Using the questions below, discuss the pros and cons of each topic.
Try one of the AI Challenges. If a challenge asks you to sign up for ChatGPT, ask your teacher to use their teacher account under their supervision.
How did you do? Write a reflection about your challenge.
Questions:
Privacy: How does AI affect our privacy? Should AI systems be allowed to collect and use our personal information? How can we protect our privacy in a world with AI?
Bias: Can AI be biased? How can biases in AI systems impact people? What can we do to ensure that AI is fair and doesn't discriminate?
Job Displacement: Will AI take away people's jobs? How can we prepare for changes in the job market caused by AI? Are there new jobs that AI can create?
Responsibility: Who is responsible if something goes wrong with an AI system? Should AI developers be held accountable for the actions of their creations? How can we ensure that AI is used responsibly?
Robot Rights: Should robots and AI have rights? What rights, if any, should they have? How do we determine what rights should be given to AI?
Transparency: Should AI systems be transparent and explain their decisions? How can we trust AI if we don't know how it makes decisions? What information should AI systems share with us?
Safety: How can we make sure that AI is safe to use? What are the potential risks of using AI? What measures should be in place to prevent accidents or misuse of AI?
Social Impact: How does AI affect our daily lives and society as a whole? What are the positive and negative impacts of AI on our society? How can we maximize the benefits of AI while minimizing the negative effects?
Data Ownership: Who owns the data that AI uses? Should individuals have control over their personal data that is collected and used by AI? How can we ensure that data is used ethically and responsibly?
Activity #2:
Watch this video
Play with Google Quick, Draw!
How many of your drawings did Quick, Draw! guess correctly?
Check out the drawings available in the Quick, Draw! Data section. (If you are curious about exploring the public dataset with code, see the sketch after the questions below.)
Pick one object.
What features of the item are common in most of the pictures? Which features appear in only some of them?
Questions:
Is it OK for Google to use your drawings as data? Why or why not?
When you were playing with Quick, Draw!, did you realize that Google was keeping your drawings to use as data?
Could something have been done to make this clearer to you?
Does it help to know that the data is all anonymized, so there’s no way to connect it directly to you?
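For those who want to dig into the drawings beyond the Data gallery, here is a minimal sketch for exploring the public Quick, Draw! dataset offline. It assumes you have downloaded a "simplified" .ndjson file for one category (the filename apple.ndjson is just a placeholder) and that each line is a JSON record with "word", "recognized", and "drawing" fields, as described in the googlecreativelab/quickdraw-dataset documentation.

```python
# Count how many downloaded drawings the game recognized, and how many
# strokes people typically used — one way to spot "common features."
import json

FILENAME = "apple.ndjson"  # placeholder path; download a category file first

total = 0
recognized_count = 0
stroke_counts = []

with open(FILENAME, "r", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        total += 1
        if record.get("recognized"):
            recognized_count += 1
        # "drawing" is a list of strokes; each stroke is a pair of
        # x-coordinate and y-coordinate lists.
        stroke_counts.append(len(record["drawing"]))

print(f"Drawings examined: {total}")
print(f"Recognized by the game: {recognized_count}")
if stroke_counts:
    print(f"Average strokes per drawing: {sum(stroke_counts) / len(stroke_counts):.1f}")
```

Comparing the average stroke count (or which strokes appear first) across many drawings is one concrete way to answer the question about which features are common and which are not.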