Is using AI for schoolwork cheating?
Vocabulary: artificial | intelligence | inclined | nonetheless | model
Ms. Khan’s students were discussing whether using AI for schoolwork is a good idea.
Oralia said, “Why would we waste time doing something the slow way, when it can be done so quickly? It’s much more efficient to use AI, and then we can do a lot more in a day.”
“But isn’t it still good to learn to do things yourself?” Jay argued. “We still learn times tables even though we have calculators.”
Ms. Khan chimed in. “Before you can decide whether to use a tool, shouldn’t you know how it works? Large language models like ChatGPT, Bard, or any of the others are really complex, but the idea is pretty simple. On a basic level, an LLM works by selecting the most likely word to come next in a text. The language model uses all of the examples it has been fed to choose which words to use where.”
Oralia and Jay found this lesson online and shared it with their class.
Tiny Language Model
Imagine you’ve got a tiny language model trained only on the few sentences below. It is built to look at these examples and use them to complete sentence prompts. The model looks at the last word in a prompt, then searches through its data set to see which words can follow that word. If it finds multiple options, it picks the one that occurs most often. (A short sketch of this rule in code appears after the example sentences.)
Snakes chase mice.
Birds eat bugs.
Spiders eat bugs.
Children squash bugs.
Cats chase mice.
Dogs chase cats.
Rabbits eat carrots.
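Here is one way to write that rule as a short Python program. This is a minimal sketch of the classroom example only; the SENTENCES list and the complete() helper are names made up for illustration, not how real large language models work inside.

```python
from collections import Counter, defaultdict

# The tiny model's entire "training data": the seven example sentences.
SENTENCES = [
    "Snakes chase mice.",
    "Birds eat bugs.",
    "Spiders eat bugs.",
    "Children squash bugs.",
    "Cats chase mice.",
    "Dogs chase cats.",
    "Rabbits eat carrots.",
]

# For each word, count how often every other word follows it.
followers = defaultdict(Counter)
for sentence in SENTENCES:
    words = sentence.rstrip(".").lower().split()
    for current_word, next_word in zip(words, words[1:]):
        followers[current_word][next_word] += 1

def complete(prompt):
    """Finish a prompt the tiny model's way: take the prompt's last
    word and pick its most frequent follower in the training data."""
    last_word = prompt.lower().split()[-1]
    options = followers.get(last_word)
    if not options:
        return prompt + " ?"  # the model has never seen this word
    best_word, _count = options.most_common(1)[0]
    return f"{prompt} {best_word}."

# The model ignores everything except the last word of the prompt:
print(complete("Elephants squash"))  # -> Elephants squash bugs.
```

Try tracing the two prompts below through the same steps by hand.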
How might the tiny language model likely complete these sentences? How do you know?
Bears chase ________________________ .
Children eat ________________________ .
Would you expect a large language model to make inaccurate predictions when completing these sentences?
Why or why not?
Did you know that users like you help train large language models? Much of what you type into these programs can be used to teach the model what to write. This has created concerns about privacy. Schools work hard to protect students’ data privacy, but many students share lots of their work and ideas with AI tools without even realizing it. Do you think this is a problem? If so, should students be permitted to take the risk nonetheless? Or do you think that students should be protected from having their data used by AI programs?