There is some potential for using AI to help craft or first-draft rubrics. However, there are still limits, and the quality of rubrics created by AI can be highly variable.
The quality of rubrics created by AI depends on the training data the model has had access to. This is generally data from the USA, so outputs may reflect overseas measures of success, such as Common Core syllabuses. This can happen even if you provide source data to work from, like a Framework document. The quality of AI-generated rubrics is also likely to vary from subject to subject. So, review the output in detail to avoid unintended consequences.
The Australian National Framework for AI in Education explicitly states that where decisions are supported by AI, there needs to be a clear line of sight and, ultimately, a human decision:
5.1 Human responsibility: teachers and school leaders retain control of decision making and remain accountable for decisions that are supported by the use of generative AI tools.
When prompting generative AI, remember the acronym SPRITE to guide your prompting and increase the likelihood of a successful outcome. Remember, though, that sprites can play tricks on you: check anything you get against your own knowledge and understanding!
Specific: ask the tool specifically for what you want
Parameters: give the tool clear parameters. You can copy and paste in parts of policy or achievement standards if you like
Role: specify the role you would like AI to play; e.g., a friendly teacher
Iteration: iterate by asking the tool to refine the output
Text type: clearly state the type of text you want – e.g., a table, a paragraph, a letter
Exemplars: offer examples of the kinds of things you want to see – e.g., “using the ACT senior secondary achievement standards for Arts…” or “in the style of a formal curriculum document”
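For readers who prompt AI tools programmatically rather than through a chat window, the SPRITE elements above can be assembled into a reusable prompt template. The sketch below is illustrative only: the function name, field names, and example wording are our own, not part of any official template, and Iteration is left to the conversation rather than encoded in the prompt itself.

```python
# Hypothetical helper: assembles a prompt following the SPRITE structure.
# Field names and wording are illustrative assumptions, not an official template.

def build_sprite_prompt(specific, parameters, role, text_type, exemplar):
    """Combine five of the SPRITE elements into a single prompt string.

    Iteration happens in the back-and-forth with the tool, so it is not
    represented as a field here.
    """
    return "\n".join([
        f"Act as {role}.",                                  # Role
        f"Task: {specific}",                                # Specific
        f"Work within these parameters: {parameters}",      # Parameters
        f"Present the output as {text_type}.",              # Text type
        f"For example: {exemplar}",                         # Exemplars
    ])

prompt = build_sprite_prompt(
    specific="draft a four-level rubric for a Year 10 persuasive essay",
    parameters="align with the attached achievement standard; plain English",
    role="a friendly, experienced English teacher",
    text_type="a table with one row per criterion",
    exemplar="in the style of a formal curriculum document",
)
print(prompt)
```

The same structure works equally well pasted into a chat interface, one element per line; the value is in covering each SPRITE element deliberately rather than in any particular wording.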