Criterion D: Strand i
1. Compose the questions
How to compose the questions?
Think about what kind of information you need.
Read the requirements for Strand ii, Strand iii, and Strand iv and base your questions on them.
Write questions for the people who will evaluate your design to answer (e.g. what works well, what needs to be improved). Write a maximum of five questions.
2. Collect the data
Include a description of how you are going to carry out the test (such as the survey or interview questions, the method you will use, etc.).
How to collect data?
1. Who to interview?
Choose two or three different ways to carry out your test (one of them has to be self-evaluation):
a) peer evaluation (interview at least five peers; a Google Form works well for this)
b) self-evaluation (check your Design Specification and comment on each criterion)
c) expert evaluation
2. Describe each testing method in detail: who is going to evaluate your product, how you are going to conduct the test, and why it is important to do it.
3. Complete the testing (e.g. send out your Google Form, interview your mom). Make sure you collect all the answers (typed or written).
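If you run the peer evaluation as a Google Form, you can export the responses as a CSV file (for example through the linked responses spreadsheet) and tally them with a short script. The Python sketch below is only an illustration, not a required step: the filename and the question text are placeholders, so replace them with your own.

    # Minimal sketch: tally answers exported from a Google Form as a CSV file.
    # The filename and the question header below are placeholders.
    import csv
    from collections import Counter

    RESPONSES_FILE = "peer_evaluation_responses.csv"  # your exported file

    with open(RESPONSES_FILE, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))  # one dict per respondent

    print(f"Total responses: {len(rows)}")

    # Count answers to one multiple-choice question (placeholder text).
    question = "How easy was the product to use?"
    counts = Counter(row[question] for row in rows if row.get(question))
    for answer, count in counts.most_common():
        print(f"{answer}: {count}")

Keep the printed summary (or a screenshot of it) in your design folder alongside the raw responses.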
3. Provide evidence
The evidence must be saved in your design folder. Acceptable evidence: a Google Form, typed feedback, handwritten feedback, recorded feedback, or photos.
Add links to all the documents you used in your testing (Google Forms, photos of your notes, etc.). Don't forget to share the documents with me (if you share a Google Form, use the Add collaborators option, or simply save it in your folder).
Testing methods
How to examine/test your final product?
To measure a design solution effectively, you need to test it against every aspect of your Design Specification. These tests can be classified as follows:
user observation - observe the user's reaction to features such as color and shape
user trials - the user is presented with the solution and is set a task to interact with it, with little or no guidance. The user’s interaction with the solution is observed and recorded.
performance test or user trial - how the product behaves and performs when given a specific task or goal (e.g. ease of navigation through your website or application, pop-up book mechanisms working properly, a mechanical toy performing as expected). A performance test can be used to determine the product’s function, ergonomics, and ease of use:
measurement of physical properties such as weight and size
timed tests (e.g. how long web pages take to load; see the sketch after this list)
ease of navigation (through an interactive story, game or website)
physical examination (touch the product for sharp edges, safety)
destructive tests assessing impact strength or flammability
cyclic tests (repeating the same action many times to check durability)
field trial or test rig - a test of the performance of a solution under the conditions and situations in which it will actually be used. The product is used repeatedly and in different conditions to check performance, reliability, and durability.
expert appraisal - the expert has particular knowledge and skills that allow him or her to make judgments on the success of the solution (any person whose work is connected to design: architects, designers, artists, illustrators, film and media professionals). The expert may be the client.
product analysis/comparison - compare your product with another existing product (use ACCESS-FM for the comparison).
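If your product is a website or web application, the timed load test mentioned in the performance-test list above can be automated. The Python sketch below is only an example under assumptions: the URL is a placeholder for your own site, and it uses the third-party requests library (installed separately with pip install requests).

    # Minimal sketch: time how long a web page takes to load, over a few trials.
    # The URL below is a placeholder; point it at your own published site.
    import time
    import requests

    URL = "https://example.com/my-design-project"
    TRIALS = 5

    timings = []
    for _ in range(TRIALS):
        start = time.perf_counter()
        response = requests.get(URL, timeout=10)  # fetch the page once
        elapsed = time.perf_counter() - start
        timings.append(elapsed)
        print(f"Status {response.status_code}: loaded in {elapsed:.2f} s")

    print(f"Average load time over {TRIALS} trials: {sum(timings)/len(timings):.2f} s")

Recording the average over several trials, ideally at different times of day, gives a fairer measure than a single load.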
For more information and examples, check the Criteria section on the MYP Design website.