The goal of Element H is to demonstrate a plan for testing the developed prototypes. In order to do so, testing procedures and criteria must be carefully established. Accurate testing is key, which is why engineering and mathematical principles are applied effectively, taking the form of both quantitative and qualitative testing.
For testing, the prototypes will face both qualitative and quantitative evaluation to gain more knowledge from users and to critique the Profectly prototypes effectively. Through mathematical and engineering principles, mainly those of software engineering, testing will be done effectively and optimally in order to obtain accurate results that apply to the prototypes and provide insightful feedback on the prototypes and concepts.
Because the prototypes are apps and software-based, they can't easily be tested under conventional engineering criteria. However, it was concluded that testing would be most effective if the prototypes were compared to the user stories created in previous elements and if, after some guidance from Ms. Mason, testing and feedback were obtained from potential users through surveys and other similar means. For user story testing, a spreadsheet is used: the user stories are first defined as criteria alongside the prototype scoring criteria, and these two sets of criteria are applied cohesively to the prototype in order to gain an accurate insight into its success. On a more detailed level, to score the prototype against a given user story, the user story is applied to all aspects of the prototype, and the extent to which the prototype achieves the criteria is then determined. An overall average score is also taken from the individual grades. With the average score and the individual scores, the success of the prototype is judged relative to other prototypes or, if there are none, by team evaluation.

For survey feedback and testing from potential users, the team concluded that the best approach is to make a few surveys (one each for prototypes #2 and #3) with a set of tasks and questions, plus a general survey that can be used on any device and poses questions similar to those of the other surveys but lacks testing potential. Because the questions are similar across all the surveys, the team will be able to compile the data from all of them. It's also important to note that, for potential user testing, each user will only take one of the three surveys, to prevent repetition of the same ideas, which would create unbalanced results.
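To illustrate how responses from the separate surveys could be compiled once the questions are worded similarly, below is a minimal sketch; the questions, answers, and data layout are hypothetical assumptions for illustration, not the team's actual survey structure.

from collections import defaultdict

# Hypothetical example responses, one list per survey (prototype #2, prototype #3, general).
# The question wording and answers are assumed for illustration only.
survey_responses = [
    [{"How easy was the app to navigate?": "Very easy", "Any suggestions?": "Add a dark mode"}],
    [{"How easy was the app to navigate?": "Somewhat easy", "Any suggestions?": "Faster loading"}],
    [{"How easy was the app to navigate?": "Easy", "Any suggestions?": ""}],
]

def compile_responses(surveys):
    # Merge answers to identically worded questions across all surveys.
    compiled = defaultdict(list)
    for responses in surveys:
        for row in responses:
            for question, answer in row.items():
                if answer:  # keep only answered questions
                    compiled[question].append(answer)
    return compiled

for question, answers in compile_responses(survey_responses).items():
    print(f"{question}: {len(answers)} response(s)")

Because every survey reuses the same core questions, merging the answers this way lets the team evaluate the concept as a whole even though each user only completes one survey.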
Note: Prototype #1 will only be tested against user stories, because testing it with potential users would be repetitive with Prototype #2; both are front-end designs, and Prototype #2 is built directly off of Prototype #1.
As mentioned in Testing Procedures, the user stories and their established criteria will be used to test and define the success of various aspects of the prototypes. To determine the extent to which a user story is accomplished, a numerical grade of 0, 1, 3, or 9 is assigned. Scoring definitions and criteria for each number are provided below, and a brief scoring sketch follows the list. The prototype gets an overall score equal to the mean of its individual user story grades; essentially, the higher the overall score, the better.
0 - Doesn't contain or show any functional base, or potential for one, that would apply to the user story criteria.
1 - Shows some potential for a functional base that applies to the user story, but provides no functionality (likely only UI or front-end based).
3 - Has some form of functionality that applies to the given user story, but doesn't meet all the required criteria and is in need of optimization.
9 - Successfully applies to the given user story and achieves all of its criteria.
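To make the scoring calculation concrete, here is a minimal sketch of the mean computation over the 0-1-3-9 grades; the user stories and grades shown are hypothetical examples, not actual results.

# Minimal sketch of the 0-1-3-9 user story scoring described above.
# The user stories and grades are hypothetical examples, not real results.
VALID_GRADES = {0, 1, 3, 9}

story_grades = {
    "User can create an account": 9,
    "User can set and track a goal": 3,
    "User receives progress reminders": 1,
}

assert all(grade in VALID_GRADES for grade in story_grades.values())

# Overall score is the mean of the individual user story grades; higher is better.
overall_score = sum(story_grades.values()) / len(story_grades)
print(f"Overall prototype score: {overall_score:.2f}")

The same mean calculation can be reproduced directly in the Google Sheets scoring spreadsheet described later, so the individual grades and the overall score stay in one place.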
For feedback and testing from potential users via surveys, the criteria will be identified through the feedback and criticisms provided in the surveys. A list of the pros and cons regarding the prototypes and their functionality will be compiled to gain a more thorough understanding of both the strengths and weaknesses of the prototypes and the concept in general.
In this sense, the team will have quantitative testing in the form of user story testing, and qualitative testing in the form of the users' provided feedback.
Because prototypes #1 and #2 are solely front-end designs and lack any sort of conventional app functionality, applicable engineering principles are few and far between, which is an oversight by the Profectly team, though clear engineering design principles were still shown through them. Prototype #3, however, has an engineered back end and real functionality, making it prime for user feedback through surveys and hands-on app testing. For user stories, engineering applies to this testing because it is based on a mathematical grading system, while data collection techniques are prevalent in the other testing. All in all, testing the prototypes through both user stories and potential user feedback will provide effective qualitative and quantitative testing, which is essential to making a well-engineered and optimal product. There was also evidence of engineering principles used to develop ideas around testing and the creation of the prototypes, with the team's frequent use of engineering notebooks to accurately document progress and note ideas.
In order to gain optimal feedback and validation from experts regarding the STEM work with app design and development, Profectly would need software engineers, graphic designers, app designers, etc. to provide effective, constructive criticism for both the UI front-end designs and the functionality and code of the back-end prototypes. Feedback on both the front-end and back-end prototypes is crucial because together they demonstrate the true potential of what Profectly can be. Without either, Profectly cannot be properly defined, which is why a variety of professionals, or an expert specializing in app development, would be optimal.
For prototypes #1 and #2, there are minimal scientific principles applied to the designs themselves, which means there isn't room for scientific concepts to be explored in their testing. Scientific concepts are far more applicable to the back-end prototype #3, where software engineering techniques were applied through the code and database management with Firebase. Even so, it is difficult to apply scientific principles to developed software through testing, so the testing outlined above will not involve many scientific concepts. However, the incremental testing of Prototype #3 included far more testing of the code itself, which can be found in Element G.
On the prototyping design level, Figma was used to create prototypes #1 and #2, and Kodular for #3; however, for testing, Google Sheets will be used to methodically organize and calculate scores based on the user stories used as criteria for prototype testing. Other software, such as Google Surveys, will be used to conduct potential user testing and obtain feedback.