Validation and Results

Methods of Testing

There are two main tests that we will perform to prototype our project and validate that we have achieved our project objectives: (1) quality assurance testing and (2) user testing.

Quality Assurance Testing:

  • This method of testing was chosen because we plan to create a final project that is code-based.

  • The programming language the team will be using is Python.

  • Ensuring that our code runs properly is directly tied to our success in assisting utility providers and government officials in determining the optimal return on investment when integrating renewable and non-renewable technologies.

  • When the program performs calculations, the computer may fail to store the result correctly in memory; this causes the set of instructions to return a value outside of the expected range (also known as overflow).
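
As a minimal sketch of how such a problem could be caught, the hypothetical check below rejects results that overflow to infinity or fall outside an expected range; the function name and bounds are placeholders, not part of the final design:

    import math

    # Hypothetical bounds for an expected result; the real limits will come
    # from the project's cost and capacity data.
    EXPECTED_MIN = 0.0
    EXPECTED_MAX = 1e12

    def check_result(value):
        """Return the value if it is finite and inside the expected range,
        otherwise raise an error instead of silently storing a bad number."""
        if not math.isfinite(value):
            raise OverflowError("calculation produced an infinite or NaN result")
        if not (EXPECTED_MIN <= value <= EXPECTED_MAX):
            raise ValueError(f"result {value} is outside the expected range")
        return value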

User Testing:

  • This method of testing was chosen because we want to make sure that our final deliverable is usable for our stakeholders.

  • User testing will occur as the program is being built.

    • Once a section of our website is completed, we will conduct one-on-one interviews with stakeholders to navigate the given section and rate the functionality of our platform.

    • Once interviews are completed, we will compile all stakeholder reviews into an overall approval percentage rating that will guide us on whether the platform is accepted by the majority of our stakeholders.

  • Even if the code is properly built and the platform technically works correctly, we must still consider how intuitive it is to use; otherwise the product will be deemed a failure if stakeholders do not understand how to use it. This user-based testing satisfies our third objective: ensuring that our decision-making platform is both user friendly and human centered.

Data Expectations from Testing

Quality Assurance Testing:

The majority of the data collected during code testing will be qualitative (for example, whether the application runs), while the quantitative data will reflect the execution time of the program. We expect the following:

  • The program will calculate the expected outputs based on the input data.

  • The desired program execution time is between 1 and 2 seconds.

  • Exception handling (try/except blocks) will stop the program from crashing or remaining stuck in a loop when creating solutions for users.

  • If the code is given unexpected inputs (such as a letter entered when a number was requested), it will have proactive measures in place to ask the user to re-enter a correct value (see the sketch after this list).
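
A minimal sketch of that proactive measure is shown below; the prompt wording and retry limit are assumptions for illustration only:

    def ask_for_number(prompt="Enter a number: ", max_attempts=3):
        """Prompt the user until they enter a valid number or run out of attempts."""
        for _ in range(max_attempts):
            raw = input(prompt)
            try:
                return float(raw)
            except ValueError:
                # A letter or other non-numeric entry lands here instead of crashing.
                print("That was not a number. Please re-enter a numeric value.")
        raise ValueError("No valid number entered after several attempts.")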

User Testing:

The data generated from the user tests will allow us to measure how users interact with the product. It will also open the door to other project considerations, such as defining a more user-friendly design layout. We expect the following:

  • Data gathered will indicate the usability of our platform with metrics such as time spent on each page, the number of questions answered incorrectly due to misunderstandings, and user feedback.

  • Data on task completion rates (how quickly users perform an action) and error rates (the percentage of invalid entries input by users).

Once users have tested our product, the team will send out satisfaction surveys to be completed. Users will be asked to rate different functions of the program from extremely easy to extremely difficult. By gathering this user feedback through one-on-one interviews and post-interview surveys, we expect to compile usable data that we will further incorporate into our final project.
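
One possible way to compile those ratings into the overall approval percentage described earlier is sketched below; the 1-5 rating scale and the approval cutoff are assumptions we would confirm as a team:

    # Hypothetical survey responses on a 1-5 scale
    # (1 = extremely difficult, 5 = extremely easy).
    ratings = [5, 4, 2, 5, 3, 4]

    APPROVAL_CUTOFF = 4  # ratings at or above this count as "approved"

    approved = sum(1 for r in ratings if r >= APPROVAL_CUTOFF)
    approval_percentage = 100 * approved / len(ratings)
    print(f"Overall approval: {approval_percentage:.0f}%")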

Plans on Conducting Testing

To ensure that our end product is built efficiently, we plan on evaluating our model by:

  • Defining requirements needed for our program.

  • Supplying users with a project plan that specifies how the program will work from start to finish.

  • Giving our testers a thorough understanding of our features.

We plan on testing our program while developing it in order to pinpoint errors before we develop further and build on a broken foundation.

For quality assurance testing we will be performing:

  • Code Review - the code written by everyone will be reviewed by all teammates for clarity and understanding. Suggestions will be made on how to better write or optimize code.

  • Functionality Testing - testing of screens, buttons, and other functions within the program's user interface. This will ensure that the program does not crash or run into errors.

  • Unit Testing - individual pieces of code will be tested and checked with both valid and invalid inputs (a minimal example follows this list).
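
A minimal example of the kind of unit test we have in mind is shown below, using Python's built-in unittest module; the function under test (return_on_investment) and its behavior are placeholders for whatever calculation the final program exposes:

    import unittest

    def return_on_investment(gain, cost):
        """Placeholder calculation used to illustrate the test structure."""
        if cost <= 0:
            raise ValueError("cost must be positive")
        return (gain - cost) / cost

    class TestReturnOnInvestment(unittest.TestCase):
        def test_valid_input(self):
            # A valid input should produce the expected output.
            self.assertAlmostEqual(return_on_investment(150, 100), 0.5)

        def test_invalid_input(self):
            # An invalid input should raise an error rather than crash the program.
            with self.assertRaises(ValueError):
                return_on_investment(150, 0)

    if __name__ == "__main__":
        unittest.main()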

Any slowdown that the program experiences as a result of large calculations needs to be recorded and tabulated so that we can spot weak points in the code.
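
One simple way to record that slowdown is to time each large calculation and append the result to a table for later review; the sketch below uses Python's time.perf_counter, and the calculation shown is only a placeholder:

    import time

    timings = []  # each row: (calculation name, elapsed seconds)

    def timed(name, func, *args, **kwargs):
        """Run a calculation, record how long it took, and return its result."""
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        timings.append((name, elapsed))
        return result

    # Example usage with a placeholder calculation:
    timed("sum of squares", lambda: sum(i * i for i in range(1_000_000)))
    for name, elapsed in timings:
        print(f"{name}: {elapsed:.3f} s")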


For user testing we will be utilizing:

  • One on one interviews - to receive personalized feedback from users on corrections that can possibly be implemented into the code.

  • Satisfaction surveys - to have users rate their experience with our product. The feedback from the surveys will be useful to incorporate into the project as we continue working on it.

Initial Knowledge Required

Below are the major steps the team will take during testing.

Step 1:

  • Design our system requirements

  • Analyze requirements to ensure that there aren’t any bugs that generate errors.

  • Ensure that the design is comprehensive and interactive.


Step 2:

  • Define project testing, project goals, and deadlines.

  • Fix any bugs

  • Begin conducting methods for testing

  • Ensure the management of resources.


Step 3:

  • Create application

  • Ensure application is meeting all workflow requirements

  • Perform basic functionality testing where all buttons, clicks, and outputs will be tested to ensure the program doesn't crash

  • Determine expected results in order to compare them with the actual results

Step 4:

  • Execute tests and report any bugs/issues with the code

  • The team will work together to fix any reported issues


Step 5:

  • Conduct more trial runs

  • Ensure that previous errors have been fixed.

Step 6:

  • Release application

  • Review the application to make sure that it is stable

Validation of Results

Results will be validated by discussing the application results with system experts and by interacting with stakeholders throughout our simulation process. The team will work on determining how closely the application output aligns with the real system output. The program output and results will also be compared against comparable tools currently on the market.

Timeframe of testing steps

The timeframe we have outlined is to run code tests during the initial stages of development and to focus on user testing once a more developed product is achieved. Fixing bugs in the code is much simpler while the code is still being written, because the program builds calculations on top of one another; if one piece is broken, later parts suffer as well. Quality assurance testing will be constant throughout the entire programming phase and will only slow down once the desired platform and program are completed, at which point routine testing will still occur.

At that stage, the focus will shift to user testing, as a build of the program will be complete enough for decision makers to test. Compiling feedback and results from user testing will then be the top priority to further our understanding of how user friendly the platform is. As adjustments are made to the site, routine code testing will continue to ensure that changes do not break the website's underlying algorithms.