1. ZAP (40 points)
Client-side bypassing: For each of 5 test cases (4 points per test case):
(.25 points) A unique test case ID that maps to an ASVS control at Level 1 or Level 2. Provide the name/description of the ASVS control. Only one unique identifier is needed (as opposed to the example in the lecture slides), and the ASVS number should be part of that one unique identifier. Choose ASVS controls that relate to user input.
(1.5 points) Detailed and repeatable (the same steps could be done by anyone who reads the instructions) instructions for how to execute the test case
(.5 points) Anyone who runs the test case would do exactly the same thing ... exactly. Specific test input and steps are provided. Do not award points if generalized instructions give the intent of what should be done but different people might do slightly different things.
(1 point) Instructions demonstrate a solid understanding of a vulnerability involving user input that could be changed in ZAP.
(1 point) Expected results when running the test case. A passing test case would indicate a secure system.
(1 point) Actual results of running the test case. [Provide these in a separate document at the END of your submission, not in the main body of the submission.]
(.25 points) Indicate the CWE (number and name) for the vulnerability you are testing for. Hint: The CWE number is provided in the ASVS document.
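To illustrate the kind of weakness these test cases target, the sketch below (with a hypothetical "phone" field, not an actual OpenEMR form) contrasts tampered input with server-side re-validation. A secure system re-validates every field on the server, so a request modified in a ZAP breakpoint is still rejected even though the client-side JavaScript check was bypassed.

```python
import re

# Hypothetical server-side validator for a "phone" field. The client-side
# JavaScript may enforce the same pattern, but an attacker can edit the
# request in a ZAP breakpoint, so the server must re-check it.
PHONE_PATTERN = re.compile(r"^\d{3}-\d{3}-\d{4}$")

def validate_phone(value: str) -> bool:
    """Allow-list validation: accept only well-formed phone numbers."""
    return bool(PHONE_PATTERN.fullmatch(value))

# A value the client-side check would have blocked, replayed via ZAP:
tampered = "<script>alert(1)</script>"
assert not validate_phone(tampered)    # a secure server rejects it
assert validate_phone("919-555-0100")  # legitimate input still passes
```

A passing test case corresponds to the server rejecting the tampered value; a failing one means the application trusted the client-side check alone.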
Fuzzing: For each of the 5 test cases:
(5 points) Take a screenshot of the ZAP output from running the fuzzer on each of the 5 test cases.
(3 points for each of the 5 true positive vulnerabilities)
Report the ruleset (Injection, Buffer Overflow, XSS, SQL Injection) you chose for each vulnerability type, along with the results and repeatable instructions for how to run it.
What you believe the team would need to do to fix any vulnerabilities you find.
OR (15 points, or 3 points * the number of vulnerabilities fewer than 5 that you are justifying cannot be found) If you don't find any true positive vulnerabilities, provide strong reasoning for why that was the case, and explain well what mitigation strategy the team must have in place such that there are no (or few) vulnerabilities. You will need to justify why you should get the points -- particularly given that other teams may have no problem finding 5.
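For intuition about what the fuzzer is doing, here is a minimal sketch (hypothetical handlers and illustrative payloads, not ZAP's actual rulesets): send each payload from a ruleset and flag responses where the payload is reflected unencoded, a common signal of an XSS true positive.

```python
import html

# Illustrative payloads in the spirit of the XSS and SQL Injection rulesets.
PAYLOADS = ["<script>alert(1)</script>", "' OR '1'='1"]

def vulnerable_handler(user_input: str) -> str:
    # Hypothetical page that echoes input without encoding (a true positive).
    return f"<p>You searched for: {user_input}</p>"

def fixed_handler(user_input: str) -> str:
    # The kind of fix the team would need: encode output (and validate input).
    return f"<p>You searched for: {html.escape(user_input)}</p>"

def fuzz(handler) -> list[str]:
    """Return the payloads reflected verbatim in the response body."""
    return [p for p in PAYLOADS if p in handler(p)]

assert fuzz(vulnerable_handler) == PAYLOADS  # both payloads reflected
assert fuzz(fixed_handler) == []             # output encoding stops them
```

ZAP does the equivalent over live HTTP traffic, substituting each ruleset payload into the fuzzed request position and letting you inspect the responses.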
After adding the scores for all of the above test cases, deduct 5 points if the team did not compute the time they spent and defects found/hour.
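The metric the deduction refers to is a simple rate; a sketch with made-up effort numbers:

```python
# Hypothetical effort log: hours spent per activity and defects found.
hours_spent = {"client-side bypassing": 3.5, "fuzzing": 2.0}
defects_found = 5

total_hours = sum(hours_spent.values())        # 5.5 hours
defects_per_hour = defects_found / total_hours
print(f"{defects_found} defects / {total_hours} h = {defects_per_hour:.2f} defects/hour")
```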
2. Vulnerable Dependencies (25 points)
Task 1 (20 points)
Run two vulnerable dependency checker tools on OpenEMR
(10 points per tool) Report the results for each tool run on each module. The results should contain
(1 point) The number of total vulnerable dependencies for the module.
For each of the 10 vulnerable dependencies from the tool (or fewer if the tool reports fewer than 10):
a. (3 points) The list of CVEs for the module. Also indicate whether a security advisory is available for the vulnerability; start by doing a web search.
b. (3 points) Whether this module is a direct or transitive dependency of a module in OpenEMR.
c. (3 points) Whether a safer version of the module is now available.
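As a sketch of how the per-dependency items (a-c) might be pulled out of a tool's JSON output, the structure below is hypothetical, loosely modeled on npm-audit-style reports; real tool output differs, and the CVE IDs are examples only.

```python
# Hypothetical, simplified vulnerability report; real tool output differs.
report = {
    "vulnerabilities": {
        "lodash":   {"cves": ["CVE-2021-23337"], "isDirect": True,  "fixAvailable": True},
        "minimist": {"cves": ["CVE-2021-44906"], "isDirect": False, "fixAvailable": True},
    }
}

total = len(report["vulnerabilities"])
print(f"Total vulnerable dependencies: {total}")
for name, info in report["vulnerabilities"].items():
    kind = "direct" if info["isDirect"] else "transitive"                     # item (b)
    fix = "safer version available" if info["fixAvailable"] else "no fix yet"  # item (c)
    print(f"{name}: CVEs={info['cves']}, {kind} dependency, {fix}")           # item (a)
```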
Task 2 (5 points)
(2 points) Write a comparison report based upon what you found in Task 1.
(1 point) Explain why you think the results differ among the tools.
(2 points) What do you think are the strengths and weaknesses of each tool?
Hint: Your report may explain why a certain tool missed a vulnerable dependency that another tool detected.
3. Secret Detection (25 points)
(10 points per tool) Report the results for each tool with the found secrets.
(5 points) Determine and report the common secrets found by these two tools plus the Gitleaks run you did in class. Explain why you think the results differ among the tools and write a comparison report. A good thing to examine is whether any of the differing results are false positives and/or would be harder to detect.
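One way to structure the comparison (tool names and findings below are entirely hypothetical) is to treat each tool's findings as a set keyed by file and matched string, then take intersections and differences:

```python
# Hypothetical findings: (file, secret) pairs reported by each tool.
tool_a = {("config.php", "AKIAEXAMPLEKEY"),
          (".env", "db_password=hunter2"),
          ("tests/seed.sql", "pwd=test")}        # likely a false positive
tool_b = {("config.php", "AKIAEXAMPLEKEY"),
          ("notes.txt", "token=abc123")}
gitleaks = {("config.php", "AKIAEXAMPLEKEY"),
            (".env", "db_password=hunter2")}

common = tool_a & tool_b & gitleaks       # secrets all three tools agree on
only_a = tool_a - (tool_b | gitleaks)     # candidates for false positives
only_b = tool_b - (tool_a | gitleaks)     # or rules the other tools lack

print("Common to all three:", common)
print("Unique to tool A:", only_a)
print("Unique to tool B:", only_b)
```

The "unique to one tool" sets are where the comparison report earns its points: each entry is either a false positive or a detection rule the other tools are missing.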
You will be awarded up to 10 points for performing the peer evaluation, based on:
The thoroughness and fairness of your feedback to the other team based upon using the rubric.
A demonstration of understanding of the expectations (as provided in the assignment, class material, rubric).
Generally not missing issues that the TAs pick up on.