This page is classified as INTERNAL.
NIST 800-53 (r4) Control:
The information system checks the validity of [Assignment: organization-defined information inputs].
NIST 800-53 (r4) Supplemental Guidance:
Checking the valid syntax and semantics of information system inputs (e.g., character set, length, numerical range, and acceptable values) verifies that inputs match specified definitions for format and content. Software applications typically follow well-defined protocols that use structured messages (i.e., commands or queries) to communicate between software modules or system components. Structured messages can contain raw or unstructured data interspersed with metadata or control information. If software applications use attacker-supplied inputs to construct structured messages without properly encoding such messages, then the attacker could insert malicious commands or special characters that can cause the data to be interpreted as control information or metadata. Consequently, the module or component that receives the tainted output will perform the wrong operations or otherwise interpret the data incorrectly. Prescreening inputs prior to passing to interpreters prevents the content from being unintentionally interpreted as commands. Input validation helps to ensure accurate and correct inputs and prevent attacks such as cross-site scripting and a variety of injection attacks.
NIST 800-53 (r5) Discussion:
Checking the valid syntax and semantics of system inputs—including character set, length, numerical range, and acceptable values—verifies that inputs match specified definitions for format and content. For example, if the organization specifies that numerical values between 1-100 are the only acceptable inputs for a field in a given application, inputs of 387, abc, or %K% are invalid inputs and are not accepted as input to the system. Valid inputs are likely to vary from field to field within a software application. Applications typically follow well-defined protocols that use structured messages (i.e., commands or queries) to communicate between software modules or system components. Structured messages can contain raw or unstructured data interspersed with metadata or control information. If software applications use attacker-supplied inputs to construct structured messages without properly encoding such messages, then the attacker could insert malicious commands or special characters that can cause the data to be interpreted as control information or metadata. Consequently, the module or component that receives the corrupted output will perform the wrong operations or otherwise interpret the data incorrectly. Prescreening inputs prior to passing them to interpreters prevents the content from being unintentionally interpreted as commands. Input validation ensures accurate and correct inputs and prevents attacks such as cross-site scripting and a variety of injection attacks.
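The 1-100 field example in the discussion above can be sketched as a simple allowlist validator that performs both the syntax check (character set, length) and the semantic check (numerical range). This is an illustrative sketch only; the function name, regex, and bounds are assumptions, not part of the control text.

```python
import re

def validate_numeric_field(raw: str, low: int = 1, high: int = 100) -> int:
    """Validate that a raw input string is an integer within [low, high].

    Raises ValueError for anything failing the syntax check (character
    set, length) or the semantic check (numerical range).
    """
    # Syntax check: digits only, bounded length (rejects "abc" and "%K%").
    if not re.fullmatch(r"\d{1,3}", raw):
        raise ValueError(f"invalid syntax: {raw!r}")
    value = int(raw)
    # Semantic check: the value must fall inside the defined range
    # (rejects 387 even though it is syntactically a number).
    if not low <= value <= high:
        raise ValueError(f"out of range: {value}")
    return value
```

With this approach, the invalid inputs named in the discussion (387, abc, %K%) all raise `ValueError`, while an in-range value such as "42" is accepted and returned as an integer.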
38North Guidance:
Meets Minimum Requirement:
Implement static and dynamic code analysis tools (e.g., Fortify, SonarQube, Veracode) together with review and remediation processes to ensure information input validation techniques have been implemented correctly.
Implement web application scanning (e.g., WebInspect) on all new releases to check user input validation.
Ensure that the source code analysis and web vulnerability scans implemented in accordance with RA-5 and SA-11 include thorough testing for input validation attacks (this may also be fulfilled via a penetration test under CA-8).
Best Practice:
OWASP cheat sheets: https://cheatsheetseries.owasp.org/cheatsheets/Input_Validation_Cheat_Sheet.html
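Consistent with the OWASP guidance, the injection scenario described in the NIST discussion can be avoided by keeping attacker-supplied data out of the control portion of a structured message, e.g., via parameterized queries. A minimal sketch using Python's standard sqlite3 module follows; the table, schema, and data are hypothetical:

```python
import sqlite3

# Hypothetical in-memory database and schema, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(conn: sqlite3.Connection, name: str) -> list:
    # Parameterized query: the driver passes `name` strictly as data,
    # so special characters in attacker-supplied input cannot be
    # interpreted as SQL control information.
    cur = conn.execute("SELECT name, role FROM users WHERE name = ?", (name,))
    return cur.fetchall()

# A classic injection payload is treated as a literal (non-matching)
# name rather than executed as SQL.
print(find_user(conn, "' OR '1'='1"))  # → []
```

Had the query been built by string concatenation instead, the same payload would rewrite the WHERE clause and return every row, which is exactly the "data interpreted as control information" failure the control is meant to prevent.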
Unofficial FedRAMP Guidance: None
Assessment Evidence:
Same as RA-5:
Web applications (external and internal) and source code vulnerability scan results/reports.
Evidence of review and analysis of vulnerability scan results.
Tickets (or similar evidence) created as a result of scan findings to track vulnerability remediation to closure for:
A high finding (30 day remediation)
A moderate finding (90 day remediation)
A low finding (180 day remediation)
Documentation of false positives and operational requirements with business justification for web applications and source code scan results
Same as SA-11:
System generated list of application changes for the past 12 months (software development)
Sample of development project tickets or other evidence including the following details:
1. security impact analysis
2. testing
3. technical/peer review
4. management approval
5. implementation/deployment
6. static code analysis
7. dynamic code analysis/vulnerability scans
Source code scan report on installed release
Evidence that results of source code scan were analyzed, documented, and remediated (as necessary)
CSP Implementation Tips:
Amazon Web Services (AWS): TBD
Microsoft Azure: TBD
Google Cloud Platform: TBD