Software can contain many kinds of errors, of varying depth and breadth, ranging from:
- Functional errors
- Performance errors
- Deadlock
- Race conditions (see the sketch after this list)
- Boundary errors
- Buffer overflow
- Integration errors
- Usability errors
- Robustness errors
- Load errors
- Design defects
- Versioning and configuration errors
- Hardware errors
- State management errors
- Metadata errors
- Error‐handling errors
- User interface errors
- API usage errors
- and more
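As one concrete illustration of the list above, here is a minimal race-condition sketch in Python (all names are hypothetical): two threads perform an unsynchronized read-modify-write on a shared counter, so updates can be lost.

```python
import threading

counter = 0  # shared mutable state, updated without any lock

def increment_many(n: int) -> None:
    global counter
    for _ in range(n):
        counter += 1  # read-modify-write; not atomic across threads

threads = [threading.Thread(target=increment_many, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 200000; depending on scheduling and the Python version,
# interleaved updates can be lost and the printed value may be smaller.
print(counter)
```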
Hence, all code should be designed to be testable with a variety of tools and approaches, which motivates development processes such as test-driven development.
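A minimal test-driven development sketch (hypothetical `slugify` function, Python's built-in unittest): the test is written first and fails, then drives just enough implementation to pass.

```python
import unittest

# Step 1: write the test first; it fails until slugify() below is implemented.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_replaces_spaces(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Testable Code  "), "testable-code")

# Step 2: write just enough code to make the tests pass.
def slugify(text: str) -> str:
    return "-".join(text.strip().lower().split())

if __name__ == "__main__":
    unittest.main()
```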
- Compiler - catches type errors, malformed statements, and other errors in translating code to a program.
- Static Analysis - catches known classes of code problems by analyzing the code without running it, drawing on accumulated insights and data-driven rules.
- Manual/Formal Verification - manual testing and verification of outcomes; certification is given for compliance.
- Automated Unit Testing - automated tests run either as a full regression suite or cherry-picked to specific cases (see the sketch after this list).
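A minimal automated unit-testing sketch with Python's unittest (the file name `test_division.py` and the function under test are hypothetical); the same suite can be run in full as a regression pass or cherry-picked down to a single test.

```python
# test_division.py (hypothetical file name)
import unittest

def safe_divide(a: float, b: float) -> float:
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class TestSafeDivide(unittest.TestCase):
    def test_normal_division(self):
        self.assertEqual(safe_divide(10, 4), 2.5)

    def test_zero_divisor_raises(self):
        with self.assertRaises(ValueError):
            safe_divide(1, 0)

if __name__ == "__main__":
    unittest.main()
```

Run the whole suite regressively with `python -m unittest discover`, or cherry-pick one case with `python -m unittest test_division.TestSafeDivide.test_zero_divisor_raises`.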
- Who does which tests?
- Developer? Does what? Functional testing, Boundary Value Analysis? (see the boundary-value sketch after this list)
- Tester? Does what? Black-box/white-box testing? Penetration testing?
- Quality Assurance Team? Monitoring and management? Managing bugs?
- Customer? Reporting bugs?
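A boundary value analysis sketch for a hypothetical `grade(score)` function that accepts scores from 0 to 100 with a pass mark of 50: the tests target values at and just around each boundary, where off-by-one defects typically hide.

```python
import unittest

def grade(score: int) -> str:
    # Hypothetical rule: 0-100 is valid, 50 and above passes.
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

class TestGradeBoundaries(unittest.TestCase):
    def test_lower_boundary(self):
        self.assertEqual(grade(0), "fail")        # minimum valid value
        with self.assertRaises(ValueError):
            grade(-1)                              # just below the valid range

    def test_pass_mark_boundary(self):
        self.assertEqual(grade(49), "fail")        # just below the pass mark
        self.assertEqual(grade(50), "pass")        # exactly the pass mark

    def test_upper_boundary(self):
        self.assertEqual(grade(100), "pass")       # maximum valid value
        with self.assertRaises(ValueError):
            grade(101)                             # just above the valid range

if __name__ == "__main__":
    unittest.main()
```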
- When to test?
- Before development? Smoke test? (see the smoke-test sketch after this list)
- During development? Unit test? Regression testing? Performance testing?
- After development? End-to-End (E2E) testing? Performance testing?
- Before shipping?
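A smoke-test sketch: a few fast checks that the system starts and responds at all, run before deeper functional or regression testing. The `create_app()` factory is a hypothetical stand-in defined inline so the sketch runs as-is; in practice you would import your real application instead.

```python
import unittest

# Stand-in for the real application factory; in practice you would import it,
# e.g. `from myapp import create_app` (hypothetical module).
def create_app():
    return {"status": "ok", "version": "1.0.0"}

class SmokeTest(unittest.TestCase):
    """Fast sanity checks run before deeper testing begins."""

    def test_app_starts(self):
        self.assertIsNotNone(create_app())

    def test_health_status_is_ok(self):
        self.assertEqual(create_app()["status"], "ok")

if __name__ == "__main__":
    unittest.main()
```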
- How far should we test?
- Define acceptable coverage?
- Critical path tested?
- Method coverage?
- Statement coverage?
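A statement-coverage sketch for a hypothetical `classify` function: the first test alone executes only part of the function (partial statement coverage), and the second test is needed to cover the remaining statements. In practice a tool such as coverage.py reports this automatically, e.g. `coverage run -m unittest` followed by `coverage report` (assuming the coverage package is installed).

```python
import unittest

def classify(n: int) -> str:
    if n < 0:
        return "negative"   # executed only when a test passes a negative value
    if n == 0:
        return "zero"       # executed only when a test passes zero
    return "positive"       # executed only when a test passes a positive value

class TestClassify(unittest.TestCase):
    def test_positive(self):
        # Alone, this runs both `if` checks and the final return;
        # the "negative" and "zero" returns remain uncovered statements.
        self.assertEqual(classify(5), "positive")

    def test_negative_and_zero(self):
        # Adding these brings statement coverage of classify() to 100%.
        self.assertEqual(classify(-3), "negative")
        self.assertEqual(classify(0), "zero")

if __name__ == "__main__":
    unittest.main()
```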
- When to stop testing?
- Are the tests delivering the intended overall value?
- Do the tests give sufficient confidence in the quality assurance / guarantees?