Building an Automated Validation Test Suite
To be properly useful, an automated validation test suite must be both fast and reliable. A suite that fails to run, or runs without catching errors, negates the value of automating validation in the first place. If team members must follow behind the suite and verify that every test actually ran, the automation is unfinished. Furthermore, the test suite should raise alerts when errors do occur. With a sufficiently reliable test suite, the team can treat the absence of a notification as evidence that the software ran without errors, rather than as a possible sign that the test suite itself has a problem.
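As a minimal sketch of how this alerting discipline might look, the wrapper below runs the test suite and notifies the team only when the suite fails or fails to run, so silence genuinely means a clean run. The webhook URL is a hypothetical placeholder, and the choice of `pytest` as the test command is an assumption.

```python
import json
import subprocess
import urllib.request

# Hypothetical incident-channel webhook; substitute the team's real alerting endpoint.
ALERT_WEBHOOK = "https://chat.example.com/hooks/test-failures"


def send_alert(message: str) -> None:
    payload = json.dumps({"text": message}).encode("utf-8")
    req = urllib.request.Request(ALERT_WEBHOOK, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)


def run_suite_with_alerts(command: list[str]) -> int:
    """Run the test suite; alert the team if it fails or cannot run at all."""
    try:
        result = subprocess.run(command, capture_output=True, text=True, timeout=1800)
    except (OSError, subprocess.TimeoutExpired) as exc:
        # The suite itself failed to run -- treat this as loudly as a test failure,
        # otherwise silence would be ambiguous.
        send_alert(f"Test suite did not complete: {exc}")
        return 1

    if result.returncode != 0:
        send_alert(f"Test suite failed (exit {result.returncode}):\n{result.stdout[-2000:]}")
    # No alert on success: absence of notification means a clean run.
    return result.returncode


if __name__ == "__main__":
    raise SystemExit(run_suite_with_alerts(["pytest", "-q"]))
```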
Beyond being reliable, the test suite must also be fast. A set of testing procedures that takes many hours to run slows down the entire DevOps pipeline, and inhibiting the pipeline runs directly counter to the goals of the First Way of DevOps. Instead, the system must be able to determine quickly whether code changes were successful. Faster turnaround for automated validation allows more responsive bug fixes, and faster response to bugs yields a more optimised total system and, in turn, a better product.
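One common way to cut wall-clock time is to run independent portions of the suite in parallel. The sketch below assumes the suite can be partitioned into groups that do not share mutable state; the directory paths are hypothetical examples of such a split.

```python
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical partition of the suite into independent groups; any split
# whose tests do not share mutable state would work.
TEST_GROUPS = [
    ["pytest", "-q", "tests/unit"],
    ["pytest", "-q", "tests/api"],
    ["pytest", "-q", "tests/integration"],
]


def run_group(command: list[str]) -> int:
    return subprocess.run(command).returncode


start = time.monotonic()
with ThreadPoolExecutor(max_workers=len(TEST_GROUPS)) as pool:
    exit_codes = list(pool.map(run_group, TEST_GROUPS))
print(f"Suite finished in {time.monotonic() - start:.1f}s")
# A non-zero exit code from any group fails the whole run.
raise SystemExit(max(exit_codes))
```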
Non-Functional Requirements Testing
The test suite must account for both functional and non-functional requirements. Functional requirements describe what the product should do: its behaviours and intended operations. Non-functional requirements describe the context in which the product operates and measures of its responsiveness, such as maximum concurrent users, latency, and other properties that might not be explicitly stated in the specification or request.
Teams incorporate non-functional requirements into the test suite through a number of methods. One common method is simulating multiple users for load testing by running concurrent commands and tests against the same instance of the product. If the product has trouble supporting multiple users, simultaneous tests should reveal the performance faults that would appear during peak use. Similarly, teams can use logging to measure latency and the time required to execute operations. If these operations have expected time thresholds, the logs indicate whether the product finishes its tasks within an acceptable time frame or whether the team should optimise the software, as in the sketch below.
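As a rough illustration of both techniques, the pytest-style check below fires simulated users at a single product instance at the same time, logs the observed latencies, and fails if the worst case misses its threshold. The endpoint URL, user count, and latency limit are all assumed placeholders; real values would come from the product's non-functional specification.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Assumed non-functional targets; real values come from the specification.
TARGET_URL = "http://localhost:8000/health"   # hypothetical endpoint under test
CONCURRENT_USERS = 50
MAX_LATENCY_SECONDS = 0.5


def simulated_user(_: int) -> float:
    """One simulated user: issue a request and return its latency in seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(TARGET_URL, timeout=5) as response:
        response.read()
    return time.monotonic() - start


def test_latency_under_peak_load():
    # Run all simulated users concurrently against the same instance.
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        latencies = list(pool.map(simulated_user, range(CONCURRENT_USERS)))

    worst = max(latencies)
    # Log the measurements so the team can track responsiveness over time.
    print(f"peak-load latency: worst={worst:.3f}s "
          f"mean={sum(latencies) / len(latencies):.3f}s")
    # Fail the suite if the product misses its responsiveness threshold.
    assert worst <= MAX_LATENCY_SECONDS, f"{worst:.3f}s exceeds threshold"
```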
Without testing a product's non-functional requirements, teams might create software that accomplishes what it is marketed to do but still fails to satisfy users. Incorporating both functional and non-functional requirements into the test suite allows teams to consistently produce software that does what it should, in a way that users expect.