Last month, I suggested mechanisms and tools to help testers acquire the data they need for testing. The process is straightforward, provided testers and test data engineers follow specific, definable steps. One of those steps is making precise, complete data requests, which in turn requires well-defined data requirements.
But what happens if the data requirements aren’t producing the right data? Data requirements (and the requests and data sets that derive from them) should enable complete test coverage for the component or function being tested. That’s not always an easy task, especially when an application has been tweaked over time with quick fixes, potentially by different developers responding to “urgent” requests from myriad departments. In those circumstances, writing requirements that achieve complete test coverage can be nearly impossible.
If tests aren’t identifying enough of the defects that show up in production, even after following the suggestions I offered last month, you may not be requesting the right data in the first place. As a result, you and your team may come under fire for producing poor-quality test results when, in reality, the problem may not be your tests at all. The trouble may be rooted in more fundamental issues: what I call the building blocks of good test data management.
Some of these issues lie outside the testing function, and fixing them may not be within the control of a tester or even the testing lead. However, demonstrating to management that these problems are producing poor test results is absolutely a function of quality assurance. It’s also a good way to deflect misplaced blame from the testing team.
Evaluate the Infrastructure
Business requirements must exist before testers can identify appropriate data requirements and, in turn, enable test data engineers to gather the right data. Data models must accurately define the internal schema of the database. Data dictionaries must appropriately describe each kind of data and its “rules”: format, structure, and use.
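To make that concrete, a data dictionary entry can be captured in machine-readable form, so its rules are checkable rather than merely descriptive. Here’s a minimal sketch in Python; the field names, formats, and rules are hypothetical examples, not drawn from any particular application:

```python
import re
from dataclasses import dataclass

@dataclass
class FieldRule:
    name: str         # column name in the schema
    dtype: str        # logical type ("string", "date", ...)
    pattern: str      # allowed format, as a regular expression
    nullable: bool    # whether empty values are permitted
    description: str  # intended business use

# Hypothetical dictionary entries, for illustration only.
DATA_DICTIONARY = [
    FieldRule("customer_id", "string", r"^C\d{8}$", False,
              "Unique customer identifier"),
    FieldRule("order_date", "date", r"^\d{4}-\d{2}-\d{2}$", False,
              "Date the order was placed (ISO 8601)"),
    FieldRule("discount_code", "string", r"^[A-Z]{4}\d{2}$", True,
              "Optional promotion code"),
]

def violations(record: dict) -> list[str]:
    """Return a description of every documented rule a record breaks."""
    problems = []
    for rule in DATA_DICTIONARY:
        value = record.get(rule.name)
        if value in (None, ""):
            if not rule.nullable:
                problems.append(f"{rule.name}: required value missing")
        elif not re.fullmatch(rule.pattern, str(value)):
            problems.append(f"{rule.name}: {value!r} breaks format {rule.pattern}")
    return problems
```

A test data engineer can run a requested data set through violations() before handing it over, catching mismatches between the request and the documented rules early.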
In some situations, data models and dictionaries don’t reflect the current functionality of the application. Both are living documents, but they aren’t always updated as frequently as changes occur. Over time, code changes can put data models and data dictionaries out of sync with the functionality they describe. Some companies, especially those that use “codeless” tools to develop software, don’t have data models and dictionaries at all. Without up-to-date data models and data dictionaries, it’s very difficult to identify meaningful data sets.
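One way to surface that kind of drift is to run the documented rules against a sample of current, production-like data and flag fields whose failure rate has climbed; a persistently high rate usually means the dictionary, not the data, is stale. Continuing the sketch above (the sample records and the five percent threshold are assumptions):

```python
from collections import Counter

def drift_report(records: list[dict], threshold: float = 0.05) -> dict[str, float]:
    """Flag fields whose documented rules fail more often than `threshold`."""
    failures = Counter()
    for record in records:
        for problem in violations(record):  # from the sketch above
            field = problem.split(":", 1)[0]
            failures[field] += 1
    total = len(records) or 1  # avoid dividing by zero on an empty sample
    return {field: count / total
            for field, count in failures.items()
            if count / total > threshold}
```

A report showing, say, half of all discount_code values breaking the documented format is strong evidence that the dictionary entry, not the data, needs updating.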
Update Your Test Cases
As with data models and data dictionaries, test cases must evolve as the application changes, but this doesn’t always happen. Is it a pain in the butt to develop new test cases for functionality that has been added to the application? Certainly. Is it vital to achieving complete test coverage? Absolutely.
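One way to lower that pain is to make test cases data-driven, so covering new behavior means appending a row rather than authoring an entire new test. A minimal pytest sketch; the apply_discount() function and its cases are hypothetical stand-ins for a real feature:

```python
import pytest

def apply_discount(total: float, code: str | None) -> float:
    """Stand-in for the real function under test (hypothetical)."""
    if code is None:
        return total
    if code.startswith("SAVE"):
        return total * (1 - int(code[4:]) / 100)
    return total

# Test cases live in data, not code: when a quick fix adds behavior,
# the suite grows by one row. These cases are illustrative only.
DISCOUNT_CASES = [
    # (order_total, code,     expected_total)
    (100.00,        None,     100.00),  # original behavior: no code, no discount
    (100.00,        "SAVE10",  90.00),  # percentage codes, added in a later fix
    (100.00,        "BOGUS1", 100.00),  # unknown codes are ignored
]

@pytest.mark.parametrize("total, code, expected", DISCOUNT_CASES)
def test_apply_discount(total, code, expected):
    assert apply_discount(total, code) == pytest.approx(expected)
```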
If resource constraints prevent a testing team from developing new or sufficient test cases, it’s time for the test lead to talk to management. Otherwise, someone is being lazy. Optimally, testing teams should work closely enough with developers and business analysts to know not only what improvements have been made but also what is being planned. Tools exist that can keep the information pipeline flowing between development and testing.
A Brighter Day
Resolving these issues will likely require executive management acceptance and support, because it will definitely involve additional resource allocation. It will also require a mindset shift on the part of development and testing personnel.
Improvement won’t happen overnight; in fact, companies that tackle these endemic problems in bite-sized chunks end up with the best results. The work may be hard even then, but the payoff will be better-quality software produced more quickly. In the end, the time saved on defect elimination will more than repay the firm and its employees. The greater user adoption and satisfaction that follow will be a bonus.