Quality Data for Quality Testing

Technology can be an accelerator, but, if not properly tested, it can bring everything to a halt…

Software is often intended as an accelerator, making lives easier while driving up efficiency and efficacy. However, the more organizations and individuals come to depend on software, the greater the impact when it fails. This has been vividly illustrated by the aviation industry, where faulty software has literally brought business to a standstill.

We noted back in December how a dormant line of code created chaos when it caused a system failure at the main UK air traffic control centre in Swanwick. The busiest British airports were brought to a halt, with flights unable to load passengers. On the Friday it occurred, 70 flights were cancelled at Heathrow alone. The failure persisted overnight, with a further 38 flights cancelled on Saturday morning.[1]

Pilots can therefore be forgiven for feeling a touch of déjà vu last week, when a glitch in an iPad app, FliteDeck, caused American Airlines to ground dozens of planes.

The airline adopted the app, which delivers charts and information to pilots in flight, to spare them from carrying more than 16kg of paperwork on board. They estimated that this would save a massive $1.2 million in fuel a year, in addition to reducing preparation time and offering real-time updates.[2]

Time, cost and ease were therefore the motives: three common reasons for adopting technology. However, it appears that quality was overlooked by the developers, undermining any possible time and cost savings.

In particular, it appears that one possible data scenario had not been tested for. When a duplicate chart for Reagan National Airport occurred in American’s chart database, spokesman Mike Pound explained, “The app could not reconcile the duplicate, causing it to shut down.”[3]
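To picture the failure mode, consider a minimal, purely hypothetical sketch (in Python, and in no way the actual FliteDeck code) of a loader that assumes every chart record is unique. A single duplicate row hits an error that nobody wrote a recovery path for, and the app shuts down:

```python
# Hypothetical sketch of the failure mode described above: a loader that
# assumes each (airport, chart) key is unique. The names and structure are
# invented for illustration; this is not the actual FliteDeck code.

def load_charts(rows):
    """Index chart records by (airport_code, chart_name)."""
    charts = {}
    for row in rows:
        key = (row["airport"], row["chart"])
        if key in charts:
            # The "cannot happen" case: production data never contained
            # a duplicate, so no recovery path was ever written.
            raise ValueError(f"duplicate chart entry: {key}")
        charts[key] = row
    return charts

rows = [
    {"airport": "DCA", "chart": "approach-19", "rev": 4},
    {"airport": "DCA", "chart": "approach-19", "rev": 4},  # the duplicate
]
load_charts(rows)  # the ValueError propagates unhandled; the app shuts down
```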

Even if such a duplicate had never occurred in production before, it should have been tested against. Complete testing requires that all possible scenarios, not just past ones, are covered. Proper QA therefore depends on test cases that cover the maximum possible functionality, and on the data needed to push these paths through the software.
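By way of illustration, a negative-path test for the hypothetical loader sketched above might look like the following (again a sketch, written as a pytest-style test function): it deliberately feeds in the duplicate that “cannot happen” and asserts that the software degrades gracefully instead of crashing.

```python
# A negative-path test for the duplicate-chart scenario. Both the tolerant
# loader and the test are hypothetical sketches, not real FliteDeck code.

def load_charts_safely(rows):
    """Tolerant variant of the loader: keep the last revision seen
    rather than raising on a duplicate key."""
    charts = {}
    for row in rows:
        charts[(row["airport"], row["chart"])] = row
    return charts

def test_duplicate_chart_does_not_crash():
    rows = [
        {"airport": "DCA", "chart": "approach-19", "rev": 4},
        {"airport": "DCA", "chart": "approach-19", "rev": 5},  # duplicate key
    ]
    charts = load_charts_safely(rows)  # must not raise
    assert charts[("DCA", "approach-19")]["rev"] == 5
```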

As will be discussed in the upcoming Grid-Tools webinar, Quality Data for Quality Testing!, production data does not provide this. Though high in volume, it tends to be very samey, drawn from “business as usual” transactions that have already occurred. It therefore does not cover future scenarios, and by its very nature it has been sanitized to exclude bad data and negative paths.

So long as testing relies on production data, which typically provides just 10-20% coverage, the negative testing that should make up around 80% of the effort will be neglected. In practice, the only efficient way to produce all the data needed for effective testing is to generate it synthetically and automatically.

Combined with data profiling, synthetically generated ‘mock’ or ‘dummy’ data can be produced from a model of production data, guaranteeing that it is realistic and fit for testing. The difference is that, starting from this initial data model, data can be created and tailored to test cases so that it covers 100% of possible scenarios, including outliers, unexpected results and negative paths. Software can therefore be fully tested, to make sure that it really does save time and money, instead of causing costly failures that bring business to a standstill.
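As a rough sketch of the idea (illustrative only: the profile, field names and injection choices here are invented, and dedicated tooling works from far richer models), generation starts from a profile of production data, produces realistic records, and then deliberately injects the duplicates, nulls and outliers that production data never supplies:

```python
import random

# Illustrative sketch of model-based synthetic test data. A simple profile
# of production data drives generation of realistic rows, after which edge
# cases are injected deliberately. All names and values here are invented.

PROFILE = {
    "airports": ["DCA", "LHR", "JFK"],  # values observed in production
    "rev_range": (1, 9),                # observed revision numbers
}

def synthetic_chart(rng):
    """Generate one realistic-looking chart record from the profile."""
    return {
        "airport": rng.choice(PROFILE["airports"]),
        "chart": "approach-" + str(rng.randint(1, 36)),
        "rev": rng.randint(*PROFILE["rev_range"]),
    }

def generate_test_data(n, seed=42):
    rng = random.Random(seed)  # seeded, so test runs are repeatable
    rows = [synthetic_chart(rng) for _ in range(n)]
    # Negative paths that production data will never supply:
    rows.append(dict(rows[0]))                              # exact duplicate
    rows.append({"airport": None, "chart": "", "rev": -1})  # bad data/outlier
    return rows

print(generate_test_data(5))
```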

“Quality Data for Quality Testing” will take place on the 14th of May 2015, at 12:00 BST and 12:00 EDT. You can sign up below:

Register for 12:00-12:45 BST

Register for 12:00-12:45 EDT

 

[1] http://www.theguardian.com/uk-news/2014/dec/12/heathrow-london-air-space-closed-computer-failure

[2] http://www.bbc.co.uk/news/technology-32513066

[3] http://www.bbc.co.uk/news/technology-32513066
