Repeatable and Trustworthy

These are key tenets of any enterprise testing organization, yet too often testing teams are at the mercy of the hardware, operating system, and middleware configuration of their system-under-test environments. How can testing professionals really do their job without a clean, well-controlled laboratory? They are mixing chemicals in dirty Erlenmeyer flasks. BOOM!

Recently we discovered a production incident that caused catastrophic failures in transaction communications. The QA team never caught the miscommunication because the defect cannot currently be reproduced in the test environment; the configurations are simply too different. The complexity of heterogeneous, EAI-enabled solutions makes this situation very common.

Incidents and issues like these can then recur due to a lack of automation and controls in the build and deployment pipeline. I like the analogy of a desktop PC: how fast and cleanly does your new laptop work on the first day you use it? A few months later it’s out of whack, and re-installing the OS seems to fix the problem. Well, in the enterprise, new ideas and tools such as Puppet and Chef are making it possible for every deployment and install to be that clean, every time.
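The core idea behind these tools is declarative, idempotent configuration: you describe the desired end state once, and every run converges the machine toward it. Here is a minimal sketch of that idea in plain Python, not Puppet or Chef syntax; the paths and settings are purely illustrative.

```python
# A minimal sketch of declarative, idempotent configuration in plain Python.
# This is NOT Puppet or Chef syntax; paths and values are illustrative only.
import os
import stat

def ensure_directory(path, mode=0o750):
    """Create the directory if missing and correct its permissions.
    Running this repeatedly always converges on the same state."""
    if not os.path.isdir(path):
        os.makedirs(path, mode)
    if stat.S_IMODE(os.stat(path).st_mode) != mode:
        os.chmod(path, mode)

def ensure_file_content(path, content):
    """Rewrite the file only when it differs from the desired content."""
    existing = None
    if os.path.exists(path):
        with open(path) as f:
            existing = f.read()
    if existing != content:
        with open(path, "w") as f:
            f.write(content)

# Desired state, declared once; re-running the script never makes things worse.
ensure_directory("/var/log/myapp", mode=0o750)
ensure_file_content("/etc/myapp/app.conf", "max_connections=200\nlog_level=INFO\n")
```

Because each step checks the current state before acting, the same script can be run on a fresh machine or a drifted one and land in the identical place, which is exactly the clean-install-every-time property described above.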

At another organization, there was a major outage tied to a build deployment that had been signed off by the testing team. The test team had not seen any of the errors that occurred in production, so the QA team became deeply engaged in the root cause analysis; they needed to understand what was missing from their final validation processes. The team discovered that the test environment hardware was configured differently at the network load-balancing and physical network tiers. In many places there was no load balancing or firewall at all in the environment. The file systems were laid out differently. The amount of log storage, the permissions, and even the way data were stored differed between production and the validation solution. I claim these are ‘functional’ defects in the environments that manifest themselves during operations.
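One way a test team can catch this class of defect before sign-off is to compare recorded environment "facts" between production and the validation environment and flag any drift. The sketch below is a hypothetical illustration in Python; the keys, values, and environments shown are invented for the example, not taken from the incident above.

```python
# Hypothetical sketch: flag configuration drift between production and the
# test environment before sign-off. All keys and values are invented examples.

production = {
    "load_balancer": "enabled",
    "firewall": "enabled",
    "log_storage_gb": 500,
    "data_volume_layout": "/data on dedicated volume",
    "app_dir_permissions": "750",
}

test_environment = {
    "load_balancer": "none",
    "firewall": "none",
    "log_storage_gb": 50,
    "data_volume_layout": "/data on root volume",
    "app_dir_permissions": "777",
}

def report_drift(reference, candidate):
    """Return every setting where the candidate environment differs from the reference."""
    drift = []
    for key, expected in reference.items():
        actual = candidate.get(key, "<missing>")
        if actual != expected:
            drift.append((key, expected, actual))
    return drift

for key, expected, actual in report_drift(production, test_environment):
    print(f"DRIFT {key}: production={expected!r}, test={actual!r}")
```

Wired into the deployment pipeline, a check like this turns the ‘functional’ environment defects described above into failures that surface long before a production outage.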

To help these organizations, I did some research on the latest trends in Continuous Integration. The CI pattern applies automation to improve the engineering of software development. I also found a great book called Continuous Delivery; its ideas apply the same automation to the equally important HW/OS/MW components and to application deployment.

In the future we will see even more automation at the network layer, with software-defined networking and software-defined servers such as HP Moonshot. For now, let’s see what we can make of this change in mindset at the mainstream IT level. I posed the following questions to a few of my fellow travelers to gauge their feedback:

1) Describe the deployment pipeline in your current engagement.

2) How would you benefit from automated feedback processes?

3) How does this concept change the role of performance testing?

4) Why don’t you keep your work in a version control system at the customer site?

5) What are some tools and checks for validating configuration?


I will post their feedback soon.


David Guimbellot, Area Vice President of Test Data Management & Continuous Delivery at Orasi Software

By Jim Azar

James (Jim) Azar, Orasi Senior VP and Chief Technology Officer. A 29-year veteran of the software and services industry, Jim Azar is charged with oversight of service delivery, technology evaluation, and strategic planning at Orasi. Among his many professional credits, Azar was a co-founder of Technology Builders, Inc. (TBI), where he built the original CaliberRM requirements management tool. Azar earned a B.S. in Computer Science from the University of Alabama, College of Engineering, where he was named to the Deans’ Leadership Board. He furthered his education with advanced and continuing studies at Stanford University, Carnegie Mellon, and Auburn University at Montgomery. Azar has been published in both IEEE and ACM.
