Performance Assessment – Collaboration

Learn to Collaborate

Communication breakdown is not unique to IT. Somehow, most IT organizations create towers. Maybe this happens to reduce the number of status meetings or to allow for a tree-shaped management chain. When leadership creates these separate teams, the intention is not to prevent individual contributors from collaborating and working efficiently, but that’s what tends to happen. In my opinion, the waterfall process came under fire because of these risks, and agile was born.

Iterative Collaboration

Scrum teams are tight, cross-discipline groups that achieve a smaller, demonstrable goal in a finite time. This works well when the performance team can enumerate the requirements and bring the right tools and resources into the iteration. Too often, however, formerly centralized performance teams have become too specialized, or too rigid in their processes, to fit inside a sprint.

Let’s figure out why performance/capacity testing is often a poor fit in the agile world. First, organizations become accustomed to a waterfall release schedule in which the performance sign-off comes after the functional validation phases. While this is somewhat easy to justify, it is a precarious position. Seriously, how often is functional testing already squeezed at the end? When you are the last thing standing in front of a release, you can be the hero or the goat.

Do you have success criteria that add value at every phase or iteration?
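
One way to make that question concrete is to express each criterion as a small check that runs in every iteration instead of at the end. Here is a minimal sketch in Python; the /checkout endpoint and the 500 ms p95 budget are hypothetical assumptions, and the real values belong to your team’s agreed success criteria.

```python
# Minimal per-iteration performance gate. The endpoint and the latency
# budget below are illustrative assumptions, not recommendations.
import statistics
import time
import urllib.request

ENDPOINT = "http://localhost:8080/checkout"  # hypothetical service under test
P95_BUDGET_MS = 500                          # example budget from success criteria
SAMPLES = 50

def measure_once() -> float:
    """Return the wall-clock latency of one request, in milliseconds."""
    start = time.perf_counter()
    urllib.request.urlopen(ENDPOINT, timeout=5).read()
    return (time.perf_counter() - start) * 1000.0

def p95(values: list[float]) -> float:
    """95th percentile; quantiles(n=100) yields cut points at 1%..99%."""
    return statistics.quantiles(values, n=100, method="inclusive")[94]

if __name__ == "__main__":
    latencies = [measure_once() for _ in range(SAMPLES)]
    observed = p95(latencies)
    print(f"p95 = {observed:.1f} ms (budget {P95_BUDGET_MS} ms)")
    # A non-zero exit fails the CI stage, so performance is a
    # first-class check in every iteration rather than a final gate.
    raise SystemExit(0 if observed <= P95_BUDGET_MS else 1)
```

Wired into the build, a blown budget fails the iteration the day it regresses, not at sign-off.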

Functional Readiness

The other issue is determining when functional testing is “good enough” to begin performance testing. If the performance tests are so fragile that they require a completely working solution, then they are not designed to help a project under development. You should run some form of performance evaluation at every phase of a project.

Does the functional testing team understand your criteria?

Do you know which tests are most critical to your success?
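
If the answer to either question is shaky, one remedy is a small smoke suite over your most critical flows that runs long before the solution is feature complete. The sketch below (Python with pytest) uses hypothetical paths and budgets; the behavior that matters is that unbuilt features skip rather than fail, so the suite adds value at every phase instead of turning red everywhere.

```python
# Performance smoke test that degrades gracefully on incomplete builds.
# Paths, budgets, and the base URL are assumptions to replace.
import time
import urllib.error
import urllib.request

import pytest

CRITICAL_PATHS = {      # each critical flow with its latency budget (ms)
    "/login": 300,
    "/search": 800,
    "/checkout": 500,
}
BASE_URL = "http://localhost:8080"  # hypothetical test environment

@pytest.mark.parametrize("path,budget_ms", CRITICAL_PATHS.items())
def test_latency_budget(path: str, budget_ms: int) -> None:
    start = time.perf_counter()
    try:
        urllib.request.urlopen(BASE_URL + path, timeout=5).read()
    except urllib.error.HTTPError as err:
        if err.code in (404, 501):
            # Feature not delivered yet: skip instead of fail, so the
            # suite still reports on the flows that do exist.
            pytest.skip(f"{path} not implemented yet ({err.code})")
        raise
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    assert elapsed_ms <= budget_ms, f"{path}: {elapsed_ms:.0f} ms > {budget_ms} ms"
```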

The Risk of Complexity

Even if your project is not moving to agile, your process might be too risky. If your solution requires a highly complex and expensive capacity test that occurs at the tail end of a major project, then your process is outmoded. Why?

  1. If you do find a KPI fault or you miss a resource utilization goal, how hard is it for the development team to fix? Without frequent performance validations throughout the project, they must consider too many changes to the solution.
  2. Why didn’t you find performance issues earlier in the cycle? Functional automation teams have shifted left. Why couldn’t you work with the development and operations teams to prototype a subset of the solution in the cloud, in a VM, with service virtualization, with unit testing, or with anything else that could have helped earlier? (A minimal stub-service sketch follows this list.)
  3. If your simulation and test environment is too expensive or just not very accurate, then a more cost-effective troubleshooting venue is production. If the support team is already fixing performance issues there, then I would argue that enabling them with richer tools is the more effective solution. Real-world networks, hardware, monitors, batch operations, and so on introduce a whole new class of performance issues, and every improvement there is the real deal. In a lab, by contrast, you can only construct synthetic data or stub out the real parts of your application.
  4. Performance optimization is the counterargument to production validation. However, if you are not driving hypotheses through a suite of experiments to improve the solution, then what is the real value of that expensive test and evaluation rig?
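
To make point 2 tangible, here is a minimal service-virtualization sketch: a Python stub that stands in for an unfinished downstream dependency and models its agreed latency, so load experiments can start before the real integration exists. The port, payload, and 40 ms delay are illustrative assumptions.

```python
# Stand-in for an unfinished downstream dependency. Endpoint behavior,
# payload, and the simulated latency are all assumptions to adapt.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

SIMULATED_LATENCY_S = 0.040   # model the dependency's agreed response time
CANNED_RESPONSE = json.dumps({"status": "ok", "items": []}).encode()

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(SIMULATED_LATENCY_S)          # virtualized latency
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(CANNED_RESPONSE)))
        self.end_headers()
        self.wfile.write(CANNED_RESPONSE)

    def log_message(self, *args):                # keep load runs quiet
        pass

if __name__ == "__main__":
    # Point the system under test at localhost:9090 instead of the
    # real (not yet available) service, then drive load as usual.
    HTTPServer(("localhost", 9090), StubHandler).serve_forever()
```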

Is your performance testing process easy for all of your partners to understand?

Do they agree with the value that performance testing provides?

DevOps and Performance Engineering

This leads to the next question: how can you collaborate more effectively with the operations teams? The performance validation process should help them define their alerts and resource utilization targets. The diagrams required to sign off on performance are also the critical ones for supporting a solution in production. Do you have a knowledge transfer process that hands your volume analysis, latency targets, business transaction event sequence diagrams, load balancing recommendations, and so on to your operations team? Or do you work with operations to create what they need?
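
One concrete form of that knowledge transfer: seed operations’ alert thresholds from the sign-off data itself instead of letting them guess. A sketch, assuming latency samples arrive one per line on stdin and that 20% headroom over validated behavior is acceptable; both are assumptions to negotiate with your operations team.

```python
# Derive production alert targets from performance sign-off latencies.
# Input format (one latency in ms per line) and the 20% headroom are
# illustrative assumptions.
import json
import statistics
import sys

HEADROOM = 1.20  # alert a little above what the validated system showed

def alert_targets(latencies_ms: list[float]) -> dict:
    q = statistics.quantiles(latencies_ms, n=100, method="inclusive")
    return {
        "warn_ms": round(q[89] * HEADROOM, 1),   # p90 plus headroom
        "page_ms": round(q[98] * HEADROOM, 1),   # p99 plus headroom
    }

if __name__ == "__main__":
    samples = [float(line) for line in sys.stdin if line.strip()]
    # Hand this JSON to operations as a starting point for their
    # monitoring thresholds: a concrete artifact of the sign-off.
    print(json.dumps(alert_targets(samples), indent=2))
```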

How can performance engineering improve operations in development and production?

Business Collaboration

During the design of new solutions, the performance engineering team collaborates with the architecture team to define the latency, throughput, and utilization goals for a solution. These goals are driven by collaboration with the business owners, who can map them to company growth and who understand the value proposition of a faster, more effective system. Modern application performance monitoring (APM) tools can parse business transactions in real time to associate transaction latency directly with sales throughput.
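
To illustrate the idea (not any particular APM product), the toy Python below correlates per-minute checkout latency with completed orders. The numbers are fabricated purely to show the mechanics, not a measured result.

```python
# Toy latency-to-revenue correlation. The window data is made up
# solely to demonstrate the calculation.
import statistics

# (p95 checkout latency in ms, completed orders) per one-minute window
windows = [
    (220, 118), (240, 115), (310, 102), (450, 84),
    (610, 63), (780, 51), (930, 40), (1200, 28),
]

latency = [w[0] for w in windows]
orders = [w[1] for w in windows]

# Pearson correlation (statistics.correlation, Python 3.10+). A strong
# negative value states the case for latency work in the business's
# own terms.
r = statistics.correlation(latency, orders)
print(f"latency vs. orders: r = {r:.2f}")
```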

Transaction Analytics

How do you inform the business about performance in test and production?

How does the business provide input to the engineering team?

Transformation

Performance is more important than ever before. End users are more sophisticated about their application demands, and at the same time they have experience with computer systems that are very fast and efficient. These users have deeper expectations in 2016. How are you optimizing your processes, practices, and solutions?

 

By Jim Azar

James (Jim) Azar, Orasi Senior VP and Chief Technology Officer. A 29-year veteran of the software and services industry, Jim Azar is charged with oversight of service delivery, technology evaluation, and strategic planning at Orasi. Among his many professional credits, Azar was a co-founder of Technology Builders, Inc. (TBI), where he built the original CaliberRM requirements management tool. Azar earned a B.S. in Computer Science from the University of Alabama, College of Engineering, where he was named to the Deans’ Leadership Board. He furthered his education with advanced and continuing studies at Stanford University, Carnegie Mellon, and Auburn University at Montgomery. Azar has been published in both IEEE and ACM.
