Optimizing the User Experience: How’s That Coming Along?

Google the search term “user experience,” and the search engine produces 237 million results. User experience is everywhere, with articles proclaiming it the “be-all and end-all” of software today.

As it happens, those articles are right. Most software professionals know, even if they don’t acknowledge it, that user experience has become critical to application adoption and success. It also makes or breaks the success of websites, search engines, and just about any digital experience consumers have today.

Reality Check

User experience has been a challenge since the beginning of computing, but it exploded as a concern when mobile apps were introduced, and it remains a primary focus. Today’s users demand more, faster, and better in everything, from the web apps that handle e-commerce sessions to the ERP platforms that run business operations.

Making the situation worse for developers, users are demanding greater intuitiveness, ease of use, and integration. Cross-platform functionality and stickiness is one example. I am not referring to the need to test each app on Apple and Windows PCs and hundreds of tablet and phone models; users want a seamless experience for any given transaction, whether app-based, browser-based, or both, across every device they own.

For example, suppose a user starts a restaurant reservation through a web browser on an office PC but stops because he or she isn’t sure about the time. After confirming the time with friends while heading home on the train, the user expects to resume the session in the mobile version of the app and complete the reservation using the session information saved from the PC.
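
To make that concrete, here is a minimal sketch of one way such cross-device continuity could be implemented: the in-progress reservation is persisted server-side under the user’s account, so any signed-in device can pick it up. The endpoint path and the DraftReservation shape are hypothetical, chosen purely for illustration.

```typescript
// Hypothetical sketch of cross-device session continuity: the draft
// reservation lives on the server, keyed by user, not in device storage.
// The /api route and DraftReservation shape are illustrative assumptions.

interface DraftReservation {
  restaurantId: string;
  partySize: number;
  time?: string;        // left unset until the user confirms with friends
  updatedAt: number;    // epoch millis, used to pick the latest draft
}

// Persist the draft every time it changes, so stopping on one device
// loses nothing.
async function saveDraft(userId: string, draft: DraftReservation): Promise<void> {
  await fetch(`/api/users/${userId}/reservation-draft`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(draft),
  });
}

// On any other device, restore the draft and drop the user back
// exactly where he or she left off.
async function resumeDraft(userId: string): Promise<DraftReservation | null> {
  const res = await fetch(`/api/users/${userId}/reservation-draft`);
  return res.ok ? res.json() : null;
}
```

The essential design choice is tying the draft to the user’s identity rather than the device; everything else (conflict handling, expiry) layers on top.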

Unless you have your head buried in the sand, none of this comes as a surprise. You are likely an app user yourself, and as a software professional, you may have thought about these very challenges. Yet every day, business analysts, developers, and testers keep their heads firmly in the sand when it comes to their own apps. The current workload is already too heavy, and marketing hasn’t asked for that feature yet, so why worry, right?

Exploring the Problem

Although lack of training, outdated processes, inadequate testing due to budget constraints, and other factors can result in a poor user experience, let’s assume these are not the issues at play. For the sake of argument, everyone is meeting job targets, and management isn’t completely unreasonable about budget and timing. Here’s how it unravels.

Marketing pros or business analysts conduct surveys, interviews, and focus groups to discover what users want. They record this information, which they hope is representative and not colored by each user’s particular idiosyncrasies. Then, they communicate it in their jargon to the development and testing folks, who are responsible for building the code and building/running the tests to ensure everything functions properly.

Next, developers go off and write code based on their interpretation of what the marketers said users wanted. Testers perform unit, function, and (hopefully) performance tests to ensure that the resulting code works as the developers wrote it, with minimal defects. At no point does anyone consider whether what they are building is something the user will care about, much less want to use.

If the release bombs with users, marketers blame the software teams. Other departments might get involved when marketing starts hearing chatter about user problems. If the release met objectives (which, to developers and testers, means mechanically tight code with defects below acceptable thresholds), quality assurance (QA) likely won’t see a problem, and the general consensus among project leaders will be, “The teams did their jobs. The release is solid.” Everyone goes away saying, “We did our best. Users are crazy.”

In Reality, the Process Is Crazy

Given that multiple groups of people, all of whom have different reference points, work on products independently without ever reconnecting with the groups who will actually use the products, it’s a miracle that users weren’t more dissatisfied sooner. For decades, users were conditioned to accept buggy software that didn’t work intuitively as the status quo.

Today they reject that notion, yet in many instances, outdated mechanisms and processes continue. Even assuming developers and testers are in sync (often they are not; a discussion for a different day), other likely impediments include:

  • Marketing and business experts can’t communicate effectively with developers and testers, and neither side knows how to fix the breakdown (or even that it is occurring).
  • Upper management worries more about how much it is spending on each release than about whether what users want is actually communicated to anyone.
  • Most of the people who are involved with or work on the product never try using it themselves.

As a result, millions of dollars are expended creating temples to technology that are built exactly as designed, but no one wants to visit them.

I do not have the “total solution” for this, but I can offer one bit of advice. If users are the most important part of the equation, more of them need to provide real-world insight. Whether that happens during the release cycle, before it, or after it will vary depending on timing and budget. Nevertheless, at some point, users (or individuals capable of working from their perspective) must be more involved. That means:

  • Companies must ensure real people (not software) experiment with and report back about release candidates—either through crowdsourcing or by assigning the task to testers who have the talent to test as a user would.
  • User experience (UX) tests must address users at all levels, not just power users.
    • Testers must explore multiple scenarios: not just check the steps to do something but take wrong turns and see where users end up and how they get back on track. Complete UX testing optimally includes not only the “happy path” where all goes well but also those paths where customers have to cancel or abandon the transaction (the test sketch after this list illustrates both).
    • Testers must try to accomplish tasks without any instructions at all. Crowd testing can really pay off here, as it is important for some testers to have no preconceived notions about how the app is supposed to work.
  • For the most effective input, testers should consider, and note at every step, whether the functions are elegantly designed:
    • Easy to accomplish with minimal steps and thought
    • Accessible/can be completed when multitasking
    • Do everything they should (or whether other features would be a good addition)
  • UX testing must consider both functional and non-functional user experience and evaluate both as quantitatively as possible, keeping in mind that user performance expectations often differ by app function and are based on a user’s experience with the app.
  • Company representatives must validate and evaluate the input and proactively decide how and when to implement the feedback in a meaningful way. This process is highly subjective and cannot be automated or performed by entry-level QA engineers. Despite this step being crucial, costs and time often cause it to be stricken from the plan.
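
To illustrate the kind of testing described above, here is a hedged sketch of two UX-oriented end-to-end tests: one that follows the happy path and records a rough timing measurement, and one that deliberately takes a wrong turn and verifies the user can get back on track. It assumes Playwright as the test framework; the URL, labels, and expected messages are hypothetical stand-ins for a real reservation app.

```typescript
// Sketch of UX-focused end-to-end tests using Playwright. Every selector,
// route, and expected message below is a hypothetical example.
import { test, expect } from "@playwright/test";

test("happy path: reservation completes quickly", async ({ page }) => {
  await page.goto("https://example.com/reserve");
  await page.getByLabel("Party size").fill("4");
  await page.getByLabel("Time").fill("19:00");

  const start = Date.now();
  await page.getByRole("button", { name: "Reserve" }).click();
  await expect(page.getByText("Reservation confirmed")).toBeVisible();

  // A rough quantitative signal for the non-functional side:
  // how long did confirmation take on this run?
  console.log(`Confirmation appeared after ${Date.now() - start} ms`);
});

test("wrong turn: abandoning mid-flow leaves a way back", async ({ page }) => {
  await page.goto("https://example.com/reserve");
  await page.getByLabel("Party size").fill("4");

  // Take the wrong turn: cancel instead of completing the transaction.
  await page.getByRole("button", { name: "Cancel" }).click();

  // The UX expectation: no dead end and no lost work.
  await expect(page.getByText("Your draft was saved")).toBeVisible();
  await expect(page.getByRole("link", { name: "Resume reservation" })).toBeVisible();
});
```

Note that the second test encodes a user expectation (no dead ends, no lost work) rather than a mechanical pass/fail of the code as written; that shift in perspective is the point.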

Major organizations use alpha and beta testing to achieve some of these goals, but even here, testing is rarely as detailed as I described above. For the average company, UX testing may not happen at all, often due to budget and time constraints.

Organizations must find some way to examine and validate the user experience. If they do not have the in-house talent, they should hire it. The effort is worth the expenditure. Otherwise, user adoption will drop, corporate reputation (or employee productivity) will be negatively impacted, and the app may be abandoned with all its investment lost. That is a “user experience” no company wants to have.

Author’s Note: Although not a focus of this discussion, user monitoring tools such as HPE AppPulse can also offer valuable insight for app design. However, no software can replace real-world UX testing.


Excerpt from Upcoming Blog

There are tools that can monitor user actions in production (or test) in real time and report on what users did or didn’t accomplish. A tool like this (e.g., HPE AppPulse Mobile) provides insight into whether the use cases we’re testing are sufficient; that is, it can provide more context around where the design is faulty and how intuitive the app really is.
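
For a rough sense of what such tools do under the hood, here is a minimal, generic sketch of client-side action capture. It illustrates the technique only; it is not HPE AppPulse’s actual API, and the event names and endpoint are invented for the example.

```typescript
// Generic sketch of real-time user-action capture, not any vendor's API.
// Events are buffered and flushed in batches so tracking never blocks the UI.

interface UserAction {
  name: string;        // e.g. "reservation.start", "reservation.abandon"
  timestamp: number;   // epoch millis
  sessionId: string;   // ties actions into a single user journey
}

const buffer: UserAction[] = [];

function track(name: string, sessionId: string): void {
  buffer.push({ name, timestamp: Date.now(), sessionId });
  if (buffer.length >= 10) {
    // sendBeacon delivers the batch even if the page is unloading.
    navigator.sendBeacon("/analytics", JSON.stringify(buffer.splice(0)));
  }
}
```

Aggregated into funnels, events like these show where real users stall or abandon, which is exactly the context that tells you whether the scripted use cases were sufficient.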

