Application Performance is in the Eyes of the End-User


Reference Code: CYIT0122
Publication Date: June 2011
Author: Tony Baer

SUMMARY

Catalyst

Regardless of whether your organization's applications target internal staff, business partners, or consumers, the lifeblood of your business can hinge on the quality of the end user experience. The end user experience is where the rubber meets the road for application performance. External customers will abandon poorly performing websites, while internal staff will find their effectiveness compromised when internal systems fail to respond.

The growing complexity of the application delivery chain makes ensuring a good end user experience much easier said than done. Inside the firewall, many organizations are mixing virtual and physical environments, deploying additional application tiers, embracing service orientation, and leveraging cloud computing resources. Outside the firewall, growing reliance on third-party content delivery and other service providers, a proliferation of browsers, and the emergence of mobile and other alternative client devices are making application delivery far more complex.

Ovum view

No matter how complex the application delivery chain, many enterprises will find their businesses on the line if they are unable to deliver well-performing applications to end users. The end user experience is a critical pillar of application performance management. It requires a global view that tracks the path of an application from the data center through the Internet, all the way to "the last mile" of delivery on the end user's computer or device. Although application performance management is a well-established discipline, traditional approaches provide highly siloed views of application delivery to the end user. A full picture requires breadth and depth, and a view

Application Performance is in the Eyes of the End-User (CYIT0122) Ovum (Published 06/2011) Page 1

of the full business impact. That encompasses the breadth of visibility from the data center, across all tiers, Internet backbones, and third-party service providers, to the end user's mobile device or browser. It also demands depth of root cause isolation, with the capability to perform deep-dive measurements and diagnostics on any component the application traverses. Putting matters in perspective, the solution must enable people to understand the business impact of specific performance issues so they can prioritize their resolution.

Key Messages

- The quality of the end user experience has a significant impact on the bottom line.
- End user experience can vary widely depending on location, browser, device, and traffic volume.
- The application delivery chain has become highly complex.
- Traditional "end to end" performance management and monitoring approaches fail to cover delivery of the application through Internet backbones, service providers, and local mobile carriers or Internet service providers.
- The only way to ensure delivery of a quality user experience and resolve performance problems in a timely manner is to have insight across the entire delivery chain, from the data center through service providers to the end-user browser or device.
- When performance issues occur, it is essential to be able to prioritize resolution by business impact.

TRADITIONAL MONITORING DOESN'T TELL THE FULL STORY

APM is a well-trodden discipline

Application performance management is a well-established discipline, with a multitude of tools and approaches for measuring and managing the user experience. Among them are tools that instrument discrete elements of infrastructure, such as server, network, or storage monitoring, which can provide an aggregate picture of metrics such as capacity utilization, I/O, performance, and availability.
With infrastructure monitoring as the foundation, application performance management has provided a different perspective by defining the application as the unit, dissecting its performance, and in some cases diving down to the thread execution or component level. Recent refinements include "end to end monitoring," which typically correlates measurements of back-end application performance with metrics on how -- and how fast -- the application renders on the client. Another recent refinement is the emergence of transaction monitoring, which takes a bottom-up view of application performance as measured by the journey individual transactions traverse as they execute on the back end and render on the device.
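The correlation that "end to end" monitoring performs can be sketched in a few lines: join a back-end timing record with a client-side render measurement for the same request, then attribute the remaining time to the network and client. The field names (request_id, server_ms, render_complete_ms) are hypothetical illustrations, not the schema of any particular APM product.

```python
# Sketch of correlating back-end and client-side timings per request.
# All field names are invented for illustration.

def correlate(server_records, client_records):
    """Return per-request breakdowns of where time was spent."""
    clients = {r["request_id"]: r for r in client_records}
    breakdowns = []
    for s in server_records:
        c = clients.get(s["request_id"])
        if c is None:
            continue  # no client-side beacon received for this request
        total = c["render_complete_ms"]    # measured on the user's device
        backend = s["server_ms"]           # measured in the data center
        breakdowns.append({
            "request_id": s["request_id"],
            "backend_ms": backend,
            "network_and_client_ms": total - backend,
            "total_ms": total,
        })
    return breakdowns

server = [{"request_id": "r1", "server_ms": 120}]
client = [{"request_id": "r1", "render_complete_ms": 900}]
print(correlate(server, client))  # 780 ms spent outside the data center
```

Even this toy example shows the value of the joined view: a request that looks fast on the server (120 ms) can still render slowly for the user, and only the correlation reveals where the remaining time went.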

These metrics proved sufficient when application architectures were relatively simple and well-contained, such as three-tier client/server applications or dedicated request/response web applications where interactions targeted a single back-end web and application server. As shown in Figure 1, the application delivery chain today has grown far more complex. Given this complexity, transaction management approaches provide valuable information, especially for complex, composite applications, but do not paint the big picture. Similarly, end-to-end approaches typically miss what happens to applications in transit. Issues can occur anywhere, from the object or class level within the software container to servers, networks, and virtualization layers. Once outside the firewall, application delivery can be compromised by poor performance from third-party service providers; congestion across Internet backbones, local ISPs, or mobile carrier networks; and performance issues with end user browsers or mobile devices. Traditional approaches cannot answer the all-important questions of why your online consumers are abandoning their shopping carts, or why your internal staff is experiencing productivity issues.

Retaining eyeballs remains important for B2C and enterprise applications

Studies performed for Compuware have shown that customers will abandon poorly performing websites. Internet leaders' studies have also shown that performance matters: Bing found that a two-second slowdown caused a 4.3% reduction in revenue per user, and Google determined that a 400-millisecond delay resulted in 0.59% fewer searches per user. This translates not only into abandoned shopping carts, but into a loss of reputation that can affect future business. Just as important, however, is the impact on internal end users of enterprise applications.
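A back-of-envelope calculation shows how figures like Bing's translate into money at stake. The sketch below naively scales the cited loss rate (4.3% per two seconds of slowdown) linearly; the linear assumption and the $50M baseline are illustrative only, not claims from the studies.

```python
# Illustrative arithmetic: scale the cited Bing loss rate to a slowdown.
# The linear model and baseline revenue figure are assumptions.

def estimated_revenue_loss(annual_revenue, slowdown_seconds,
                           loss_rate=0.043, per_seconds=2.0):
    """Naively scale the cited loss rate to the observed slowdown."""
    fraction_lost = loss_rate * (slowdown_seconds / per_seconds)
    return annual_revenue * fraction_lost

# A hypothetical site earning $50M/year that slips by one second:
loss = estimated_revenue_loss(50_000_000, 1.0)
print(f"Estimated annual revenue at risk: ${loss:,.0f}")  # $1,075,000
```

Even under these rough assumptions, a one-second slip puts seven figures at risk, which is why the business case for end user experience monitoring is usually easy to make.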
While internal staff may not have the option to abandon their screens, internal productivity and the ability to execute the business plan will be severely affected if the applications employees experience perform poorly. Think of a call center employee who cannot resolve a customer's problem on the first call, leading to churn and lost customers, or an insurance agent who favors one provider over another because of quotation system performance. For application performance, the end user experience is where the rubber meets the road. For applications involving human interaction, the end user experience ultimately determines how effectively an employee can carry out the processes that run the business, or how likely a customer is to click "buy," return to your site, and encourage friends to do so. Good end user experiences are essential for the bottom line. Managing the end user experience requires a global view that correlates the big picture with the under-the-hood detail, from the first mile to the last.

Figure 1. Application delivery chains growing more complex
Source: Compuware

It's a jungle out there

When organizations develop new applications, they typically do not rip and replace functionality. Instead, they leverage what's in place and add new functions that extend the number of tiers, programming languages, or architectural approaches, such as virtualization, in the mix. The result is that the application delivery chain grows more complex. Consequently, many things can happen to an application on its path to the end user, which complicates the task of an APM solution. Although the back end could be as simple as a straightforward user interaction with a dedicated transaction system, more often it executes in a data center where the architecture has grown far more complicated. The same holds true for the Internet, where the delivery path involves multiple variations. Inside the data center, the application delivery chain may entail:

- A mix of virtual and physical environments. As a result, monitoring a server with multiple resident virtual images will provide a faithful picture of capacity utilization and performance, but not a true picture of what is occurring with a specific application executing in one of those images.
- Multiple tiers; most data centers have evolved from the three-tier architectures of the distributed client/server era to as many as 10 or 20 loosely coupled tiers.
- Complex architectures that may include service orientation (SOA), mobile components, and wide area network optimization systems.

- Growing consumption of cloud computing resources, as enterprises move their applications, in whole or in part, to private or public cloud environments.

Outside the firewall, the variables for application delivery include:

- Growing reliance on third-party hosts and content delivery networks (CDNs) to accelerate performance through multiple local points of presence. The average web transaction today involves more than 10 distinct hosts.
- An explosion of browsers that has challenged the traditional dominance of Microsoft Internet Explorer for the web experience on traditional clients.
- A proliferation of smartphone clients and tablets. The 2007 release of the Apple iPhone redefined the mobile experience, with end users expecting a vivid, and often unique, experience through their powerful handheld clients. The subsequent introduction of Android, along with new platforms from RIM, Microsoft, and Palm, has made this an increasingly diverse target for monitoring.
- A multitude of mobile rendering approaches, with the emergence of HTML5 and rich frameworks such as Adobe Flash providing alternatives to the native rendering environments of devices such as the iPhone and Droid.
- The emergence of composite applications that consume services from multiple sources or are rendered through mashups in the browser, dictating the need to monitor and model complex interactions between components executing on the client.

What is Performance in the Eyes of the End-User?

Simply stated, the end user experience is the sum of all occurrences when an end user interacts with an application. This definition applies regardless of whether it is a popular consumer gaming site or an internal enterprise application. Managing the end user experience involves monitoring from "the first mile" in the data center through the entire delivery chain to "the last mile" where the user is physically located.
Although by definition the end user experience applies to applications that involve human interaction, those applications may also rely on machine-to-machine functions that perform intermediate work, such as invoking business rules or consuming third-party data services (for example, retrieving traffic reports for a GPS routing application).

PUT ALL THE PIECES TOGETHER

Ask the big questions first

When monitoring the end user experience, start by asking the basic questions:

1. Is my application performing properly?
2. What is the business impact? Is it hurting internal productivity or driving customers away?

3. Is the problem affecting my most important customers or users?
4. Is the problem affecting all users, or just a portion?
5. How is my performance trending over time, and how do I compare with my competitors?

Then get the story from all angles

Monitoring and managing the end user experience requires a multi-pronged approach that correlates data from each of these physical and virtual layers. It requires the ability to combine different views, from data center to service provider and client, to get the big picture on the quality of the end user experience, and whether it is adequate to keep eyeballs glued, or to let enterprise end users perform their tasks efficiently.

There are two basic methods for measuring application performance: continually measuring the real-user experience, capturing all dimensions of application performance from the perspective of the end user, and conducting synthetic tests that execute specific scripts. Gaining a full understanding of what has gone right or wrong with the end user experience requires a complete 360-degree view, because each monitoring and testing approach serves different purposes and provides complementary pieces of the full story. Real-user monitoring provides insight into whether users are getting the service levels they expect, and identifies the business impact of performance problems so your organization can prioritize problem resolution. Conversely, synthetic monitoring is useful for apples-to-apples comparisons because it tests specifically defined conditions and scenarios. Synthetic monitoring also provides unique insight into the performance of third-party services, which increasingly affect performance and the end user experience. It complements real-user monitoring by ferreting out potential problems before they become noticeable to end users.
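At its core, a synthetic test is just a scripted transaction executed on a schedule, with the elapsed time compared against a threshold. The sketch below shows that skeleton; the URL and threshold are placeholders, and commercial tools script full multi-step transactions from many geographic locations rather than a single timed fetch.

```python
# Minimal sketch of a synthetic monitoring probe: time one scripted
# request and compare against a threshold. URL/threshold are placeholders.

import time
import urllib.request

def probe(url, threshold_s, fetch=urllib.request.urlopen):
    """Time one scripted fetch; return (elapsed_seconds, within_sla)."""
    start = time.monotonic()
    fetch(url)                      # the synthetic "transaction"
    elapsed = time.monotonic() - start
    return elapsed, elapsed <= threshold_s

# Example with a stubbed fetch so the sketch runs without network access:
elapsed, ok = probe("https://example.com/", 2.0, fetch=lambda url: None)
print(f"{elapsed:.3f}s, within SLA: {ok}")
```

Because the script and conditions are fixed, repeated runs of such a probe are directly comparable over time and across regions, which is exactly the apples-to-apples property described above.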
Listen from the first mile all the way to the last mile

Look at the data center, on premises or in the cloud

Managing the complete end user experience requires the ability to sense performance at all points. It begins at the first mile in the data center, whether in the server, the virtualization layer, or down at the application component or class. This encompasses monitoring:

- Mainframe compute cycles;
- Capacity utilization of distributed servers;
- Database performance;
- Middle-tier operation, including application servers, load balancers, service buses, and message-oriented middleware;
- Application internals, down to the Java or .NET component or class level;

- Virtualized operating system instances;
- Internal networking infrastructure; and
- Cloud computing service providers.

Continue through web infrastructure

The next logical step is vigilant monitoring of web infrastructure, measuring performance from the "outside in." It ends at the "last mile," where factors specific to the ISP, browser, connection speed, mobile carrier, and local delivery towers add significant variables to tracking the end user experience. While an organization has direct control over performance inside the firewall, outside the firewall the key questions to answer include:

- Should I deploy a CDN to accelerate performance and ensure consistency across regions?
- Are my applications performing well across all browsers and devices?
- Are my third-party services meeting their SLA commitments?

Enforcing service level agreements (SLAs) with third-party service providers requires good data to verify that your providers are meeting their contractual promises. Tracking performance across common carriers such as ISPs and mobile telecom carriers is essential for determining ultimate responsibility for service lapses. This is useful information when negotiating payments or contracts with third-party service providers at billing or renewal time.

Don't forget the last mile

Even once the application fires in the data center and is delivered efficiently through the Internet or carrier network, the battle isn't won until it executes and renders on the end user's client. For the end user experience, performance on the client is the ultimate acid test. Successfully tracking the end user experience requires a multi-faceted approach that encompasses:

- Performing "real user testing" that records the actual experience of the user with a real application on the user's actual device and platform environment;
- Conducting synthetic testing from geographically dispersed consumer-like computers, mobile devices, and browsers to provide objective data that can be compared across applications, load types, times of day, and regions; and
- Proactive testing of web and mobile sites across all major browsers, operating systems, and mobile devices.

Monitoring must be ongoing in order to nip potential issues in the bud; additionally, load testing from the last mile should be part of the test harness before rolling out a new technology, product, or marketing campaign.
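Verifying a third-party SLA commitment from monitoring data comes down to simple aggregation: what fraction of sampled responses met the contractual limit? The sketch below assumes a hypothetical "95% of responses under 500 ms" term; the sample data and thresholds are invented for illustration.

```python
# Sketch of checking a provider against a hypothetical SLA term:
# "95% of responses must complete within 500 ms".

def sla_compliance(response_times_ms, limit_ms):
    """Fraction of sampled responses that met the response-time limit."""
    met = sum(1 for t in response_times_ms if t <= limit_ms)
    return met / len(response_times_ms)

# Ten illustrative samples from a monitoring probe:
samples = [210, 340, 480, 620, 150, 505, 290, 410, 330, 490]
compliance = sla_compliance(samples, limit_ms=500)
print(f"{compliance:.0%} of responses met the 500 ms limit")
print("SLA breached" if compliance < 0.95 else "SLA met")
```

Data like this, collected continuously and independently of the provider's own reporting, is what gives an organization leverage in the billing and renewal negotiations described above.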

Make Performance Data Actionable

Collecting data on the end user experience is only half the battle; the full value of an end-user-experience-driven APM strategy lies in presenting data in views that are meaningful and immediately actionable for each stakeholder group. Given the complexity of the application delivery chain, presenting the right data is essential so stakeholders can act, rather than puzzle through endless screens of cryptic log data and nuisance alarms. There are two major constituencies: business stakeholders and IT professionals. While each requires visibility into how specific applications are performing, they have different uses for the information. Consequently, role-based dashboards are essential. While some information is common, such as identifying which applications are not performing, other pieces of the picture will differ.

For business stakeholders

Business stakeholders must see how well specific business processes are executing. Specific to the end user experience, they must understand what levels of service or performance are actually being delivered to the end user. Because the level of service ultimately aggregates to the bottom line, they must be able to correlate the real user experience with its impact on revenues so they can answer questions such as "At what rate did revenue drop in the wake of a service interruption?" or "What was the rate of sales growth when service improved?" Figure 2 provides an example of an operations dashboard that enables IT to immediately identify transactions that are not performing, prioritize performance issues by business impact, and drill down to the infrastructure and software code level to troubleshoot and resolve. Business stakeholders must understand the business severity of an issue through key indicators such as impact on conversions, rate of shopping cart abandonment, brand reputation, and so on.
With that information, they can help IT decide which application performance issues need to be addressed, and in what order.
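The prioritization logic itself can be sketched simply: rank open issues by estimated revenue at risk, for example affected users times revenue per user per hour. The issue names and all figures below are invented for illustration; real dashboards would draw these from live monitoring and sales data.

```python
# Sketch of ranking performance issues by estimated business impact.
# Issue names and all figures are illustrative.

def prioritize(issues):
    """Order issues by estimated revenue at risk per hour, highest first."""
    return sorted(
        issues,
        key=lambda i: i["affected_users"] * i["revenue_per_user_hour"],
        reverse=True,
    )

issues = [
    {"name": "slow checkout",  "affected_users": 2000, "revenue_per_user_hour": 1.50},
    {"name": "search timeout", "affected_users": 8000, "revenue_per_user_hour": 0.20},
    {"name": "image CDN lag",  "affected_users": 500,  "revenue_per_user_hour": 0.05},
]
for issue in prioritize(issues):
    print(issue["name"])
```

Note how the ranking differs from a raw-user-count view: the search timeout affects four times as many users as the slow checkout, yet the checkout issue ranks first because its revenue impact per user is higher.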

Figure 2. High-level Business Dashboard
Source: Compuware

For technology stakeholders

Application development and IT operations practitioners are concerned with the performance of specific elements of IT and network infrastructure, and with the performance of third-party service providers, ISPs, local carriers, and end user client types. They must be able to identify the most pressing problems so they can prioritize and accelerate resolution. Figure 3 shows a business stakeholder's view that presents performance data for specific business services, the size of the affected user population, and overall perceptions of service quality. Practitioners must be able to quickly identify problems and drill down to the infrastructure, software, or service provider level to find where problems originate and what their root causes are.

Time is money. The faster problems are identified, the earlier they can be resolved, with less impact on sales or staff productivity. The more proactive business and IT are at monitoring the end user experience, the greater the chance that they will avoid problems that hit the bottom line.

Figure 3. Business Stakeholder dashboard
Source: Compuware

RECOMMENDATIONS FOR ENTERPRISES

See your applications through the eyes of the end user

For applications involving human interaction, the end user experience is where the rubber meets the road. Delivering a good experience to the end user ultimately affects the business, whether through the revenues realized from an e-commerce site, brand reputation, or the ability of employees to be productive. Targeting the end user experience should be a key pillar of your organization's application performance management strategy.

Implement the right solution

To track the end user experience, an application performance management solution must include the following core capabilities:

- Performance Measurement -- Track application performance from the perspective of the end user across the entire application delivery chain to provide objective data that can be used for resolving, or averting, problems.

- Problem Resolution -- Identify and isolate fault domains to determine the root causes of application performance problems.
- Performance Improvement -- Continuously identify, prioritize, implement, and measure the results of opportunities for improving application performance.
- Production Readiness -- Ensure that the user experience can be maintained, with proactive load testing conducted before launching or scaling out applications, or deploying infrastructure changes.
- Performance Reporting -- Generate role-based reports that pinpoint performance trends and directly quantify their impact on the business.

Getting started: take it in stages

Given the complexity of today's application delivery chain, it is not feasible for most organizations to master every aspect of managing the end user experience all at once. A staged roadmap that provides tangible payback at each step makes the most sense. Maturity models are a common mechanism for expressing an organization's preparedness to meet specific business challenges, and they can provide useful guidelines for preparing staged roadmaps that make the task practical. A maturity model that Compuware has devised provides a useful guideline (see Figure 4). It characterizes the phases from reactive to optimized and pervasive. What is more important, however, is setting attainable goals at each step along the way:

- "Reactive" stage, where organizations lack visibility into the end user experience and respond to problems identified by end users.
- "Aware" stage, where organizations get beyond incident response and begin to monitor applications from the end user's perspective.
- "Effective" stage, where the organization becomes more savvy as it extends monitoring instrumentation across the application delivery chain, resulting in a heightened understanding of application performance and the ability to accelerate root cause analysis and prioritize resolution by business impact.
- "Optimized" stage, where a broader cross-section of applications and transactions is monitored with deep-dive diagnostics across the entire application delivery chain, and insight is provided into the investments that will drive the greatest performance improvements.
- "Pervasive" stage, where end user experience testing becomes widespread and closed-loop, and where predictive analytics are a core part of the toolkit and knowledge base, so organizations can nip latent problems in the bud.

Figure 4. Compuware APM Maturity Model

Level 1 -- Reactive:
- Limited awareness of end-user experience (EUE) app performance
- Reactive problem resolution and frequent war rooms
- Technology-centric, element-level visibility

Level 2 -- Aware (via EUE visibility):
- Basic awareness of EUE app performance
- Can identify problems, but root-cause analysis takes too long
- Performance baselined and trends tracked

Level 3 -- Effective (via prioritization by business impact):
- EUE and transaction visibility across the app delivery chain
- Accelerated problem resolution via deep-dive diagnostics
- Problems prioritized by business impact

Level 4 -- Optimized (via app and IT performance optimization):
- Broad EUE visibility and deep-dive diagnostics across the app delivery chain
- Automation of problem analysis and diagnosis
- Initiatives prioritized by business impact with deep root cause insight

Level 5 -- Pervasive (business agility and competitive edge):
- Active management of the application delivery chain
- Real-time visibility used to orchestrate service delivery
- Leverage of collective intelligence across the Internet

Copyright Ovum. All rights reserved. Ovum is part of the Datamonitor Group.
Source: Compuware

Clearly, there is no single silver-bullet roadmap that will work for all organizations. The usefulness of a maturity model is that it provides a guideline by which organizations can set step-by-step goals toward the ultimate objective of being able to nip potential end user experience problems in the bud before they depress productivity or affect revenues. More importantly, it provides a path that allows organizations to realize time-to-benefit early, and to learn from experience to increase the bottom-line benefits to the business over time.

Conclusion

For applications that involve human interaction, the end user experience ultimately determines the success or failure of an application.
Poor user experiences cost money -- whether that be lost business or reputation for an external-facing web application, or lost productivity and the ability to

support the organization's core mission for internal applications. Consequently, for these applications, the end user experience should be a core pillar of your organization's APM strategy. Managing the end user experience requires a view that is sufficiently broad and deep to cover the entire application delivery chain, from first mile to last. It requires a view that provides actionable information: highlighting the business impact, providing the means to drill down and perform root cause analysis, and delivering it through role-based views that are meaningful to IT and business stakeholders, respectively. Organizations embarking on this journey should take a staged approach that delivers tangible impact at each step along the way.