IBM
Processor Migration Capacity Analysis in a Production Environment
Kathy Walsh
White Paper
January 19, 2015
Version 1.2
© IBM Corporation, 2015

Introduction

Installations migrating to new processors often perform capacity assessments against workloads running on the migrated processor. These assessments are done to validate the expected capacity relationships and to ensure workloads running in the new environment are working as expected. The ability to do this analysis on a production system is made more difficult by the varying nature of work on the system under review. Unlike a benchmark, where the load and test data are controlled and the system is configured to minimize change, production systems by their nature have significant variability. An analysis of any production system needs to identify workloads where the inherent variability of the production system is reduced so comparative analysis can be done.

In order to evaluate a production system it is necessary to apply some tests to determine workload repeatability and remove potential variability. Otherwise capacity information can appear to be out of bounds when, in fact, the differences are due to a change in workload patterns. Multiple factors beyond just a new, faster processor can impact performance and throughput metrics. These factors include, but are not limited to:

- changes in the amount of processor storage
- application changes
- an increase in the number of end users accessing the systems
- operating system and middleware maintenance
- changes in operating system levels
- exploitation of new features within the operating system, such as IRD or z/Architecture
- significant changes in the size and/or number of databases, or database buffering techniques
- use of dynamic SQL, which changes access paths based on the power of the processors
- use of never-ending work searches; a faster processor will search much more (and use more CPU) than a slower processor
- latent demand

Other situations can also impact how workloads run on a newly installed processor. The most common situation is a workload which previously was constrained by engine speed now disrupting the allocation of CPU resources. Workloads which previously ran when the other workload was constrained now do not run as often, causing issues in either performance or throughput. Changes in workload mixture can often be mistaken for under-performance of a new processor.

This document will outline a methodology used to identify and assess relative processor capacity after a processor change in production environments. The methodology relies upon the application of statistical tests to try to identify stable and repeatable workloads in a production environment. However, as with any methodology, the rigor with which the analysis is done can impact the effort. This document will provide information regarding issues and conditions which can be encountered doing an analysis of this type. Statistical tests are used to identify workloads

for further analysis but, as we will see, passing any given statistical test is only the first step in the analysis phase.

Background

Before we begin with the statistical tests we must first reflect upon the data used as a basis for the comparison. For z Systems processors this information is based on LSPR benchmarks. Information on LSPR is found at URL: https://www-304.ibm.com/servers/resourcelink/lib03060.nsf/pages/lsprindex?opendocument

LSPR benchmarks provide information on a series of IBM benchmarks, run in a controlled environment, which are used to describe relative processor capacity. These benchmarks are used as the foundation in any capacity planning effort involving z Systems processors. Armed with the LSPR information, the following steps need to be done to ensure appropriate capacity expectations are set for the processor migration which will be reviewed.

First a workload needs to be developed for the LPARs involved in the migration. When choosing a workload there are two factors used to describe how well a workload should perform on a z Systems processor. These factors are the Level 1 Miss Percentage and the Relative Nest Intensity experienced by a workload.

On a z Systems processor there is a hierarchy of hardware caches which are populated with data and instructions from central storage. The hardware caches reside in different locations within the microprocessor and have different sizes and performance capabilities. Generally, caches lower in the hierarchy are smaller and have better performance characteristics than caches at higher levels. On most current z Systems processors there are four levels of hardware caches: levels 1-2 reside on the core, level 3 resides on the chip and is shared by the cores on the chip, and level 4 resides above the chip and is shared by all the cores on all of the chips. Caches residing above the core are referred to by an engineering term called the nest.

When sourcing either data or instructions from higher levels of the cache hierarchy it takes more time (processor cycles) than if the data is found in the level 1 cache. So understanding how often the requested information is not found in the level 1 cache (called the Level 1 Miss Percentage, or L1MP) and where in the hardware cache hierarchy it is found directly influences the cycles per instruction required to run a workload. Workloads can be described based on how intensively they use the caches in the nest, hence a workload's Relative Nest Intensity (RNI). The extent to which a workload relies upon caches in the nest or central storage will influence the number of cycles per instruction the workload will require.

The LSPR data which is used to build capacity relationships among different z Systems processors is organized into 3 workload types based on a range of measured values for both the L1MP and the RNI. The workloads are called LOW, AVERAGE, and HIGH. In order to do capacity planning for z Systems processors the L1MP and the RNI need to be determined for the workloads running on your current processors. The ability to measure and derive the L1MP and the RNI for a workload is provided by the CPU Measurement Facility (CPU MF).
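To illustrate the idea of mapping L1MP and RNI to a workload type, here is a minimal, hypothetical sketch. The threshold values are illustrative placeholders only; the real decision table is published with IBM's LSPR data, varies by processor generation, and in practice zPCR performs this classification automatically.

```python
# Hypothetical sketch: pick an LSPR workload type (LOW / AVERAGE / HIGH)
# from a workload's L1MP and RNI, as derived from CPU MF (SMF 113) data.
# The boundary values below are ILLUSTRATIVE PLACEHOLDERS, not the published
# LSPR decision table; use zPCR or the LSPR documentation for real planning.

def lspr_workload_type(l1mp: float, rni: float) -> str:
    """l1mp: Level 1 Miss Percentage (e.g. 4.2 for 4.2%); rni: Relative Nest Intensity."""
    if l1mp < 3.0:                    # few L1 misses: little nest traffic
        return "LOW" if rni < 0.75 else "AVERAGE"
    if l1mp <= 6.0:                   # moderate misses: RNI decides
        if rni < 0.6:
            return "LOW"
        return "AVERAGE" if rni < 1.0 else "HIGH"
    return "AVERAGE" if rni < 0.75 else "HIGH"   # many misses: nest-heavy

print(lspr_workload_type(4.2, 1.1))  # -> HIGH (under these placeholder bounds)
```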

Introduced with the IBM z10 processor, this facility provides the necessary data to determine the correct workload to do capacity planning. For more information on enabling the CPU MF facility please review the detailed planning information at URL:

Once the workload has been established it is necessary to determine the expected capacity of the before and after processor configurations. This step requires the analyst to identify the LPAR effects of both the before and after configurations. IBM provides a tool to help with this analysis called the IBM Processor Capacity Reference (zPCR). The zPCR tool is available as a free, as-is tool available for download from the web. To get the zPCR tool and information on training please see URL:

One of the benefits of using the zPCR tool is its ability to read in the z/OS CPU MF data to automatically determine the correct workload based on L1MP and RNI. Inputs to this phase would be SMF 70 and 113 records. In this phase it is important to use actual system configurations, and not configurations used in the capacity planning process. Many times the configurations used in the planning process are not the configurations actually deployed throughout the migration. It is imperative, though, for capacity expectations to be set using actual configurations and not planned configurations. This is especially true if the processor migration spans a lengthy time period. Each step where analysis is done needs to build capacity expectations based on the before and current configurations. So, for instance, if the end game of a processor migration is a 16 CP processor, but a staged migration happens where the first LPARs migrated are only on a 10 CP processor, the capacity expectations need to be rebuilt for the 10-way processor and not the 16-way processor. It is often at this point a checkpoint should be done to ensure all analysts involved in the process agree on the capacity expectations which will be used in the later data analysis.

Once the capacity expectations are set, the next step is to begin the analysis phase. It is important to note the capacity expectations set in this phase are not used at all in the analysis phase. The analysis phase is going to provide an estimate of the relative capacity seen in the migration. The answer from the capacity analysis is compared against the expected capacity to determine how well the upgraded processor has met the capacity expectations.

Data Analysis

In order to control the variability which exists in a production environment, the workloads are subjected to a series of statistical tests to see if they meet repeatability criteria. The approach used during the analysis is to look at the CPU used in relation to the I/O rate. The approach is based on the concept that there exists a relationship between I/O and CPU usage. If I/O rates increase then it is expected to also see a comparable increase in CPU usage. It is also expected that repeatable work would have the same CPU to I/O pattern each time the work runs.

There are issues with this approach. First, the I/O patterns may, in fact, change between periods but be consistent within each period, and so comparison between periods gives a false positive. Second, many workloads, especially online workloads, accessing a database manager

will not execute a significant number of I/Os. Instead, depending upon the buffer hit ratios, the CPU demand of applications can increase due to increased workloads; but if the locality of data reference is high in the database buffers, additional database I/Os may not be needed and the I/O per CPU metric used in the analysis fails. In cases where database workloads are heavily used it is sometimes necessary to use transaction information to establish repeatability. In these cases it will be necessary to identify CPU per transaction code as a basis for analysis.

Again, there is the potential for false positives even at a transaction level. A transaction can be changed in such a way that the code execution is consistent before the change, and consistent after the change, but a change has occurred which impacts the CPU demand of the work. The identification of false positives from true conditions is more art form than science. It is also required in any analysis relying upon statistical tests to identify a condition. This document will discuss in more detail steps which need to be applied against the data to identify and manage false positives.

The methodology followed in this phase of data analysis is to identify a set of repeatable work in a before period and a set of repeatable work in an after period. The before period best suited to this work is the period directly before the processor migration. The after period can be subject to more discretion. Generally the period where the majority of the migration is complete is best, though in a large server consolidation effort earlier analysis may be useful, say when a large or important LPAR has been migrated.

[Figure: Before Processor and After Processor. In each period, All Jobs are filtered down to Repeatable Jobs; the overlap of the two sets forms the Common Repeatable Jobs. Do the analysis only on Common Repeatable Jobs.]

Amount of Data

Since statistical tests will be applied against the data it is crucial to have enough samples to make valid comparisons. Generally for batch and address space analysis this means at least 10 business days of data for both the before and after periods. Though analysis can be done with fewer samples, the confidence in the data is reduced with fewer samples. Through experience we have not seen much more value added by having more than 10 days of data.

Different data may be required if there are wide variations in workload throughout the period. For instance, application profiles can change dramatically based on month, quarter, or year end processing. It may be this change in profile which leads the analyst to pick the same type of week in successive months rather than consecutive weeks. Inability to control this type of application

profile can reduce the effectiveness of the analysis, as large portions of the work may fail the repeatability tests. There are no hard and fast rules about picking representative periods on which to base the analysis. Instead the team doing the analysis needs to be aware of how different periods may affect the results and work together to provide the best possible input for the different periods. Any known issues with the periods chosen for review should be noted and documented as part of the analysis report.

For online transaction data, such as CICS, IMS or WebSphere, sufficient samples per transaction code are easy to obtain. For this type of data we recommend reviewing 1 hour of data from a peak time for 2-3 days for both the before and after periods. This allows the analyst the ability to compare transaction codes for repeatability throughout the peak hour and across like hours on different days. Again, the major goal of the analysis is to discover data which appears to be repeatable and stable enough in both periods to allow conclusions to be drawn regarding the relative capacity changes seen as a result of the processor migration. Because of the volume of transaction data available it may be very acceptable to identify certain activity levels which must be passed for inclusion in the analysis. An example would be to include transaction codes only if they have more than 100 executions and/or use more than 5 seconds of CPU time. Any decisions made regarding which data is included in the analysis should be documented.

Analysis Methodology

As stated earlier, we are going to build CPU to I/O relationships, or CPU per tran code relationships, for work units within the before and after periods. In any given period, work which is identified as the same (same job name, step name, and procname for address space analysis, or same SMFID, APPLID and transaction code for online analysis) will be examined for repeatability among the period's samples. Work which is not repeatable in the before period cannot be used as a basis for comparison to the after period. Likewise, work which is not repeatable in the after period cannot be used as a comparison to the before period.

The CPU per I/O ratio or the CPU per tran ratio is computed for the appropriate period, and all samples are used to determine an average for the work item. The standard deviation of the work is computed (a test for skewness), and a coefficient of variation is computed (standard deviation / average). Work units are accepted for inclusion in the appropriate period if the coefficient of variation is 0.1 or less. Values higher than this indicate the work units within the same period are not repeatable, and they are excluded. The work units which pass the repeatability tests in each period are then matched to form the set of common repeatable work units. It is this set which will be reviewed to assess the relative processor capacity achieved as a result of the migration.
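A minimal sketch of this repeatability screen, assuming the per-sample ratios (CPU per I/O, or CPU per transaction code) have already been extracted from the SMF data into a dict keyed by work-unit identity. All names and data here are illustrative; this is not part of any IBM tool.

```python
from statistics import mean, stdev

CV_LIMIT = 0.1  # coefficient-of-variation threshold from the methodology

def repeatable_units(samples):
    """Return {work_unit: average ratio} for units whose CV is 0.1 or less."""
    accepted = {}
    for unit, ratios in samples.items():
        if len(ratios) < 2:           # need several samples to judge spread
            continue
        avg = mean(ratios)
        cv = stdev(ratios) / avg      # standard deviation / average
        if cv <= CV_LIMIT:
            accepted[unit] = avg
    return accepted

# Hypothetical before/after data: one CPU-per-I/O ratio per business day.
before = repeatable_units({
    ("PAYROLL", "STEP1", "PAYPROC"): [0.021, 0.020, 0.022, 0.021],  # stable
    ("ADHOCJOB", "STEP1", "ADHOC"):  [0.010, 0.050, 0.031, 0.002],  # erratic
})
after = repeatable_units({
    ("PAYROLL", "STEP1", "PAYPROC"): [0.012, 0.013, 0.012, 0.012],
})
common = before.keys() & after.keys()   # the "common repeatable jobs"
print(common)                           # only PAYROLL survives both screens
```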

For each before/after work unit identified, a work throughput ratio (WTR) is calculated for the work unit. This is repeated for all work items which passed the previous repeatability tests in both periods. Two statistical metrics are created:

- Weighted Average. Weighted by the CPU contribution of the work unit.
- Median. The middle point of all of the WTRs.

The first test is to review the median WTR and the weighted average. If the work units used in the analysis are showing consistency among themselves then the median and the weighted average should be close. If they are not, then some additional analysis may need to be done to understand why the values are different.

Before we begin discussing the steps necessary to ensure the validity of the data, it is important to discuss what the expectations are for work units relative to the per CP capacity expectation set earlier. It is not expected that every work unit in the analysis will have the same WTR. In fact, just the opposite is true. Work units will commonly be above and below the expected capacity. What we are looking for is that, on average, the work is meeting the expectations.

Handling Outliers in the Analysis

When doing an analysis of this type it is necessary to review the data and identify and handle outliers. As previously discussed, there can be false positives in the data: work which passes the statistical tests but doesn't pass the smell test, results so obviously wrong they need to be removed from the result set. For example, let's say the capacity expectation of a new processor would be a per CP WTR of 1.7. After running the analysis we find a work unit which has a WTR of 8.0. In this case it is unlikely the work ran this fast. In truth, it is most likely the work which we tried to identify in the production system as the same was in fact not the same at all. The work met all of the statistical tests of a consistent pattern before and a consistent pattern after, but in reality the work unit in question is so far outside the expected range of performance it needs to be removed from the analysis. Additional investigation may be warranted to understand the job, but it is unlikely the job is representative of the work running on the new processor.

Again, it needs to be remembered that not every work unit will see the same WTR. Some variation is expected. In any distribution outliers will exist. The management of outliers needs to be done in a transparent manner. All outliers removed should be documented with the reason for the removal. But removal of outliers is considered normal and is a necessary step in an analysis of this type.
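A sketch pulling these pieces together: compute each common work unit's WTR, then the CPU-weighted average and the median. It assumes (since the paper does not spell out the formula) that a work unit's WTR is its before-period ratio divided by its after-period ratio, so a WTR above 1.0 means the unit needs less CPU per unit of work on the new processor. All data is illustrative.

```python
from statistics import median

def wtr_metrics(before, after, cpu_seconds):
    """before/after: {unit: avg CPU-per-I/O ratio}; cpu_seconds: {unit: CPU used}."""
    common = before.keys() & after.keys()            # common repeatable units
    wtrs = {u: before[u] / after[u] for u in common} # assumed WTR definition
    total_cpu = sum(cpu_seconds[u] for u in common)
    # weighted by the CPU contribution of each work unit, per the text
    weighted = sum(wtrs[u] * cpu_seconds[u] for u in common) / total_cpu
    return wtrs, weighted, median(wtrs.values())

# Illustrative data only. A unit with a WTR wildly off the expectation
# (e.g. 8.0 against an expected 1.7) is an outlier candidate for removal.
wtrs, weighted_avg, med = wtr_metrics(
    before={"JOBA": 0.021, "JOBB": 0.034, "JOBC": 0.010},
    after={"JOBA": 0.012, "JOBB": 0.020, "JOBC": 0.006},
    cpu_seconds={"JOBA": 4000, "JOBB": 1500, "JOBC": 1000},
)
print(weighted_avg, med)   # close values suggest a consistent result set
```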

Assessment

The final test, once the outliers have been removed, is to compare the weighted average WTR against the per CP capacity expectations set in the earlier step. IBM has stated that the capacity expectations which are set via zPCR and LSPR data have a confidence factor of +/- 5%. If the weighted average WTR is within 5% of the capacity expectation, the processor is judged to be meeting expectations. Values outside of this range will require additional investigation. It may be necessary to examine the work in more detail to get a better idea of what is happening. There may be environmental factors or tuning factors which need to be investigated to better understand the work. These types of activities are outside the scope of this document.
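A minimal sketch of this final check, assuming `weighted_wtr` came out of the analysis above and `expected_wtr` is the per CP capacity expectation set earlier with zPCR. The 5% band is the confidence factor the paper cites for zPCR/LSPR expectations; the function name is illustrative.

```python
def meets_expectation(weighted_wtr: float, expected_wtr: float,
                      tolerance: float = 0.05) -> bool:
    """True when the observed weighted WTR is within +/- 5% of expectation."""
    return abs(weighted_wtr - expected_wtr) / expected_wtr <= tolerance

print(meets_expectation(1.66, 1.70))   # True: within 5% of expectation
print(meets_expectation(1.50, 1.70))   # False: needs additional investigation
```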

Appendix A: Outlier Management Techniques

Management of outliers and the function of trimming are actions which need to be applied against the result set. The following offers explanations of why certain trimming actions are appropriate to take. The goal of the analysis is to do as little trimming as possible.

Weighted Average Different than the Median

In general these two metrics should be close. What causes these two metrics to be different? Generally these situations arise with the following conditions:

- A work unit with a large amount of CPU whose WTR is out of sync with the rest of the work.
- A number of work units with a large number of executions whose WTR is out of sync with the rest of the work.
- Lots of work units which don't use a lot of CPU and have a WTR out of sync with the rest of the work.

Simple Average vs. the Weighted Average

When using a simple average, all work units in the analysis are judged to be equal contributors. A weighted average, on the other hand, gives more impact to work units which have more executions or use more CPU. Since this is a capacity exercise we are looking for work which drives the utilization of the processor. The utilization of the processor is more sensitive to larger jobs or jobs which have more executions. Lots of little jobs can influence a simple average, but their impact on the utilization of the processor is negligible.

WTR Much Different than Expectations

Certain work units are going to have WTRs much higher or lower than expected which can reasonably be explained. In many cases this work is demonstrating a false positive and should be evaluated for removal. This is a matter of judgment; there is no hard and fast rule.

Result Sets from Application Environments with Unequal Amounts of Stable Workloads

Understanding the topology of the work which is being measured as part of this analysis is important. As stated before, not all work will get the same WTR. We have also described why a weighted average is more important to use than a simple average; namely, a weighted average will better represent how the processor's capacity is being used. With this knowledge there are cases where the topology of the work influences the analysis. It is critical to understand a topology's influence and to look for cases where some additional conditions need to be examined before conclusions can be made about the relative performance of a new processor.

The easiest method of demonstrating this condition is to build a sample environment. In this case we will use a CICS MRO environment made up of TORs, AORs, and FORs. In reality any application environment which has multiple components can exhibit this behavior, but we use a CICS example because this type of environment is well understood. In order to highlight this discussion the following table will be used to represent the condition.

We will discuss each row in the table and how it influences the analysis.

Row | CLASS A (AOR) | CLASS B (FOR) | CLASS C (TOR) | Description
 1  | 5000          | 1000          | 500           | CPU Seconds
 2  | 77%           | 15%           | 8%            | Ratio of CPU Used by Class
 3  | 1.9           | 1.85          | 1.7           | Benchmark WTR for each Class
 4  | .77*1.9=1.46  | .15*1.85=.28  | .08*1.7=.14   | ITRR weighted on USED CPU Seconds (≈ 1.87)
 5  | -             | -             | -             | CPU Seconds - Repeatable Work
 6  | 20%           | 2%            | 78%           | Ratio of Repeatable Seconds
 7  | .20*1.9=.38   | .02*1.85=.04  | .78*1.7=1.33  | ITRR weighted on Repeatable CPU Seconds (≈ 1.74)

In this environment there are three classes of work. As can be seen from row 1, the total CPU seconds used by the three classes equals 6500 seconds across some interval. The CPU seconds are not evenly distributed. Class A has either more work, or more complex work, and represents 77% of the total CPU used. Class A is very much like an AOR, where most of the transaction's logic is executed. Class B is like a FOR, where the work is primarily I/O bound and the CPU used is primarily to manage data access. Class C is like a TOR, where the function is primarily transaction management and routing, with very little logic execution. Row 2 has the ratio of work as judged by CPU seconds used by each class. Normally with a weighted average we would use row 1, CPU seconds, to drive the analysis. Clearly the work in Class A drives most of the capacity of the LPAR.

Let's assume for right now we have total knowledge of the characteristics of this workload. Assume this workload was run in a controlled benchmark and we could directly measure the WTR for the work. The results of such a benchmark are listed in row 3. We have different WTRs for each of the three classes in the same environment. This is not unusual. Using this information and the CPU used for each class we can build a weighted WTR for the work using the ratio of CPU seconds by class. As such we get a WTR of 1.87 for the environment (row 4). This number is clearly being driven by the 1.9 WTR seen from the Class A workload.

But production systems don't have the luxury of benchmarks to derive WTRs. Instead we need to apply statistical tests to see if we can identify the relative performance of work. So, after applying the statistical tests for repeatability, we get the number of CPU seconds which represents the repeatable workloads. This is documented in row 5. What you see is that if you determine the ratio of repeatable seconds for the different classes of work (row 6), the ratios are very different from the actual CPU seconds used in the environment. To further this example, let's assume that when we obtained the WTRs for the repeatable workloads, the WTRs for the different classes matched the WTRs derived from the benchmarks (row 3). Using the information in row 6 it would now be possible to also build a weighted average based not on total CPU seconds but on repeatable CPU seconds. This is what is shown in row 7.

In this case the overall WTR for the environment is being driven by Class C, even though Class C represents only 8% of the entire capacity of the environment. Clearly two very different answers are derived, and both have merit depending upon the point of view. But why does it happen, and what do you do about it?

Why Does it Happen?

Experience has shown us two major reasons why it happens:

1. Precision of the data
2. Trivial amounts of CPU time

Precision is the easiest to understand. If the most precision you have in the analysis is .0001 and the work is very trivial, you will have a lot of work at the low end of the precision range. Any CPU per tran value, or I/O per CPU value, from .00005 to .00014 will resolve to .0001. Work which isn't alike will look alike due to rounding.

The second factor is CPU trivial work. The trivial nature of the CPU used in a job or transaction will cause work to fall into the precision issue mentioned above. It will also cause other issues on the processor. Processor performance is improved when cache and high speed buffer hit ratios are high. When very trivial work gets dispatched it needs to build its working set, but just as the work gets started and can begin to see improved hit ratios, the work ends. More complex transactions see more benefit, and hence better performance, because they can take advantage of the working sets they have built. This is why it is not expected for all work to get the same WTR as published LSPR ITRRs. Some workloads are more able to take advantage of the elements of the microprocessor design. But though some work may see slightly less performance, the impact is also negligible because the CPU content of the work is trivial. If the work became more complex then the work would begin to be able to take more advantage of the buffering and caching in the processor.

Let's use the table we built above to demonstrate this point. Via the benchmark we know the Class C work has a WTR of 1.7. But as we do this work we publish an average WTR for the environment of 1.87. The Class C work at 1.7 is 10% off of this published number. What does this translate to overall, though? If we take 10% of the 500 seconds used by the Class C work, this would be 50 seconds out of the 6500 seconds used. This translates to approximately 0.8% of the CPU capacity used in the interval. The WTR is lower, but the amount of work represented is also correspondingly small, as is the impact on overall processor capacity.
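A tiny demonstration of the precision problem described above: with only four decimal places of precision, distinctly different trivial ratios collapse to the same value, so unlike work can pass the repeatability test. The values are illustrative.

```python
# Genuinely different trivial ratios, all at the low end of the precision range.
ratios = [0.00006, 0.00008, 0.00011, 0.00014]

# Rounding to four decimal places, the limit of precision in the analysis.
rounded = [round(r, 4) for r in ratios]
print(rounded)   # [0.0001, 0.0001, 0.0001, 0.0001] - all look identical
```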

What To Do About It?

Knowing it is happening is the first step. Understand that much of this happens when the ratio of CPU seconds used becomes much different from the ratio of repeatable CPU seconds. Second, review the workloads to see if they have precision issues, are CPU trivial, or both.

One approach to handling this type of situation is to assume the WTRs created for the classes using the repeatable workload are a fair representation of the work running in the related class. Then, using the WTRs derived from the repeatable workloads, use as the weighting factors the CPU seconds used in the interval rather than the repeatable CPU seconds. Again, this type of outlier handling will not be needed if the ratio of CPU seconds used is close to the ratio of repeatable CPU seconds.
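A worked version of the appendix example, assuming (as the text does) that the class WTRs measured from the repeatable workloads match the benchmark WTRs in row 3. It shows how the choice of weights drives the answer, and the suggested fix: keep the repeatable-workload WTRs but weight them by the CPU actually used in the interval.

```python
wtr = {"A": 1.9, "B": 1.85, "C": 1.7}           # per-class WTRs (row 3)
used_ratio = {"A": 0.77, "B": 0.15, "C": 0.08}  # ratio of CPU used (row 2)
rep_ratio = {"A": 0.20, "B": 0.02, "C": 0.78}   # ratio of repeatable CPU (row 6)

def weighted(wtrs, weights):
    # weighted average WTR for the environment under the given weights
    return sum(wtrs[c] * weights[c] for c in wtrs)

print(weighted(wtr, used_ratio))  # ~1.88 with these rounded ratios; the paper
                                  # quotes 1.87 (row 4), driven by Class A
print(weighted(wtr, rep_ratio))   # ~1.74 (row 7), driven by Class C

# The fix: use the repeatable-workload WTRs but weight by used CPU seconds,
# which reproduces the used-CPU answer rather than the repeatable-CPU one.
print(weighted(wtr, used_ratio))
```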
