IBM Tivoli Workload Automation V8.5.1 Performance and Scale Cookbook


IBM Tivoli Workload Automation V8.5.1 Performance and Scale Cookbook

March 2010
Document version 1.0

Monica Rossi
Tivoli Workload Automation Performance Test, IBM Rome Lab

Copyright International Business Machines Corporation. All rights reserved.
US Government Users Restricted Rights: Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

CONTENTS

List of Figures
List of Tables
Revision History
1 Introduction
2 Scope
3 Hardware and software configuration
  3.1 Tivoli Dynamic Workload Console test environment configuration
    3.1.1 Results
    3.1.2 Tuning for concurrent users
  3.2 Tivoli Workload Scheduler agent test environment configuration
    3.2.1 Results
    3.2.2 Tuning for heavy agent workload
  3.3 z/OS Replan/Extend test environment configuration
    3.3.1 Results
    3.3.2 Tuning for Replan/Extend
  3.4 z/OS lightweight end-to-end mini-scale test environment configuration
    3.4.1 Results
    3.4.2 Tuning for lightweight end-to-end mini-scale

LIST OF FIGURES

Figure 1.1
Figure 2.1
Figure 3.1
Figure 3.2
Figure 3.3
Figure 3.4

LIST OF TABLES

Table 3.1
Table 3.2
Table 3.3
Table 3.4
Table 3.5
Table 3.6
Table 3.7
Table 3.8
Table 3.9
Table 3.10

REVISION HISTORY

Date          Version   Revised By                            Comments
February 27             Tivoli Workload Scheduler dev team    Version based on internal performance report

1 Introduction

The IBM Tivoli Workload Automation product suite helps reduce the complexity of managing the workload on all distributed and z/OS systems by modeling, planning, running, and controlling every phase of the batch workload. A general overview is given in Figure 1.1. The portfolio includes the following products, targeted at different workload scheduling market segments:

Tivoli Workload Scheduler for z/OS - Mainframe batch and real-time workload scheduling. Tivoli Workload Scheduler for z/OS is a calendar- and event-based enterprise batch job scheduler for mainframe environments.

Tivoli Workload Scheduler for Distributed - Distributed batch and real-time workload scheduling. Tivoli Workload Scheduler Distributed is a calendar- and event-based enterprise batch job scheduler for distributed environments (Linux, UNIX, Windows).

Tivoli Workload Scheduler for Applications - Batch and real-time workload scheduling for packaged distributed business applications. Extends calendar-based and real-time batch workload scheduling to SAP, PeopleSoft, and Oracle business applications.

Tivoli Workload Scheduler LoadLeveler - Parallel scheduling system with highly scalable parallel computing capabilities in HPC grids. Dynamically allocates workloads to specific resources in AIX and Linux-based System p and System x server clusters, based on resource utilization, available capacity, and business policies including resource and application priorities.

IBM Tivoli Workload Scheduler for Virtualized Data Centers - Integration between IBM Tivoli Workload Scheduler, Tivoli Workload Scheduler LoadLeveler, and non-IBM grid middleware that supports the Open Grid Services Architecture standards.

The portfolio also includes the following component (not a chargeable product):

Tivoli Dynamic Workload Console - Web-based single point of control for all products in the Dynamic Workload Automation portfolio.
This console can be used by all the different users and for all the main operations (configuration, monitoring, planning, and so on).

Depending on the customer's business needs or organizational structure, Tivoli Workload Automation distributed and z/OS components can be used in a mix of configurations to provide a completely distributed scheduling environment, a completely z/OS environment, or a mixed end-to-end z/OS and distributed environment. Using the Tivoli Workload Scheduler for Applications component, you can manage workload on non-Tivoli Workload Scheduler platforms and Enterprise Resource Planning applications.

Figure 1.1

2 Scope

This document provides results for the Tivoli Workload Automation V8.5.1 performance tests. In particular, we provide performance results for the components highlighted in red in Figure 2.1:

- Lightweight end-to-end scheduling (referred to simply as lightweight end-to-end from now on) from Tivoli Workload Scheduler for z/OS
- Tivoli Dynamic Workload Broker integration in the Tivoli Workload Scheduler product
- Web UI (Tivoli Dynamic Workload Console) concurrent users support

Figure 2.1

Tivoli Workload Scheduler V8.5.1 for z/OS

The new end-to-end solution can efficiently manage a network composed of more than 1000 agents connected to a single Tivoli Workload Scheduler for z/OS Controller. The old end-to-end solution with tracker agents has a limit of 1000 supported agents. The performance objective is to demonstrate that the new end-to-end infrastructure behaves like a full z/OS scheduling environment.

Replan/Extension of a daily plan including 100,000 jobs on 10 agents: comparison of the time spent from the plan creation start time to the first job run. The performance objective is to demonstrate that the time in the new lightweight end-to-end context is comparable to (and no more than 10% above) the time in the z/OS scheduling environment context.

Tivoli Workload Scheduler

Tivoli Workload Scheduler agent heavy workload support.

The new lightweight Tivoli Workload Scheduler agent can manage a workload of about 250,000 jobs on a single agent. It corresponds to about 200 jobs per minute for 24 hours. The performance objective is that the agent stays alive with a low footprint and no memory leaks.

Tivoli Dynamic Workload Console must be able to support hundreds of concurrent users. The performance objective is that Tivoli Dynamic Workload Console supports hundreds of concurrent users, with a few of them using graphical views. To simulate many concurrent users, IBM Rational Performance Tester was used.

Tivoli Dynamic Workload Console refresh of the query action on plan objects must improve. The performance objective is that the Tivoli Dynamic Workload Console V8.5.1 query refresh response time improves by about 20% in comparison with Tivoli Dynamic Workload Console V8.5.

3 Hardware and software configuration

This chapter covers all the aspects used to build the test environment and run the test scenarios: topology, hardware, software, and scheduling environment. It is split into four subsections: two for distributed test results and two for z/OS test results.

3.1 Tivoli Dynamic Workload Console test environment configuration

Figure 3.1 shows the topology used to build the environment to perform the specific Tivoli Dynamic Workload Console-based tests: concurrent users (see the related performance goal statement on page 3) and query refresh (see the related performance goal statement on page 3). From now on this environment is referred to as ENV1. Table 3.1 shows the hardware details of all the machines used in the test environment ENV1.

Figure 3.1

ENV 1                                    PROCESSOR                                     MEMORY    SWAP
Tivoli Workload Scheduler server         4 x Intel Xeon CPU 3.80 GHz                   4 GB      2 GB
DB server                                4 x Intel Pentium III (Katmai) MHz            3 GB      2 GB
Tivoli Dynamic Workload Console server   4 x 2 PowerPC_POWER MHz                       4.12 GB   2 GB
RPT server                               4 x Dual-Core AMD Opteron 2210 EE 1.80 GHz    5 GB      -
Client                                   Intel Pentium IV 2.80 GHz                     3 GB      -

Table 3.1

Table 3.2 shows the software details of all the machines involved in the test environment ENV1.

ENV 1                                    OS TYPE               SOFTWARE
Tivoli Workload Scheduler server         RHEL 5.1 (Tikanga)    Tivoli Workload Scheduler
DB server                                SLES 10               DB2 9.5 FP1
Tivoli Dynamic Workload Console server   AIX 5.3 ML 08         Tivoli Dynamic Workload Console
RPT server                               Win 2K3 Server SP2    RPT
Client                                   Win XP SP2            Mozilla Firefox

Table 3.2

For the Tivoli Dynamic Workload Console tests, the Tivoli Workload Scheduler DB contained 100 job definitions and 1000 job streams (each job stream contains 100 jobs, has 2 run cycles, and uses all types of dependencies). Tivoli Workload Scheduler plan size: 100K job instances and 1000 job stream instances.
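As a hypothetical sketch of such a definition in Tivoli Workload Scheduler composer syntax (the workstation name MASTER, the stream and job names, and the run-cycle rules are invented for illustration; only two of the 100 jobs and a single FOLLOWS dependency are shown):

```
SCHEDULE MASTER#JS0001
ON RUNCYCLE RC1 "FREQ=DAILY;"
ON RUNCYCLE RC2 "FREQ=WEEKLY;BYDAY=MO"
:
MASTER#JOB001
MASTER#JOB002
 FOLLOWS MASTER#JOB001
END
```

In the real test database this pattern was repeated across the 1000 job streams, with dependencies of all supported types.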

3.1.1 Results

With this configuration we were able to successfully run the following scenario.

Scenario: 102 users connected at the same time; 2 of them use graphical views, the others are operators, developers, and administrators. All users log in within 1 minute of each other. Users are divided into six groups. Five groups are composed of 20 users each, performing the following monitoring tasks:

- All jobs in waiting
- Browse job log
- Monitor prompts
- All jobs in success
- All job streams in plan
- All jobs in error

Each user can run more than one task in sequence. Some queries are small (for example, 400 objects) while others are large (for example, 30K objects), based on a plan of about 100K jobs and 1000 job streams. The last group consists of two users performing graphical view tasks:

- Job stream view
- Impact view

Some delays between tasks and groups were introduced to simulate a real workload. For these specific tests, Rational Performance Tester (RPT) was used to simulate the concurrent users.

Moreover, with the same configuration (ENV1), we measured that Tivoli Dynamic Workload Console V8.5.1 is better than Tivoli Dynamic Workload Console V8.5 by about 62% when performing the "Jobs in plan" query refresh (that is, a query of 100 jobs against a plan of 100K jobs).

3.1.2 Tuning for concurrent users

To manage the concurrent users with this workload, some tuning is needed.

Embedded application server (eWAS) changes, in the server.xml file:

- JVM entries: initialHeapSize="1024" maximumHeapSize="2048"
- transportChannels entry: "WC_adminhost" maxOpenConnections="20000"
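For illustration, the two entries above correspond to fragments of the eWAS server.xml roughly like the following; the surrounding elements are simplified, and everything except the tuned attributes is a placeholder rather than a copy of a real profile:

```xml
<!-- server.xml sketch: JVM heap settings (attribute names per the
     embedded WebSphere schema; enclosing process definition omitted) -->
<jvmEntries initialHeapSize="1024" maximumHeapSize="2048"/>

<!-- server.xml sketch: raise the open-connection cap on the
     WC_adminhost inbound transport channel -->
<transportChannels xmi:type="channelservice.channels:TCPInboundChannel"
                   endPointName="WC_adminhost"
                   maxOpenConnections="20000"/>
```

The heap increase accommodates the memory footprint of 100+ concurrent sessions, while the connection cap prevents the transport channel from refusing logins during the one-minute ramp-up.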

3.2 Tivoli Workload Scheduler agent test environment configuration

Figure 3.2 shows the topology used to build the environment to perform the Tivoli Workload Scheduler tests: Tivoli Workload Scheduler agent heavy workload support (see the related performance goal statement on page 3). This environment will be referred to as ENV2. For ENV2 there are two different configurations (ENV2A and ENV2B), which run the same tests on different operating systems. Table 3.3 and Table 3.4 show the hardware details of all the machines used in these environments.

Figure 3.2

ENV 2A                                   PROCESSOR                      MEMORY   SWAP
Tivoli Workload Scheduler + DB server    4 x Intel Xeon CPU 3.80 GHz    4 GB     2 GB
Tivoli Workload Scheduler agent          4 x Intel Xeon CPU 2.66 GHz    2.5 GB   -

Table 3.3

ENV 2B                                   PROCESSOR                      MEMORY   SWAP
Tivoli Workload Scheduler + DB server    2 x PowerPC_POWER GHz          6 GB     512 MB
Tivoli Workload Scheduler agent          4 x Intel Xeon CPU 2.80 GHz    4.5 GB   2 GB

Table 3.4

Table 3.5 and Table 3.6 show the software details for ENV2 and its two configurations (ENV2A and ENV2B).

ENV 2A                                   OS TYPE               SOFTWARE
Tivoli Workload Scheduler + DB server    AIX 6.1 ML 02         Tivoli Workload Scheduler, DB2 9.5 FP1
Tivoli Workload Scheduler agent          Win 2K3 Server SP2    Tivoli Workload Scheduler

Table 3.5

ENV 2B                                   OS TYPE               SOFTWARE
Tivoli Workload Scheduler + DB server    RHEL 5.1 (Tikanga)    Tivoli Workload Scheduler, DB2 9.5 FP1
Tivoli Workload Scheduler agent          SLES 10 pl 1          Tivoli Workload Scheduler

Table 3.6

For the Tivoli Workload Scheduler agent workload scenario we had a DB size of 174 job definitions and 1 job stream. The necessary workload was obtained with a script submitting via conman (sbs) one job stream containing 174 jobs with an AT condition every minute (at job stream level), with a delay of 40 seconds between each job stream. Tivoli Workload Scheduler plan final size: 250K job instances and 1436 job stream instances. Jobs are scheduled on the broker workstation (Tivoli Dynamic Workload Broker).

3.2.1 Results

With these configurations we were able to successfully run the following scenarios:

1. Submit a workload of about 250,000 jobs on a single agent (UNIX) to be run in 24 hours
2. Run the same scenario against a Windows agent

3.2.2 Tuning for heavy agent workload

To achieve this workload, some tuning is needed.

Tivoli Dynamic Workload Broker side changes: in the JobDispatcherConfig.properties file, add the following sections to override the default values:

# Override hidden settings in JDEJB.jar
Queue.actions.0 = cancel, cancelallocation, cancelorphanallocation
Queue.size.0 = 10

Queue.actions.1 = reallocateallocation
Queue.size.1 = 10
Queue.actions.2 = updatefailed
Queue.size.2 = 10
# Relevant to jobs submitted from the Tivoli Workload Scheduler bridge, when successful
Queue.actions.3 = completed
Queue.size.3 = 30
Queue.actions.4 = execute
Queue.size.4 = 30
Queue.actions.5 = submitted
Queue.size.5 = 30
Queue.actions.6 = notification
Queue.size.6 = 30

This is because the default behavior consists of 3 queues for all the actions; we found that at least 7 queues are needed, each with a different size. For reference, the default configuration is the following (where not explicitly set, the default queue size is 10):

Queue.actions.0 = cancel, cancelallocation, completed, cancelorphanallocation
Queue.actions.1 = execute, reallocateallocation
Queue.size.1 = 20
Queue.actions.2 = submitted, notification, updatefailed

In the ResourceAdvisorConfig.properties file, add the following sections:

# To speed up the resource advisor
TimeSlotLength=10
MaxAllocsPerTimeSlot=1000
MaxAllocsInCache=

This is just to improve broker performance in managing jobs.

Tivoli Workload Scheduler side changes:

- Set the broker CPU limit to SYS (that is, unlimited). This is because the notification rate is slower than the submission rate, so Tivoli Dynamic Workload Broker (job dispatcher) threads are queued waiting for notification status; in this way the maximum Tivoli Workload Scheduler limit (1024) is easily reached even though the queues are separated.

Embedded application server (eWAS) changes:

- resources.xml file, connection pool for "DB2 Type 4 JDBC Provider": maxConnections="150"
- server.xml file, JVM entries: initialHeapSize="1024" maximumHeapSize="2048"

This is a consequence of managing more than 100 concurrent submitting threads.

Operating system changes:

- Set ulimit to unlimited (physical memory for the java process)

3.3 z/OS Replan/Extend test environment configuration

Figure 3.3 shows the topology used to build the environment to perform the Replan/Extension tests with 100K jobs to be deployed on 10 agents, that is, 10K jobs for each agent (see the related performance goal statement on page 3). This environment is referred to as ENV3.

Figure 3.3

Table 3.7 and Table 3.8 show the hardware and software details of all the machines used in ENV3.

ENV 3                                  SYSTEM/PROCESSOR                 MEMORY    SWAP
Tivoli Workload Scheduler for z/OS     CMOS z Mod                       GB        -
Tivoli Workload Scheduler agent        2 x PowerPC_POWER GHz            2 GB      768 MB
Tivoli Workload Scheduler agent        2 x PowerPC_POWER GHz            2 GB      768 MB
Tivoli Workload Scheduler agent        2 x PowerPC_POWER GHz            2 GB      768 MB
Tivoli Workload Scheduler agent        2 x Intel Pentium IV 3.0 GHz     3 GB      -
Tivoli Workload Scheduler agent        4 x Intel Xeon CPU 2.80 GHz      2.5 GB    -
Tivoli Workload Scheduler agent        2 x Intel Pentium IV 3.0 GHz     2.5 GB    -
Tivoli Workload Scheduler agent        Intel Pentium IV 1.8 GHz         1 GB      -
Tivoli Workload Scheduler agent        4 x Intel Xeon CPU 3.06 GHz      2.5 GB    -
Tivoli Workload Scheduler agent        4 x Intel Xeon CPU 2.66 GHz      2.5 GB    2 GB
Tivoli Workload Scheduler agent        4 x Intel Xeon CPU 2.80 GHz      4.5 GB    2 GB
Tivoli Workload Scheduler for z/OS     CMOS z Mod                       GB        -

Table 3.7

ENV 3                                  OS TYPE               SOFTWARE
Tivoli Workload Scheduler for z/OS     z/OS 1.8              Tivoli Workload Scheduler for z/OS
Tivoli Workload Scheduler agent        AIX 5.3 ML 07         Tivoli Workload Scheduler
Tivoli Workload Scheduler agent        AIX 5.3 ML 07         Tivoli Workload Scheduler
Tivoli Workload Scheduler agent        AIX 5.3 ML 06         Tivoli Workload Scheduler
Tivoli Workload Scheduler agent        Win 2K3 Server SP2    Tivoli Workload Scheduler
Tivoli Workload Scheduler agent        Win 2K3 Server SP2    Tivoli Workload Scheduler
Tivoli Workload Scheduler agent        Win 2K3 Server SP2    Tivoli Workload Scheduler
Tivoli Workload Scheduler agent        Win 2K3 Server SP2    Tivoli Workload Scheduler
Tivoli Workload Scheduler agent        Win 2K3 Server SP2    Tivoli Workload Scheduler
Tivoli Workload Scheduler agent        RHEL 5.1 (Tikanga)    Tivoli Workload Scheduler
Tivoli Workload Scheduler agent        SLES 10 pl 1          Tivoli Workload Scheduler

Table 3.8

The scheduling environment used to obtain the workload is one group containing 10 job streams with 10 trivial jobs each. The group has a run cycle with the Every option enabled.

3.3.1 Results

With these configurations we were able to successfully run the following scenarios:

1. Extend of a 100,000-job plan on 10 Tivoli Workload Scheduler lightweight end-to-end agents, compared to an extend of 100,000 jobs on 10 z/OS CPUs
2. Extend of a 100,000-job plan on 10 Tivoli Workload Scheduler lightweight end-to-end agents, compared to an extend of 100,000 jobs on 10 Tivoli Workload Scheduler standard agents

The results show that under the specified conditions, lightweight end-to-end is about 70% better than the traditional end-to-end and is almost comparable to z/OS CPUs.

3.3.2 Tuning for Replan/Extend

To run the above scenarios, the following Controller parameters were changed:

HTTP Client thread number - CLNTHREADNUM (default 10)
The number of threads used by the HTTP client task to submit more than one job at the same time. Valid values are from 5 to 100.

HTTP Server thread number - SRVTHREADNUM (default 10)
The number of threads used by the HTTP server task to process more than one event notified by the z-centric agents at the same time. Valid values are from 2 to 100.

Run 1: CLNTHREADNUM(10) SRVTHREADNUM(10)
Run 2: CLNTHREADNUM(15) SRVTHREADNUM(15)

After these two runs we can say that increasing these parameters has only a slight impact on this kind of workload and configuration, so the default values are recommended.
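In the product documentation for z-centric connectivity, these two parameters belong to the HTTPOPTS initialization statement of the Controller, so a parameter-library fragment for Run 2 might look like the following (member placement and the surrounding statement layout are assumptions, not taken from the test environment):

```
/* Hypothetical EQQPARM fragment for Run 2 */
HTTPOPTS CLNTHREADNUM(15)    /* HTTP client (outbound) threads */
         SRVTHREADNUM(15)    /* HTTP server (inbound) threads  */
```

The Controller must be restarted for the new thread counts to take effect.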

3.4 z/OS lightweight end-to-end mini-scale test environment configuration

Figure 3.4 shows the topology used to build the environment to perform the lightweight end-to-end mini-scale tests (see the related performance goal statement on page 3). This environment is referred to as ENV4.

Figure 3.4

Table 3.9 and Table 3.10 show the hardware and software details of all the machines used in the test environment ENV4.

ENV 4                                  SYSTEM/PROCESSOR               MEMORY    SWAP
Tivoli Workload Scheduler for z/OS     CMOS z Mod                     GB        -
Agent simulator                        4 x Intel Xeon CPU 3.06 GHz    2.5 GB    -
Agent simulator                        4 x Intel Xeon CPU 2.66 GHz    2.5 GB    2 GB
Agent simulator                        4 x Intel Xeon CPU 2.80 GHz    4.5 GB    2 GB

Table 3.9

ENV 4                                  OS TYPE               SOFTWARE
Tivoli Workload Scheduler for z/OS     z/OS 1.8              Tivoli Workload Scheduler for z/OS
Tivoli Workload Scheduler agent        Win 2K3 Server SP2    Java simulator
Tivoli Workload Scheduler agent        RHEL 5.1 (Tikanga)    Java simulator
Tivoli Workload Scheduler agent        SLES 10 pl 1          Java simulator

Table 3.10

The following describes the scheduling environment used for each scenario to obtain the desired workload.

Scenario 1: 100K jobs/1000 agents. One group containing 4 job streams with 250 trivial jobs each. The group has a run cycle with the Every option enabled.

Scenario 2: 200K jobs/2000 agents. One group containing 8 job streams with 250 trivial jobs each. The group has a run cycle with the Every option enabled.

Scenario 3: 300K jobs/3000 agents. One group containing 12 job streams with 250 trivial jobs each. The group has a run cycle with the Every option enabled.

3.4.1 Results

With these configurations we were able to successfully run the following scenarios:

1. Deploy jobs on 1000 Tivoli Workload Scheduler agents.
2. Deploy jobs on 2000 Tivoli Workload Scheduler agents.
3. Deploy jobs on 3000 Tivoli Workload Scheduler agents.

The results were achieved using an internal Java agent simulator. The scope of these tests was to verify the scalability of the backend.

3.4.2 Tuning for lightweight end-to-end mini-scale

To run the above scenarios, some Controller parameters were changed according to the different workloads:

HTTP Client thread number - CLNTHREADNUM (default 10)
The number of threads used by the HTTP client task to submit more than one job at the same time. Valid values are from 5 to 100.

HTTP Server thread number - SRVTHREADNUM (default 10)
The number of threads used by the HTTP server task to process more than one event notified by the z-centric agents at the same time. Valid values are from 2 to 100.

Scenario 1: 100K jobs/1000 agents
CLNTHREADNUM(30)

SRVTHREADNUM(30)

Scenario 2: 200K jobs/2000 agents
CLNTHREADNUM(60)
SRVTHREADNUM(60)

Scenario 3: 300K jobs/3000 agents
CLNTHREADNUM(100)
SRVTHREADNUM(100)

Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information".

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Microsoft and Windows are registered trademarks of Microsoft Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Other company, product, and service names may be trademarks or service marks of others.

Copyright IBM Corporation 2010

This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this publication in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this publication. The furnishing of this publication does not give you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY U.S.A.

For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to:

Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan, Ltd.
, Shimotsuruma, Yamato-shi
Kanagawa Japan

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement might not apply to you. This information could include technical inaccuracies or typographical errors.
Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product, and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact:

IBM Corporation
2Z4A/ Burnet Road
Austin, TX U.S.A.

Such information may be available, subject to appropriate terms and conditions, including in some cases payment of a fee.

The licensed program described in this publication and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any equivalent agreement between us.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations.
To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.