Design and Implementation Best Practices for EMC FAST.X


EMC TECHNICAL NOTES
Design and Implementation Best Practices for EMC FAST.X
Technical Notes P/N H14568 REV 1.3
May 2016

This FAST.X Technical Notes document contains information on these topics:

Table of Contents
Executive Summary
Audience
Conventions used in this document
FAST.X and VMAX3
HYPERMAX OS Components Required by FAST.X
DX directors
edisks
External disk group
Virtual RAID group
Other Important HYPERMAX OS Components
Thin devices (TDEVs)
Data Devices (TDATs)
Benefits of FAST.X
Use cases
Configuring the VMAX3 and SAN for FAST.X
Configuring DX directors
DX director cores
Zoning
Modes of operation
External provisioning
Incorporation
General Rules for External Provisioning and Incorporation
Ensuring external data integrity

Creating and Presenting Devices in the External Array
Handling of Thinly Provisioned External Volumes
Replication Considerations
FAST Support
Determining External Storage Service Level Expectations for FAST
Moving Data Between SRPs
Support Added to FAST.X
FAST.X and Data at Rest Encryption
FAST.X system limitations
FAST.X Restrictions
Software and HYPERMAX OS Version Requirements
Supported External Array Platforms
Recommended External Volume Sizes
FAST.X with CloudArray
VMAX3 to CloudArray connectivity
VMAX3 configuration considerations
CloudArray Configuration considerations
FAST.X with Solutions Enabler
Getting DX Information and Port WWNs for Zoning
Examining the FAST.X environment
Confirm the Availability of the External Volumes
Configure edisks for External Provisioning
Further Examining the Disk Group
Configure edisks for Incorporation
Creating a Storage Group to Assign Volumes to the Default SRP
Creating Thin Volumes for the Default SRP
Diagram of the Configured Environment
Moving Volumes to an external SRP with EFD Storage Only
Local Replication and FAST.X
Removing FAST.X Components from an Empty SRP
Removing FAST.X Components from an SRP Containing Volumes
Appendix A: Terminology and Acronyms
Table 1. Terminology
Table 2. Acronyms and abbreviations
Appendix B: VMAX3 and External EMC Array Configuration
Confirming the Solutions Enabler and HYPERMAX OS versions
Before Configuring DX directors
EMC Symmetrix DMX, VMAX, VMAX3
EMC XtremIO
EMC VNX

Executive Summary
With substantial increases in the amount of data stored, businesses continue to strive for ways to leverage and extend the value of existing resources, reduce the cost of management, and drive the best performance achievable in the environment. Adding to the challenge is the desire to ensure that data is kept on an appropriate storage tier so that it is available when needed but stored in as cost-effective and environmentally responsible a manner as possible.

FAST.X addresses many of these concerns by allowing qualified storage platforms to be used as physical disk space for VMAX3 arrays. This allows enterprises to continue to leverage VMAX3's availability and reliability, along with proven VMAX local and remote replication features, while still utilizing existing EMC or third-party storage. These features include VMAX3 Service Level Objective (SLO) Provisioning, which gives VMAX3 and FAST.X unparalleled ease of use, along with proven and robust VMAX3 software and HYPERMAX OS features such as SRDF, SnapVX, and FAST.

Audience
These Technical Notes are intended for anyone who needs to understand the concept of FAST.X and how it is implemented and configured in the VMAX3 and specific external arrays. This document specifically targets EMC customers, sales, and field technical staff who are designing and implementing a FAST.X solution.

Conventions used in this document
- An ellipsis (...) appearing on a line by itself indicates that unnecessary command output has been removed.
- Command line syntax, output, and examples appear in the Courier New font.
- GUI objects that must be clicked on are noted in bold.

FAST.X and VMAX3
FAST.X allows an external disk array to provide physical storage for VMAX3 volumes. This implementation required the development of new entities within the VMAX3 that allow it to attach to external array storage ports and configure external volumes to be used as physical storage.
HYPERMAX OS Components Required by FAST.X
FAST.X external array connectivity is implemented entirely in HYPERMAX OS and does not require any additional VMAX3 hardware. Connectivity with an external array is established through the same fibre channel I/O modules currently used for configuring FAs for host connectivity and RFs for SRDF connectivity. Instead of running FA or RF emulation, however, the processors run a different type of emulation.

DX directors
DX emulation has been developed that adapts the traditional SCSI Disk Director (DS) emulation model to act on external volumes as though they were physical drives. The fact that a DX, which stands for DS external, is using external logical units, instead of a DS using internal physical disks, is transparent to other director emulations and to the HYPERMAX OS infrastructure. With respect to most non-drive-specific HYPERMAX OS functions, a DX behaves the same as a DS, which is the VMAX3 disk controller that provides connectivity to internal physical disks.

Note: A DS is equivalent to a DA, or disk adapter, in previous generation VMAX arrays.

edisks
An edisk is a logical representation of an external volume when it is added into the VMAX3 configuration. The terms edisk and external spindle both refer to this external volume once it has been placed in an external disk group and a virtual RAID group.

External disk group
External disk groups are virtual disk groups that are created to contain edisks. Disk group numbers reserved for external disk groups start at 512. External volumes and internal physical spindles cannot be mixed in a disk group. External disk groups are unprotected because external LUNs are protected by RAID protection in the external array, not in the VMAX3.

Virtual RAID group
An unprotected virtual RAID group is created for each edisk that is added to the system. The RAID group is virtual because edisks are not protected locally by the VMAX3 array. Instead, they rely on the local RAID protection provided by the external array.

Other Important HYPERMAX OS Components
When Virtual Provisioning, which is EMC's implementation of thin provisioning, was first released with VMAX storage arrays, two new device types were introduced to support this functionality.

Thin devices (TDEVs)
Thin devices are the host-addressable devices that are part of VMAX3 Virtual Provisioning.
They are created with a size but no assigned RAID protection and inherit the RAID protection of the Data devices contained in the pool where they are bound. In VMAX3, all host-addressable devices are thin devices.

Data Devices (TDATs)
Data devices are a type of internal VMAX3 device dedicated to providing the storage for thin devices in a VMAX3 array. They are configured in HYPERMAX OS as part of adding storage to a VMAX3 array and are configured automatically when an edisk is virtualized into a FAST.X environment. There is a 1:1 relationship between a Data device and an edisk in a FAST.X configuration. Note that the edisk is shown in

Figure 1 but not the TDAT. Because of this 1:1 relationship, the Data device is implied when an edisk is shown.

Figure 1. High-level view of a FAST.X environment

Benefits of FAST.X
- Simplifies management of virtualized multi-vendor, or EMC, storage by allowing heterogeneous arrays to be managed by Solutions Enabler and Unisphere for VMAX.
- Allows data mobility and migration between heterogeneous storage arrays and between heterogeneous arrays and VMAX3.
- Offers Virtual Provisioning benefits to external arrays.
- Allows VMAX3 enterprise replication technologies, such as SRDF and SnapVX, to be used to replicate storage that exists on an external array.
- Extends the value of existing disk arrays by allowing them to be used as an additional, FAST-managed storage tier.
- Dynamically determines a Service Level Expectation (SLE) for external arrays to align with a Service Level Objective (SLO).

Use cases
FAST.X allows the continued use of external disk arrays while taking advantage of most VMAX3 HYPERMAX OS features. FAST.X allows organizations to continue to use existing disk arrays as additional storage capacity. Along with this, the data can be managed,

controlled, and monitored in the same way as native VMAX3 data. Other than with CloudArray in the initial release, almost all of the features supported on VMAX3 devices using internal storage are also supported with FAST.X. Features like FAST, Quality of Service (QoS), and SLO Provisioning, among many others, are available to be used with external storage.

FAST.X protects data on external arrays using VMAX3 local and remote replication technologies. Local and remote replication technologies, such as SnapVX, SRDF, and Open Replicator, are all supported with VMAX3 devices using external storage. For example, if the goal is to use SRDF to replicate data between an XtremIO and a VNX, FAST.X will support it.

FAST.X can migrate data between VMAX3 arrays and external storage as part of a tiering or asset-management strategy.

FAST.X provides a pool of extra storage. Because of the ease of migration between a VMAX3 array and any external array configured in a FAST.X environment, an external array could conceivably be used as a temporary repository for data in case of a shortage of available physical disk space in the VMAX3. For example, a VMAX array with all SATA drives could provide spillover for a number of VMAX3 arrays using oversubscribed thin pools. As the production VMAX3 thin pools reach a set threshold, a percentage of the least active allocated capacity could be pushed to the external tier. When additional storage is added to the production VMAX3 array and the thin pool or pools are expanded, the data pushed to the external array could be pulled back to the production VMAX3 array.

Configuring the VMAX3 and SAN for FAST.X

Configuring DX directors
DX directors are configured in dual initiator (DI) pairs like traditional DAs. They are fully redundant like DAs and, when necessary, a failing director fails over to the other, fully functioning director in the DI pair. EMC requires a minimum of four paths to external devices.
This means that at least four ports belonging to a single DX dual-initiator pair must be configured. DX DI pairs are configured on different directors on the same engine. For example, the engine shown in Figure 2 has two directors, each of which contains four I/O modules with four ports each. Because it is valid to add DX emulation to both director 1 and director 2 and use any two ports on each of those directors, it is possible to create a valid FAST.X DX configuration with two ports that physically reside on the same I/O module. For example, using ports 4 and 7 on Director 1, which are on the same I/O module, along with ports 25 and 31 on Director 2, which are not, will pass FAST.X's pathing compliance check. This configuration is allowed even though an I/O module replacement could affect both ports 4 and 7, requiring that all paths to external devices through Director 1 fail over to Director 2.

A better choice for the two ports from Director 1 would be ports 4 and 9. With that configuration, any of the four front-end (FE) fibre channel I/O modules could be

replaced without requiring a failover to that director's DI partner.

When a DX fails over, all edisks will maintain access to the external volumes without interruption; however, the number of paths to the external volumes will be reduced by two. This could cause a potentially significant impact on performance. Recovering from a DI failover requires manual intervention from EMC Customer Service, but is nondisruptive.

Figure 2. Single engine with 8 Fibre Channel I/O modules

If converting FA ports to DX ports, any previously assigned devices must be unmapped and unmasked, and the FA ports must be removed from any port groups.

DX director cores
The number of processor cores assigned to the DX directors depends upon configuration and profile. A DX heavy core allocation is available for configuring high-performance external arrays such as a VMAX array with flash drives or an XtremIO.

Note: DX directors and their core allocation are not user-configurable. EMC Customer Service must create and configure them.

Zoning
The zoning examples provided below allow the servicing of the components within a FAST.X environment without incurring data unavailability and are required in order for the configuration to be supported. The potential service activities include:
- Cable changes and individual FC port servicing
- VMAX director and I/O module replacement
- External array controller replacement
- External array firmware upgrade
- SAN fabric servicing

Proper zoning also ensures that a failing switch or storage controller won't cause a DX to fail over, which requires manual recovery.
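The port-selection guidance discussed in the Configuring DX directors section above (avoiding two DX ports on the same I/O module) can be expressed as a simple check. The sketch below is purely illustrative: the port-to-module layout is hypothetical (four modules of four ports per director), not taken from an actual VMAX3 configuration.

```python
# Illustrative sketch only: checks that no two chosen DX ports share a
# front-end I/O module, so that a module replacement cannot take down
# both paths on one director. The port-to-module layout is hypothetical.

PORT_TO_MODULE = {p: f"dir1-mod{(p - 4) // 4}" for p in range(4, 20)}
PORT_TO_MODULE.update({p: f"dir2-mod{(p - 20) // 4}" for p in range(20, 36)})

def modules_are_diverse(ports):
    """True if every port in the selection sits on a different I/O module."""
    modules = [PORT_TO_MODULE[p] for p in ports]
    return len(set(modules)) == len(modules)
```

With this hypothetical layout, ports 4 and 7 fail the check (same module, the weaker choice described above), while ports 4 and 9, or 25 and 31, pass it.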

Figures 3 through 5 below show examples of how to honor these requirements in single and dual-fabric environments that represent common configurations. These zoning requirements are in addition to the existing connectivity requirement of two physically independent SCSI I-T (Initiator-Target) nexuses per DX for each external volume to be configured through a DX pair. These base connectivity requirements are checked by HYPERMAX OS during the edisk configuration process. If the zoning and the external array storage controller volume assignments do not pass FAST.X's compliance check, attempts to configure edisks will fail.

When configuring zoning for FAST.X, using a single-target per single-initiator (1:1) zoning scheme is preferred. If the FC switch zone count limitation has been reached, it is possible to use single-target per multiple-initiator (1:Many) zoning.

IMPORTANT: With non-VMAX3 and non-ALUA external arrays, remote controllers must not be zoned to two ports on any single DX director. Failure to follow this rule can lead to data unavailability during servicing.

Single fabric with two external storage ports
Single-fabric connectivity is supported, though it does not provide the redundancy of a dual-fabric configuration. As an example, take a FAST.X environment configured as follows:
- A VMAX3 array with DX emulation running on directors 1 and 2
- An external array with two storage controllers, each with one port being used for FAST.X

This FAST.X configuration requires four zones.
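As a sketch, the four 1:1 zones for this example can be enumerated programmatically. The port names below are hypothetical placeholders, and the pairing simply honors the rule above that a non-VMAX3, non-ALUA controller is zoned to at most one port per DX director.

```python
# Illustrative sketch: build 1:1 zones for the single-fabric example
# (two DX directors with two ports each, two storage controllers with
# one port each). Names are hypothetical placeholders, not real WWPNs.

dx_ports = [("dir1", 4), ("dir1", 9), ("dir2", 25), ("dir2", 31)]
storage_ports = ["ctrlA-p0", "ctrlB-p0"]

def build_zones(dx_ports, storage_ports):
    """Pair each DX port 1:1 with a storage port so that, for this
    example, each controller sees only one port per DX director."""
    zones = []
    for i, (director, port) in enumerate(dx_ports):
        target = storage_ports[i % len(storage_ports)]
        zones.append((f"{director}-p{port}", target))
    return zones

zones = build_zones(dx_ports, storage_ports)  # four zones
```

The result is four zones, and no controller is zoned to two ports on any single DX director.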

Figure 3. Single fabric zoning

Dual fabric with two external storage ports
Though single-fabric connectivity is supported, best practice for redundancy is achieved by using dual fabrics. One DX initiator port from each DX director pair must connect to one fabric, while the other DX initiator port connects to the other fabric. The LUNs must be reachable from at least one storage port on at least two external storage controllers or directors. Also, single-initiator zoning is recommended. As an example, take a FAST.X environment configured as follows:
- A VMAX3 array with DX emulation running on directors 1 and 2
- An external array with two storage controllers, each with one port being used for FAST.X

This FAST.X configuration across a dual-switch fabric with two external storage ports requires four zones (two per fabric):

Figure 4. Dual-fabric zoning

Note: Figures 3 and 4 show the logical, not physical, connections. In both diagrams there is a single physical connection from each DX port to the switch(es), for a total of four. There are only two physical connections, one for each external storage port, from the switch(es) to the external arrays.

Dual fabric with four external storage ports
Best practice for redundancy with dual fabrics is achieved by using four external array ports. As an example, take a FAST.X environment configured as follows:
- A VMAX3 array with DX emulation running on directors 1 and 2
- An external array with two storage controllers, each with two ports being used for FAST.X

This FAST.X configuration across a dual-switch fabric with four external storage ports requires four zones (two per fabric):

Figure 5. Expanded dual-fabric zoning

Note: As with Figures 3 and 4, Figure 5 shows the logical connections between the ports and the fabric. However, because the external array controllers each have two ports in use, the numbers of logical and physical connections in Figure 5 are identical.

Direct-attach configurations
Direct-attach arbitrated loop (FC-AL) configurations are not supported with FAST.X. External arrays must be connected to the DX ports through a fibre channel switch.

Configuring external LUNs
In order to achieve maximum redundancy, all external volumes must be available on all external storage controller ports that are being configured for FAST.X. For redundancy, up to four paths may be configured to external volumes. These paths are used in a round-robin fashion. If an external volume is not reachable through all paths in the FAST.X configuration, attempting to virtualize the volume as an edisk will fail.

Distance between the VMAX and the external array
EMC requires that the external array be located within the same data center as the VMAX3 array. If the data center is spread out across multiple floors in a single building, the external array and VMAX3 array can be on different floors.

Sharing of DX and storage ports
Both DX ports and external storage ports can be shared.

DX ports can be zoned to multiple sets of storage ports on external arrays. This means that multiple external arrays can be connected to a single set of DX ports as long as the configurations are compliant with FAST.X requirements. Storage ports on an external array can also be shared between hosts and DX initiators, or between DX initiators from multiple VMAX3 arrays. Devices available on the external array's storage ports must be accessible to a single FAST.X configuration or by hosts, but not both.

If an EMC array is providing external storage, VMAX3 volumes can be mapped to the FA and masked to the WWNs of the DX ports on which they will be available. For third-party arrays, the native method of segmenting LUNs on a storage port can be used in the same way that LUN masking is used with a VMAX system.

Modes of operation
FAST.X has two modes of operation, depending on whether the external Logical Unit (or LU) is to be used as raw storage space or has data that must be preserved and accessed through a VMAX3 thin device. The devices on the external array used by FAST.X as external storage are host-addressable volumes that are normally presented from the external array to HBAs for direct host access. With FAST.X, they are presented to the DX initiators instead.

External Provisioning: Allows the user to access LUs existing on external storage as raw capacity for new VMAX3 devices. These devices are called externally provisioned devices.

Incorporation: Allows the user to preserve existing data on external LUNs and access it through VMAX3 volumes. These devices are called incorporated devices.

Note: Incorporation is supported with the 5977 Q Service Release and later versions of HYPERMAX OS.

External provisioning
When using FAST.X to configure an external LU, HYPERMAX OS creates an external disk group and a thin pool and configures the external LU as an edisk, which is added to the external disk group.
External disk groups are separate from disk groups containing internal physical disks and start at disk group number 512. Because RAID protection is provided by the external array, edisks are added to unprotected virtual RAID groups. HYPERMAX OS also creates a data pool and a Data device (or TDAT) for each edisk that is configured in FAST.X. There is a 1:1:1 relationship between the external volume, the edisk, and the TDAT. VMAX3 host-addressable thin volumes can then be created from the Storage Resource Pool (SRP) that is associated with the data pool and external disk group.

External provisioning should only be used with external volumes that contain no data or unwanted data. External volumes are reformatted as part of the edisk configuration process; therefore, any data residing on the volume prior to adding it as an edisk will be inaccessible.

Figure 6. External Provisioning

Incorporation
Incorporation is used when data on an external LUN must be preserved and accessed through a VMAX3 thin device. As with external provisioning, external disk groups for Incorporation start at disk group number 512, and edisks are added to unprotected virtual RAID groups with the protection provided by the external array.

External LUNs that are being added through either standard or VP encapsulation can be either thick or thin on the external array. If the external LUNs are thin, they can be fully allocated, partially allocated, or unallocated.

When incorporating an external LU, HYPERMAX OS creates an external disk group and a thin pool and configures the external LU as an edisk, which is added to the external disk group. It also creates a data pool and a Data device (or TDAT) for each edisk that is configured in FAST.X. A VMAX3 thin device is created as well, allowing access to the data that has been preserved on the external LUN. There is a 1:1:1:1 relationship between the external volume, the edisk, the TDAT, and the VMAX3 thin volume.

IMPORTANT: Once an external LUN has been incorporated, its data can only be accessed through the VMAX3 thin LUN. There is no method for un-incorporating an external LUN and preserving its data.
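A minimal sketch may help fix the 1:1:1 (external provisioning) and 1:1:1:1 (incorporation) relationships described above. The names and structure here are purely illustrative, not an actual HYPERMAX OS data model.

```python
# Illustrative model of what FAST.X creates per external LU.
# External disk groups are numbered from 512; each edisk gets an
# unprotected virtual RAID group and exactly one TDAT, and
# incorporation additionally creates a host-visible thin device.

EXTERNAL_DISK_GROUP_BASE = 512

def virtualize(external_lu, mode):
    objects = {
        "disk_group": EXTERNAL_DISK_GROUP_BASE,  # first external group
        "edisk": f"edisk-for-{external_lu}",
        "raid_group": "unprotected-virtual",     # array provides RAID
        "tdat": f"tdat-for-{external_lu}",       # 1:1 with the edisk
    }
    if mode == "incorporation":
        # Existing data stays reachable through a new thin device.
        objects["tdev"] = f"tdev-for-{external_lu}"
    return objects
```

For a given LU, external provisioning yields the disk group, edisk, virtual RAID group, and TDAT; incorporation yields the same set plus the thin device through which the preserved data is accessed.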

Figure 7. Incorporation

General Rules for External Provisioning and Incorporation
- Up to five SRPs are qualified per VMAX3 system. Adding an SRP can be accomplished online, but must be done by EMC Customer Service.
- A corresponding pool is created automatically as part of the process of adding the disk group.
- All edisks from the same external array that are configured in any given SRP are placed in the same disk group and pool. If capacity from multiple external arrays is configured in the same SRP, a separate disk group and pool is created for devices from each of the arrays.
- It is also best practice for all of the external volumes to have the same capacity. This is a recommendation and is not enforced by HYPERMAX OS.
- There is one Data device (TDAT) configured per edisk. The creation of the Data devices and the associated RAID groups and data pool is completed as part of adding an edisk.
- EMC Manufacturing does not pre-configure FAST.X in the factory. Some elements of a FAST.X configuration require a customer service engagement but can be done online at any time after the deployment of the system, providing that the VMAX3 and its cache have been correctly sized.
- All FAST.X objects can be removed online providing they are in the proper state:
  - External edisks need to be drained and inactive.
  - The external disk group must be empty.
  - The DX directors must not have edisks mapped to them.
  - The SRP must not contain any disk group.
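The removal preconditions above can be expressed as a simple guard. This is an illustrative sketch only, not Solutions Enabler or HYPERMAX OS logic.

```python
# Illustrative sketch: validate the state rules for removing FAST.X
# objects online, in the order the components would be torn down.

def removal_errors(edisks, disk_group_edisks, dx_mapped_edisks, srp_disk_groups):
    """Return a list of rule violations; an empty list means removal may proceed."""
    errors = []
    for e in edisks:
        if not (e["drained"] and not e["active"]):
            errors.append(f"edisk {e['name']} must be drained and inactive")
    if disk_group_edisks:
        errors.append("external disk group must be empty")
    if dx_mapped_edisks:
        errors.append("DX directors must not have edisks mapped")
    if srp_disk_groups:
        errors.append("SRP must not contain any disk group")
    return errors
```

An empty result for all four checks corresponds to the "proper state" described above.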

Note: When all edisks in a disk group are deleted, the disk group and pool are removed automatically.

Ensuring external data integrity
FAST.X uses a basic CRC mechanism to detect data corruption caused by procedural errors such as:
- Directly restoring data to an external LU that is virtualized as an edisk using the external array's replication capabilities
- Allowing direct host access to an external LU that is virtualized as an edisk
- Restoring from a backup directly to an external LU that is virtualized as an edisk

As an external LU is initialized as an edisk, or as data is written to the edisk, CRC information is written to VMAX3 cache. This CRC information is then checked upon subsequent reads to confirm that the external LUN has not been altered outside of the VMAX3 system's control. The protection mechanism requires a slight increase in memory requirements over standard local disk volumes. Once the data has been read into cache, it is protected with the standard VMAX block-level CRC error checking based on the industry-standard T10 Data Integrity Field (DIF) block. The FAST.X CRC mechanism only applies to back-end reads and writes.

Creating and Presenting Devices in the External Array
In FAST.X connectivity, the DX director port is the FC initiator, just as a host bus adapter (HBA) is the initiator when a host is connected to a storage array. To the external array, the DX directors act like HBAs from a Linux host. Logical Units (LUs) or volumes in an external array are created and made available to the DX directors by a storage administrator in the same way they are created and presented for Linux host access through an HBA. In other words, the normal procedure to create and assign volumes to the storage controllers for host access must be followed for the devices on the external array that will be virtualized as FAST.X edisks. Appendix B contains instructions for presenting external volumes from EMC storage for DX access.
When a non-emc array is being used for external storage, refer to the relevant third party storage array documentation on the array vendor s website for correct procedures. Handling of Thinly Provisioned External Volumes Supported external arrays can vary greatly in their capabilities. Some thinly provision and compress their logical units. Because of this, it is possible for external storage to be consumed at an unpredictable rate and for the array to run out of available space which causes writes to tracks allocated (either newly or at any prior time) on the VMAX3 to fail. This can happen if the user fails to properly monitor over subscription or if a new data pattern from the host ends up compressing at a much lower rate than forecast. 15

The DX directors identify the Out of Capacity condition of a pool when its writes fail with the SCSI check condition DATA PROTECT/SPACE ALLOCATION FAILED WRITE PROTECT (07/27/00). HYPERMAX OS then protects its cache and other non-externally provisioned applications by taking the following actions:
- All allocations to the pool are stopped.
- If the SRP containing the out-of-space pool does not contain any free space, a new Out of Remote Capacity (ORC) TDAT ready state is set. This state is monitored by the FAs, which will fail host writes to allocated tracks. A background task in the VMAX3 monitors pools that can no longer accept allocations and is responsible for restoring write/allocation activity to a pool when it again has available capacity. FAST maintains 1% free space in the pool, so when the usable capacity drops it begins to demote capacity to ensure this minimum free space exists. The ORC state is cleared by the DX directors when the first successful destage of data to the TDAT occurs once more capacity is available in the pool.
- If other pools in the SRP contain free space, the specific extent group (42 contiguous tracks) containing the track whose write failed is reallocated into a different pool, and FAST is triggered to offload the appropriate data.
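A rough sketch of that decision flow follows, with hypothetical names; the real logic lives inside HYPERMAX OS.

```python
# Illustrative sketch of the out-of-capacity handling described above.
# SCSI sense 07/27/00 = DATA PROTECT / SPACE ALLOCATION FAILED WRITE PROTECT.

SPACE_ALLOCATION_FAILED = (0x07, 0x27, 0x00)  # (sense key, ASC, ASCQ)

def handle_failed_destage(sense, pool, srp_has_free_space):
    """Decide the response to a failed back-end write on a thin pool."""
    if sense != SPACE_ALLOCATION_FAILED:
        return "retry"                    # some other error path
    pool["allocations_enabled"] = False   # stop all allocations to the pool
    if srp_has_free_space:
        return "reallocate-extent-group"  # move the 42-track extent group
    pool["orc"] = True                    # FAs will fail writes to allocated tracks
    return "orc-set"
```

When the SRP has free space elsewhere, the failed extent group is reallocated; otherwise the ORC state is set and remains until capacity becomes available again.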

Replication Considerations
All local replication functionality is supported on VMAX3 volumes that are part of a FAST.X configuration. SnapVX snapshots of these devices are provisioned externally, and linked targets are provisioned independently. All remote replication functionality with SRDF and Open Replicator is supported on VMAX3 volumes that are part of a FAST.X configuration.

FAST Support
FAST movement between internal and external storage is fully supported. Because FAST movement is always contained within an SRP, external storage must share an SRP with internal storage for FAST data movement between the VMAX3 and the external array.

Determining External Storage Service Level Expectations for FAST
In order to rank the response time capabilities of a particular type of drive, FAST uses Service Level Expectations (SLEs), which correspond to the response time capabilities of the disks that are supported with VMAX3. SLE values for supported internal drives and for XtremIO and CloudArray external volumes are known and are hard-coded in HYPERMAX OS. Because the response time capabilities of an external array's disks can vary greatly, a method to determine the SLE for an external array volume is needed.

FAST supports six different Service Level Objectives that can be assigned to storage groups in the VMAX3. The SLOs are Diamond, Platinum, Gold, Silver, Bronze, and Optimized. For each SLO there is an SLE envelope defined. This SLE envelope determines the range of disk types that can be used within the SRP and defines the preferred drive technology type for new allocations from host writes, along with the highest- and lowest-performing disk technology that the data is allowed on. This is not strictly enforced, because an out-of-capacity condition in a pool may require that the VMAX3 allocate capacity outside of the defined range rather than fail a host write that requires extent allocation.
FAST detects the type of drive technology for each disk group in the array, including FAST.X disk groups. For each technology found, the following information is gathered:
- Unique drive technology ID
- Disk type (EFD, 15K, 10K, 7K, External)
- Manufacturer
- Capacity in bytes
- Product name
- Disk RPM (internal disks only)

FAST also collects raw statistics by monitoring I/O to the back-end drives as well as FAST.X-connected external volumes. This allows FAST to profile and build a real-time model for the edisks. The raw statistics gathered include:
- Number of reads
- Number of writes
- Read I/O rate
- Write I/O rate
- Read time
- Write time

These I/O statistics are translated into statistical measures called workload characteristics. The derived characteristics include:
- I/Os per second (IOPS)
- Read percentage
- Read I/O size
- Write I/O size
- Observed response time

The process of data collection and statistics calculation occurs every ten minutes so that HYPERMAX OS can produce a value allowing the edisks to be accurately ranked within the SRP. While this is occurring, the SLE value is set at the default of 40ms. To allow the ranking of the edisks, a Pool State Model is built. This model contains information that indicates whether the ranking of the external storage has completed and, if it has, what the ranking is. While the ranking of the external storage takes place, its value can be in one of the following three phases:

Loading
To establish the performance baseline, FAST selects existing extents from storage groups associated with the SRP that have the Gold, Silver, Bronze, or Optimized SLO and loads the pool to 15% of its usable capacity. This decreases the chance that all I/O to the pool will be serviced by cache. Once this capacity point has been reached, the pool state transitions to the Profiling phase.

Profiling
After the pool has reached the profiling state, it remains there for 12 hours. During this time FAST collects performance data to determine the most probable underlying drive technology. After the 12-hour profiling period expires, the dominant response time mode determines the final SLE and classifies it:
- Flash like (2ms)
- 15k like (8ms)

- 10k like (12ms)
- 7k like (24ms)

When profiling is completed, the state then transitions to the Ready phase.

Ready
Profiling has finished and the SLE has been determined. Once an edisk has a defined SLE and is in a Ready state, it can participate in SLO movements within the SRP. Note that the state model only allows transitions in one direction. For example, once the state is Profiling it cannot go back to Loading, even if the allocated capacity becomes less than the required capacity point.

Moving Data Between SRPs
Data in a FAST.X environment can be moved between SRPs while the application is online, with no decrease in performance or availability. This is accomplished using Solutions Enabler or Unisphere to move a storage group from its current SRP to a new SRP. Individual devices can also be moved between SRPs. This is accomplished by moving devices to a storage group associated with a different SRP.

Support Added to FAST.X
The following capabilities were not available with FTS on VMAX2, but have been added to FAST.X:
- The DX directors use SPC-3 LBP (Logical Block Provisioning) when supported by the host operating system. This allows external LUs to be thinly provisioned.
- Both unmap and write same/unmap SCSI commands are supported by DX directors. This allows previously used, thinly provisioned capacity to be reclaimed on external storage.
- Round-robin multipathing is now supported on up to four ITL paths per edisk.
- Optimized Read Miss (ORM) is supported.

FAST.X and Data at Rest Encryption (D@RE)
Data at Rest Encryption (D@RE) may be enabled on a VMAX3 array that contains external storage in a FAST.X configuration. The VMAX3 running FAST.X, however, does not encrypt data being written to external storage. If encryption on external storage is required, it must be provided by the external array itself.

FAST.X system limitations
FAST.X system limitations

The following general limitations apply to FAST.X environments:

- The maximum external capacity is determined by VMAX3 cache

- Up to 2048 external volumes per engine can be virtualized as edisks; 16,384 edisks is also the system limit. The maximum number of external volumes includes ProtectPoint volumes if configured on the system.
- The maximum number of logical paths to each external SCSI Logical Unit is 4, with all paths potentially active concurrently.
- The maximum capacity of a single external LU is 64 TiB.
- CloudArray must be configured in its own SRP.
- DX ports must be fibre channel ports from 8 Gb/s or 16 Gb/s I/O modules.
- Up to 512 external ports can be configured per DX initiator port.
- Up to 512 external disk groups are supported.

FAST.X Restrictions

The following general limitations apply to FAST.X environments running 5977 Q Service Release and later versions of HYPERMAX OS:

- A maximum of 2048 external volumes per engine can be virtualized as edisks.
- A maximum of 16,384 external volumes per VMAX3 array can be virtualized as edisks.
- Support for switched fabric (FC-SW) connectivity only
- Open systems only (including IBM i)
- Third-party tools are necessary to perform array management operations on non-EMC external arrays

Software and HYPERMAX OS Version Requirements

FAST.X requires the following host and array software versions:

- HYPERMAX OS 5977 with HYPERMAX OS Q Service Release
- Solutions Enabler 8.1 or higher

Incorporation requires the following host and array software versions:

- HYPERMAX OS 5977 with HYPERMAX OS Q Service Release and later versions
- Solutions Enabler 8.2 or higher

Supported External Array Platforms

For details on the external arrays that are supported, see the FAST.X Simple Support Matrix on the E-Lab Interoperability Navigator page. Speak with an EMC customer representative to request support for arrays that do not currently appear on the matrix.

Recommended External Volume Sizes

The following are the recommended external volume sizes for external provisioning for all arrays other than CloudArray. The recommended sizes are based on the total required externally provisioned capacity:

- 100 GB external volumes for virtualizing up to 200 TB
- 200 GB external thin volumes for virtualizing up to 400 TB
- 300 GB external thin volumes for virtualizing up to 600 TB

FAST.X with CloudArray

Configuring a CloudArray as external storage involves specific considerations that are not required with other storage arrays.

VMAX3 to CloudArray connectivity

The general FAST.X requirement is to map each external LUN through two ports on two different external array storage controllers. Because a CloudArray appliance contains only a single storage controller, that requirement is amended. For CloudArray connectivity, two CloudArray ports should be configured and zoned to each of the minimum four DX ports.

VMAX3 configuration considerations

The following are configuration considerations specific to an external CloudArray appliance:

- CloudArray capacity must be configured into its own SRP. Multiple CloudArray appliances can have their capacity virtualized in the same SRP, but if an SRP has any CloudArray capacity in it, it must be the only type of storage in this SRP. This requirement is mandatory, but is not enforced by HYPERMAX OS or VMAX3 management software.
- No local or remote replication (including SRDF, TimeFinder, and SnapVX) is allowed using VMAX3 volumes with capacity provisioned from a CloudArray SRP. This restriction is mandatory, but is not enforced by HYPERMAX OS or VMAX3 management software.

- The cumulative front-end throughput limit of all storage groups provisioned to any given CloudArray appliance is 400 MB/s.

CloudArray Configuration considerations

The following are configuration considerations specific to an external CloudArray appliance:

- The CloudArray appliance supporting fibre channel connectivity and qualified for FAST.X comes with two license options: one with 20 TB of CloudArray cache and one with 40 TB. The maximum qualified capacity of these appliances is 120 TB and 240 TB, respectively.
- The CloudArray appliance used for FAST.X must be dedicated to FAST.X.
- Five caches of 4 TB each should be configured for the 20 TB license, and ten caches of 4 TB each for the 40 TB license.
- The maximum cache-to-cloud capacity ratio that is qualified is 6:1.
- Each CloudArray volume should be 4 TiB and should only be expanded by a multiple of that value.
- There is a minimum number of CloudArray volumes required for FAST.X. This minimum depends on the DX configuration in the VMAX3 array. There must be two CloudArray volumes for each VMAX3 engine in the system. For example, on a single-engine VMAX3 the minimum is 2 volumes; on an eight-engine VMAX3 system, it is 16 volumes (a minimum of 64 TiB, 16*4 TiB, virtualized). Maximum capacity is reached with 30 volumes virtualized for the 20 TB license and 60 volumes for the 40 TB license.
- Once the minimum number of volumes has been virtualized, any number of 4 TiB volumes can be added up to the maximum allowed by the license.
- Volumes should be allocated to the cache in a round-robin fashion: the first 5 volumes (10 volumes for the 40 TB license) end up with a 1:1 cache ratio. The next 5 (20 for the 40 TB license) will get 2:1, and so on, until a ratio of 6:1 is reached, matching the 120 TiB or 240 TiB maximum licensed capacity.

FAST.X with Solutions Enabler

The following are examples of how to configure, update, and manage a FAST.X environment using Solutions Enabler (SYMCLI).
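The round-robin cache allocation and cache-to-cloud ratio progression described in the CloudArray configuration considerations above can be sketched as follows. The volume counts, cache sizes, and license figures come from the text; the helper function itself is illustrative, not a CloudArray or SYMCLI feature.

```python
def cloudarray_cache_ratio(num_volumes, license_tb):
    """Illustrative: compute the cache-to-cloud ratio after allocating
    4 TiB CloudArray volumes round-robin across the caches.

    license_tb: 20 or 40 (TB of CloudArray cache per the license).
    Caches are 4 TB each: 5 caches for the 20 TB license, 10 for 40 TB.
    """
    num_caches = license_tb // 4              # 5 or 10 caches of 4 TB
    max_volumes = {20: 30, 40: 60}[license_tb]
    if num_volumes > max_volumes:
        raise ValueError("exceeds maximum licensed capacity")
    # Each full round of num_caches volumes raises the ratio by one,
    # up to the qualified maximum of 6:1.
    full_rounds = -(-num_volumes // num_caches)   # ceiling division
    ratio = min(full_rounds, 6)
    cloud_capacity_tib = num_volumes * 4          # 4 TiB per volume
    return ratio, cloud_capacity_tib

# 30 volumes on the 20 TB license reach the 6:1 ratio (120 TiB);
# 60 volumes on the 40 TB license reach 6:1 (240 TiB).
print(cloudarray_cache_ratio(30, 20))   # (6, 120)
print(cloudarray_cache_ratio(60, 40))   # (6, 240)
```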
The examples used in these technical notes are for illustrative purposes and do not necessarily represent a FAST.X environment configured for production workloads. 22

Notes:

The symconfigure command is used to configure and modify the FAST.X environment. In most of the command examples, the -cmd option is used, followed by the command syntax in quotes. As with all symconfigure commands requiring configuration input, the -f option can be used, followed by a path to a command file containing the syntax shown in the examples. For more information on symconfigure, see the EMC Solutions Enabler Array Management V8.1 CLI User Guide, which is available on emc.com.

This document was developed in a shared lab environment. Details like storage group contents, disk group names and numbers, and pool names and numbers may change between sections of the test plan. The command output was gathered using a Linux host connected to a FAST.X environment containing an XtremIO as the external array. The command output seen while performing the steps shown may vary slightly if other types of hosts and external arrays are used in the environment.

Getting DX Information and Port WWNs for Zoning

Before configuring edisks:
- Configure the DX directors and assign ports to the emulations.
- Complete the zoning.
- Present the external volumes on the external array ports.

Following the initial configuration of DX directors by EMC, run the symcfg discover command:

# symcfg discover -sid 0041

Attempting discovery of Symmetrix
This operation may take up to a few minutes. Please be patient...

The symcfg command lists the DX directors. In this example, there are four directors configured across two engines in the VMAX3. The output also shows the number of cores and ports assigned to each as well as the online or offline status.

# symcfg list -DX all -sid 41

Symmetrix ID: (Local)

S Y M M E T R I X D I R E C T O R S

Ident Type Engine Cores Ports Status
DX-1H EDISK Online
DX-2H EDISK Online
DX-3H EDISK Online
DX-4H EDISK Online

24 The symsan command is used with the -sanports option to validate connectivity to an external storage array. In this example, there are four ports on two DX directors (01H:06, 01H:07, 02H:06, and 02H:07) zoned to four storage controller ports on an XtremIO. This is indicated by the fact that each remote port WWN is unique. # symsan list -sanports -DX all -port all -sid 41 Symmetrix ID: Flags Num DIR:P I Vendor Array LUNs Remote Port WWN H:07. EMC XtremIO FNM FF3D H:29. EMC XtremIO FNM FF3D H:07. EMC XtremIO FNM FF5D55AD 02H:29. EMC XtremIO FNM FF5D55AC 03H:07. EMC CloudArray 16e2e9095b5d26c* 28 57CC95A AD 03H:29. EMC CloudArray 16e2e9095b5d26c* 28 57CC95A AD 04H:07. EMC CloudArray 16e2e9095b5d26c* 28 57CC95A AD 04H:29. EMC CloudArray 16e2e9095b5d26c* 28 57CC95A AD # symcfg list -DX 1H -v -sid 41 Symmetrix ID: (Local) Time Zone : EDT Note: The output of symsan commands may return output from some external arrays, like the CloudArray, with a truncated WWN or serial number indicated by an asterisk (*) at the end of the field. To display the entire WWN or serial number, use the -detail option. Use the symcfg list command to display details about the DX directors and the port WWNs required for zoning: Product Model : VMAX400K Symmetrix ID : Microcode Version (Number) : 5977 ( ) Microcode Registered Build : 0 Microcode Date : Microcode Patch Date : Microcode Patch Level : 660 Symmwin Version : 651 Enginuity Build Version : Service Processor Time Offset : - 01:00:38 Director Identification: DX-1H Director Type Director Status : EDISK : Online Director Symbolic Number : 01H Director Numeric Number : 113 Director Engine Number : 1 Director Slot Number : 1 Number of Director Cores : 7 Number of Director Ports : 2 24

25 Director Port: 7 WWN Port Name : A407 Director Port Status : Online Negotiated Speed (Gb/Second) : 8 Director Port Speed (Gb/Second) : 8 Director Port: 29 WWN Port Name : A41D Director Port Status : Online Negotiated Speed (Gb/Second) : 8 Director Port Speed (Gb/Second) : 8 25

26 Examining the FAST.X environment Once the connectivity for FAST.X is complete, verify that the DX directors are available and that there is connectivity to external volumes. Confirm the Availability of the External Volumes The symsan command verifies that volumes are available on external storage when the -sanluns option is used with an external port WWN. There are five XtremIO volumes that are masked to the VMAX3 DXs and are available to be configured as edisks. # symsan list -dir 1H -p 7 -sanluns -wwn FF3D2743 -sid 41 Symmetrix ID: Remote Port WWN: FF3D2743 ST A T Flags Block Capacity LUN Dev LUN DIR:P E ICR THS Size (MB) Num Num WWN H:07 RW... F N/A 514F0C55EBA H:07 RW... F N/A 514F0C55EBA H:07 RW... F N/A 514F0C55EBA H:07 RW... F N/A 514F0C55EBA H:07 RW... F N/A 514F0C55EBA00005 Legend: Flags: (I)ncomplete : X = record is incomplete,. = record is complete. (C)ontroller : X = record is controller,. = record is not controller. (R)eserved : X = record is reserved,. = record is not reserved. (T)ype : A = AS400, F = FBA, C = CKD,. = Unknown t(h)in : X = record is a thin dev,. = record is not a thin dev. (S)ymmetrix : X = Symmetrix device,. = not Symmetrix device. Configure edisks for External Provisioning The volumes that are available on the XtremIO array can be configured as edisks for FAST.X. The configuration of the edisks also creates the required disk group, pool, and Data devices (TDATs). There are two SRPs configured on the system, one for CloudArray as well as the DEFAULT_SRP, which contains the other disk groups in the system. # symcfg list -srp -sid 41 -detail STORAGE RESOURCE POOLS Symmetrix ID : C A P A C I T Y Flg Usable Allocated Free Subscribed Name DR (GB) (GB) (GB) (GB) (%) CloudArray_SRP DEFAULT_SRP FX Total

27 Legend: Flags: (D)efault SRP : F = FBA Default,. = N/A (R)DFA DSE : X = Usable,. = Not Used Prior to adding edisks, there are no external disk groups except the default encapsulated disk group called *ENCAPSDG*. This group is used for devices that are encapsulated for ProtectPoint, which is not configured on this system. The other three disk groups are internal disk groups containing FC, SATA, and EFD drives. # symdisk list -dskgrp_summary -sid 41 Symmetrix ID: Disk Group Disk Hyper Capacity Flgs Speed Size Size Total Free Num Name Cnt LT (RPM) (MB) (MB) (MB) (MB) DISK_GROUP_ IF DISK_GROUP_ IS DISK_GROUP_ IE *ENCAPSDG* 0 -- N/A N/A N/A EXT_GROUP_513 4 X- N/A N/A Any Total Legend: Disk (L)ocation: I = Internal, X = External, - = N/A (T)echnology: S = SATA, F = Fibre Channel, E = Enterprise Flash Drive, - = N/A # symcfg list -pool -sid 41 Symmetrix ID: There are also pools for the two internal drive types as well as a pool for ProtectPoint encapsulated devices called *ENCAPSPOOL*. There is no pool yet for the TDATs that will be created from the XtremIO external volumes. The configuration operation to add the edisks creates the required disk group, pool, and DATA devices (TDATs). S Y M M E T R I X P O O L S Pool Flags Dev Usable Free Used Full Comp Name PTECSL Config Tracks Tracks Tracks (%) (%) DG1_FBA15K TFF-EI 2-Way Mir DG2_FBA7_2 TSF-EI RAID-6(6+2) DG3_FBA_F TEF-EI RAID-5(3+1) DG513_FBA T-F-EX Unprotected *ENCAPSPOOL* T---D- Unknown Total Tracks Legend: (P)ool Type: S = Snap, R = Rdfa DSE T = Thin 27

28 28 (T)echnology: S = SATA, F = Fibre Channel, E = Enterprise Flash Drive, M = Mixed, - = N/A Dev (E)mulation: F = FBA, A = AS400, 8 = CKD3380, 9 = CKD3390, - = N/A (C)ompression: E = Enabled, D = Disabled, N = Enabling, S = Disabling, - = N/A (S)tate: E = Enabled, D = Disabled, B = Balancing Disk (L)ocation: I = Internal, X = External, M = Mixed, - = N/A The symconfigure command configures the edisks and uses the device s WWN specified in the command syntax (the WWN is from the output of symsan list - sanluns). Because the command requires specification of five WWNs, either five separate commands or a very long command would need to be run from the command line. This example uses a command file. Note: When the parameter encapsulate_data is set to NO, any existing data on the external volume will be destroyed. # cat /cmd_files/edisk_wwns add external_disk wwn=514f0c55eba00001, encapsulate_data=no srp=default_srp; add external_disk wwn=514f0c55eba00002, encapsulate_data=no srp=default_srp; add external_disk wwn=514f0c55eba00003, encapsulate_data=no srp=default_srp; add external_disk wwn=514f0c55eba00004, encapsulate_data=no srp=default_srp; add external_disk wwn=514f0c55eba00005, encapsulate_data=no srp=default_srp; # symconfigure -sid 41 -f /cmd_files/edisk_wwns commit -nop A Configuration Change operation is in progress. Please wait... Establishing a configuration change session...established. Processing symmetrix Performing Access checks...allowed. Checking Device Reservations...Allowed. Initiating COMMIT of configuration changes...queued. COMMIT requesting required resources...obtained. Step 005 of 069 steps...executing. Step 005 of 069 steps...executing. Step 010 of 069 steps...executing. Step 014 of 069 steps...executing. Step 018 of 072 steps...executing. Step 020 of 072 steps...executing. Step 021 of 072 steps...executing. Step 024 of 072 steps...executing. Step 029 of 072 steps...executing. Step 032 of 072 steps...executing. 
Step 032 of 072 steps...executing. Step 043 of 072 steps...executing. Step 043 of 072 steps...executing. Step 045 of 203 steps...executing. Step 187 of 214 steps...executing. Step 187 of 214 steps...executing. Step 197 of 214 steps...executing. Step 202 of 214 steps...executing. Step 204 of 214 steps...executing. Step 211 of 214 steps...executing. Step 211 of 214 steps...executing. Step 214 of 214 steps...executing. Local: COMMIT...Done.
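The five add external_disk lines in the command file above differ only in the WWN, so the file can be generated from a list of WWNs rather than typed by hand. A small sketch (the function name is illustrative; the generated syntax matches the examples in this document, and keep_data=yes corresponds to the incorporation examples shown later):

```python
def edisk_command_file(wwns, srp=None, keep_data=False):
    """Build symconfigure command-file text for adding edisks.

    Emits one 'add external_disk' line per WWN, matching the syntax
    used in these examples. keep_data=True corresponds to incorporation;
    the default (False) is external provisioning.
    """
    lines = []
    for wwn in wwns:
        line = f"add external_disk wwn={wwn}, encapsulate_data=no"
        if keep_data:
            line += " keep_data=yes"
        if srp:
            line += f" srp={srp}"
        lines.append(line + ";")
    return "\n".join(lines) + "\n"

text = edisk_command_file(
    ["514f0c55eba00001", "514f0c55eba00002"], srp="DEFAULT_SRP")
print(text)
# add external_disk wwn=514f0c55eba00001, encapsulate_data=no srp=DEFAULT_SRP;
# add external_disk wwn=514f0c55eba00002, encapsulate_data=no srp=DEFAULT_SRP;
```

The resulting file would then be passed to symconfigure with the -f option, as in the commit example above.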

29 New symdevs: FF8D7:FF8DB [DATA devices] Terminating the configuration change session...done. The configuration change session has successfully completed. Five new DATA devices (FF8D7:FF8DB) are created along with a disk group (EXT_GROUP_514) and pool (DG514_FBA). The DATA devices are enabled in the thin pool. # symdisk list -dskgrp_summary -sid 41 Symmetrix ID: Disk Group Disk Hyper Capacity Flgs Speed Size Size Total Free Num Name Cnt LT (RPM) (MB) (MB) (MB) (MB) DISK_GROUP_ IF DISK_GROUP_ IS DISK_GROUP_ IE *ENCAPSDG* 0 -- N/A N/A N/A EXT_GROUP_513 4 X- N/A N/A Any EXT_GROUP_514 5 X- N/A N/A Any Total Legend: Disk (L)ocation: I = Internal, X = External, - = N/A (T)echnology: S = SATA, F = Fibre Channel, E = Enterprise Flash Drive, - = N/A # symcfg list -pool -sid 41 Symmetrix ID: S Y M M E T R I X P O O L S Pool Flags Dev Usable Free Used Full Comp Name PTECSL Config Tracks Tracks Tracks (%) (%) DG1_FBA15K TFF-EI 2-Way Mir DG2_FBA7_2 TSF-EI RAID-6(6+2) DG3_FBA_F TEF-EI RAID-5(3+1) DG513_FBA T-F-EX Unprotected *ENCAPSPOOL* T---D- Unknown DG514_FBA T-F-EX Unprotected Total Tracks Legend: (P)ool Type: S = Snap, R = Rdfa DSE T = Thin (T)echnology: S = SATA, F = Fibre Channel, E = Enterprise Flash Drive, M = Mixed, - = N/A Dev (E)mulation: F = FBA, A = AS400, 8 = CKD3380, 9 = CKD3390, - = N/A (C)ompression: E = Enabled, D = Disabled, N = Enabling, S = Disabling, - = N/A 29

30 (S)tate: E = Enabled, D = Disabled, B = Balancing Disk (L)ocation: I = Internal, X = External, M = Mixed, - = N/A # symcfg show -pool DG514_FBA -detail -thin -sid 41 Symmetrix ID: Symmetrix ID : Pool Name : DG514_FBA Pool Type : Thin Disk Location : External Technology Dev Emulation : FBA Dev Configuration : Unprotected Pool State : Enabled Compression State # of Devices in Pool : 5 # of Enabled Devices in Pool : 5 # of Usable Tracks in Pool : # of Allocated Tracks in Pool : 0 # of Thin Device Tracks : 0 # of DSE Tracks : 0 # of Local Replication Tracks : 0 # of Tracks saved by compression : 0 # of Shared Tracks in Pool Pool Utilization (%) : 0 Pool Compression Ratio (%) : 0 Max. Subscription Percent Rebalance Variance Max devs per rebalance scan Pool Reserved Capacity Enabled Devices(5): { Sym Usable Alloc Free Full FLG Device Dev Tracks Tracks Tracks (%) S State FF8D Enabled FF8D Enabled FF8D Enabled FF8DA Enabled FF8DB Enabled Tracks } No Thin Devices Bound to Device Pool DG514_FBA No Other-Pool Bound Thin Devices have allocations in Device Pool DG514_FBA Legend: Enabled devices FLG: (S)hared Tracks : X = Shared Tracks,. = No Shared Tracks Bound Devices FLG: S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, D = Deallocating, R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll,. = Unbound 30

31 Further Examining the Disk Group More information about the disk group and the edisks that populate it are gathered using symdisk commands. Listing the disk group shows general information about the group and the edisks in the group, including showing the primary DX ownership of each of the five edisks. # symdisk list -sid 41 -disk_group 514 Symmetrix ID : Disks Selected : 5 Disk Group : 514 Disk Group Name : EXT_GROUP_514 Disk Location : External Technology Speed (RPM) Form Factor Disk Capacity(MB) Ident Int TID Grp Vendor Type Hypr Total Free DX-1H EMC N/A DX-2H EMC N/A DX-1H EMC N/A DX-2H EMC N/A DX-1H EMC N/A Total Adding -v to the command lists each of the edisks in the disk group and gives more detail about each, including the edisk spindle IDs ( ) and the WWNs of the corresponding external LUNs. # symdisk list -sid 41 -disk_group 514 -v Symmetrix ID : Disks Selected : 5 Disk Group : 514 Disk Group Name : EXT_GROUP_514 Disk Location : External Technology Speed (RPM) Form Factor Director : DX-1H Interface Target ID Spindle ID : 8004 External WWN : 514F0C55EBA00001 External Array ID : FNM External Device Name Disk Group Number : 514 Disk Group Name : EXT_GROUP_514 Disk Location : External Technology Speed (RPM) Form Factor Vendor ID : EMC 31

32 Product ID Product Revision Serial ID : XtremIO Disk Blocks : Block Size : 512 Total Disk Capacity (MB) : Free Disk Capacity (MB) : 25 Rated Disk Capacity (GB) Hyper Size (MB) : Any Hyper Count : 1 Spare Disk Spare Coverage Encapsulated Disk Service State : False : Normal Director : DX-2H Interface Target ID Spindle ID : 8005 External WWN : 514F0C55EBA00002 External Array ID : FNM External Device Name Disk Group Number : 514 Disk Group Name : EXT_GROUP_514 Disk Location : External Technology Speed (RPM) Form Factor Vendor ID Product ID Product Revision Serial ID : EMC : XtremIO Disk Blocks : Block Size : 512 Total Disk Capacity (MB) : Free Disk Capacity (MB) : 25 Rated Disk Capacity (GB) Hyper Size (MB) : Any Hyper Count : 1 Spare Disk Spare Coverage Encapsulated Disk Service State : False : Normal Director : DX-1H Interface Target ID Spindle ID : 8006 External WWN : 514F0C55EBA00003 External Array ID : FNM External Device Name Disk Group Number : 514 Disk Group Name : EXT_GROUP_514 Disk Location : External Technology Speed (RPM) Form Factor 32 Vendor ID Product ID : EMC : XtremIO

33 Product Revision Serial ID Disk Blocks : Block Size : 512 Total Disk Capacity (MB) : Free Disk Capacity (MB) : 25 Rated Disk Capacity (GB) Hyper Size (MB) : Any Hyper Count : 1 Spare Disk Spare Coverage Encapsulated Disk Service State : False : Normal Director : DX-2H Interface Target ID Spindle ID : 8007 External WWN : 514F0C55EBA00004 External Array ID : FNM External Device Name Disk Group Number : 514 Disk Group Name : EXT_GROUP_514 Disk Location : External Technology Speed (RPM) Form Factor Vendor ID Product ID Product Revision Serial ID : EMC : XtremIO Disk Blocks : Block Size : 512 Total Disk Capacity (MB) : Free Disk Capacity (MB) : 25 Rated Disk Capacity (GB) Hyper Size (MB) : Any Hyper Count : 1 Spare Disk Spare Coverage Encapsulated Disk Service State : False : Normal Director : DX-1H Interface Target ID Spindle ID : 8008 External WWN : 514F0C55EBA00005 External Array ID : FNM External Device Name Disk Group Number : 514 Disk Group Name : EXT_GROUP_514 Disk Location : External Technology Speed (RPM) Form Factor Vendor ID : EMC 33

34 Product ID Product Revision Serial ID : XtremIO Disk Blocks : Block Size : 512 Total Disk Capacity (MB) : Free Disk Capacity (MB) : 25 Rated Disk Capacity (GB) Hyper Size (MB) : Any Hyper Count : 1 Spare Disk Spare Coverage Encapsulated Disk Service State : False : Normal 34 The following options show all of the external spindles and the paths to each that are active and the paths that are for failover. This also shows the four edisks configured on DX-3H and DX-4H ( ) which are configured for a separate FAST.X with CloudArray environment: # symdisk -sid 41 list -external -spindle -path -detail Symmetrix ID : Flags Spindle A DIR:P Remote Port WWN X 03H:007 57cc95a ad X 03H:029 57cc95a ad. 04H:029 57cc95a ad. 04H:007 57cc95a ad 8001 X 04H:029 57cc95a ad X 04H:007 57cc95a ad. 03H:007 57cc95a ad. 03H:029 57cc95a ad 8002 X 03H:007 57cc95a ad X 03H:029 57cc95a ad. 04H:029 57cc95a ad. 04H:007 57cc95a ad 8003 X 04H:029 57cc95a ad X 04H:007 57cc95a ad. 03H:007 57cc95a ad. 03H:029 57cc95a ad 8004 X 01H: ff3d2743 X 01H: ff3d H: ff5d55ad. 02H: ff5d55ac 8005 X 02H: ff5d55ad X 02H: ff5d55ac. 01H: ff3d H: ff3d X 01H: ff3d2743 X 01H: ff3d H: ff5d55ad. 02H: ff5d55ac 8007 X 02H: ff5d55ad X 02H: ff5d55ac. 01H: ff3d H: ff3d X 01H: ff3d2743 X 01H: ff3d H: ff5d55ad. 02H: ff5d55ac

Legend:
(A)ctive path: X = Active, . = Failover

Configure edisks for Incorporation

Starting with the Q12016 HYPERMAX OS Service Release, data that exists on external volumes can be preserved while configuring edisks. This mode of operation is called Incorporation. In this example, an external VNX LUN containing host data is incorporated. When the incorporation operation runs, a VMAX3 thin device that is equal in size to the edisk is created along with the TDAT on the VMAX3 array. The thin LUN enables hosts to access the incorporated data that exists on the external LUN.

Note: Once the external LUN is incorporated, the resulting thin device is available to use in the same way that an externally provisioned thin device is. All features that are supported with FAST.X are supported with both types of devices and all examples and comments shown apply to both, unless noted.

This Windows host is accessing four VNX devices natively, meaning that this host is connected directly to a VNX FC front-end storage port:

C:\>syminq
Device Product Device Name Type Vendor ID Rev Ser Num Cap (KB) \\.\PHYSICALDRIVE0 VMware Virtual disk 1.0 N/A \\.\PHYSICALDRIVE1 GK EMC SYMMETRIX A \\.\PHYSICALDRIVE2 GK EMC SYMMETRIX B \\.\PHYSICALDRIVE3 GK EMC SYMMETRIX C \\.\PHYSICALDRIVE4 GK EMC SYMMETRIX D \\.\PHYSICALDRIVE5 GK EMC SYMMETRIX E \\.\PHYSICALDRIVE6 GK EMC SYMMETRIX F \\.\PHYSICALDRIVE7 GK EMC SYMMETRIX A \\.\PHYSICALDRIVE8 GK EMC SYMMETRIX B \\.\PHYSICALDRIVE9 GK EMC SYMMETRIX \\.\PHYSICALDRIVE10 GK EMC SYMMETRIX \\.\PHYSICALDRIVE11 GK EMC SYMMETRIX \\.\PHYSICALDRIVE12 GK EMC SYMMETRIX \\.\PHYSICALDRIVE13 EMC SYMMETRIX \\.\PHYSICALDRIVE14 EMC SYMMETRIX \\.\PHYSICALDRIVE15 EMC SYMMETRIX \\.\PHYSICALDRIVE16 EMC SYMMETRIX \\.\PHYSICALDRIVE17 EMC SYMMETRIX \\.\PHYSICALDRIVE18 EMC SYMMETRIX \\.\PHYSICALDRIVE19 EMC SYMMETRIX \\.\PHYSICALDRIVE20 EMC SYMMETRIX \\.\PHYSICALDRIVE21 DGC VRAID F15F \\.\PHYSICALDRIVE22 DGC VRAID F15F \\.\PHYSICALDRIVE23 DGC VRAID F15F \\.\PHYSICALDRIVE24 DGC VRAID A56F15F

36 There are file systems created on the four volumes, which are mounted and have been assigned drive letters: Data has been written to each of the volumes: 36

37 After writing the host data, the VNX devices have been unmasked from the host and presented as external LUNs to the DX directors on the VMAX3. C:\>symsan list -sanports -DX all -port all -sid 32 Symmetrix ID: Flags Num DIR:P I Vendor Array LUNs Remote Port WWN H:07. EMC CLARiiON APM E H:31. EMC CLARiiON APM C36E H:07. EMC CLARiiON APM E H:31. EMC CLARiiON APM E40812 Legend: Flags: (I)ncomplete : X = record is incomplete,. = record is complete. The external LUNs from the VNX are available and can be incorporated. C:\>symsan list -dir 1H -p 7 -sanluns -wwn E sid 32 Symmetrix ID: Remote Port WWN: E00812 ST A T Flags Block Capacity LUN Dev LUN DIR:P E ICR THS Size (MB) Num Num WWN

01H:07 RW... F A00CCE619EE3FF0E511
01H:07 RW... F A00D0E619EE3FF0E511
01H:07 RW... F A00CAE619EE3FF0E511
01H:07 RW... F A00CEE619EE3FF0E511

Legend:
Flags:
(I)ncomplete : X = record is incomplete, . = record is complete.
(C)ontroller : X = record is controller, . = record is not controller.
(R)eserved : X = record is reserved, . = record is not reserved.
(T)ype : A = AS400, F = FBA, C = CKD, . = Unknown
t(h)in : X = record is a thin dev, . = record is not a thin dev.
(S)ymmetrix : X = Symmetrix device, . = not Symmetrix device.

The symconfigure command to incorporate the external LUNs is run either from the command line or by calling a command file:

C:\>type ext_wwns.txt
add external_disk wwn= a00cce619ee3ff0e511, encapsulate_data=no keep_data=yes;
add external_disk wwn= a00d0e619ee3ff0e511, encapsulate_data=no keep_data=yes;
add external_disk wwn= a00cae619ee3ff0e511, encapsulate_data=no keep_data=yes;
add external_disk wwn= a00cee619ee3ff0e511, encapsulate_data=no keep_data=yes;

C:\>symconfigure -sid 32 -f c:\ext_wwns.txt commit -nop

A Configuration Change operation is in progress. Please wait...
Establishing a configuration change session...Established.
Processing symmetrix
Performing Access checks...Allowed.
Checking Device Reservations...Allowed.
Initiating COMMIT of configuration changes...Queued.
COMMIT requesting required resources...Obtained.
Step 009 of 075 steps...Executing.
Step 013 of 075 steps...Executing.
Step 018 of 080 steps...Executing.
Step 022 of 080 steps...Executing.
Step 023 of 080 steps...Executing.
Step 031 of 080 steps...Executing.
Step 223 of 239 steps...Executing.
Step 228 of 239 steps...Executing.
Step 233 of 239 steps...Executing.
Step 236 of 239 steps...Executing.
Step 239 of 239 steps...Executing.
Local: COMMIT...Done.
New symdevs: 00059:0005C [TDEVs]
New symdevs: FFF6C:FFF6F [DATA devices]
Terminating the configuration change session...Done.
The configuration change session has successfully completed.

39 There are now four new thin VMAX3 devices (059:05C) that will allow the host to access the data on the external VNX LUNs. The VMAX thin devices are masked to the host and the native VNX devices have been removed. C:\Program Files (x86)\emc\symcli\bin>syminq Device Product Device Name Type Vendor ID Rev Ser Num Cap (KB) \\.\PHYSICALDRIVE0 VMware Virtual disk 1.0 N/A \\.\PHYSICALDRIVE1 GK EMC SYMMETRIX A \\.\PHYSICALDRIVE2 GK EMC SYMMETRIX B \\.\PHYSICALDRIVE3 GK EMC SYMMETRIX C \\.\PHYSICALDRIVE4 GK EMC SYMMETRIX D \\.\PHYSICALDRIVE5 GK EMC SYMMETRIX E \\.\PHYSICALDRIVE6 GK EMC SYMMETRIX F \\.\PHYSICALDRIVE7 GK EMC SYMMETRIX A \\.\PHYSICALDRIVE8 GK EMC SYMMETRIX B \\.\PHYSICALDRIVE9 GK EMC SYMMETRIX \\.\PHYSICALDRIVE10 GK EMC SYMMETRIX \\.\PHYSICALDRIVE11 GK EMC SYMMETRIX \\.\PHYSICALDRIVE12 GK EMC SYMMETRIX \\.\PHYSICALDRIVE13 EMC SYMMETRIX \\.\PHYSICALDRIVE14 EMC SYMMETRIX \\.\PHYSICALDRIVE15 EMC SYMMETRIX \\.\PHYSICALDRIVE16 EMC SYMMETRIX \\.\PHYSICALDRIVE17 EMC SYMMETRIX \\.\PHYSICALDRIVE18 EMC SYMMETRIX \\.\PHYSICALDRIVE19 EMC SYMMETRIX \\.\PHYSICALDRIVE20 EMC SYMMETRIX \\.\PHYSICALDRIVE21 EMC SYMMETRIX \\.\PHYSICALDRIVE22 EMC SYMMETRIX A \\.\PHYSICALDRIVE23 EMC SYMMETRIX B \\.\PHYSICALDRIVE24 EMC SYMMETRIX C The devices are available and simply need to be brought online by right clicking each disk and choosing Online. 39

40 When all four volumes are brought online, they are available with the same volume names and drive letters as they were when the host was accessing the VNX volumes natively through the VNX storage ports. The data written to the devices when they were directly accessible by the host has been preserved and is available through the VMAX3 thin devices. 40
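When validating an incorporation in a test environment, a simple way to confirm that data has been preserved is to checksum a file (or a raw range of the device) while the host still accesses the volume natively, and again after the VMAX3 thin device is presented. This generic sketch is not part of the documented procedure, and the device path in the comment is hypothetical:

```python
import hashlib

def checksum(path, length=64 * 1024 * 1024, chunk=1024 * 1024):
    """Hash up to the first `length` bytes of a file or raw device,
    reading in `chunk`-sized pieces to keep memory use bounded."""
    h = hashlib.sha256()
    remaining = length
    with open(path, "rb") as f:
        while remaining > 0:
            data = f.read(min(chunk, remaining))
            if not data:
                break
            h.update(data)
            remaining -= len(data)
    return h.hexdigest()

# Run once against the native VNX device and once against the VMAX3
# thin device after incorporation; the digests should match.
# before = checksum(r"\\.\PHYSICALDRIVE21")   # hypothetical device path
# after  = checksum(r"\\.\PHYSICALDRIVE21")
# assert before == after
```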

41 The incorporated VMAX3 thin devices can be used with all local and remote HYPERMAX OS replication features and can take advantage of all capabilities of the VMAX3 array. Creating a Storage Group to Assign Volumes to the Default SRP In the VMAX3, storage groups are used to mask devices to hosts. They also assign volumes to an SRP and assign SLOs and workload types to devices. When creating thin volumes for host use, volumes can be created for later use and left unassigned to a storage group or they can be assigned to a storage group that has already been created. In the test environment in use here, a storage group called lcseb149_sg is created for host lcseb149 in the default SRP. It is then added to an existing parent storage group (BETA_CLUSTER) as a child group. This masks the volumes created in the next step when they are added to that storage group and allows the hosts to discover the devices. Because no SLO is explicitly chosen, by default, the "Optimized" SLO is assigned. The SLO name appears as <none> if the Optimized SLO is not specifically assigned, but the Optimized SLO is used. Note: Different environments may require different masking steps. # symsg -sid 41 create lcseb149_sg -srp DEFAULT_SRP # symsg show lcseb149_sg -sid 41 41

42 42 Name: lcseb149_sg Symmetrix ID : Last updated at : Tue Jul 28 19:22: Masking Views : No FAST Managed : Yes SLO Name : <none> Workload : <none> SRP Name : DEFAULT_SRP Host I/O Limit : None Host I/O Limit MB/Sec Host I/O Limit IO/Sec Dynamic Distribution Number of Storage Groups : 0 Storage Group Names Number of Gatekeepers : 0 # symsg -sg BETA_CLUSTER -sid 41 add sg lcseb149_sg # symsg show BETA_CLUSTER -sid 41 Name: BETA_CLUSTER Symmetrix ID : Last updated at : Tue Jul 28 19:27: Masking Views : Yes FAST Managed : No SLO Name : <none> Workload : <none> SRP Name : <none> Host I/O Limit : None Host I/O Limit MB/Sec Host I/O Limit IO/Sec Dynamic Distribution Number of Storage Groups : 6 Storage Group Names : bc_gks (IsChild) Rich_B137 (IsChild) Andy_B149 (IsChild) b127_gks (IsChild) lcseb149_sg (IsChild) Number of Gatekeepers : 18 Devices (18): { Sym Device Cap Dev Pdev Name Config Attr Sts (MB) N/A TDEV (GK) RW N/A TDEV (GK) RW N/A TDEV (GK) RW N/A TDEV (GK) RW N/A TDEV (GK) RW N/A TDEV (GK) RW \\.\PHYSICALDRIVE1 TDEV (GK) RW A \\.\PHYSICALDRIVE2 TDEV (GK) RW B \\.\PHYSICALDRIVE3 TDEV (GK) RW 6

0001C N/A TDEV (GK) RW D N/A TDEV (GK) RW E N/A TDEV (GK) RW F N/A TDEV (GK) RW N/A TDEV (GK) RW N/A TDEV (GK) RW N/A TDEV (GK) RW N/A TDEV (GK) RW N/A TDEV (GK) RW 6
}

Creating Thin Volumes for the Default SRP

The symconfigure command creates the devices and adds them to the storage group. In this example, two 200 GB devices (0133 and 0134) are created and added to lcseb149_sg.

# symconfigure -sid 41 -cmd "create dev count=2, size=200 GB, emulation=fba, config=tdev, sg=lcseb149_sg;" commit -nop

A Configuration Change operation is in progress. Please wait...
Establishing a configuration change session...Established.
Processing symmetrix
Performing Access checks...Allowed.
Checking Device Reservations...Allowed.
Initiating COMMIT of configuration changes...Started.
Committing configuration changes...Queued.
COMMIT requesting required resources...Obtained.
Step 006 of 009 steps...Executing.
Step 009 of 009 steps...Executing.
Local: COMMIT...Done.
Adding devices to Storage Group...Done.
New symdevs: 00133:00134 [TDEVs]
Terminating the configuration change session...Done.
The configuration change session has successfully completed.

# symsg show lcseb149_sg -sid 41

Name: lcseb149_sg

Symmetrix ID : Last updated at : Thu Jul 23 16:38:
Masking Views : Yes
FAST Managed : Yes
SLO Name : <none>
Workload : <none>
SRP Name : DEFAULT_SRP
Host I/O Limit : None
Host I/O Limit MB/Sec
Host I/O Limit IO/Sec
Dynamic Distribution
Number of Storage Groups : 1
Storage Group Names : BETA_CLUSTER (IsParent)
Number of Gatekeepers : 0
Devices (2): {

Sym Device Cap Dev Pdev Name Config Attr Sts (MB) N/A TDEV RW N/A TDEV RW }

The host can now discover devices 133 and 134.

Diagram of the Configured Environment

The following diagram shows the FAST.X environment after running the commands in the previous examples. It shows the FAST.X entities, their relationships to each other, and the arrays in general.

Note: Disk Group 513, which appears in the previous CLI output, is in its own SRP for FAST.X with CloudArray and is not shown in the diagram.

Figure 8. FAST.X Environment
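The symconfigure syntax used above lends itself to scripting when many storage groups need devices created the same way. The following Python sketch only assembles the command string from a few parameters; the function name and defaults are illustrative, and any generated command should be reviewed before it is run against an array.

```python
def build_create_dev_cmd(sid, count, size_gb, sg, emulation="fba", config="tdev"):
    """Assemble a 'symconfigure ... create dev' command line using the
    syntax from the example above. Returns a string; runs nothing."""
    inner = (f"create dev count={count}, size={size_gb} GB, "
             f"emulation={emulation}, config={config}, sg={sg};")
    return f'symconfigure -sid {sid} -cmd "{inner}" commit -nop'

# Rebuilds the command used to create the two 200 GB TDEVs above:
cmd = build_create_dev_cmd(41, 2, 200, "lcseb149_sg")
```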

Moving Volumes to an external SRP with EFD Storage Only

Under certain conditions, all storage tiers in an SRP can be used for any data in that SRP, regardless of the chosen SLO. The DEFAULT_SRP in this configuration contains SATA, Fibre Channel, and EFD drives, along with external EFDs from an XtremIO array. Restricting the storage that a storage group uses to only EFD devices under all conditions is not possible. For example, the Diamond SLO restricts data to EFD devices, but only as long as there is free capacity in those disk groups. If there is no capacity left in any EFD disk group in the SRP, but there is capacity in other drive pools (SATA or FC), data is placed on spinning disks rather than allowing a write to fail. The other consideration is that any EFD pool may be used when the Diamond SLO is chosen, which means data will likely be placed on both internal and external EFD devices.

To restrict data to external storage only, place the external devices in their own SRP. Here, storage from an all-EFD array (XtremIO) is used. The symsg command moves volumes simply between SRPs by moving the thin volumes between storage groups. An empty external SRP named XtremIO_SRP has been added to the array with a bin file change, which is required to create additional SRPs.

# symcfg list -srp -sid 41 -detail

STORAGE RESOURCE POOLS

Symmetrix ID : C A P A C I T Y Flg Usable Allocated Free Subscribed Name DR (GB) (GB) (GB) (GB) (%) CloudArray_SRP DEFAULT_SRP FX XtremIO_SRP Total

Legend: Flags: (D)efault SRP : F = FBA Default, . = N/A (R)DFA DSE : X = Usable, . = Not Used

The SRP can be populated in the same way as in the previous tests. In this case, five new entries are added to the original command file and the earlier entries are commented out. The new entries add the edisks to the new SRP created for XtremIO only.
# more /cmd_files/edisk_wwns
#add external_disk wwn=514f0c55eba00001, encapsulate_data=no srp=default_srp;
#add external_disk wwn=514f0c55eba00002, encapsulate_data=no srp=default_srp;
#add external_disk wwn=514f0c55eba00003, encapsulate_data=no srp=default_srp;
#add external_disk wwn=514f0c55eba00004, encapsulate_data=no srp=default_srp;
#add external_disk wwn=514f0c55eba00005, encapsulate_data=no srp=default_srp;

add external_disk wwn=514f0c55eba00007, encapsulate_data=no srp=xtremio_srp;
add external_disk wwn=514f0c55eba00008, encapsulate_data=no srp=xtremio_srp;
add external_disk wwn=514f0c55eba00009, encapsulate_data=no srp=xtremio_srp;
add external_disk wwn=514f0c55eba0000a, encapsulate_data=no srp=xtremio_srp;
add external_disk wwn=514f0c55eba0000b, encapsulate_data=no srp=xtremio_srp;

# symconfigure -sid 41 -f /cmd_files/edisk_wwns commit -nop

A Configuration Change operation is in progress. Please wait... Establishing a configuration change session...Established. Processing symmetrix Performing Access checks...Allowed. Checking Device Reservations...Allowed. Initiating COMMIT of configuration changes...Queued. COMMIT requesting required resources...Obtained. Step 005 of 070 steps...Executing. Step 009 of 070 steps...Executing. Step 014 of 070 steps...Executing. Step 017 of 073 steps...Executing. Step 020 of 073 steps...Executing. Step 032 of 073 steps...Executing. Step 043 of 073 steps...Executing. Step 044 of 073 steps...Executing. Step 046 of 209 steps...Executing. Step 049 of 209 steps...Executing. Step 193 of 220 steps...Executing. Step 194 of 220 steps...Executing. Step 202 of 220 steps...Executing. Step 210 of 220 steps...Executing. Step 216 of 220 steps...Executing. Step 217 of 220 steps...Executing. Local: COMMIT...Done. New symdevs: FF8D2:FF8D6 [DATA devices] Terminating the configuration change session...Done. The configuration change session has successfully completed.

The XtremIO SRP, disk group, and pool are now populated.

# symcfg list -srp -sid 41 -detail

STORAGE RESOURCE POOLS

Symmetrix ID : C A P A C I T Y Flg Usable Allocated Free Subscribed Name DR (GB) (GB) (GB) (GB) (%) CloudArray_SRP DEFAULT_SRP FX XtremIO_SRP Total

Legend:

47 Flags: (D)efault SRP : F = FBA Default,. = N/A (R)DFA DSE : X = Usable,. = Not Used # symdisk list -dskgrp_summary -sid 41 Symmetrix ID: Disk Group Disk Hyper Capacity Flgs Speed Size Size Total Free Num Name Cnt LT (RPM) (MB) (MB) (MB) (MB) DISK_GROUP_ IF DISK_GROUP_ IS DISK_GROUP_ IE *ENCAPSDG* 0 -- N/A N/A N/A EXT_GROUP_514 5 X- N/A N/A Any EXT_GROUP_515 4 X- N/A N/A Any EXT_GROUP_516 5 X- N/A N/A Any Total Legend: Disk (L)ocation: I = Internal, X = External, - = N/A (T)echnology: S = SATA, F = Fibre Channel, E = Enterprise Flash Drive, - = N/A # symcfg list -pool -sid 41 Symmetrix ID: S Y M M E T R I X P O O L S Pool Flags Dev Usable Free Used Full Comp Name PTECSL Config Tracks Tracks Tracks (%) (%) DG1_FBA15K TFF-EI 2-Way Mir DG2_FBA7_2 TSF-EI RAID-6(6+2) DG3_FBA_F TEF-EI RAID-5(3+1) DG515_FBA T-F-EX Unprotected *ENCAPSPOOL* T---D- Unknown DG514_FBA T-F-EX Unprotected DG516_FBA T-F-EX Unprotected Total Tracks Legend: (P)ool Type: S = Snap, R = Rdfa DSE T = Thin (T)echnology: S = SATA, F = Fibre Channel, E = Enterprise Flash Drive, M = Mixed, - = N/A Dev (E)mulation: F = FBA, A = AS400, 8 = CKD3380, 9 = CKD3390, - = N/A (C)ompression: E = Enabled, D = Disabled, N = Enabling, S = Disabling, - = N/A (S)tate: E = Enabled, D = Disabled, B = Balancing Disk (L)ocation: I = Internal, X = External, M = Mixed, - = N/A 47

48 # symcfg show -pool DG516_FBA -detail -thin -sid 41 Symmetrix ID: Symmetrix ID : Pool Name : DG516_FBA Pool Type : Thin Disk Location : External Technology Dev Emulation : FBA Dev Configuration : Unprotected Pool State : Enabled Compression State # of Devices in Pool : 5 # of Enabled Devices in Pool : 5 # of Usable Tracks in Pool : # of Allocated Tracks in Pool : 0 # of Thin Device Tracks : 0 # of DSE Tracks : 0 # of Local Replication Tracks : 0 # of Tracks saved by compression : 0 # of Shared Tracks in Pool Pool Utilization (%) : 0 Pool Compression Ratio (%) : 0 Max. Subscription Percent Rebalance Variance Max devs per rebalance scan Pool Reserved Capacity Enabled Devices(5): { Sym Usable Alloc Free Full FLG Device Dev Tracks Tracks Tracks (%) S State FF8D Enabled FF8D Enabled FF8D Enabled FF8D Enabled FF8D Enabled Tracks } 48 No Thin Devices Bound to Device Pool DG516_FBA No Other-Pool Bound Thin Devices have allocations in Device Pool DG516_FBA Legend: Enabled devices FLG: (S)hared Tracks : X = Shared Tracks,. = No Shared Tracks Bound Devices FLG: S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, D = Deallocating, R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll,. = Unbound An empty Storage Group can be created just as in the previous section but is assigned to the XtremIO_SRP instead of the DEFAULT_SRP. # symsg -sid 41 create lcseb149_xio_sg -srp XtremIO_SRP # symsg -sid 41 -sg BETA_CLUSTER add sg lcseb149_xio_sg

# symaccess -sid 41 show lcseb149_xio_sg -type storage

Symmetrix ID : Storage Group Name : lcseb149_xio_sg Last update time : 01:47:15 PM on Tue Jul 28,2015 Group last update time : 01:47:15 PM on Tue Jul 28,2015 Number of Storage Groups : 1 Storage Group Names : BETA_CLUSTER (IsParent) Devices : None Masking View Names { BETA_CLUSTER * } * Denotes Masking Views through a cascaded group

# symaccess -sid 41 show BETA_CLUSTER -type storage

Symmetrix ID : Storage Group Name : BETA_CLUSTER Last update time : 01:47:15 PM on Tue Jul 28,2015 Group last update time : 01:47:15 PM on Tue Jul 28,2015 Number of Storage Groups : 7 Storage Group Names : bc_gks (IsChild) Rich_B137 (IsChild) Andy_B149 (IsChild) b127_gks (IsChild) lcseb149_sg (IsChild) lcseb149_xio_sg (IsChild) Devices : 00013: : :00134 Masking View Names { BETA_CLUSTER }

At this point, there are two thin devices from the previous test mapped to host lcseb149 by assigning them to the lcseb149_sg storage group.

# symsg show lcseb149_sg -sid 41

Name: lcseb149_sg

Symmetrix ID : Last updated at : Tue Jul 28 19:34: Masking Views : Yes FAST Managed : Yes SLO Name : <none> Workload : <none> SRP Name : DEFAULT_SRP Host I/O Limit : None Host I/O Limit MB/Sec Host I/O Limit IO/Sec Dynamic Distribution Number of Storage Groups : 1

50 Storage Group Names : BETA_CLUSTER (IsParent) Number of Gatekeepers : 0 Devices (2): { Sym Device Cap Dev Pdev Name Config Attr Sts (MB) N/A TDEV RW N/A TDEV RW } Host data has been written to the two devices which has allocated tracks in the thin pool for external disk group 514. # symcfg show -pool DG514_FBA -thin -detail -all -sid 41 Symmetrix ID: Symmetrix ID : Pool Name : DG514_FBA Pool Type : Thin Disk Location : External Technology Dev Emulation : FBA Dev Configuration : Unprotected Pool State : Enabled Compression State # of Devices in Pool : 5 # of Enabled Devices in Pool : 5 # of Usable Tracks in Pool : # of Allocated Tracks in Pool : # of Thin Device Tracks : # of DSE Tracks : 0 # of Local Replication Tracks : 0 # of Tracks saved by compression : 0 # of Shared Tracks in Pool Pool Utilization (%) : 57 Pool Compression Ratio (%) : 0 Max. Subscription Percent Rebalance Variance Max devs per rebalance scan Pool Reserved Capacity Enabled Devices(5): { Sym Usable Alloc Free Full FLG Device Dev Tracks Tracks Tracks (%) S State FF8D Enabled FF8D Enabled FF8D Enabled FF8DA Enabled FF8DB Enabled Tracks } 50 No Thin Devices Bound to Device Pool DG514_FBA Other Thin Devices with Allocations in this Pool (2):

{ Pool Compressed Bound Total Allocated Size/Ratio Sym Pool Name Tracks Tracks (%) Tracks (%) Tracks }

Legend: Enabled devices FLG: (S)hared Tracks : X = Shared Tracks, . = No Shared Tracks Bound Devices FLG: S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, D = Deallocating, R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll, . = Unbound

Because the Optimized SLO was used, data on devices 00133 and 00134 can exist on any of the storage in the default SRP. If the goal is to have the extents for devices 00133 and 00134 on XtremIO storage only, the thin volumes and all their extents must be moved to the XtremIO_SRP, which contains only edisks created from XtremIO volumes. Before moving the devices, the lcseb149_xio_sg storage group is empty.

# symsg show lcseb149_xio_sg -sid 41

Name: lcseb149_xio_sg

Symmetrix ID : Last updated at : Tue Jul 28 19:40: Masking Views : Yes FAST Managed : Yes SLO Name : <none> Workload : <none> SRP Name : XtremIO_SRP Host I/O Limit : None Host I/O Limit MB/Sec Host I/O Limit IO/Sec Dynamic Distribution Number of Storage Groups : 1 Storage Group Names : BETA_CLUSTER (IsParent) Number of Gatekeepers : 0

The pool for the external disk group 516 has no thin devices and, therefore, no track allocations.

# symcfg show -pool DG516_FBA -thin -detail -all -sid 41

Symmetrix ID:

Symmetrix ID : Pool Name : DG516_FBA Pool Type : Thin Disk Location : External Technology Dev Emulation : FBA Dev Configuration : Unprotected Pool State : Enabled Compression State # of Devices in Pool : 5

# of Enabled Devices in Pool : 5 # of Usable Tracks in Pool : # of Allocated Tracks in Pool : 0 # of Thin Device Tracks : 0 # of DSE Tracks : 0 # of Local Replication Tracks : 0 # of Tracks saved by compression : 0 # of Shared Tracks in Pool Pool Utilization (%) : 0 Pool Compression Ratio (%) : 0 Max. Subscription Percent Rebalance Variance Max devs per rebalance scan Pool Reserved Capacity Enabled Devices(5): { Sym Usable Alloc Free Full FLG Device Dev Tracks Tracks Tracks (%) S State FF8D Enabled FF8D Enabled FF8D Enabled FF8D Enabled FF8D Enabled Tracks }

No Thin Devices Bound to Device Pool DG516_FBA
No Other-Pool Bound Thin Devices have allocations in Device Pool DG516_FBA

Legend: Enabled devices FLG: (S)hared Tracks : X = Shared Tracks, . = No Shared Tracks Bound Devices FLG: S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, D = Deallocating, R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll, . = Unbound

The symsg move or moveall command moves the devices from the storage group using the DEFAULT_SRP to the storage group using the XtremIO_SRP.

# symsg -sg lcseb149_sg -sid 41 moveall lcseb149_xio_sg

Volumes 00133 and 00134 are no longer in the lcseb149_sg storage group. They have been moved to lcseb149_xio_sg.

# symsg show lcseb149_sg -sid 41

Name: lcseb149_sg

Symmetrix ID : Last updated at : Tue Jul 28 20:22: Masking Views : Yes FAST Managed : Yes SLO Name : <none> Workload : <none> SRP Name : DEFAULT_SRP Host I/O Limit : None

Host I/O Limit MB/Sec Host I/O Limit IO/Sec Dynamic Distribution Number of Storage Groups : 1 Storage Group Names : BETA_CLUSTER (IsParent) Number of Gatekeepers : 0

# symsg show lcseb149_xio_sg -sid 41

Name: lcseb149_xio_sg

Symmetrix ID : Last updated at : Tue Jul 28 20:22: Masking Views : Yes FAST Managed : Yes SLO Name : <none> Workload : <none> SRP Name : XtremIO_SRP Host I/O Limit : None Host I/O Limit MB/Sec Host I/O Limit IO/Sec Dynamic Distribution Number of Storage Groups : 1 Storage Group Names : BETA_CLUSTER (IsParent) Number of Gatekeepers : 0 Devices (2): { Sym Device Cap Dev Pdev Name Config Attr Sts (MB) N/A TDEV RW N/A TDEV RW }

Once the volumes are reassigned to the XtremIO storage group, FAST begins to move the data to the XtremIO_SRP. The data movement can be observed by monitoring the tracks in the thin pools. The number of tracks that remain in the DEFAULT_SRP (DG514_FBA) shows what is left to be moved, while the number in the XtremIO_SRP (DG516_FBA) shows what has already moved.

# symcfg show -pool DG514_FBA -thin -detail -all -sid 41

Symmetrix ID:

Symmetrix ID : Pool Name : DG514_FBA Pool Type : Thin Disk Location : External Technology Dev Emulation : FBA Dev Configuration : Unprotected Pool State : Enabled Compression State # of Devices in Pool : 5 # of Enabled Devices in Pool : 5 # of Usable Tracks in Pool : # of Allocated Tracks in Pool : # of Thin Device Tracks :

54 # of DSE Tracks : 0 # of Local Replication Tracks : 0 # of Tracks saved by compression : 0 # of Shared Tracks in Pool Pool Utilization (%) : 57 Pool Compression Ratio (%) : 0 Max. Subscription Percent Rebalance Variance Max devs per rebalance scan Pool Reserved Capacity Enabled Devices(5): { Sym Usable Alloc Free Full FLG Device Dev Tracks Tracks Tracks (%) S State FF8D Enabled FF8D Enabled FF8D Enabled FF8DA Enabled FF8DB Enabled Tracks } 54 No Thin Devices Bound to Device Pool DG514_FBA Other Thin Devices with Allocations in this Pool (2): { Pool Compressed Bound Total Allocated Size/Ratio Sym Pool Name Tracks Tracks (%) Tracks (%) Tracks } Legend: Enabled devices FLG: (S)hared Tracks : X = Shared Tracks,. = No Shared Tracks Bound Devices FLG: S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, D = Deallocating, R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll,. = Unbound # symcfg show -pool DG516_FBA -thin -detail -all -sid 41 Symmetrix ID: Symmetrix ID : Pool Name : DG516_FBA Pool Type : Thin Disk Location : External Technology Dev Emulation : FBA Dev Configuration : Unprotected Pool State : Enabled Compression State # of Devices in Pool : 5 # of Enabled Devices in Pool : 5

55 # of Usable Tracks in Pool : # of Allocated Tracks in Pool : # of Thin Device Tracks : # of DSE Tracks : 0 # of Local Replication Tracks : 0 # of Tracks saved by compression : 0 # of Shared Tracks in Pool Pool Utilization (%) : 3 Pool Compression Ratio (%) : 0 Max. Subscription Percent Rebalance Variance Max devs per rebalance scan Pool Reserved Capacity Enabled Devices(5): { Sym Usable Alloc Free Full FLG Device Dev Tracks Tracks Tracks (%) S State FF8D Enabled FF8D Enabled FF8D Enabled FF8D Enabled FF8D Enabled Tracks } No Thin Devices Bound to Device Pool DG516_FBA Other Thin Devices with Allocations in this Pool (2): { Pool Compressed Bound Total Allocated Size/Ratio Sym Pool Name Tracks Tracks (%) Tracks (%) Tracks } Legend: Enabled devices FLG: (S)hared Tracks : X = Shared Tracks,. = No Shared Tracks Bound Devices FLG: S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, D = Deallocating, R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll,. = Unbound FAST continues to move the data in the background. The devices are still available for host reads and writes as the data moves. New host writes all go to the XtremIO_SRP. # symcfg show -pool DG514_FBA -thin -detail -thin -sid 41 Symmetrix ID: Symmetrix ID : Pool Name : DG514_FBA Pool Type : Thin 55

56 Disk Location : External Technology Dev Emulation : FBA Dev Configuration : Unprotected Pool State : Enabled Compression State # of Devices in Pool : 5 # of Enabled Devices in Pool : 5 # of Usable Tracks in Pool : # of Allocated Tracks in Pool : # of Thin Device Tracks : # of DSE Tracks : 0 # of Local Replication Tracks : 0 # of Tracks saved by compression : 0 # of Shared Tracks in Pool Pool Utilization (%) : 49 Pool Compression Ratio (%) : 0 Max. Subscription Percent Rebalance Variance Max devs per rebalance scan Pool Reserved Capacity Enabled Devices(5): { Sym Usable Alloc Free Full FLG Device Dev Tracks Tracks Tracks (%) S State FF8D Enabled FF8D Enabled FF8D Enabled FF8DA Enabled FF8DB Enabled Tracks } No Thin Devices Bound to Device Pool DG514_FBA Other Thin Devices with Allocations in this Pool (2): { Pool Compressed Bound Total Allocated Size/Ratio Sym Pool Name Tracks Tracks (%) Tracks (%) Tracks } Legend: Enabled devices FLG: (S)hared Tracks : X = Shared Tracks,. = No Shared Tracks Bound Devices FLG: S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, D = Deallocating, R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll,. = Unbound # symcfg show -pool DG516_FBA -thin -detail -thin -sid 41 Symmetrix ID:

57 Symmetrix ID : Pool Name : DG516_FBA Pool Type : Thin Disk Location : External Technology Dev Emulation : FBA Dev Configuration : Unprotected Pool State : Enabled Compression State # of Devices in Pool : 5 # of Enabled Devices in Pool : 5 # of Usable Tracks in Pool : # of Allocated Tracks in Pool : # of Thin Device Tracks : # of DSE Tracks : 0 # of Local Replication Tracks : 0 # of Tracks saved by compression : 0 # of Shared Tracks in Pool Pool Utilization (%) : 17 Pool Compression Ratio (%) : 0 Max. Subscription Percent Rebalance Variance Max devs per rebalance scan Pool Reserved Capacity Enabled Devices(5): { Sym Usable Alloc Free Full FLG Device Dev Tracks Tracks Tracks (%) S State FF8D Enabled FF8D Enabled FF8D Enabled FF8D Enabled FF8D Enabled Tracks } No Thin Devices Bound to Device Pool DG516_FBA Other Thin Devices with Allocations in this Pool (2): { Pool Compressed Bound Total Allocated Size/Ratio Sym Pool Name Tracks Tracks (%) Tracks (%) Tracks } Legend: Enabled devices FLG: (S)hared Tracks : X = Shared Tracks,. = No Shared Tracks Bound Devices FLG: S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, D = Deallocating, R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll,. = Unbound 57
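Rather than eyeballing the symcfg output repeatedly, the allocated-track counter can be extracted with a small script and polled until it reaches zero. This sketch parses the "# of Allocated Tracks in Pool" field from output shaped like the listings above; the sample fragment and its track counts are invented for illustration.

```python
import re

def allocated_tracks(symcfg_output: str) -> int:
    """Return the '# of Allocated Tracks in Pool' value from
    'symcfg show -pool <pool> -thin -detail' output."""
    m = re.search(r"# of Allocated Tracks in Pool\s*:\s*(\d+)", symcfg_output)
    if m is None:
        raise ValueError("allocated-track count not found in output")
    return int(m.group(1))

# Illustrative fragment; these numbers are made up:
sample = """\
Pool Name                        : DG514_FBA
# of Usable Tracks in Pool       : 16435200
# of Allocated Tracks in Pool    : 4821120
"""
remaining = allocated_tracks(sample)  # 0 would mean nothing left to move
```

In practice the output of `symcfg show -pool DG514_FBA -thin -detail -all -sid 41` would be captured and passed to this function in a polling loop.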

When all tracks have moved, the migration is complete and all data on 00133 and 00134 uses external storage from the XtremIO array. The thin devices no longer appear in the DG514_FBA thin pool and have 100% of their tracks in DG516_FBA.

# symcfg show -pool DG514_FBA -thin -detail -all -sid 41

Symmetrix ID:

Symmetrix ID : Pool Name : DG514_FBA Pool Type : Thin Disk Location : External Technology Dev Emulation : FBA Dev Configuration : Unprotected Pool State : Enabled Compression State # of Devices in Pool : 5 # of Enabled Devices in Pool : 5 # of Usable Tracks in Pool : # of Allocated Tracks in Pool : 0 # of Thin Device Tracks : 0 # of DSE Tracks : 0 # of Local Replication Tracks : 0 # of Tracks saved by compression : 0 # of Shared Tracks in Pool Pool Utilization (%) : 0 Pool Compression Ratio (%) : 0 Max. Subscription Percent Rebalance Variance Max devs per rebalance scan Pool Reserved Capacity Enabled Devices(5): { Sym Usable Alloc Free Full FLG Device Dev Tracks Tracks Tracks (%) S State FF8D Enabled FF8D Enabled FF8D Enabled FF8DA Enabled FF8DB Enabled Tracks }

No Thin Devices Bound to Device Pool DG514_FBA
No Other-Pool Bound Thin Devices have allocations in Device Pool DG514_FBA

Legend: Enabled devices FLG: (S)hared Tracks : X = Shared Tracks, . = No Shared Tracks Bound Devices FLG: S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, D = Deallocating, R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll, . = Unbound

# symcfg show -pool DG516_FBA -thin -detail -all -sid 41

Symmetrix ID:

59 Symmetrix ID : Pool Name : DG516_FBA Pool Type : Thin Disk Location : External Technology Dev Emulation : FBA Dev Configuration : Unprotected Pool State : Enabled Compression State # of Devices in Pool : 5 # of Enabled Devices in Pool : 5 # of Usable Tracks in Pool : # of Allocated Tracks in Pool : # of Thin Device Tracks : # of DSE Tracks : 0 # of Local Replication Tracks : 0 # of Tracks saved by compression : 0 # of Shared Tracks in Pool Pool Utilization (%) : 79 Pool Compression Ratio (%) : 0 Max. Subscription Percent Rebalance Variance Max devs per rebalance scan Pool Reserved Capacity Enabled Devices(5): { Sym Usable Alloc Free Full FLG Device Dev Tracks Tracks Tracks (%) S State FF8D Enabled FF8D Enabled FF8D Enabled FF8D Enabled FF8D Enabled Tracks } No Thin Devices Bound to Device Pool DG516_FBA Other Thin Devices with Allocations in this Pool (2): { Pool Compressed Bound Total Allocated Size/Ratio Sym Pool Name Tracks Tracks (%) Tracks (%) Tracks } Legend: Enabled devices FLG: (S)hared Tracks : X = Shared Tracks,. = No Shared Tracks Bound Devices FLG: S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, 59

D = Deallocating, R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll, . = Unbound

Local Replication and FAST.X

Performing local replication against externally provisioned storage is no different from performing replication against VMAX3 volumes that use internal storage. SnapVX can create point-in-time snapshots that do not require target volumes and consume additional space only when the source volume is updated. These snapshots share back-end track allocations with the source volumes, meaning that a regular, target-less snapshot consumes space only from the SRP that the source volumes belong to. In this example, taking a snap of the lcseb149_xio_sg storage group creates a point-in-time copy that uses space for changed tracks from the XtremIO_SRP only. Once a point-in-time copy is taken, it can be linked to and copied to other devices, or the SnapVX session can be terminated if the snap is no longer needed.

# symsnapvx -sid 41 -sg lcseb149_xio_sg -name XIO_Only_Snap establish -nop

Establish operation execution is in progress for the storage group lcseb149_xio_sg. Please wait... Polling for Establish...Started. Polling for Establish...Done. Polling for Activate...Started. Polling for Activate...Done. Establish operation successfully executed for the storage group lcseb149_xio_sg

# symsnapvx list -sg lcseb149_xio_sg -sid 41

Storage Group (SG) Name : lcseb149_xio_sg SG's Symmetrix ID : (Microcode Version: 5977) Sym Num Flgs Dev Snapshot Name Gens FLRG Last Snapshot Timestamp XIO_Only_Snap 1... Wed Jul 29 14:48: XIO_Only_Snap 1... Wed Jul 29 14:48:

Flgs: (F)ailed : X = Failed, . = No Failure (L)ink : X = Link Exists, . = No Link Exists (R)estore : X = Restore Active, . = No Restore Active (G)CM : X = GCM, . = Non-GCM

# symsnapvx -sid 41 -sg lcseb149_xio_sg -snapshot_name XIO_Only_Snap terminate -nop

Terminate operation execution is in progress for the storage group lcseb149_xio_sg. Please wait... Polling for Terminate...Started.
Polling for Terminate...Done. Terminate operation successfully executed for the storage group lcseb149_xio_sg

Clones can also be created by linking and copying source devices to target devices. The target devices can be in the same SRP or a different SRP from the source devices.
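The target-less snapshot space accounting described above can be illustrated with a toy model: a snapshot initially shares all allocations with its source, and extra pool space is consumed only the first time a shared track is overwritten after the snapshot is taken. This is a conceptual sketch only, not a representation of SnapVX or HYPERMAX OS internals; all names are invented.

```python
class ToySnapshotPool:
    """Toy copy-on-first-write model of target-less snapshot space usage."""

    def __init__(self, source_tracks):
        self.source = dict(source_tracks)  # track number -> data
        self.snapshot_active = False
        self.preserved = set()             # tracks preserved for the snapshot
        self.extra_tracks = 0              # pool space beyond the source's own

    def establish(self):
        """Take a point-in-time snapshot; by itself it consumes no space."""
        self.snapshot_active = True
        self.preserved.clear()
        self.extra_tracks = 0

    def write(self, track, data):
        """Host write; preserve the old track for the snapshot on first change."""
        if self.snapshot_active and track in self.source and track not in self.preserved:
            self.preserved.add(track)
            self.extra_tracks += 1
        self.source[track] = data

pool = ToySnapshotPool({0: "a", 1: "b", 2: "c"})
pool.establish()      # snapshot taken: extra_tracks is still 0
pool.write(1, "B")    # first overwrite of track 1: one track preserved
pool.write(1, "B2")   # rewriting the same track costs nothing further
pool.write(2, "C")    # first overwrite of track 2: one more track
```

After these writes only two tracks of extra space are consumed, mirroring the statement that a snapshot uses space only for changed tracks.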

In this example, two new devices (0012E and 0012F) are created and placed in the lcseb149_sg storage group, which uses an Optimized SLO in the DEFAULT_SRP. This means that their allocations can exist on any disk group in that SRP. Performing a link and copy between the devices in lcseb149_xio_sg (00133 and 00134) and lcseb149_sg (0012E and 0012F) copies the data from the XtremIO_SRP to the DEFAULT_SRP.

# symconfigure -sid 41 -cmd "create dev count=2, size=200 GB, emulation=fba, config=tdev, sg=lcseb149_sg;" commit -nop

A Configuration Change operation is in progress. Please wait... Establishing a configuration change session...Established. Processing symmetrix Performing Access checks...Allowed. Checking Device Reservations...Allowed. Initiating COMMIT of configuration changes...Started. Committing configuration changes...Queued. COMMIT requesting required resources...Obtained. Step 005 of 022 steps...Executing. Step 007 of 022 steps...Executing. Step 011 of 022 steps...Executing. Step 016 of 022 steps...Executing. Step 017 of 022 steps...Executing. Step 019 of 022 steps...Executing. Step 022 of 022 steps...Executing. Local: COMMIT...Done. Adding devices to Storage Group...Done. New symdevs: 0012E:0012F [TDEVs] Terminating the configuration change session...Done. The configuration change session has successfully completed.
# symaccess show lcseb149_sg -type storage -sid 41 Symmetrix ID : Storage Group Name : lcseb149_sg Last update time : 02:40:59 PM on Wed Jul 29,2015 Group last update time : 02:40:59 PM on Wed Jul 29,2015 Number of Storage Groups : 1 Storage Group Names : BETA_CLUSTER (IsParent) Devices : 0012E:0012F Masking View Names { BETA_CLUSTER * } * Denotes Masking Views through a cascaded group # symaccess show lcseb149_xio_sg -type storage -sid 41 Symmetrix ID : Storage Group Name : lcseb149_xio_sg Last update time : 08:22:14 PM on Tue Jul 28,2015 Group last update time : 08:22:14 PM on Tue Jul 28,

Number of Storage Groups : 1 Storage Group Names : BETA_CLUSTER (IsParent) Devices : 00133:00134 Masking View Names { BETA_CLUSTER * } * Denotes Masking Views through a cascaded group

# symsnapvx -sid 41 -sg lcseb149_xio_sg -name Copy_to_DEFAULT_SRP establish -nop

Establish operation execution is in progress for the storage group lcseb149_xio_sg. Please wait... Polling for Establish...Started. Polling for Establish...Done. Polling for Activate...Started. Polling for Activate...Done. Establish operation successfully executed for the storage group lcseb149_xio_sg

# symsnapvx -sid 41 -sg lcseb149_xio_sg link -snapshot_name Copy_to_DEFAULT_SRP -copy -lnsg lcseb149_sg -nop

Link operation execution is in progress for the storage group lcseb149_xio_sg. Please wait... Polling for Link...Started. Polling for Link...Done. Link operation successfully executed for the storage group lcseb149_xio_sg

After performing the link with the -copy option, the data begins copying from 00133 and 00134 to 0012E and 0012F.

# symsnapvx list -sid 41 -sg lcseb149_xio_sg -linked -detail

Storage Group (SG) Name : lcseb149_xio_sg SG's Symmetrix ID : (Microcode Version: 5977) Sym Link Flgs Remaining Done Dev Snapshot Name Gen Dev FCMD Snapshot Timestamp (Tracks) (%) Copy_to_DEFAULT_SRP E.I.. Wed Jul 29 18:12: Copy_to_DEFAULT_SRP F.I.. Wed Jul 29 18:12:

Flgs: (F)ailed : F = Force Failed, X = Failed, . = No Failure (C)opy : I = CopyInProg, C = Copied, D = Copied/Destaged, . = NoCopy Link (M)odified : X = Modified Target Data, . = Not Modified (D)efined : X = All Tracks Defined, . = Define in progress

The copy is now complete and the data has been copied to the devices in the DEFAULT_SRP.

# symsnapvx list -sid 41 -sg lcseb149_xio_sg -linked -detail

Storage Group (SG) Name : lcseb149_xio_sg SG's Symmetrix ID : (Microcode Version: 5977) Sym Link Flgs Remaining Done Dev Snapshot Name Gen Dev FCMD Snapshot Timestamp (Tracks) (%) Copy_to_DEFAULT_SRP E.D.X Wed Jul 29 18:12: Copy_to_DEFAULT_SRP F.D.X Wed Jul 29 18:12:

Flgs: (F)ailed : F = Force Failed, X = Failed, . = No Failure (C)opy : I = CopyInProg, C = Copied, D = Copied/Destaged, . = NoCopy Link (M)odified : X = Modified Target Data, . = Not Modified (D)efined : X = All Tracks Defined, . = Define in progress

After the copy is complete, the devices are unlinked and the session is terminated.

# symsnapvx -sid 41 -sg lcseb149_xio_sg -snapshot_name Copy_to_DEFAULT_SRP unlink -lnsg lcseb149_sg -nop

Unlink operation execution is in progress for the storage group lcseb149_xio_sg. Please wait... Polling for Unlink...Started. Polling for Unlink...Done. Unlink operation successfully executed for the storage group lcseb149_xio_sg

# symsnapvx -sid 41 -sg lcseb149_xio_sg -snapshot_name Copy_to_DEFAULT_SRP terminate -nop

Terminate operation execution is in progress for the storage group lcseb149_xio_sg. Please wait... Polling for Terminate...Started. Polling for Terminate...Done. Terminate operation successfully executed for the storage group lcseb149_xio_sg

For more information on local replication operations, see the VMAX3 Local Replication Technical Notes available on emc.com:

Removing FAST.X Components from an Empty SRP

This section removes the FAST.X components from the XtremIO_SRP following the data migration. Before removing the FAST.X entities, the thin devices are removed from the storage group and their allocated tracks are freed.

# symsg -sg lcseb149_xio_sg -sid 41 rmall
# symdev -sid 41 -devs 00133:00134 free -nop

64 'Free Start' operation succeeded for devices in set of ranges. The tracks being freed are watched by viewing the thin pool details. The number of pool allocated tracks declines until there are no more remaining. # symcfg show -pool DG516_FBA -thin -detail -all -sid 41 Symmetrix ID: Symmetrix ID : Pool Name : DG516_FBA Pool Type : Thin Disk Location : External Technology Dev Emulation : FBA Dev Configuration : Unprotected Pool State : Enabled Compression State # of Devices in Pool : 5 # of Enabled Devices in Pool : 5 # of Usable Tracks in Pool : # of Allocated Tracks in Pool : # of Thin Device Tracks : # of DSE Tracks : 0 # of Local Replication Tracks : 0 # of Tracks saved by compression : 0 # of Shared Tracks in Pool Pool Utilization (%) : 59 Pool Compression Ratio (%) : 0 Max. Subscription Percent Rebalance Variance Max devs per rebalance scan Pool Reserved Capacity Enabled Devices(5): { Sym Usable Alloc Free Full FLG Device Dev Tracks Tracks Tracks (%) S State FF8D Enabled FF8D Enabled FF8D Enabled FF8D Enabled FF8D Enabled Tracks } 64 No Thin Devices Bound to Device Pool DG516_FBA Other Thin Devices with Allocations in this Pool (2): { Pool Compressed Bound Total Allocated Size/Ratio Sym Pool Name Tracks Tracks (%) Tracks (%) Tracks

65 } Legend: Enabled devices FLG: (S)hared Tracks : X = Shared Tracks,. = No Shared Tracks Bound Devices FLG: S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, D = Deallocating, R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll,. = Unbound When the free operation on the TDEVs completes, the thin pool contains no allocated tracks and the thin devices have been removed. # symcfg show -pool DG516_FBA -thin -detail -all -sid 41 Symmetrix ID: Symmetrix ID : Pool Name : DG516_FBA Pool Type : Thin Disk Location : External Technology Dev Emulation : FBA Dev Configuration : Unprotected Pool State : Enabled Compression State # of Devices in Pool : 5 # of Enabled Devices in Pool : 5 # of Usable Tracks in Pool : # of Allocated Tracks in Pool : 0 # of Thin Device Tracks : 0 # of DSE Tracks : 0 # of Local Replication Tracks : 0 # of Tracks saved by compression : 0 # of Shared Tracks in Pool Pool Utilization (%) : 0 Pool Compression Ratio (%) : 0 Max. Subscription Percent Rebalance Variance Max devs per rebalance scan Pool Reserved Capacity Enabled Devices(5): { Sym Usable Alloc Free Full FLG Device Dev Tracks Tracks Tracks (%) S State FF8D Enabled FF8D Enabled FF8D Enabled FF8D Enabled FF8D Enabled Tracks } No Thin Devices Bound to Device Pool DG516_FBA No Other-Pool Bound Thin Devices have allocations in Device Pool DG516_FBA Legend: 65

  Enabled devices FLG:
    (S)hared Tracks : X = Shared Tracks, . = No Shared Tracks
  Bound Devices FLG:
    S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating,
               D = Deallocating, R = Reclaiming, C = Compressing,
               N = Uncompressing, F = FreeingAll, . = Unbound

The edisks can now be removed from disk group 516. Use symconfigure to drain the devices, either by adding the drain commands to a command file or by draining them individually from the command line.

Note: Depending on how the thin pool was used, all of the edisks may need to be drained. If the drain operation against all of the devices fails, drain them individually; devices that are already drained return an error stating that they are already in the requested state.

# symdisk list -spindle -external -sid 41

Symmetrix ID :

Disks Selected : 14

                                        Disk       Capacity(MB)
  Spindle  Grp  Dir  Vendor  Type  Hypr     Total      Free
           H    EMC               N/A
           H    EMC               N/A
           H    EMC               N/A
           H    EMC               N/A
           H    EMC               N/A
           H    EMC               N/A
           H    EMC               N/A
           H    EMC               N/A
           H    EMC               N/A
           H    EMC               N/A
  A        H    EMC               N/A
  B        H    EMC               N/A
  C        H    EMC               N/A
  D        H    EMC               N/A
  Totals

# cat /cmd_files/drain
start drain on external_disk spid=8009;
start drain on external_disk spid=800a;
start drain on external_disk spid=800b;
start drain on external_disk spid=800c;
start drain on external_disk spid=800d;

# symconfigure -sid 41 -f /cmd_files/drain commit -nop

A Configuration Change operation is in progress. Please wait...

    Establishing a configuration change session...............established.
    Processing symmetrix
    Performing Access checks..................................allowed.
    Checking Device Reservations..............................allowed.
    Committing configuration changes..........................started.
    Committing configuration changes..........................committed.
    Terminating the configuration change session..............done.

The configuration change session has successfully completed.
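The drain command file above can also be generated from a list of spindle IDs. The sketch below uses the spindle IDs from the example; in practice the IDs come from "symdisk list -spindle -external", and the /tmp/drain path is illustrative.

```shell
# Build the symconfigure command file that drains a set of edisks.
# Spindle IDs below are taken from the example output above.
cmd_file=/tmp/drain
: > "$cmd_file"
for spid in 8009 800a 800b 800c 800d; do
    printf 'start drain on external_disk spid=%s;\n' "$spid" >> "$cmd_file"
done
cat "$cmd_file"
# Commit the drains against the array (not executed here):
#   symconfigure -sid 41 -f "$cmd_file" commit -nop
```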

If the drain command fails, drain the devices that require it by specifying the individual device or devices at the command line.

# symconfigure -sid 41 -cmd "start drain on external_disk spid=800d;" commit -nop

A Configuration Change operation is in progress. Please wait...

    Establishing a configuration change session...............established.
    Processing symmetrix
    Performing Access checks..................................allowed.
    Checking Device Reservations..............................allowed.
    Committing configuration changes..........................started.
    Committing configuration changes..........................committed.
    Terminating the configuration change session..............done.

The configuration change session has successfully completed.

After draining all of the devices that require it, remove the edisks.

# cat remove
remove external_disk spid=8009;
remove external_disk spid=800a;
remove external_disk spid=800b;
remove external_disk spid=800c;
remove external_disk spid=800d;

# symconfigure -sid 41 -f /cmd_files/remove commit -nop

A Configuration Change operation is in progress. Please wait...

    Establishing a configuration change session...............established.
    Processing symmetrix
    Performing Access checks..................................allowed.
    Checking Device Reservations..............................allowed.
    Initiating COMMIT of configuration changes................queued.
    COMMIT requesting required resources......................obtained.
    Step 008 of 070 steps.....................................executing.
    Step 011 of 070 steps.....................................executing.
    Step 014 of 070 steps.....................................executing.
    Step 016 of 069 steps.....................................executing.
    Step 017 of 069 steps.....................................executing.
    Step 025 of 069 steps.....................................executing.
    Step 026 of 069 steps.....................................executing.
    Step 028 of 069 steps.....................................executing.
    Step 166 of 190 steps.....................................executing.
    Step 166 of 190 steps.....................................executing.
    Step 169 of 190 steps.....................................executing.
    Step 171 of 190 steps.....................................executing.
    Step 172 of 190 steps.....................................executing.
    Step 173 of 190 steps.....................................executing.
    Step 176 of 190 steps.....................................executing.
    Step 178 of 190 steps.....................................executing.
    Step 181 of 190 steps.....................................executing.
    Step 187 of 190 steps.....................................executing.
    Step 187 of 190 steps.....................................executing.
    Step 190 of 190 steps.....................................executing.
    Local: COMMIT.............................................Done.
    Terminating the configuration change session..............done.

The configuration change session has successfully completed.
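The individual-drain fallback described above can be sketched as a loop. The symconfigure function below is a STUB for illustration only, standing in for the real Solutions Enabler CLI so the control flow can be shown without an array; delete it on a real management host.

```shell
# Fallback loop: drain each edisk individually, treating a failure on an
# already-drained device as benign and continuing with the rest.
symconfigure() {   # STUB -- remove so the real CLI is invoked instead
    case "$*" in
        *800d*) echo "The device is already in the requested state"; return 1 ;;
        *)      echo "The configuration change session has successfully completed." ;;
    esac
}
for spid in 8009 800a 800b 800c 800d; do
    if ! symconfigure -sid 41 -cmd "start drain on external_disk spid=$spid;" commit -nop; then
        echo "spid $spid: drain skipped (already in the requested state)"
    fi
done
```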

Removing the edisks also removes the pool and the disk group.

# symcfg show -pool DG516_FBA -thin -detail -all -sid 41

Symmetrix ID :

The requested thin pool does not exist -- cannot perform the operation

# symdisk list -sid 41 -dskgrp_summary

Symmetrix ID :

  Disk Group            Disk  Flgs  Speed  Hyper Size  Size   Total   Free
  Num   Name            Cnt   LT    (RPM)  (MB)        (MB)   (MB)    (MB)
        DISK_GROUP_           IF
        DISK_GROUP_           IS
        DISK_GROUP_           IE
        *ENCAPSDG*      0     --           N/A         N/A    N/A
        EXT_GROUP_515   4     X-           N/A         N/A    Any
  Total

Legend:
  Disk (L)ocation: I = Internal, X = External, - = N/A
  (T)echnology: S = SATA, F = Fibre Channel, E = Enterprise Flash Drive, - = N/A

Removing FAST.X Components from an SRP Containing Volumes

edisks can be drained and removed from an SRP without moving or deleting volumes, provided there is enough free capacity in the SRP to accept the tracks allocated to those edisks. In that case, the first step in removing the edisks is to drain them, which moves all of their allocated tracks to other disks in the SRP. In the case of the edisks in DEFAULT_SRP, which are in disk group 514, only one requires draining.

# symconfigure -sid 41 -cmd "start drain on external_disk spid=8008;" commit -nop

A Configuration Change operation is in progress. Please wait...

    Establishing a configuration change session...............established.
    Processing symmetrix
    Performing Access checks..................................allowed.
    Checking Device Reservations..............................allowed.
    Committing configuration changes..........................started.
    Committing configuration changes..........................committed.
    Terminating the configuration change session..............done.

The configuration change session has successfully completed.

Once that device is drained, remove the edisks:

# cat /cmd_files/remove
remove external_disk spid=8004;
remove external_disk spid=8005;

remove external_disk spid=8006;
remove external_disk spid=8007;
remove external_disk spid=8008;

# symconfigure -sid 41 -f /cmd_files/remove commit -nop

A Configuration Change operation is in progress. Please wait...

    Establishing a configuration change session...............established.
    Processing symmetrix
    Performing Access checks..................................allowed.
    Checking Device Reservations..............................allowed.
    Initiating COMMIT of configuration changes................queued.
    COMMIT requesting required resources......................obtained.
    Step 009 of 070 steps.....................................executing.
    Step 012 of 070 steps.....................................executing.
    Step 014 of 070 steps.....................................executing.
    Step 016 of 069 steps.....................................executing.
    Step 019 of 069 steps.....................................executing.
    Step 026 of 069 steps.....................................executing.
    Step 028 of 069 steps.....................................executing.
    Step 030 of 069 steps.....................................executing.
    Step 039 of 069 steps.....................................executing.
    Step 040 of 069 steps.....................................executing.
    Step 040 of 069 steps.....................................executing.
    Step 170 of 190 steps.....................................executing.
    Step 172 of 190 steps.....................................executing.
    Step 173 of 190 steps.....................................executing.
    Step 175 of 190 steps.....................................executing.
    Step 177 of 190 steps.....................................executing.
    Step 180 of 190 steps.....................................executing.
    Step 185 of 190 steps.....................................executing.
    Step 187 of 190 steps.....................................executing.
    Step 187 of 190 steps.....................................executing.
    Step 190 of 190 steps.....................................executing.
    Local: COMMIT.............................................Done.
    Terminating the configuration change session..............done.

The configuration change session has successfully completed.

When all of the edisks are deleted from a disk group, HYPERMAX OS removes the thin pool and the disk group itself. Both the DG514_FBA thin pool and EXT_GROUP_514 have been deleted.

# symcfg show -pool DG514_FBA -thin -detail -all -sid 41

Symmetrix ID :

The requested thin pool does not exist -- cannot perform the operation

# symdisk list -sid 41 -dskgrp_summary

Symmetrix ID :

  Disk Group            Disk  Flgs  Speed  Hyper Size  Size   Total   Free
  Num   Name            Cnt   LT    (RPM)  (MB)        (MB)   (MB)    (MB)
        DISK_GROUP_           IF
        DISK_GROUP_           IS
        DISK_GROUP_           IE
        *ENCAPSDG*      0     --           N/A         N/A    N/A
        EXT_GROUP_515   4     X-           N/A         N/A    Any
        EXT_GROUP_516   5     X-           N/A         N/A    Any
  Total

Legend:
  Disk (L)ocation: I = Internal, X = External, - = N/A
  (T)echnology: S = SATA, F = Fibre Channel, E = Enterprise Flash Drive, - = N/A
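The removal can be verified by confirming the disk group no longer appears in the dskgrp summary. A minimal sketch, parsing a captured report; the report lines below are an illustrative stand-in for real "symdisk list -dskgrp_summary" output:

```shell
# Hypothetical check: confirm EXT_GROUP_514 is gone from a captured
# disk-group summary report after its edisks were removed.
cat > /tmp/dskgrp_summary.txt <<'EOF'
0   DISK_GROUP_000    8   IF
5   EXT_GROUP_515     4   X-
6   EXT_GROUP_516     5   X-
EOF
if grep -q 'EXT_GROUP_514' /tmp/dskgrp_summary.txt; then
    echo "EXT_GROUP_514 still present in the disk group summary"
else
    echo "EXT_GROUP_514 removed along with its thin pool"
fi
```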

Appendix A: Terminology and Acronyms

Table 1. Terminology

Term                          Definition
Device                        LU, logical volume
Volume                        LU, logical volume
LU                            A logical unit or logical volume
LUN                           Logical Unit Number assigned to a LU
VMAX3 device                  A LU on the VMAX3 array that uses internal or external storage
Thin device (TDEV)            Virtually provisioned device whose storage capacity is supplied from a specified thin pool of storage
Data device (TDAT)            An internal device that provides storage capacity used by thin devices
Thin pool                     A pool of storage from which thin extents are allocated to thin devices
FAST policy                   Specifies a set of standard tiers, or thin tiers, used by FAST or FAST VP, and specifies, in percentage values, the permitted storage group capacities associated with each tier
Unisphere                     VMAX3 GUI management interface
Drive                         Physical disk
Disk                          Physical disk
Disk group                    A numbered and named group of internal physical disks attached to DAs, or of external LUs available through DX directors
Storage group                 A collection of devices grouped together for common management
Thin device extent or chunk   The minimum storage capacity allocated from a pool to a thin device; the size of a thin device extent is 1 VMAX3 track (128 KB)
Extent Group                  Group of 42 contiguous thin device extents
External array                A supported storage array attached to DX directors
External device               A device that is exported from a virtualized external array
External WWN                  The WWN of a device exported from a virtualized external array
DX                            A director meant for connecting a VMAX3 array to virtualized external arrays
edisk or External spindle     A virtual external disk that is created when an external device is imported
Tier                          A collection of physical disks of the same drive technology, combined with a RAID protection type
Virtual RAID Group            Unprotected RAID group created for edisks
SLO                           Service Level Objective; defines an expected average response time target for an application
SLE                           Service Level Expectation; rank of the response time capabilities of a particular type of drive
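A quick sanity check on the sizes in Table 1 (one thin device extent = one 128 KB track, 42 extents per extent group):

```shell
# Derived from the definitions above: extent = 1 track = 128 KB,
# extent group = 42 contiguous extents.
track_kb=128
extents_per_group=42
group_kb=$(( track_kb * extents_per_group ))
echo "extent group size: ${group_kb} KB"   # 5376 KB, i.e. 5.25 MB
```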

Table 2. Acronyms and abbreviations

Acronym or abbreviation   Definition
LU                        Logical Unit
LUN                       Logical Unit Number
VP                        Virtual Provisioning
SRDF                      Symmetrix Remote Data Facility
FAST                      Fully Automated Storage Tiering
FAST.X                    Fully Automated Storage Tiering - External
SR                        Service Release
SG                        Storage Group
SRP                       Storage Resource Pool
DG                        Disk Group
DX                        DA external
SLO                       Service Level Objective
SLE                       Service Level Expectation
EFD                       Enterprise Flash Drive

Appendix B: VMAX3 and External EMC Array Configuration

Before configuring FAST.X, update the external array with the latest management software or firmware. For details on the external arrays that are supported, see the FAST.X Simple Support Matrix on the E-Lab Interoperability Navigator page. Please speak with an EMC customer representative to request support for arrays that do not currently appear on the matrix.

Confirming the Solutions Enabler and HYPERMAX OS versions

Before beginning to configure FAST.X, the VMAX3 array must be running a GA version of HYPERMAX OS that supports FAST.X, which was introduced in a HYPERMAX OS 5977 Service Release. If a HYPERMAX OS upgrade is required, follow the appropriate process for loading the latest GA version of 5977 before proceeding. Executing FAST.X commands from the CLI requires Solutions Enabler 8.1 or higher. If necessary, install or upgrade to the required version of software.
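The version prerequisite can be checked in a script before attempting any FAST.X operations. A minimal sketch; the version line below is an illustrative stand-in for output a real host would produce with "symconfigure -version -v":

```shell
# Check that the installed Solutions Enabler meets the FAST.X minimum (8.1).
required_major=8; required_minor=1
version_line="Symmetrix CLI (SYMCLI) Version : V8.1.0.0 (Edit Level: 2050)"
# Extract major.minor from the SYMCLI version string.
current=$(printf '%s\n' "$version_line" | sed -E 's/.*V([0-9]+)\.([0-9]+).*/\1.\2/')
maj=${current%%.*}
min=${current#*.}
if [ "$maj" -gt "$required_major" ] ||
   { [ "$maj" -eq "$required_major" ] && [ "$min" -ge "$required_minor" ]; }; then
    echo "Solutions Enabler $current meets the FAST.X minimum (8.1)"
else
    echo "Upgrade required: found $current, need 8.1 or higher" >&2
fi
```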

To check the version of Solutions Enabler running on the management host and of HYPERMAX OS running on the array, run symconfigure -version -v from the management host.

# symconfigure -version -v -sid 74

Symmetrix CLI (SYMCLI) Version                   : X (Edit Level: 2050)
Built with SYMAPI Version                        : X (Edit Level: 2050)
SYMAPI Run Time Version                          : X (Edit Level: 2050)
Built with Configuration Server Protocol Version : 0x27
Symmetrix ID                                     :
Configuration Server Version                     :
Configuration Server Protocol                    : 0xD05
Configuration Server Date                        :

Before Configuring DX directors

Before setting up a FAST.X environment, an EMC field technical resource needs to configure DX emulation and assign Fibre Channel ports for use as DX ports. If the array is newly deployed, EMC personnel will ensure proper sizing of the cache resources as well as the port-layout configuration. For arrays that have already been deployed and are currently in use, it is necessary to closely examine the existing layout of the array and the port connections being used. Because configuring DX ports requires VMAX dual initiator director pairs and four available Fibre Channel ports, modifications to the existing environment may be necessary before implementing FAST.X.

Before EMC configures DX emulation and port assignment, it is important to perform the following tasks:

- Unmask all VMAX3 devices from any ports that are to be assigned to DX emulation by removing them from any storage group they are members of.
- For ports that are part of an RDF configuration, remove the RDF devices from the ports or remove the RDF relationship for any devices on the ports.
- Remove any masking entries related to the director port. This includes removing the WWNs of the ports from any port groups.

After the DX emulation and Fibre Channel ports have been assigned to the directors, list them using symcfg list:

# symcfg -sid 74 list -DX all
In this example, DX emulation is available on four DX directors, each with two ports assigned. The two dual initiator pairs are DX-1H with DX-2H and DX-3H with DX-4H:

Symmetrix ID: (Local)

S Y M M E T R I X   D I R E C T O R S

  Ident   Type    Engine  Cores  Ports  Status
  DX-1H   EDISK                         Online
  DX-2H   EDISK                         Online
  DX-3H   EDISK                         Online
  DX-4H   EDISK                         Online

EMC Symmetrix DMX, VMAX, VMAX2

The following procedures apply to all supported Symmetrix arrays other than VMAX3. All Symmetrix arrays are symmetric (active/active) storage arrays.

1) Set the correct FA port flags on the external VMAX, VMAX2, or DMX array:
   DQRS: Disable I/O Queue Reset on SCSI reset
   SPC2: SPC-2 (SCSI Primary Commands 2) support
   OS07: SCSI-3 with SCSI OS-2007 amendment
   CMSN: Common LUN ID across all initiators
   UWN: Unique World Wide Name
   PP: Point-to-point (set for switched fabric connectivity)
   EAN: Enable Fibre Channel auto-link speed negotiation

2) Run the fibre cables from the DX ports to the switch and from the switch to the FA ports on the external array.

3) Map the external volumes to the FA ports on the external array. If the VCM flag is set, the LUNs must be masked to the initiator's WWN, which, in the case of FAST.X, is the DX port.

4) Zone the DX ports to the FA ports on which the external volumes are available. Create the zones between the DX ports and the FA ports, and activate them.

5) Use the symsan command from Solutions Enabler to confirm that the DX directors can access the external LUNs on the correct number of paths. EMC support personnel can also generate the symsan report, or the DxSan report, from the main screen of SymmWin (the Configuration Tools menu).

   # symsan list -sanports -sid XXXX -DX all -port all
   # symsan -sid XX -dir 1H -p 9 list -sanluns -wwn xxxxxxxxxxxxxxxx

   (xxxxxxxxxxxxxxxx is the WWN of an external storage port returned by the first command.)

6) The edisks are now ready to configure using either Solutions Enabler or Unisphere for VMAX.
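Steps 5 can be scripted: the remote port WWNs reported by the -sanports query feed the per-WWN -sanluns queries. A sketch, parsing a captured report; the two report lines (director/port, serials, WWNs) are trimmed, illustrative stand-ins for real symsan output:

```shell
# Turn a captured "symsan list -sanports" report into the follow-up
# per-WWN "-sanluns" queries. The last field of each line is assumed
# to be the remote port WWN, as in the report layout shown earlier.
cat > /tmp/sanports.txt <<'EOF'
01H:07 . EMC SYMMETRIX 000195700123 5 5000097300112233
01H:09 . EMC SYMMETRIX 000195700123 5 5000097300112244
EOF
cmds=$(while read -r dirport rest; do
    dir=${dirport%%:*}                              # e.g. 01H
    port=${dirport##*:}                             # e.g. 07
    wwn=$(printf '%s\n' "$rest" | awk '{ print $NF }')
    echo "symsan -sid 41 -dir $dir -p $port list -sanluns -wwn $wwn"
done < /tmp/sanports.txt)
printf '%s\n' "$cmds"
```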

EMC XtremIO

To the storage controllers and the operating system of the XtremIO array, the VMAX3 appears like an open systems host. Because of this, and because XtremIO has very few prerequisite settings for host connections or volume properties, no settings need to be modified on the XtremIO storage controllers or XtremIO volumes for FAST.X.

Note: When creating host-accessible devices, the logical block size setting is modified for Solaris and Linux hosts running applications with a 4 KB block size. This applies only when those hosts access XtremIO volumes directly. This setting is not changed for any FAST.X volume, regardless of which hosts or applications access the XtremIO through the VMAX3 and FAST.X; the block size must be left at the default of 512 bytes.

The following procedure shows the steps to present XtremIO storage to DX directors.

1) Choose the volumes that will be mapped for DX access.

2) Create an initiator group on the XtremIO for the DX initiators. Click Add in the Initiator Groups pane. Fill in an appropriate initiator group name and click Add. In the Add Initiator dialog box, give the first initiator an appropriate name, in this case the DX director and port number, and select the corresponding DX WWN from the pull-down menu. Click OK.

Complete this for all DX initiators in the configuration. After adding all initiators, click Finish.

Click the first volume, hold the Shift key, and click the last volume to select all volumes. This adds the volumes to the Volumes list in the LUN Mapping Configuration pane. Click Map All and Apply.

The volumes have been assigned LUNs and are mapped to the storage controllers on the XtremIO. The devices are now available to the VMAX3 through the DX ports.

# symsan list -sanports -DX all -port all -sid 41

Symmetrix ID :

          Flags                       Num
  DIR:P   I      Vendor  Array        LUNs  Remote Port WWN
  H:07    .      EMC     XtremIO FNM        FF3D
  H:29    .      EMC     XtremIO FNM        FF3D
  H:07    .      EMC     XtremIO FNM        FF5D55AD
  02H:29  .      EMC     XtremIO FNM        FF5D55AC

Legend:
  Flags:
    (I)ncomplete : X = record is incomplete, . = record is complete

# symsan list -dir 1H -p 7 -sanluns -wwn FF3D2743 -sid 41

Symmetrix ID :

Remote Port WWN: FF3D2743

          ST    Flags      Block  Capacity  LUN  Dev  LUN
  DIR:P   ATE   ICR THS    Size   (MB)      Num  Num  WWN
  01H:07  RW    ... F      N/A                        514F0C55EBA0000C
  01H:07  RW    ... F      N/A                        514F0C55EBA0000D
  01H:07  RW    ... F      N/A                        514F0C55EBA0000E
  01H:07  RW    ... F      N/A                        514F0C55EBA0000F
  01H:07  RW    ... F      N/A                        514F0C55EBA00010

Legend:
  Flags:
    (I)ncomplete : X = record is incomplete, . = record is complete
    (C)ontroller : X = record is controller, . = record is not controller
    (R)eserved   : X = record is reserved, . = record is not reserved
    (T)ype       : A = AS400, F = FBA, C = CKD, . = Unknown
    t(h)in       : X = record is a thin dev, . = record is not a thin dev
    (S)ymmetrix  : X = Symmetrix device, . = not Symmetrix device

EMC VNX

A few simple steps are necessary to present VNX LUNs to a host or, in this case, to a VMAX3 array configured for FAST.X.

1) Register the DX initiators in the VNX. Open Unisphere on the array, and select Hosts, then Initiators, to open the Initiators screen. The WWNs of the DX ports should appear and be ready to register as initiators.

Select the first WWN, and click Register. Choose CLARiiON/VNX for the Initiator Type and the ALUA setting for the Failover Mode. Enter a name for the VMAX3 array and add the array's IP address. Click OK, then click Yes when prompted to confirm, and OK to the remaining prompts. Repeat this process for the remaining initiators.

2) After registering the initiators, add them to the appropriate storage group. If they do not already exist, create the volumes and the storage group. Click Hosts and then Storage Groups.

In the Storage Groups screen, select the applicable storage group. This group is called FAST.X and contains five volumes, FAST.X_0 through FAST.X_4.

Associate the DX initiators with the storage group. Click Connect Hosts.

Select the host name that was assigned to the VMAX3 when the DX initiators were registered from the Available Hosts list. Click the purple arrow to add the host to the Hosts to be Connected list, and click OK. Click Yes and OK to confirm.

The LUNs are now visible in the FAST.X environment. Run the symsan command from the management host attached to the VMAX array to confirm that the edisks are ready to be configured. The VNX is visible (listed as CLARiiON) along with an XtremIO array that is also connected to the DX directors. Both arrays have five LUNs available on all four paths. To view the individual VNX LUNs, use the symsan command with the -sanluns option against any of the DX directors and ports. The details of the five VNX LUNs are shown, including the LUN WWN.

The VNX LUNs are now ready to configure as edisks.

Copyright 2016 EMC Corporation. All rights reserved. Published in the USA. Published March, 2016.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date regulatory document for your product line, go to EMC Online Support.


More information

2015 IBM Corporation

2015 IBM Corporation 2015 IBM Corporation Marco Garibaldi IBM Pre-Sales Technical Support Prestazioni estreme, accelerazione applicativa,velocità ed efficienza per generare valore dai dati 2015 IBM Corporation Trend nelle

More information

EMC IT s Replatform Proof of Concept to an Open, Scalable Platform

EMC IT s Replatform Proof of Concept to an Open, Scalable Platform Applied Technology Abstract This white paper illustrates EMC IT s replatform POC from a legacy infrastructure (Sun and Solaris) to an open platform (x86 and Linux), expandable infrastructure (consolidated

More information

Data Domain Cloud Tier for Dell EMC XC Series Hyper-Converged Appliances Solutions Guide

Data Domain Cloud Tier for Dell EMC XC Series Hyper-Converged Appliances Solutions Guide Data Domain Cloud Tier for Dell EMC XC Series Hyper-Converged Appliances Solutions Guide Integration of Dell EMC enterprise Data Domain Cloud Tier technology with XC Series. Dell EMC Engineering August

More information

HP Cloud Maps for rapid provisioning of infrastructure and applications

HP Cloud Maps for rapid provisioning of infrastructure and applications Technical white paper HP Cloud Maps for rapid provisioning of infrastructure and applications Table of contents Executive summary 2 Introduction 2 What is an HP Cloud Map? 3 HP Cloud Map components 3 Enabling

More information

Copyright 2016 EMC Corporation. All rights reserved.

Copyright 2016 EMC Corporation. All rights reserved. 1 UNITY: NEXT-GENERATION MANAGEMENT PARADIGM SHIFT SUSAN SHARPE 2 PROTECTION AND TRUST Security/Governance Encryption Data Protection Services/Support FLASH Reduce Costs (# Of Drives, Power, Floor Space.

More information

Carahsoft End-User Computing Solutions Services

Carahsoft End-User Computing Solutions Services Carahsoft End-User Computing Solutions Services Service Description Horizon View Managed Services Gold Package Managed Services Packages Options # of Desktops to be Managed Desktop Type Duration of Services

More information

CONVERGED INFRASTRUCTRE POUR HANA ET SAP

CONVERGED INFRASTRUCTRE POUR HANA ET SAP #SAPWEEK EMC CONVERGED PLATEFORMS CONVERGED INFRASTRUCTRE POUR ET SAP Guillaume FIN SAP Champion EMC Nicolas VIALE varchitecte VCE PARIS, LE 7 APRIL 2016 SAP CLOUD AUTOMATION STRATEGY BUILDING BLOCKS Hybrid

More information

VILLAGE OF VERNON HILLS INVITATION FOR BIDDER S PROPOSAL

VILLAGE OF VERNON HILLS INVITATION FOR BIDDER S PROPOSAL VILLAGE OF VERNON HILLS INVITATION FOR BIDDER S PROPOSAL Bid Conditions Page 1 of 2 1. Invitation to Bid. The Village of Vernon Hills, 290 Evergreen Drive, Vernon Hills, IL ( The Village ) invites Bidder

More information

TECHNICAL GUIDE. DataStream. Coho Data SRM Integration Guide. Version February

TECHNICAL GUIDE. DataStream. Coho Data SRM Integration Guide. Version February TECHNICAL GUIDE DataStream Coho Data SRM Integration Guide Version 2.9.0.1 February 2017 www.cohodata.com TABLE OF CONTENTS Introduction 3 Intended Audience 3 Key Components 3 Disaster Recovery Objectives

More information

Spectrum Control Capacity Planning

Spectrum Control Capacity Planning Spectrum Control Capacity Planning Scott McPeek mcpeek@us.ibm.com Washington Systems Center Bryan Odom odombry@us.ibm.com Software Advanced Technogoly (SWAT) Copyright IBM Corporation 2017. Capacity Planning

More information

LOWERING MAINFRAME TCO THROUGH ziip SPECIALTY ENGINE EXPLOITATION

LOWERING MAINFRAME TCO THROUGH ziip SPECIALTY ENGINE EXPLOITATION March 2009 0 LOWERING MAINFRAME TCO THROUGH ziip SPECIALTY ENGINE EXPLOITATION DataDirect Shadow - version 7.1.3 - Web Services Performance Benchmarks Shadow v7 ziip Exploitation Benchmarks 1 BUSINESS

More information

vsphere with Operations Management and vcenter Operations VMware vforum, 2014 Mehmet Çolakoğlu 2014 VMware Inc. All rights reserved.

vsphere with Operations Management and vcenter Operations VMware vforum, 2014 Mehmet Çolakoğlu 2014 VMware Inc. All rights reserved. vsphere with Operations Management and vcenter Operations VMware vforum, 2014 Mehmet Çolakoğlu 2014 VMware Inc. All rights reserved. What s on the agenda? vsphere with Operations Management Overview What

More information

REVIEWER S GUIDE Storage Resource Monitor

REVIEWER S GUIDE Storage Resource Monitor REVIEWER S GUIDE Storage Resource Monitor page 1 About this Document Welcome to SolarWinds Storage Resource Monitor (SRM) Reviewer s Guide. This guide presents information that will help you evaluate SolarWinds

More information

Quantum Artico Active Archive Appliance

Quantum Artico Active Archive Appliance Enterprise Strategy Group Getting to the bigger truth. ESG Lab Validation Quantum Artico Active Archive Appliance Simple Tiered Archive Storage for Complex Workflows By Tony Palmer, Senior ESG Lab Analyst

More information

SolutionBuilder & Guided Solution Sizing. Winning Together

SolutionBuilder & Guided Solution Sizing. Winning Together SolutionBuilder & Guided Solution Sizing Winning Together March 7, 2013 Agenda SolutionBuilder Introduction Guided Solution Sizing (GSS) Introduction How to Access SolutionBuilder and GSS Guided Solution

More information

COMPARE VMWARE. Business Continuity and Security. vsphere with Operations Management Enterprise Plus. vsphere Enterprise Plus Edition

COMPARE VMWARE. Business Continuity and Security. vsphere with Operations Management Enterprise Plus. vsphere Enterprise Plus Edition COMPARE VMWARE vsphere EDITIONS Business Continuity and Security vmotion Enables live migration of virtual machines with no disruption to users or loss of service, eliminating the need to schedule application

More information

ERP SYSTEM IN VIRTUALIZED PRODUCTION ENVIRONMENT

ERP SYSTEM IN VIRTUALIZED PRODUCTION ENVIRONMENT DOI: 10.1515/SBEEF-2016-0018 ERP SYSTEM IN VIRTUALIZED PRODUCTION ENVIRONMENT D. C. SPOIALĂ 1, H.M. SILAGHI 1, V. SPOIALĂ 1, A. CACUCI 2 1 Department of Control Systems Engineering and Management, Faculty

More information

Easily Create Flexible, Custom Chargeback or Showback Reports for Storage and Resource Usage Using OnCommand Insight

Easily Create Flexible, Custom Chargeback or Showback Reports for Storage and Resource Usage Using OnCommand Insight Technical Report Easily Create Flexible, Custom Chargeback or Showback Reports for Storage and Resource Usage Using OnCommand Insight Dave Collins, NetApp Technical Marketing Engineer July 2012 Contents...

More information

W H I T E P A P E R S t o r a g e S o l u t i o n s f o r E n terprise-ready SharePoint Deployments: Addressing Operational Challenges

W H I T E P A P E R S t o r a g e S o l u t i o n s f o r E n terprise-ready SharePoint Deployments: Addressing Operational Challenges W H I T E P A P E R S t o r a g e S o l u t i o n s f o r E n terprise-ready SharePoint Deployments: Addressing Operational Challenges Sponsored by: NetApp James Baker Richard L. Villars March 2009 Kathleen

More information

INTELLECTUAL PROPERTY MANAGEMENT ENTERPRISE ESCROW BEST PRACTICES REPORT

INTELLECTUAL PROPERTY MANAGEMENT ENTERPRISE ESCROW BEST PRACTICES REPORT INTELLECTUAL PROPERTY MANAGEMENT ENTERPRISE ESCROW BEST PRACTICES REPORT What is Mission Critical to You? Before you acquire mission-critical technology from a third-party software vendor, take a few minutes

More information

ContinuityPatrol. An intelligent Service Availability Management (isam) Suite VISIBILITY I ACCOUNTABILITY I ORCHESTRATION I AUTOMATION

ContinuityPatrol. An intelligent Service Availability Management (isam) Suite VISIBILITY I ACCOUNTABILITY I ORCHESTRATION I AUTOMATION An intelligent Service Availability Management (isam) Suite VISIBILITY I ACCOUNTABILITY I ORCHESTRATION I AUTOMATION Overview Continuity Patrol enables Real-Time Enterprise Visibility for Intelligent Business

More information

IBM Data Mobility Services Softek LDMF

IBM Data Mobility Services Softek LDMF Helping provide nondisruptive dataset-level migrations for the mainframe environment IBM Data Mobility Services Softek LDMF With Softek LDMF software, datasets that reside on multiple smaller-capacity

More information

Successfully Planning and Executing Large-Scale Cloud and Data Center Migration Projects

Successfully Planning and Executing Large-Scale Cloud and Data Center Migration Projects White Paper PlateSpin Migrate PlateSpin Transformation Manager PlateSpin Migration Factory Successfully Planning and Executing Large-Scale Cloud and Data Center Migration Projects Updated for PlateSpin

More information

Total Support for SAP HANA Appliances

Total Support for SAP HANA Appliances Statement of Work for Services 1. Scope of Work Total Support for SAP HANA Appliances IBM will provide the services specified in this Statement of Work: "IBM Total Solution Support for SAP In- Memory Appliances

More information

Tivoli Now IBM Corporation

Tivoli Now IBM Corporation 1 Automating the provisioning of storage with TPM, TPC and IT Service Management Greg Van Hise Storage Management Architecture gvanhise@us.ibm.com Agenda The storage provisioning problem Storage provisioning

More information

Microsoft FastTrack For Azure Service Level Description

Microsoft FastTrack For Azure Service Level Description ef Microsoft FastTrack For Azure Service Level Description 2017 Microsoft. All rights reserved. 1 Contents Microsoft FastTrack for Azure... 3 Eligible Solutions... 3 FastTrack for Azure Process Overview...

More information

The IBM and Oracle alliance. Power architecture

The IBM and Oracle alliance. Power architecture IBM Power Systems, IBM PowerVM and Oracle offerings: a winning combination The smart virtualization platform for IBM AIX, Linux and IBM i clients using Oracle solutions Fostering smart innovation through

More information

Enterprise Call Recorder

Enterprise Call Recorder Enterprise Call Recorder Installation and Setup Guide Algo ECR Version 2.3 Document #:ECR-SV-02 sales@algosolutions.com support@algosolutions.com www.algosolutions.com About this Manual This User Guide

More information

A Framework Approach to Ensuring Application Recovery Readiness. White Paper

A Framework Approach to Ensuring Application Recovery Readiness. White Paper A Framework Approach to Ensuring Application Recovery Readiness White Paper White Paper A Framework Approach to Ensuring Application Recovery Readiness. Sanovi's DR Management Suite (Sanovi DRM ) is the

More information

Get The Best Out Of Oracle Scheduler

Get The Best Out Of Oracle Scheduler Get The Best Out Of Oracle Scheduler Vira Goorah Oracle America Redwood Shores CA Introduction Automating the business process is a key factor in reducing IT operating expenses. The need for an effective

More information

Oracle Enterprise Manager. 1 Where To Find Installation And Upgrade Documentation

Oracle Enterprise Manager. 1 Where To Find Installation And Upgrade Documentation Oracle Enterprise Manager Cloud Control Release Notes 13c Release 1 for Oracle Solaris on x86-64 (64-bit) E69464-03 April 2016 Oracle Enterprise Manager Cloud Control 13c Release 1 is a management solution

More information

Integrated Service Management

Integrated Service Management Integrated Service Management for Power servers As the world gets smarter, demands on the infrastructure will grow Smart traffic systems Smart Intelligent food oil field technologies systems Smart water

More information

NetVue Integrated Management System

NetVue Integrated Management System NetVue Integrated Management System Network & Bandwidth Management Overview The NetVue Integrated Management System (IMS) is our powerful network management system with advanced monitoring and diagnostic

More information

ANY SURVEILLANCE, ANYWHERE, ANYTIME DDN Storage Powers Next Generation Video Surveillance Infrastructure

ANY SURVEILLANCE, ANYWHERE, ANYTIME DDN Storage Powers Next Generation Video Surveillance Infrastructure WHITEPAPER ANY SURVEILLANCE, ANYWHERE, ANYTIME DDN Storage Powers Next Generation Video Surveillance Infrastructure INTRODUCTION Over the past decade, the world has seen tremendous growth in the use of

More information

ORACLE S PEOPLESOFT HRMS 9.1 FP2 SELF-SERVICE

ORACLE S PEOPLESOFT HRMS 9.1 FP2 SELF-SERVICE O RACLE E NTERPRISE B ENCHMARK R EV. 1.1 ORACLE S PEOPLESOFT HRMS 9.1 FP2 SELF-SERVICE USING ORACLE DB 11g FOR LINUX ON CISCO UCS B460 M4 AND B200 M3 Servers As a global leader in e-business applications,

More information

CA Network Automation

CA Network Automation PRODUCT SHEET: CA Network Automation agility made possible CA Network Automation Help reduce risk and improve IT efficiency by automating network configuration and change management. Overview Traditionally,

More information

SOA Management with Integrated solution from SAP and Sonoa Systems

SOA Management with Integrated solution from SAP and Sonoa Systems SOA Management with Integrated solution from SAP and Sonoa Systems A report from SAP Co-Innovation Lab Sonoa Systems: Ravi Chandra, Co-Founder & VP of Engineering Kishore Sannidhanam, Business Development

More information

SAP Public Budget Formulation 8.1

SAP Public Budget Formulation 8.1 Sizing Guide Document Version: 1.0 2013-09-30 CUSTOMER Typographic Conventions Type Style Example Example EXAMPLE Example Example EXAMPLE Description Words or characters quoted from the screen.

More information

Choosing Between Private and Public Clouds: How to Defend Which Workload Goes Where

Choosing Between Private and Public Clouds: How to Defend Which Workload Goes Where Choosing Between Private and Public Clouds: How to Defend Which Workload Goes Where Why are you here? We ll venture five guesses as to why you are reading this document. You want to: A. Find answers about

More information

A Cloud Migration Checklist

A Cloud Migration Checklist A Cloud Migration Checklist WHITE PAPER A Cloud Migration Checklist» 2 Migrating Workloads to Public Cloud According to a recent JP Morgan survey of more than 200 CIOs of large enterprises, 16.2% of workloads

More information

Demand Management User Guide. Release

Demand Management User Guide. Release Demand Management User Guide Release 14.2.00 This Documentation, which includes embedded help systems and electronically distributed materials (hereinafter referred to as the Documentation ), is for your

More information

Blade Servers for Small Enterprises

Blade Servers for Small Enterprises About this research note: Product Comparison notes provide a detailed, head-to-head, analytical comparison of products in a given market in order to simplify the selection process. Blade Servers for Small

More information

ECS AND DATA DOMAIN CLOUD TIER ARCHITECTURE GUIDE

ECS AND DATA DOMAIN CLOUD TIER ARCHITECTURE GUIDE H16169 ECS and Data Domain Cloud Tier Architecture Guide ECS AND DATA DOMAIN CLOUD TIER ARCHITECTURE GUIDE A whitepaper describing the combined ECS/Data Domain capability of providing a resilient and effective

More information

DELL EMC Isilon & ECS for Healthcare

DELL EMC Isilon & ECS for Healthcare Sosiaali- ja Terveydenhuollon ATK-päivät: Tiedon pitkäaikaissäilytys terveydenhuollon alalla IT:n näkökanta tallentamiseen DELL EMC Isilon & ECS for Healthcare Lauri Koivisto Dell EMC Isilon Regional Territory

More information

How Much Will Serialization Really Cost? AN INTRODUCTION TO THE TOTAL COST OF OWNERSHIP

How Much Will Serialization Really Cost? AN INTRODUCTION TO THE TOTAL COST OF OWNERSHIP How Much Will Serialization Really Cost? AN INTRODUCTION TO THE TOTAL COST OF OWNERSHIP TABLE OF CONTENTS Introduction Breaking down the whole iceberg What is Total Cost of Ownership? Acquisition is Just

More information

DEFINING THE ROI FOR MEDICAL IMAGE ARCHIVING

DEFINING THE ROI FOR MEDICAL IMAGE ARCHIVING WHITEPAPER DEFINING THE ROI FOR MEDICAL IMAGE ARCHIVING Advances in medical imaging have increased the critical role archiving plays in the treatment of patients, and IT decision makers are under more

More information

Central Management Server (CMS) for SMA

Central Management Server (CMS) for SMA Central Management Server (CMS) for SMA Powerful virtual machine for appliance management, resilience and reporting SonicWall Central Management Server (CMS) provides organizations, distributed enterprises

More information

Table of Contents HOL CMP

Table of Contents HOL CMP Table of Contents Lab Overview - - vrealize Business for Cloud - Getting Started... 2 Lab Guidance... 3 Module 1 - Computing the Cost of your Private Cloud (30 Minutes)... 9 Introduction... 10 Overview

More information

Ensure Your Servers Can Support All the Benefits of Virtualization and Private Cloud The State of Server Virtualization... 8

Ensure Your Servers Can Support All the Benefits of Virtualization and Private Cloud The State of Server Virtualization... 8 ... 4 The State of Server Virtualization... 8 Virtualization Comfort Level SQL Server... 12 Case in Point SAP... 14 Virtualization The Server Platform Really Matters... 18 The New Family of Intel-based

More information

Data Archiving. The First Step Toward Managing the Information Lifecycle. Best practices for SAP ILM to improve performance, compliance and cost

Data Archiving. The First Step Toward Managing the Information Lifecycle. Best practices for SAP ILM to improve performance, compliance and cost Data Archiving The First Step Toward Managing the Information Lifecycle Best practices for SAP ILM to improve performance, compliance and cost Copyright 2018 Dolphin Enterprise Solutions Corporation (dba

More information

ClearPath Plus Libra Model 690 Server

ClearPath Plus Libra Model 690 Server ClearPath Plus Libra Model 690 Server Specification Sheet Introduction The ClearPath Plus Libra Model 690 server is the most versatile, powerful and secure MCP based system we ve ever built. This server

More information

An Overview of the AWS Cloud Adoption Framework

An Overview of the AWS Cloud Adoption Framework An Overview of the AWS Cloud Adoption Framework Version 2 February 2017 2017, Amazon Web Services, Inc. or its affiliates. All rights reserved. Notices This document is provided for informational purposes

More information

ENABLING GLOBAL HADOOP WITH DELL EMC S ELASTIC CLOUD STORAGE (ECS)

ENABLING GLOBAL HADOOP WITH DELL EMC S ELASTIC CLOUD STORAGE (ECS) ENABLING GLOBAL HADOOP WITH DELL EMC S ELASTIC CLOUD STORAGE (ECS) Hadoop Storage-as-a-Service ABSTRACT This White Paper illustrates how Dell EMC Elastic Cloud Storage (ECS ) can be used to streamline

More information

Licensing and Pricing Guide

Licensing and Pricing Guide Microsoft Dynamics CRM Online Licensing and Pricing Guide Microsoft Dynamics CRM Online July 2016 Microsoft Dynamics CRM Online Licensing Guide May 2016 Page 1 Contents What s New in this Edition... 3

More information

Storage Workload Analysis

Storage Workload Analysis Storage Workload Analysis Performance analytics enables smarter storage acquisitions and reduces deployment risk WHITEPAPER Introduction Understanding the real application I/O workload profile has traditionally

More information

Managed Services. Service Description West Swamp Road, Suite 301 Doylestown, Pa P

Managed Services. Service Description West Swamp Road, Suite 301 Doylestown, Pa P 4259 West Swamp Road, Suite 301 Doylestown, Pa 18902 www.contourds.com P 484-235-5143 Index Introduction...1 Standard Service Package Management Capabilities...2 Standard Reports...2 Top 10 Reports...2

More information

Managing Data Warehouse Growth in the New Era of Big Data

Managing Data Warehouse Growth in the New Era of Big Data Managing Data Warehouse Growth in the New Era of Big Data Colin White President, BI Research December 5, 2012 Sponsor 2 Speakers Colin White President, BI Research Vineet Goel Product Manager, IBM InfoSphere

More information

Disk Library for mainframe

Disk Library for mainframe Disk Library for mainframe Version 4.5.0 Physical Planning Guide for DLm8100 and DLm2100 302-003-496 REV 03 Copyright 2017 Dell Inc. or its subsidiaries. All rights reserved. Published August 2017 Dell

More information

This document highlights the major changes for Release 17.0 of Oracle Retail Customer Engagement Cloud Services.

This document highlights the major changes for Release 17.0 of Oracle Retail Customer Engagement Cloud Services. Oracle Retail Customer Engagement Cloud Services Release Notes Release 17.0 December 2017 This document highlights the major changes for Release 17.0 of Oracle Retail Customer Engagement Cloud Services.

More information

Sizing SAP Central Process Scheduling 8.0 by Redwood

Sizing SAP Central Process Scheduling 8.0 by Redwood Sizing SAP Central Process Scheduling 8.0 by Redwood Released for SAP Customers and Partners January 2012 Copyright 2012 SAP AG. All rights reserved. No part of this publication may be reproduced or transmitted

More information

[Header]: Demystifying Oracle Bare Metal Cloud Services

[Header]: Demystifying Oracle Bare Metal Cloud Services [Header]: Demystifying Oracle Bare Metal Cloud Services [Deck]: The benefits and capabilities of Oracle s next-gen IaaS By Umair Mansoob Introduction As many organizations look to the cloud as a way to

More information

IBM PowerHA SystemMirror for Linux delivers highavailability solution for Linux distributions on IBM Power Systems servers

IBM PowerHA SystemMirror for Linux delivers highavailability solution for Linux distributions on IBM Power Systems servers IBM United States Software Announcement 217-521, dated December 5, 2017 IBM PowerHA SystemMirror for Linux delivers highavailability solution for Linux distributions on IBM Power Systems servers Table

More information