Design and Implementation Best Practices for EMC FAST.X


EMC TECHNICAL NOTES
Design and Implementation Best Practices for EMC FAST.X
Technical Notes P/N H14568 REV 1.3
May 2016

This FAST.X Technical Notes document contains information on these topics:

Table of Contents
Executive Summary
Audience
Conventions used in this document
FAST.X and VMAX3
HYPERMAX OS Components Required by FAST.X
DX directors
edisks
External disk group
Virtual RAID group
Other Important HYPERMAX OS Components
Thin devices (TDEVs)
Data Devices (TDATs)
Benefits of FAST.X
Use cases
Configuring the VMAX3 and SAN for FAST.X
Configuring DX directors
DX director cores
Zoning
Modes of operation
External provisioning
Incorporation
General Rules for External Provisioning and Incorporation
Ensuring external data integrity

Creating and Presenting Devices in the External Array
Handling of Thinly Provisioned External Volumes
Replication Considerations
FAST Support
Determining External Storage Service Level Expectations for FAST
Moving Data Between SRPs
Support Added to FAST.X
FAST.X and Data at Rest Encryption (D@RE)
FAST.X system limitations
FAST.X Restrictions
Software and HYPERMAX OS Version Requirements
Supported External Array Platforms
Recommended External Volume Sizes
FAST.X with CloudArray
VMAX3 to CloudArray connectivity
VMAX3 configuration considerations
CloudArray configuration considerations
FAST.X with Solutions Enabler
Getting DX Information and Port WWNs for Zoning
Examining the FAST.X environment
Confirm the Availability of the External Volumes
Configure edisks for External Provisioning
Further Examining the Disk Group
Configure edisks for Incorporation
Creating a Storage Group to Assign Volumes to the Default SRP
Creating Thin Volumes for the Default SRP
Diagram of the Configured Environment
Moving Volumes to an External SRP with EFD Storage Only
Local Replication and FAST.X
Removing FAST.X Components from an Empty SRP
Removing FAST.X Components from an SRP Containing Volumes
Appendix A: Terminology and Acronyms
Table 1. Terminology
Table 2. Acronyms and abbreviations
Appendix B: VMAX3 and External EMC Array Configuration
Confirming the Solutions Enabler and HYPERMAX OS versions
Before Configuring DX directors
EMC Symmetrix DMX, VMAX, VMAX2
EMC XtremIO
EMC VNX

Executive Summary

With substantial increases in the amount of data stored, businesses continue to strive for ways to leverage and extend the value of existing resources, reduce the cost of management, and drive the best performance achievable in the environment. Adding to the challenge is the desire to ensure that data is kept on an appropriate storage tier so that it is available when needed but stored in as cost-effective and environmentally responsible a manner as possible.

FAST.X addresses many of these concerns by allowing qualified storage platforms to be used as physical disk space for VMAX3 arrays. This allows enterprises to continue to leverage VMAX3's availability and reliability, along with proven VMAX local and remote replication features, while still utilizing existing EMC or third-party storage. These features include VMAX3 Service Level Objective (SLO) Provisioning, which gives VMAX3 and FAST.X unparalleled ease of use, along with proven and robust VMAX3 software and HYPERMAX OS features such as SRDF, SnapVX, and FAST.

Audience

These Technical Notes are intended for anyone who needs to understand the concept of FAST.X and how it is implemented and configured on the VMAX3 and specific external arrays. This document specifically targets EMC customers, sales, and field technical staff who are designing and implementing a FAST.X solution.

Conventions used in this document

An ellipsis (...) appearing on a line by itself indicates that unnecessary command output has been removed. Command-line syntax, output, and examples appear in the Courier New font. GUI objects that must be clicked are noted in bold.

FAST.X and VMAX3

FAST.X allows an external disk array to provide physical storage for VMAX3 volumes. This implementation required the development of new entities within the VMAX3 that allow it to attach to external array storage ports and configure external volumes to be used as physical storage.
HYPERMAX OS Components Required by FAST.X

FAST.X external array connectivity is implemented entirely in HYPERMAX OS and does not require any additional VMAX3 hardware. Connectivity with an external array is established through the same Fibre Channel I/O modules currently used for configuring FAs for host connectivity and RFs for SRDF connectivity. Instead of running FA or RF emulation, however, the processors run a different type of emulation.

DX directors

DX emulation has been developed that adapts the traditional SCSI Disk Director (DS) emulation model to act on external volumes as though they were physical drives. The fact that a DX, which stands for DS external, is using external logical units, instead of a DS using internal physical disks, is transparent to other director emulations and to the HYPERMAX OS infrastructure. With respect to most non-drive-specific HYPERMAX OS functions, a DX behaves the same as a DS, which is the VMAX3 disk controller that provides connectivity to internal physical drives.

Note: A DS is equivalent to a DA, or disk adapter, in previous-generation VMAX arrays.

edisks

An edisk is a logical representation of an external volume when it is added into the VMAX3 configuration. The terms edisk and external spindle both refer to this external volume once it has been placed in an external disk group and a virtual RAID group.

External disk group

External disk groups are virtual disk groups that are created to contain edisks. Exclusive disk group numbers for external disk groups start at 512. External volumes and internal physical spindles cannot be mixed in a disk group. External disk groups are unprotected because external LUNs are protected by RAID protection in the external array, not in the VMAX3.

Virtual RAID group

An unprotected virtual RAID group is created for each edisk that is added to the system. The RAID group is virtual because edisks are not protected locally by the VMAX3 array. Instead, they rely on the local RAID protection provided by the external array.

Other Important HYPERMAX OS Components

When Virtual Provisioning, which is EMC's implementation of thin provisioning, was first released with VMAX storage arrays, two new device types were introduced to support this functionality.

Thin devices (TDEVs)

Thin devices are the host-addressable devices that are part of VMAX3 Virtual Provisioning.
They are created with a size but no assigned RAID protection, and they inherit the RAID protection of the Data devices contained in the pool where they are bound. In VMAX3, all host-addressable devices are thin devices.

Data Devices (TDATs)

Data devices are a type of internal VMAX3 device dedicated to providing the storage for thin devices in a VMAX3 array. They are configured in HYPERMAX OS as part of adding storage to a VMAX3 array and are configured automatically when an edisk is virtualized into a FAST.X environment. There is a 1:1 relationship between a Data device and an edisk in a FAST.X configuration. Note that the edisk is shown in Figure 1 but not the TDAT. Because of this 1:1 relationship, the Data device is implied when an edisk is shown.

Figure 1. High-level view of a FAST.X environment

Benefits of FAST.X

- Simplifies management of virtualized multi-vendor, or EMC, storage by allowing heterogeneous arrays to be managed by Solutions Enabler and Unisphere for VMAX.
- Allows data mobility and migration between heterogeneous storage arrays and between heterogeneous arrays and VMAX3.
- Offers Virtual Provisioning benefits to external arrays.
- Allows VMAX3 enterprise replication technologies, such as SRDF and SnapVX, to be used to replicate storage that exists on an external array.
- Extends the value of existing disk arrays by allowing them to be used as an additional, FAST-managed storage tier.
- Dynamically determines a Service Level Expectation (SLE) for external arrays to align with a Service Level Objective (SLO).

Use cases

FAST.X allows the continued use of external disk arrays while taking advantage of most VMAX3 HYPERMAX OS features. Organizations can continue to use existing disk arrays as additional storage capacity, and the data can be managed, controlled, and monitored in the same way as native VMAX3 data. Other than with CloudArray in the initial release, almost all of the features supported on VMAX3 devices using internal storage are also supported with FAST.X. Features like FAST, Quality of Service (QoS), and SLO Provisioning, among many others, are available to be used with external storage.

FAST.X protects data on external arrays using VMAX3 local and remote replication technologies. Local and remote replication technologies, such as SnapVX, SRDF, and Open Replicator, are all supported with VMAX3 devices using external storage. For example, if the goal is to use SRDF to replicate data between an XtremIO and a VNX, FAST.X will support it.

FAST.X can migrate data between VMAX3 arrays and external storage as part of a tiering or asset-management strategy.

FAST.X provides a pool of extra storage. Because of the ease of migration between a VMAX3 array and any external array configured in a FAST.X environment, an external array could conceivably be used as a temporary repository for data in case of a shortage of available physical disk space in the VMAX3. For example, a VMAX array with all SATA drives could provide spillover for a number of VMAX3 arrays using oversubscribed thin pools. As the production VMAX3 thin pools reach a set threshold, a percentage of the least active allocated capacity could be pushed to the external tier. When additional storage is added to the production VMAX3 array and the thin pool or pools are expanded, the data pushed to the external array could be pulled back to the production VMAX3 array.

Configuring the VMAX3 and SAN for FAST.X

Configuring DX directors

DX directors are configured in dual initiator (DI) pairs like traditional DAs. They are fully redundant like DAs and, when necessary, a failing director fails over to the other, fully functioning director in the DI pair. EMC requires a minimum of four paths to external devices.
This means that at least four ports belonging to a single DX dual-initiator pair must be configured. DX DI pairs are configured on different directors on the same engine. For example, the engine shown in Figure 2 has two directors, each of which contains four I/O modules with four ports each. Because it is valid to add DX emulation to both director 1 and director 2 and use any two ports on each of those directors, it is possible to create a valid FAST.X DX configuration with two ports that physically reside on the same I/O module. For example, using Director 1, ports 4 and 7, which are on the same I/O module, along with Director 2, ports 25 and 31, which are not, will pass FAST.X's pathing compliance check. This configuration is allowed even though an I/O module replacement could affect both ports 4 and 7, requiring that all paths to external devices through Director 1 fail over to Director 2.

A better choice for the two ports from Director 1 would be ports 4 and 9. With that configuration, any of the four front-end (FE) Fibre Channel I/O modules could be replaced without requiring a failover to that director's DI partner. When a DX fails over, all edisks maintain access to the external volumes without interruption; however, the number of paths to the external volumes is reduced by two. This could cause a potentially significant impact on performance. Recovering from a DI failover requires manual intervention from EMC Customer Service, but is nondisruptive.

Figure 2. Single engine with 8 Fibre Channel I/O modules

If converting FA ports to DX ports, any previously assigned devices must be unmapped and unmasked, and the FA ports must be removed from any port groups.

DX director cores

The number of processor cores assigned to the DX directors depends upon configuration and profile. A DX heavy core allocation is available for configuring high-performance external arrays such as a VMAX array with flash drives or an XtremIO.

Note: DX directors and their core allocation are not user-configurable. EMC Customer Service must create and configure them.

Zoning

The zoning examples provided below allow the servicing of the components within a FAST.X environment without incurring data unavailability and are required in order for the configuration to be supported. The potential service activities include:

- Cable changes and individual FC port servicing
- VMAX director and I/O module replacement
- External array controller replacement
- External array firmware upgrade
- SAN fabric servicing

Proper zoning also ensures that a failing switch or storage controller won't cause a DX to fail over, which requires manual recovery.
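The port-selection guidance above (ports 4 and 7 versus ports 4 and 9 on Director 1) can be sketched as a simple check. This is an illustrative model only: it assumes ports are numbered consecutively, four per I/O module, which matches the example; real port-to-module numbering may differ by platform.

```python
# Illustrative sketch only: checks whether two DX ports on one director share a
# front-end I/O module. Assumes ports are numbered consecutively, four per
# module (so ports 4-7 sit on one module and ports 8-11 on the next), which is
# consistent with the ports 4/7 versus 4/9 example above.
PORTS_PER_MODULE = 4

def io_module(port: int) -> int:
    """Return the I/O module index a director port belongs to (assumed layout)."""
    return port // PORTS_PER_MODULE

def ports_share_module(port_a: int, port_b: int) -> bool:
    """True if a single I/O module replacement would take out both ports."""
    return io_module(port_a) == io_module(port_b)

# Ports 4 and 7 share a module: replacing it forces a DI failover.
print(ports_share_module(4, 7))   # True
# Ports 4 and 9 do not: any single module can be replaced without a failover.
print(ports_share_module(4, 9))   # False
```

Spreading the two ports across modules is what allows an I/O module swap to proceed without reducing the path count by more than the module being serviced.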

Figures 3 through 5 below show examples of how to honor these requirements in single- and dual-fabric environments that represent common configurations. These zoning requirements are in addition to the existing connectivity requirements, which consist of two physically independent SCSI I-T (Initiator-Target) nexuses per DX for each external volume to be configured through a DX pair. These base connectivity requirements are checked by HYPERMAX OS during the edisk configuration process. If the zoning and the external array storage controller volume assignments do not pass FAST.X's compliance check, attempts to configure edisks will fail.

When configuring zoning for FAST.X, a single-target per single-initiator (1:1) zoning scheme is preferred. If the FC switch zone count limitation has been reached, it is possible to use single-target per multiple-initiator (1:Many) zoning.

IMPORTANT: With non-VMAX3 and non-ALUA external arrays, remote controllers must not be zoned to two ports on any single DX director. Failure to follow this rule can lead to data unavailability during servicing.

Single fabric with two external storage ports

Single-fabric connectivity is supported, though it does not provide the redundancy of a dual-fabric configuration. As an example, take a FAST.X environment configured as follows:

- A VMAX3 array with DX emulation running on directors 1 and 2
- An external array with two storage controllers, each with one port being used for FAST.X

This FAST.X configuration requires four zones.
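The 1:1 zoning for this single-fabric example can be sketched as follows. This is an illustrative sketch only: the port names are hypothetical placeholders (not real WWNs), and the pairing simply ensures each director reaches both external controllers.

```python
# Illustrative sketch only: builds the 1:1 (single-initiator, single-target)
# zone set for the example above -- two DX directors with two ports each, and
# an external array with one port per storage controller. Names are made up.
dx_ports = ["DX-1F:4", "DX-1F:9", "DX-2F:25", "DX-2F:31"]   # two per director
storage_ports = ["CTRL-A:0", "CTRL-B:0"]                     # one per controller

# Pair each DX initiator with one storage target so that every director
# reaches both external controllers.
zones = [(dx, storage_ports[i % len(storage_ports)])
         for i, dx in enumerate(dx_ports)]

print(len(zones))  # 4 zones, matching the example
for dx, tgt in zones:
    print(f"zone: {dx} <-> {tgt}")
```

Note how the alternation gives each director one path to each controller, which is what lets a single controller or port be serviced without triggering a DX failover.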

Figure 3. Single fabric zoning

Dual fabric with two external storage ports

Though single-fabric connectivity is supported, best-practice redundancy is achieved by using dual fabrics. One DX initiator port from each DX director pair must connect to one fabric, while the other DX initiator port connects to the other fabric. The LUNs must be reachable from at least one storage port on at least two external storage controllers or directors. Also, single-initiator zoning is recommended. As an example, take a FAST.X environment configured as follows:

- A VMAX3 array with DX emulation running on directors 1 and 2
- An external array with two storage controllers, each with one port being used for FAST.X

This FAST.X configuration across a dual-switch fabric with two external storage ports requires four zones (two per fabric):

Figure 4. Dual-fabric zoning

Note: Figures 3 and 4 show the logical, not physical, connections. In both diagrams there is a single physical connection from each DX port to the switch(es), for a total of four. There are only two physical connections, one for each external storage port, from the switch(es) to the external arrays.

Dual fabric with four external storage ports

Best-practice redundancy with dual fabrics is achieved by using four external array ports. As an example, take a FAST.X environment configured as follows:

- A VMAX3 array with DX emulation running on directors 1 and 2
- An external array with two storage controllers, each with two ports being used for FAST.X

This FAST.X configuration across a dual-switch fabric with four external storage ports requires four zones (two per fabric):

Figure 5. Expanded dual-fabric zoning

Note: As with Figures 3 and 4, Figure 5 shows the logical connections between the ports and the fabric. However, because the external array controllers each have two ports in use, the numbers of logical and physical connections in Figure 5 are identical.

Direct-attach configurations

Direct-attach arbitrated loop (FC-AL) configurations are not supported with FAST.X. External arrays must be connected to the DX ports through a Fibre Channel switch.

Configuring external LUNs

In order to achieve maximum redundancy, all external volumes must be available on all external storage controller ports that are being configured for FAST.X. For redundancy, up to four paths may be configured to external volumes. These paths are configured round robin. If an external volume is not reachable through all paths in the FAST.X configuration, attempting to virtualize the volume as an edisk will fail.

Distance between the VMAX and the external array

EMC requires that the external array be located within the same data center as the VMAX3 array. If the data center is spread across multiple floors in a single building, the external array and VMAX3 array can be on different floors.

Sharing of DX and storage ports

Both DX ports and external storage ports can be shared.

DX ports can be zoned to multiple sets of storage ports on external arrays. This means that multiple external arrays can be connected to a single set of DX ports as long as the configurations are compliant with FAST.X requirements. Storage ports on an external array can also be shared between hosts and DX initiators, or between DX initiators from multiple VMAX3 arrays. Devices available on the external array's storage ports must be accessible to a single FAST.X configuration or by hosts, but not both. If an EMC array is providing external storage, VMAX3 volumes can be mapped to the FA and masked to the WWNs of the DX ports on which they will be available. For third-party arrays, the native method of segmenting LUNs on a storage port can be used in the same way that LUN masking is used with a VMAX system.

Modes of operation

FAST.X has two modes of operation, depending on whether the external Logical Unit (or LU) is to be used as raw storage space or has data that must be preserved and accessed through a VMAX3 thin device. The devices on the external array used by FAST.X as external storage are host-addressable volumes that are normally presented from the external array to HBAs for direct host access. With FAST.X, they are presented to the DX initiators instead.

- External Provisioning: Allows the user to access LUs existing on external storage as raw capacity for new VMAX3 devices. These devices are called externally provisioned devices.
- Incorporation: Allows the user to preserve existing data on external LUNs and access it through VMAX3 volumes. These devices are called incorporated devices.

Note: Incorporation is supported with the 5977 Q1 2016 Service Release and later versions of HYPERMAX OS.

External provisioning

When using FAST.X to configure an external LU, HYPERMAX OS creates an external disk group and a thin pool and configures the external LU as an edisk, which is added to the external disk group.
External disk groups are separate from disk groups containing internal physical disks and start at disk group number 512. Because RAID protection is provided by the external array, edisks are added to unprotected virtual RAID groups. HYPERMAX OS also creates a data pool and a Data device (or TDAT) for each edisk that is configured in FAST.X. There is a 1:1:1 relationship between the external volume, the edisk, and the TDAT. VMAX3 host-addressable thin volumes can then be created from the Storage Resource Pool (SRP) that is associated with the data pool and external disk group.

External provisioning should only be used with external volumes that contain no data or unwanted data. External volumes are reformatted as part of the edisk configuration process; therefore, any data residing on the volume prior to adding it as an edisk will be inaccessible.

Figure 6. External Provisioning

Incorporation

Incorporation is used when data on an external LUN must be preserved and accessed through a VMAX3 thin device. As with external provisioning, external disk groups for incorporation start at disk group number 512, and edisks are added to unprotected virtual RAID groups with the protection provided by the external array.

External LUNs that are being added through either standard or VP encapsulation can be either thick or thin on the external array. If the external LUNs are thin, they can be fully allocated, partially allocated, or unallocated.

When incorporating an external LU, HYPERMAX OS creates an external disk group and a thin pool and configures the external LU as an edisk, which is added to the external disk group. It also creates a data pool and a Data device (or TDAT) for each edisk that is configured in FAST.X. A VMAX3 thin device is created as well, allowing access to the data that has been preserved on the external LUN. There is a 1:1:1:1 relationship between the external volume, the edisk, the TDAT, and the VMAX3 thin volume.

IMPORTANT: Once an external LUN has been incorporated, its data can only be accessed through the VMAX3 thin LUN. There is no method for un-incorporating an external LUN and preserving its data.
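The 1:1:1:1 object chain that incorporation builds can be modeled as follows. This is an illustrative data model only: class and field names are hypothetical, and the real objects live inside HYPERMAX OS.

```python
# Illustrative model only: the 1:1:1:1 chain that incorporation builds
# (external volume -> edisk -> TDAT -> VMAX3 thin device). Names are made up.
from dataclasses import dataclass

@dataclass
class ExternalLU:
    wwn: str
    capacity_gb: int

@dataclass
class Edisk:
    source: ExternalLU            # the external volume this edisk represents
    disk_group: int               # external disk groups start at 512
    raid: str = "unprotected"     # protection comes from the external array

@dataclass
class TDAT:
    backing: Edisk                # one Data device per edisk

@dataclass
class TDEV:
    backing: TDAT                 # host-addressable thin device
    capacity_gb: int

def incorporate(lu: ExternalLU, disk_group: int = 512) -> TDEV:
    """Build the incorporation chain; the LU's capacity and data are preserved."""
    edisk = Edisk(source=lu, disk_group=disk_group)
    tdat = TDAT(backing=edisk)
    return TDEV(backing=tdat, capacity_gb=lu.capacity_gb)

tdev = incorporate(ExternalLU(wwn="hypothetical-wwn", capacity_gb=100))
print(tdev.capacity_gb)            # 100: the preserved data stays addressable
print(tdev.backing.backing.raid)   # unprotected virtual RAID group
```

The one-way nature of the chain mirrors the IMPORTANT note above: once incorporated, the data is reachable only through the thin device at the top of the chain.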

Figure 7. Incorporation

General Rules for External Provisioning and Incorporation

- Up to five SRPs are qualified per VMAX3 system. Adding an SRP can be accomplished online, but must be done by EMC Customer Service.
- A corresponding pool is created automatically as part of the process of adding the disk group.
- All edisks from the same external array that are configured in any given SRP are placed in the same disk group and pool. If capacity from multiple external arrays is configured in the same SRP, a separate disk group and pool is created for devices from each of the arrays.
- It is best practice for all of the external volumes to have the same capacity. This is a recommendation and is not enforced by HYPERMAX OS.
- There is one Data device (TDAT) configured per edisk. The creation of the Data devices and the associated RAID groups and data pool is completed as part of adding an edisk.
- EMC Manufacturing does not pre-configure FAST.X in the factory. Some elements of a FAST.X configuration require a customer service engagement, but they can be done online at any time after the deployment of the system, provided that the VMAX3 and its cache have been correctly sized.
- All FAST.X objects can be removed online provided they are in the proper state:
  - External edisks need to be drained and inactive.
  - The external disk group must be empty.
  - The DX directors must not have edisks mapped to them.
  - The SRP must not contain any disk group.
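The grouping rule above (one disk group and pool per external array within each SRP, numbered from 512 upward) can be sketched as a small function. This is an illustrative sketch only; the order in which group numbers are handed out is an assumption.

```python
# Illustrative sketch only: models the rule that all edisks from one external
# array in a given SRP share a disk group (and pool), with a separate group
# per array, numbered from 512 upward.
def assign_disk_groups(edisks, first_group=512):
    """Map each (srp, external_array) pair to one external disk group number.

    `edisks` is an iterable of (srp_name, array_id) tuples, one per edisk.
    Returns {edisk_index: disk_group_number}.
    """
    groups = {}        # (srp, array) -> disk group number
    assignment = {}
    next_group = first_group
    for i, key in enumerate(edisks):
        if key not in groups:
            groups[key] = next_group
            next_group += 1
        assignment[i] = groups[key]
    return assignment

# Two edisks from one array share group 512; a second array gets group 513.
edisks = [("SRP_1", "XtremIO_01"), ("SRP_1", "XtremIO_01"), ("SRP_1", "VNX_02")]
print(assign_disk_groups(edisks))  # {0: 512, 1: 512, 2: 513}
```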

Note: When all edisks in a disk group are deleted, the disk group and pool are removed automatically.

Ensuring external data integrity

FAST.X uses a basic CRC mechanism to detect data corruption caused by procedural errors such as:

- Directly restoring data to an external LU that is virtualized as an edisk using the external array's replication capabilities
- Allowing direct host access to an external LU that is virtualized as an edisk
- Restoring from a backup directly to an external LU that is virtualized as an edisk

As an external LU is initialized as an edisk, or as data is written to the edisk, CRC information is written to VMAX3 cache. This CRC information is then checked upon subsequent reads to confirm that the external LUN has not been altered outside of the VMAX3 system's control. The protection mechanism requires a slight increase in memory requirements over standard local disk volumes. Once the data has been read into cache, it is protected with the standard VMAX block-level CRC error checking based on the industry-standard T10 Data Integrity Field (DIF) block. The FAST.X CRC mechanism only applies to back-end reads and writes.

Creating and Presenting Devices in the External Array

In FAST.X connectivity, the DX director port is the FC initiator, just as a host bus adapter (HBA) is the initiator when a host is connected to a storage array. To the external array, the DX directors act like HBAs from a Linux host. Logical Units (LUs) or volumes in an external array are created and made available to the DX directors by a storage administrator in the same way they are created and presented for Linux host access through an HBA. In other words, the normal procedure to create and assign volumes to the storage controllers for host access must be followed for the devices on the external array that will be virtualized as FAST.X edisks. Appendix B contains instructions for presenting external volumes from EMC storage for DX access.
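The check described above under "Ensuring external data integrity" can be sketched as follows. This is an illustrative model only: track granularity and the CRC-32 function are assumptions for the sketch; the real mechanism is internal to HYPERMAX OS cache.

```python
# Illustrative sketch only: a CRC is recorded when a track is written through
# the VMAX3 and verified on read, so out-of-band writes to the external LU
# (e.g. a direct host write or an array-side restore) are detected.
import zlib

class EdiskIntegrity:
    def __init__(self):
        self._crc_cache = {}          # track number -> CRC recorded at write

    def write_track(self, track: int, data: bytes, backend: dict):
        backend[track] = data         # destage to the external LU
        self._crc_cache[track] = zlib.crc32(data)

    def read_track(self, track: int, backend: dict) -> bytes:
        data = backend[track]
        if zlib.crc32(data) != self._crc_cache.get(track):
            raise IOError(f"track {track}: altered outside VMAX3 control")
        return data

external_lu = {}                       # stands in for the external volume
edisk = EdiskIntegrity()
edisk.write_track(0, b"app data", external_lu)
assert edisk.read_track(0, external_lu) == b"app data"

external_lu[0] = b"restored!"          # out-of-band restore to the raw LU
try:
    edisk.read_track(0, external_lu)
except IOError as e:
    print(e)                           # the alteration is detected on read
```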
When a non-emc array is being used for external storage, refer to the relevant third party storage array documentation on the array vendor s website for correct procedures. Handling of Thinly Provisioned External Volumes Supported external arrays can vary greatly in their capabilities. Some thinly provision and compress their logical units. Because of this, it is possible for external storage to be consumed at an unpredictable rate and for the array to run out of available space which causes writes to tracks allocated (either newly or at any prior time) on the VMAX3 to fail. This can happen if the user fails to properly monitor over subscription or if a new data pattern from the host ends up compressing at a much lower rate than forecast. 15

The DX directors identify the Out of Capacity condition of a pool when its writes fail with the SCSI check condition DATA PROTECT/SPACE ALLOCATION FAILED WRITE PROTECT (07/27/00). HYPERMAX OS then protects its cache and other non-externally provisioned applications by taking the following actions:

- All allocations to the pool are stopped.
- If the SRP containing the out-of-space pool does not contain any free space, a new Out of Remote Capacity (ORC) TDAT ready state is set. This state is monitored by the FAs, which will fail host writes to allocated tracks. A background task in the VMAX3 monitors pools that can no longer accept allocations and is responsible for restoring write/allocation activity to a pool when it again has available capacity. FAST maintains 1% free space in the pool, so when the usable capacity drops it begins to demote capacity to ensure this minimum free space exists. The ORC state is cleared by the DX directors when the first successful destage of data occurs to the TDAT once more capacity is available in the pool.
- If other pools in the SRP contain free space, the specific extent group (42 contiguous tracks) containing the track whose write failed is reallocated into a different pool, and FAST is triggered to offload the appropriate data.
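From an initiator's perspective, recognizing this condition is a matter of matching the sense data of the failed write. The sketch below classifies the (sense key, ASC, ASCQ) tuple 07/27/00 quoted above; the function and state names are illustrative and are not HYPERMAX OS internals.

```python
# DATA PROTECT sense key, per the check condition quoted in this document.
DATA_PROTECT = 0x07

def classify_write_failure(sense_key: int, asc: int, ascq: int) -> str:
    """Return 'OUT_OF_CAPACITY' for the DATA PROTECT / SPACE ALLOCATION
    FAILED WRITE PROTECT condition (07/27/00), else 'OTHER'."""
    if (sense_key, asc, ascq) == (DATA_PROTECT, 0x27, 0x00):
        return "OUT_OF_CAPACITY"
    return "OTHER"

print(classify_write_failure(0x07, 0x27, 0x00))  # OUT_OF_CAPACITY
print(classify_write_failure(0x02, 0x04, 0x01))  # OTHER
```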

Replication Considerations

All local replication functionality is supported on VMAX3 volumes that are part of a FAST.X configuration. SnapVX snapshots of these devices are provisioned externally, and linked targets are provisioned independently. All remote replication functionality with SRDF and Open Replicator is supported on VMAX3 volumes that are part of a FAST.X configuration.

FAST Support

FAST movement between internal and external storage is fully supported. Because FAST movement is always contained within an SRP, external storage must share an SRP with internal storage for FAST data movement between the VMAX3 and the external array.

Determining External Storage Service Level Expectations for FAST

In order to rank the response time capabilities of a particular type of drive, FAST uses Service Level Expectations (SLEs), which correspond to the response time capabilities of the disks that are supported with VMAX3. SLE values for supported internal drives and for XtremIO and CloudArray external volumes are known and are hard coded in HYPERMAX OS. Because the response time capabilities of an external array's disks can vary greatly, a method to determine the SLE for an external array volume is needed.

FAST supports six different Service Level Objectives (SLOs) that can be assigned to disk groups in the VMAX3: Diamond, Platinum, Gold, Silver, Bronze, and Optimized. For each SLO there is an SLE envelope defined. This SLE envelope determines the range of disk types that can be used within the SRP and defines the preferred drive technology type for new allocations from host writes, along with the highest- and lowest-performing disk technology that the data is allowed on. This is not strictly enforced, because an out-of-capacity condition in a pool may require that the VMAX3 allocate capacity outside of the defined range rather than fail a host write that requires extent allocation.
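To make the ranking idea concrete, the sketch below derives an observed response time from raw I/O counters and snaps it to the drive-technology buckets that FAST's profiling uses later in this section (2 ms flash-like, 8 ms 15k-like, 12 ms 10k-like, 24 ms 7k-like, with a 40 ms default before profiling completes). The arithmetic and function names are illustrative, not the FAST algorithm itself.

```python
# SLE buckets in milliseconds, as documented for FAST.X profiling.
SLE_BUCKETS_MS = {"flash-like": 2, "15k-like": 8, "10k-like": 12, "7k-like": 24}
DEFAULT_SLE_MS = 40.0  # default SLE used until profiling completes

def observed_response_time_ms(read_time_ms, write_time_ms, reads, writes):
    """Average service time per I/O from raw counters (illustrative)."""
    total_ios = reads + writes
    if total_ios == 0:
        return DEFAULT_SLE_MS
    return (read_time_ms + write_time_ms) / total_ios

def classify_sle(rt_ms):
    """Snap an observed response time to the nearest SLE bucket."""
    return min(SLE_BUCKETS_MS, key=lambda name: abs(SLE_BUCKETS_MS[name] - rt_ms))

rt = observed_response_time_ms(read_time_ms=9000, write_time_ms=3000,
                               reads=1000, writes=500)
print(rt, classify_sle(rt))  # 8.0 15k-like
```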
FAST detects the type of drive technology for each disk group in the array, including FAST.X disk groups. For each technology found, the following information is gathered:

- Unique drive technology ID
- Disk type (EFD, 15K, 10K, 7K, External)
- Manufacturer
- Capacity in bytes
- Product name
- Disk RPM (internal disks only)

FAST also collects raw statistics by monitoring I/O to the back-end drives as well as to FAST.X-connected external volumes. This allows FAST to profile and build a real-time model for the edisks. The raw statistics gathered include:

- Number of reads
- Number of writes
- Read I/O rate
- Write I/O rate
- Read time
- Write time

These I/O statistics are translated into statistical measures called workload characteristics. The derived characteristics include:

- I/Os per second (IOPS)
- Read percentage
- Read I/O size
- Write I/O size
- Observed response time

The process of data collection and statistics calculation occurs every ten minutes so that HYPERMAX OS can produce a value allowing the edisks to be accurately ranked within the SRP. While this is occurring, the SLE value is set at the default of 40ms. To allow the ranking of the edisks, a Pool State Model is built. This model contains information that indicates whether the ranking of the external storage has completed and, if it has, what the ranking is. While the ranking of the external storage takes place, its value can be in one of the following three phases:

Loading - To establish the performance baseline, FAST selects existing extents from storage groups associated with the SRP that have the Gold, Silver, Bronze, or Optimized SLO and loads the pool to 15% of its usable capacity. This decreases the chance that all I/O to the pool will be serviced by cache. Once this capacity point has been reached, the pool state transitions to the Profiling phase.

Profiling - After the pool has reached the Profiling state, it remains there for 12 hours. During this time FAST collects performance data to determine the most probable underlying drive technology. After the 12-hour profiling period expires, the dominant response time mode determines the final SLE and classifies it:

- Flash like (2ms)
- 15k like (8ms)

- 10k like (12ms)
- 7k like (24ms)

When profiling is completed, the state transitions to the Ready phase.

Ready - Profiling has finished and the SLE has been determined. Once an edisk has a defined SLE and is in a Ready state, it can participate in SLO movements within the SRP.

Note that the state model only allows transitions in one direction. For example, once the state is Profiling it cannot go back to Loading, even if the allocated capacity becomes less than the required capacity point.

Moving Data Between SRPs

Data in a FAST.X environment can be moved between SRPs while the application is online, with no decrease in performance or availability. This is accomplished using Solutions Enabler or Unisphere to move a storage group from its current SRP to a new SRP. Individual devices can also be moved between SRPs by moving the devices to a storage group associated with a different SRP.

Support Added to FAST.X

The following capabilities were not available with FTS on VMAX2, but have been added to FAST.X:

- The DX directors use SPC-3 LBP (Logical Block Provisioning) when supported by the host operating system. This allows external LUs to be thinly provisioned.
- Both unmap and write same/unmap SCSI commands are supported by DX directors. This allows previously used, thinly provisioned capacity to be reclaimed on external storage.
- Round robin multipathing is now supported on up to four ITL paths per edisk.
- Optimized Read Miss (ORM) is supported.

FAST.X and Data at Rest Encryption (D@RE)

Data at Rest Encryption (D@RE) may be enabled on a VMAX3 array that contains external storage in a FAST.X configuration. The VMAX3 running FAST.X, however, does not encrypt data being written to external storage. If encryption on external storage is required, it must be provided by the external array itself.

FAST.X system limitations
The following general limitations apply to FAST.X environments:

- The maximum external capacity is determined by VMAX3 cache

- Up to 2048 external volumes per engine can be virtualized as edisks; 2048 edisks is also the system limit. The maximum number of external volumes includes ProtectPoint volumes if configured on the system.
- The maximum number of logical paths to each external SCSI Logical Unit is 4, with all paths potentially active concurrently.
- The maximum capacity of a single external LU is 64 TiB.
- CloudArray must be configured in its own SRP.
- DX ports must be Fibre Channel ports from 8 Gb/s or 16 Gb/s I/O modules.
- Up to 512 external ports can be configured per DX initiator port.
- Up to 512 external disk groups are supported.

The following general limitations apply to FAST.X environments running the 5977 Q1 2016 Service Release and later versions of HYPERMAX OS:

- A maximum of 2048 external volumes per engine can be virtualized as edisks.
- A maximum of 16,384 external volumes per VMAX3 array can be virtualized as edisks.

FAST.X Restrictions

- Support for switched fabric (FC-SW) connectivity only
- Open systems only (including IBM i)
- Third-party tools are necessary to perform array management operations on non-EMC external arrays

Software and HYPERMAX OS Version Requirements

FAST.X requires the following host and array software versions:

- HYPERMAX OS 5977 with HYPERMAX OS Q3 2015 Service Release
- Solutions Enabler 8.1 or higher

Incorporation requires the following host and array software versions:

- HYPERMAX OS 5977 with HYPERMAX OS Q1 2016 Service Release and later versions
- Solutions Enabler 8.2 or higher

Supported External Array Platforms

For details on the external arrays that are supported, see the FAST.X Simple Support Matrix on the E-Lab Interoperability Navigator page: https://elabnavigator.emc.com. Speak with an EMC customer representative to request support for arrays that do not currently appear on the matrix.

Recommended External Volume Sizes

The following are the recommended external volume sizes for external provisioning for all arrays other than CloudArray. The recommended sizes are based on the total required externally provisioned capacity:

- 100 GB external volumes for virtualizing up to 200 TB
- 200 GB external thin volumes for virtualizing up to 400 TB
- 300 GB external thin volumes for virtualizing up to 600 TB

FAST.X with CloudArray

Configuring a CloudArray as external storage involves specific considerations that are not required with other storage arrays.

VMAX3 to CloudArray connectivity

The general FAST.X requirement is to map each external LUN through two ports on two different external array storage controllers. Because a CloudArray appliance contains only a single storage controller, that requirement is amended. For CloudArray connectivity, two CloudArray ports should be configured and zoned to each of the minimum four DX ports.

VMAX3 configuration considerations

The following are configuration considerations specific to an external CloudArray appliance:

- CloudArray capacity must be configured into its own SRP. Multiple CloudArray appliances can have their capacity virtualized in the same SRP, but if an SRP has any CloudArray capacity in it, CloudArray must be the only type of storage in that SRP. This requirement is mandatory, but is not enforced by HYPERMAX OS or VMAX3 management software.
- No local or remote replication (including SRDF, TimeFinder, and SnapVX) is allowed using VMAX3 volumes with capacity provisioned from a CloudArray SRP. This restriction is mandatory, but is not enforced by HYPERMAX OS or VMAX3 management software.
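The volume sizing guidance above (for arrays other than CloudArray) can be expressed as a small helper that picks a volume size from the total capacity to be virtualized and derives the implied volume count. The thresholds come from this document; the helper itself is an illustrative sketch, not an EMC tool.

```python
def recommended_volume_size_gb(total_tb: int) -> int:
    """Recommended external volume size (GB) for a given total externally
    provisioned capacity (TB), per the documented guidance."""
    if total_tb <= 200:
        return 100
    if total_tb <= 400:
        return 200
    if total_tb <= 600:
        return 300
    raise ValueError("beyond the documented sizing guidance (600 TB)")

def volume_count(total_tb: int) -> int:
    """Number of external volumes implied by the recommended size."""
    size_gb = recommended_volume_size_gb(total_tb)
    return -(-total_tb * 1024 // size_gb)  # ceiling division, 1 TB = 1024 GB

print(recommended_volume_size_gb(350), volume_count(350))  # 200 1792
```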

- The cumulative front-end throughput limit of all storage groups provisioned to any given CloudArray appliance is 400 MB/s.

CloudArray configuration considerations

The following are configuration considerations specific to an external CloudArray appliance:

- The CloudArray appliance supporting Fibre Channel connectivity and qualified for FAST.X comes with two licenses: one with 20 TB of CloudArray cache and one with 40 TB. The maximum qualified capacity of these appliances is 120 TB and 240 TB, respectively.
- The CloudArray appliance used for FAST.X must be dedicated to FAST.X.
- Five caches of 4 TB each should be configured for the 20 TB license, and ten caches of 4 TB each for the 40 TB license.
- The maximum qualified cache-to-cloud capacity ratio is 6:1.
- Each CloudArray volume should be 4 TiB and should only be expanded by a multiple of that value.
- A minimum number of CloudArray volumes is required for FAST.X. This minimum depends on the DX configuration in the VMAX3 array: there must be two CloudArray volumes for each VMAX3 engine in the system. For example, on a single-engine VMAX3 the minimum is 2 volumes; on an eight-engine VMAX3 system it is 16 volumes (a minimum of 64 TiB, 16 x 4 TiB, virtualized). Maximum capacity is reached with 30 volumes virtualized for the 20 TB license and 60 volumes for the 40 TB license. Once the minimum number of volumes has been virtualized, any number of 4 TiB volumes can be added, up to the maximum allowed by the license.
- Volumes should be allocated to the cache in a round-robin fashion: the first 5 volumes (10 volumes for the 40 TB license) end up with a 1:1 cache ratio, the next 5 (20 for the 40 TB license) get 2:1, and so on, until a ratio of 6:1 is reached, matching the 120 TiB or 240 TiB maximum licensed capacity.

FAST.X with Solutions Enabler

The following are examples of how to configure, update, and manage a FAST.X environment using Solutions Enabler (SYMCLI).
The examples used in these technical notes are for illustrative purposes and do not necessarily represent a FAST.X environment configured for production workloads.

Notes: The symconfigure command is used to configure and modify the FAST.X environment. In most of the command examples, the -cmd option is used, followed by the command syntax in quotes. As with all symconfigure commands requiring configuration input, the -f option can be used instead, followed by a path to a command file containing the syntax shown in the examples. For more information on symconfigure, see the EMC Solutions Enabler Array Management V8.1 CLI User Guide, which is available on emc.com.

This document was developed in a shared lab environment. Details such as storage group contents, disk group names and numbers, and pool names and numbers may change between sections of the test plan. The command output was gathered using a Linux host connected to a FAST.X environment containing an XtremIO as the external array. The command output seen while performing these steps may vary slightly if other types of hosts and external arrays are used in the environment.

Getting DX Information and Port WWNs for Zoning

Before configuring edisks:

- Configure the DX directors and assign ports to the emulations.
- Complete the zoning.
- Present the external volumes on the external array ports.

Following the initial configuration of DX directors by EMC, run the symcfg discover command:

# symcfg discover -sid 0041

Attempting discovery of Symmetrix 000197200041
This operation may take up to a few minutes. Please be patient...

The symcfg command lists the DX directors. In this example, there are four directors configured across two engines in the VMAX3. The output also shows the number of cores and ports assigned to each, as well as the online or offline status.

# symcfg list -DX all -sid 41

Symmetrix ID: 000197200041 (Local)

S Y M M E T R I X   D I R E C T O R S

Ident  Type          Engine  Cores  Ports  Status
-----  ------------  ------  -----  -----  ------
DX-1H  EDISK         1       7      2      Online
DX-2H  EDISK         1       6      2      Online
DX-3H  EDISK         2       6      2      Online
DX-4H  EDISK         2       7      2      Online

The symsan command is used with the -sanports option to validate connectivity to an external storage array. In this example, there are four ports on two DX directors (01H:07, 01H:29, 02H:07, and 02H:29) zoned to four storage controller ports on an XtremIO. This is indicated by the fact that each remote port WWN is unique.

# symsan list -sanports -DX all -port all -sid 41

Symmetrix ID: 000197200041

       Flags                                 Num
DIR:P  I      Vendor  Array             LUNs  Remote Port WWN
------ -----  ------  ----------------  ----  ----------------
01H:07 .      EMC     XtremIO FNM00151501047     5  21000024FF3D2743
01H:29 .      EMC     XtremIO FNM00151501047     5  21000024FF3D2742
02H:07 .      EMC     XtremIO FNM00151501047     5  21000024FF5D55AD
02H:29 .      EMC     XtremIO FNM00151501047     5  21000024FF5D55AC
03H:07 .      EMC     CloudArray 16e2e9095b5d26c*  28  57CC95A0004094AD
03H:29 .      EMC     CloudArray 16e2e9095b5d26c*  28  57CC95A0002094AD
04H:07 .      EMC     CloudArray 16e2e9095b5d26c*  28  57CC95A0008094AD
04H:29 .      EMC     CloudArray 16e2e9095b5d26c*  28  57CC95A0006094AD

Note: The output of symsan commands may return output from some external arrays, like the CloudArray, with a truncated WWN or serial number, indicated by an asterisk (*) at the end of the field. To display the entire WWN or serial number, use the -detail option.

Use the symcfg list command to display details about the DX directors and the port WWNs required for zoning:

# symcfg list -DX 1H -v -sid 41

Symmetrix ID: 000197200041 (Local)

Time Zone : EDT
Product Model : VMAX400K
Symmetrix ID : 000197200041
Microcode Version (Number) : 5977 (17590000)
Microcode Registered Build : 0
Microcode Date : 06.23.2015
Microcode Patch Date : 06.23.2015
Microcode Patch Level : 660
Symmwin Version : 651
Enginuity Build Version : 5977.660.651
Service Processor Time Offset : - 01:00:38

Director Identification : DX-1H
Director Type : EDISK
Director Status : Online
Director Symbolic Number : 01H
Director Numeric Number : 113
Director Engine Number : 1
Director Slot Number : 1
Number of Director Cores : 7
Number of Director Ports : 2

Director Port: 7
WWN Port Name : 500009737800A407
Director Port Status : Online
Negotiated Speed (Gb/Second) : 8
Director Port Speed (Gb/Second) : 8

Director Port: 29
WWN Port Name : 500009737800A41D
Director Port Status : Online
Negotiated Speed (Gb/Second) : 8
Director Port Speed (Gb/Second) : 8

Examining the FAST.X environment

Once the connectivity for FAST.X is complete, verify that the DX directors are available and that there is connectivity to external volumes.

Confirm the Availability of the External Volumes

The symsan command verifies that volumes are available on external storage when the -sanluns option is used with an external port WWN. There are five XtremIO volumes that are masked to the VMAX3 DXs and are available to be configured as edisks.

# symsan list -dir 1H -p 7 -sanluns -wwn 21000024FF3D2743 -sid 41

Symmetrix ID: 000197200041
Remote Port WWN: 21000024FF3D2743

       STATE  Flags    Block  Capacity    LUN    Dev    LUN
DIR:P         ICR THS  Size   (MB)        Num    Num    WWN
------ --     ------- -----  ----------  -----  -----  --------------------------------
01H:07 RW     ... F..  512    102400      0      N/A    514F0C55EBA00001
01H:07 RW     ... F..  512    102400      1      N/A    514F0C55EBA00002
01H:07 RW     ... F..  512    102400      2      N/A    514F0C55EBA00003
01H:07 RW     ... F..  512    102400      3      N/A    514F0C55EBA00004
01H:07 RW     ... F..  512    102400      4      N/A    514F0C55EBA00005

Legend:
Flags:
(I)ncomplete : X = record is incomplete, . = record is complete.
(C)ontroller : X = record is controller, . = record is not controller.
(R)eserved : X = record is reserved, . = record is not reserved.
(T)ype : A = AS400, F = FBA, C = CKD, . = Unknown
t(h)in : X = record is a thin dev, . = record is not a thin dev.
(S)ymmetrix : X = Symmetrix device, . = not Symmetrix device.

Configure edisks for External Provisioning

The volumes that are available on the XtremIO array can be configured as edisks for FAST.X. The configuration of the edisks also creates the required disk group, pool, and DATA devices (TDATs). There are two SRPs configured on the system: one for CloudArray, and the DEFAULT_SRP, which contains the other disk groups in the system.
# symcfg list -srp -sid 41 -detail

STORAGE RESOURCE POOLS

Symmetrix ID : 000197200041

                                      C A P A C I T Y
-------------------------------- --- ------------------------------------------------
                                 Flg  Usable     Allocated  Free       Subscribed
Name                             DR   (GB)       (GB)       (GB)       (GB)       (%)
-------------------------------- --- ---------- ---------- ---------- ---------- ----
CloudArray_SRP                   ..    16384.0        0.0    16384.0        0.0     0
DEFAULT_SRP                      FX    74357.5     1441.5    72916.0   234351.3    31
                                     ---------- ---------- ---------- ---------- ----
Total                                  90741.5     1441.5    89300.0   234351.3   258

Legend:
Flags:
(D)efault SRP : F = FBA Default, . = N/A
(R)DFA DSE : X = Usable, . = Not Used

Prior to adding edisks, there are no external disk groups except the default encapsulated disk group called *ENCAPSDG*. This group is used for devices that are encapsulated for ProtectPoint, which is not configured on this system. The other three disk groups are internal disk groups containing FC, SATA, and EFD drives.

# symdisk list -dskgrp_summary -sid 41

Symmetrix ID: 000197200041

    Disk Group                Disk                    Hyper   Capacity
----------------------- ----------------------- ------ ---------------------
                              Flgs  Speed   Size      Size    Total      Free
Num Name                 Cnt  LT    (RPM)   (MB)      (MB)    (MB)       (MB)
----------------------- ----------------------- ------ ---------------------
1   DISK_GROUP_001       207  IF    15000   278972    17436   57747178      0
2   DISK_GROUP_002        78  IS     7200   953367    59585   74362641      0
3   DISK_GROUP_003        33  IE        0   190673    11917    6292223      0
512 *ENCAPSDG*             0  --      N/A      N/A      N/A          0      0
513 EXT_GROUP_513          4  X-      N/A      N/A      Any     819200    200
                                                             ---------- ----------
Total                                                         139221242    200

Legend:
Disk (L)ocation: I = Internal, X = External, - = N/A
(T)echnology: S = SATA, F = Fibre Channel, E = Enterprise Flash Drive, - = N/A

There are also pools for the internal drive types, as well as a pool for ProtectPoint encapsulated devices called *ENCAPSPOOL*. There is no pool yet for the TDATs that will be created from the XtremIO external volumes. The configuration operation to add the edisks creates the required disk group, pool, and DATA devices (TDATs).

# symcfg list -pool -sid 41

Symmetrix ID: 000197200041
S Y M M E T R I X   P O O L S
---------------------------------------------------------------------------
Pool         Flags               Dev   Usable     Free       Used       Full Comp
Name         PTECSL Config       Tracks     Tracks     Tracks          (%)  (%)
------------ ------ ------------ ---------- ---------- ---------- ---- ----
DG1_FBA15K   TFF-EI 2-Way Mir     217546560  213426268    4120292    1    0
DG2_FBA7_2   TSF-EI RAID-6(6+2)   360460800  360460800          0    0    0
DG3_FBA_F    TEF-EI RAID-5(3+1)    27034560   20592832    6441728   23    0
DG513_FBA    T-F-EX Unprotected     6552000    5304917    1247083   19    0
*ENCAPSPOOL* T---D- Unknown              0          0          0    0    0
                                 ---------- ---------- ---------- ---- ----
Total Tracks                      611593920  599784817   11809103    1    0

Legend:
(P)ool Type: S = Snap, R = Rdfa DSE, T = Thin

(T)echnology: S = SATA, F = Fibre Channel, E = Enterprise Flash Drive, M = Mixed, - = N/A
Dev (E)mulation: F = FBA, A = AS400, 8 = CKD3380, 9 = CKD3390, - = N/A
(C)ompression: E = Enabled, D = Disabled, N = Enabling, S = Disabling, - = N/A
(S)tate: E = Enabled, D = Disabled, B = Balancing
Disk (L)ocation: I = Internal, X = External, M = Mixed, - = N/A

The symconfigure command configures the edisks using the device's WWN specified in the command syntax (the WWN is taken from the output of symsan list -sanluns). Because the command requires specification of five WWNs, either five separate commands or one very long command would need to be run from the command line. This example uses a command file instead.

Note: When the parameter encapsulate_data is set to NO, any existing data on the external volume will be destroyed.

# cat /cmd_files/edisk_wwns
add external_disk wwn=514f0c55eba00001, encapsulate_data=no srp=default_srp;
add external_disk wwn=514f0c55eba00002, encapsulate_data=no srp=default_srp;
add external_disk wwn=514f0c55eba00003, encapsulate_data=no srp=default_srp;
add external_disk wwn=514f0c55eba00004, encapsulate_data=no srp=default_srp;
add external_disk wwn=514f0c55eba00005, encapsulate_data=no srp=default_srp;

# symconfigure -sid 41 -f /cmd_files/edisk_wwns commit -nop

A Configuration Change operation is in progress. Please wait...

Establishing a configuration change session...established.
Processing symmetrix 000197200041
Performing Access checks...allowed.
Checking Device Reservations...Allowed.
Initiating COMMIT of configuration changes...queued.
COMMIT requesting required resources...obtained.
Step 005 of 069 steps...executing.
Step 005 of 069 steps...executing.
Step 010 of 069 steps...executing.
Step 014 of 069 steps...executing.
Step 018 of 072 steps...executing.
Step 020 of 072 steps...executing.
Step 021 of 072 steps...executing.
Step 024 of 072 steps...executing.
Step 029 of 072 steps...executing.
Step 032 of 072 steps...executing.
Step 032 of 072 steps...executing.
Step 043 of 072 steps...executing.
Step 043 of 072 steps...executing.
Step 045 of 203 steps...executing.
Step 187 of 214 steps...executing.
Step 187 of 214 steps...executing.
Step 197 of 214 steps...executing.
Step 202 of 214 steps...executing.
Step 204 of 214 steps...executing.
Step 211 of 214 steps...executing.
Step 211 of 214 steps...executing.
Step 214 of 214 steps...executing.
Local: COMMIT...Done.

New symdevs: FF8D7:FF8DB [DATA devices]
Terminating the configuration change session...Done.
The configuration change session has successfully completed.

Five new DATA devices (FF8D7:FF8DB) are created, along with a disk group (EXT_GROUP_514) and a pool (DG514_FBA). The DATA devices are enabled in the thin pool.

# symdisk list -dskgrp_summary -sid 41

Symmetrix ID: 000197200041

    Disk Group                Disk                    Hyper   Capacity
----------------------- ----------------------- ------ ---------------------
                              Flgs  Speed   Size      Size    Total      Free
Num Name                 Cnt  LT    (RPM)   (MB)      (MB)    (MB)       (MB)
----------------------- ----------------------- ------ ---------------------
1   DISK_GROUP_001       207  IF    15000   278972    17436   57747178      0
2   DISK_GROUP_002        78  IS     7200   953367    59585   74362641      0
3   DISK_GROUP_003        33  IE        0   190673    11917    6292223      0
512 *ENCAPSDG*             0  --      N/A      N/A      N/A          0      0
513 EXT_GROUP_513          4  X-      N/A      N/A      Any     819200    200
514 EXT_GROUP_514          5  X-      N/A      N/A      Any     512000    125
                                                             ---------- ----------
Total                                                         139733242    325

Legend:
Disk (L)ocation: I = Internal, X = External, - = N/A
(T)echnology: S = SATA, F = Fibre Channel, E = Enterprise Flash Drive, - = N/A

# symcfg list -pool -sid 41

Symmetrix ID: 000197200041

S Y M M E T R I X   P O O L S
---------------------------------------------------------------------------
Pool         Flags               Dev   Usable     Free       Used       Full Comp
Name         PTECSL Config       Tracks     Tracks     Tracks          (%)  (%)
------------ ------ ------------ ---------- ---------- ---------- ---- ----
DG1_FBA15K   TFF-EI 2-Way Mir     217546560  213040551    4506009    2    0
DG2_FBA7_2   TSF-EI RAID-6(6+2)   360460800  360460800          0    0    0
DG3_FBA_F    TEF-EI RAID-5(3+1)    27034560   20592832    6441728   23    0
DG513_FBA    T-F-EX Unprotected     6552000    5690634     861366   13    0
*ENCAPSPOOL* T---D- Unknown              0          0          0    0    0
DG514_FBA    T-F-EX Unprotected     4095000    4095000          0    0    0
                                 ---------- ---------- ---------- ---- ----
Total Tracks                      615688920  603879817   11809103    1    0

Legend:
(P)ool Type: S = Snap, R = Rdfa DSE, T = Thin
(T)echnology: S = SATA, F = Fibre Channel, E = Enterprise Flash Drive, M = Mixed, - = N/A
Dev (E)mulation: F = FBA, A = AS400, 8 = CKD3380, 9 = CKD3390, - = N/A
(C)ompression: E = Enabled, D = Disabled, N = Enabling, S = Disabling, - = N/A

(S)tate: E = Enabled, D = Disabled, B = Balancing
Disk (L)ocation: I = Internal, X = External, M = Mixed, - = N/A

# symcfg show -pool DG514_FBA -detail -thin -sid 41

Symmetrix ID: 000197200041

Symmetrix ID : 000197200041
Pool Name : DG514_FBA
Pool Type : Thin
Disk Location : External
Technology
Dev Emulation : FBA
Dev Configuration : Unprotected
Pool State : Enabled
Compression State
# of Devices in Pool : 5
# of Enabled Devices in Pool : 5
# of Usable Tracks in Pool : 4095000
# of Allocated Tracks in Pool : 0
# of Thin Device Tracks : 0
# of DSE Tracks : 0
# of Local Replication Tracks : 0
# of Tracks saved by compression : 0
# of Shared Tracks in Pool
Pool Utilization (%) : 0
Pool Compression Ratio (%) : 0
Max. Subscription Percent
Rebalance Variance
Max devs per rebalance scan
Pool Reserved Capacity

Enabled Devices(5):
{
----------------------------------------------------------
Sym     Usable     Alloc      Free       Full FLG
Dev     Tracks     Tracks     Tracks     (%)  S   State
----------------------------------------------------------
FF8D7   819000     0          819000     0    .   Enabled
FF8D8   819000     0          819000     0    .   Enabled
FF8D9   819000     0          819000     0    .   Enabled
FF8DA   819000     0          819000     0    .   Enabled
FF8DB   819000     0          819000     0    .   Enabled
        ---------- ---------- ---------- ----
Tracks  4095000    0          4095000    0
}

No Thin Devices Bound to Device Pool DG514_FBA
No Other-Pool Bound Thin Devices have allocations in Device Pool DG514_FBA

Legend:
Enabled devices FLG:
(S)hared Tracks : X = Shared Tracks, . = No Shared Tracks
Bound Devices FLG:
S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, D = Deallocating, R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll, . = Unbound

Further Examining the Disk Group

More information about the disk group and the edisks that populate it can be gathered using symdisk commands. Listing the disk group shows general information about the group and the edisks in the group, including the primary DX ownership of each of the five edisks.

# symdisk list -sid 41 -disk_group 514

Symmetrix ID : 000197200041
Disks Selected : 5
Disk Group : 514
Disk Group Name : EXT_GROUP_514
Disk Location : External
Technology
Speed (RPM)
Form Factor

                                 Disk       Capacity(MB)
Ident  Int TID  Grp  Vendor Type Hypr  Total      Free
------ --- --- ---- ---------- ---------- ---- ---------- ----------
DX-1H   -   -  514  EMC    N/A    1     102400         25
DX-2H   -   -  514  EMC    N/A    1     102400         25
DX-1H   -   -  514  EMC    N/A    1     102400         25
DX-2H   -   -  514  EMC    N/A    1     102400         25
DX-1H   -   -  514  EMC    N/A    1     102400         25
                                      ---------- ----------
Total                                   512000        125

Adding -v to the command lists each of the edisks in the disk group and gives more detail about each, including the edisk spindle IDs (8004-8008) and the WWNs of the corresponding external LUNs.

# symdisk list -sid 41 -disk_group 514 -v

Symmetrix ID : 000197200041
Disks Selected : 5
Disk Group : 514
Disk Group Name : EXT_GROUP_514
Disk Location : External
Technology
Speed (RPM)
Form Factor

Director : DX-1H
Interface
Target ID
Spindle ID : 8004
External WWN : 514F0C55EBA00001
External Array ID : FNM00151501047
External Device Name
Disk Group Number : 514
Disk Group Name : EXT_GROUP_514
Disk Location : External
Technology
Speed (RPM)
Form Factor
Vendor ID : EMC

Product ID : XtremIO
Product Revision
Serial ID
Disk Blocks : 209715200
Block Size : 512
Total Disk Capacity (MB) : 102400
Free Disk Capacity (MB) : 25
Rated Disk Capacity (GB)
Hyper Size (MB) : Any
Hyper Count : 1
Spare Disk
Spare Coverage
Encapsulated Disk : False
Service State : Normal

Director : DX-2H
Interface
Target ID
Spindle ID : 8005
External WWN : 514F0C55EBA00002
External Array ID : FNM00151501047
External Device Name
Disk Group Number : 514
Disk Group Name : EXT_GROUP_514
Disk Location : External
Technology
Speed (RPM)
Form Factor
Vendor ID : EMC
Product ID : XtremIO
Product Revision
Serial ID
Disk Blocks : 209715200
Block Size : 512
Total Disk Capacity (MB) : 102400
Free Disk Capacity (MB) : 25
Rated Disk Capacity (GB)
Hyper Size (MB) : Any
Hyper Count : 1
Spare Disk
Spare Coverage
Encapsulated Disk : False
Service State : Normal

Director : DX-1H
Interface
Target ID
Spindle ID : 8006
External WWN : 514F0C55EBA00003
External Array ID : FNM00151501047
External Device Name
Disk Group Number : 514
Disk Group Name : EXT_GROUP_514
Disk Location : External
Technology
Speed (RPM)
Form Factor
Vendor ID : EMC
Product ID : XtremIO

Product Revision
Serial ID
Disk Blocks : 209715200
Block Size : 512
Total Disk Capacity (MB) : 102400
Free Disk Capacity (MB) : 25
Rated Disk Capacity (GB)
Hyper Size (MB) : Any
Hyper Count : 1
Spare Disk
Spare Coverage
Encapsulated Disk : False
Service State : Normal

Director : DX-2H
Interface
Target ID
Spindle ID : 8007
External WWN : 514F0C55EBA00004
External Array ID : FNM00151501047
External Device Name
Disk Group Number : 514
Disk Group Name : EXT_GROUP_514
Disk Location : External
Technology
Speed (RPM)
Form Factor
Vendor ID : EMC
Product ID : XtremIO
Product Revision
Serial ID
Disk Blocks : 209715200
Block Size : 512
Total Disk Capacity (MB) : 102400
Free Disk Capacity (MB) : 25
Rated Disk Capacity (GB)
Hyper Size (MB) : Any
Hyper Count : 1
Spare Disk
Spare Coverage
Encapsulated Disk : False
Service State : Normal

Director : DX-1H
Interface
Target ID
Spindle ID : 8008
External WWN : 514F0C55EBA00005
External Array ID : FNM00151501047
External Device Name
Disk Group Number : 514
Disk Group Name : EXT_GROUP_514
Disk Location : External
Technology
Speed (RPM)
Form Factor
Vendor ID : EMC

Product ID : XtremIO
Product Revision
Serial ID
Disk Blocks : 209715200
Block Size : 512
Total Disk Capacity (MB) : 102400
Free Disk Capacity (MB) : 25
Rated Disk Capacity (GB)
Hyper Size (MB) : Any
Hyper Count : 1
Spare Disk
Spare Coverage
Encapsulated Disk : False
Service State : Normal

The following options show all of the external spindles, the paths to each that are active, and the paths that are available for failover. The output also shows the four edisks configured on DX-3H and DX-4H (spindles 8000-8003), which are configured for a separate FAST.X with CloudArray environment:

# symdisk -sid 41 list -external -spindle -path -detail

Symmetrix ID : 000197200041

         Flags
Spindle  A      DIR:P    Remote Port WWN
-------- -----  -------  --------------------------------
8000     X      03H:007  57cc95a0004094ad
         X      03H:029  57cc95a0002094ad
         .      04H:029  57cc95a0006094ad
         .      04H:007  57cc95a0008094ad
8001     X      04H:029  57cc95a0006094ad
         X      04H:007  57cc95a0008094ad
         .      03H:007  57cc95a0004094ad
         .      03H:029  57cc95a0002094ad
8002     X      03H:007  57cc95a0004094ad
         X      03H:029  57cc95a0002094ad
         .      04H:029  57cc95a0006094ad
         .      04H:007  57cc95a0008094ad
8003     X      04H:029  57cc95a0006094ad
         X      04H:007  57cc95a0008094ad
         .      03H:007  57cc95a0004094ad
         .      03H:029  57cc95a0002094ad
8004     X      01H:007  21000024ff3d2743
         X      01H:029  21000024ff3d2742
         .      02H:007  21000024ff5d55ad
         .      02H:029  21000024ff5d55ac
8005     X      02H:007  21000024ff5d55ad
         X      02H:029  21000024ff5d55ac
         .      01H:007  21000024ff3d2743
         .      01H:029  21000024ff3d2742
8006     X      01H:007  21000024ff3d2743
         X      01H:029  21000024ff3d2742
         .      02H:007  21000024ff5d55ad
         .      02H:029  21000024ff5d55ac
8007     X      02H:007  21000024ff5d55ad
         X      02H:029  21000024ff5d55ac
         .      01H:007  21000024ff3d2743
         .      01H:029  21000024ff3d2742
8008     X      01H:007  21000024ff3d2743
         X      01H:029  21000024ff3d2742
         .      02H:007  21000024ff5d55ad
         .      02H:029  21000024ff5d55ac
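The active/failover layout shown above can be sketched as a simple path selector: each spindle round-robins I/O across its active paths and promotes the failover set if the active paths are lost. This is an illustrative model only, not HYPERMAX OS multipathing code; the class and method names are invented.

```python
from itertools import cycle

class SpindlePaths:
    """Toy model of one spindle's path table: round-robin over active
    paths, with a failover set held in reserve (illustrative only)."""

    def __init__(self, active, failover):
        self.active, self.failover = list(active), list(failover)
        self._rr = cycle(self.active)

    def next_path(self):
        # Round-robin across the currently active paths.
        return next(self._rr)

    def fail_over(self):
        # Promote the failover paths when the active set is lost.
        self.active, self.failover = self.failover, self.active
        self._rr = cycle(self.active)

# Spindle 8004 from the output above: active on DX-1H, failover on DX-2H.
p = SpindlePaths(active=["01H:007", "01H:029"], failover=["02H:007", "02H:029"])
print(p.next_path(), p.next_path())  # 01H:007 01H:029
p.fail_over()
print(p.next_path())                 # 02H:007
```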

Legend:
  (A)ctive path: X = Active, . = Failover

Configure edisks for Incorporation

Starting with the Q1 2016 HYPERMAX OS Service Release, data that exists on external volumes can be preserved while configuring edisks. This mode of operation is called Incorporation. In this example, an external VNX LUN containing host data is incorporated. When the incorporation operation runs, a VMAX3 thin device that is equal in size to the edisk is created, along with the TDAT, on the VMAX3 array. The thin device enables hosts to access the incorporated data that exists on the external LUN.

Note: Once the external LUN is incorporated, the resulting thin device is available to use in the same way as an externally provisioned thin device. All features that are supported with FAST.X are supported with both types of devices, and all examples and comments shown apply to both unless noted.

This Windows host is accessing four VNX devices natively, meaning that the host is connected directly to a VNX FC front-end storage port:

C:\>syminq
Device                            Product                               Device
--------------------------------  ------------------------------------  -----------
Name                   Type       Vendor  ID            Rev  Ser Num       Cap (KB)
--------------------------------  ------------------------------------  -----------
\\.\PHYSICALDRIVE0                VMware  Virtual disk  1.0  N/A          157286400
\\.\PHYSICALDRIVE1     GK         EMC     SYMMETRIX     5876 840002A000        2880
\\.\PHYSICALDRIVE2     GK         EMC     SYMMETRIX     5876 840002B000        2880
\\.\PHYSICALDRIVE3     GK         EMC     SYMMETRIX     5876 840002C000        2880
\\.\PHYSICALDRIVE4     GK         EMC     SYMMETRIX     5876 840002D000        2880
\\.\PHYSICALDRIVE5     GK         EMC     SYMMETRIX     5876 840002E000        2880
\\.\PHYSICALDRIVE6     GK         EMC     SYMMETRIX     5876 840002F000        2880
\\.\PHYSICALDRIVE7     GK         EMC     SYMMETRIX     5977 320002A000        5760
\\.\PHYSICALDRIVE8     GK         EMC     SYMMETRIX     5977 320002B000        5760
\\.\PHYSICALDRIVE9     GK         EMC     SYMMETRIX     5977 3200024000        5760
\\.\PHYSICALDRIVE10    GK         EMC     SYMMETRIX     5977 3200025000        5760
\\.\PHYSICALDRIVE11    GK         EMC     SYMMETRIX     5977 3200026000        5760
\\.\PHYSICALDRIVE12    GK         EMC     SYMMETRIX     5977 3200027000        5760
\\.\PHYSICALDRIVE13               EMC     SYMMETRIX     5876 8400150000   104857920
\\.\PHYSICALDRIVE14               EMC     SYMMETRIX     5876 8400151000   104857920
\\.\PHYSICALDRIVE15               EMC     SYMMETRIX     5876 8400152000   104857920
\\.\PHYSICALDRIVE16               EMC     SYMMETRIX     5876 8400153000   104857920
\\.\PHYSICALDRIVE17               EMC     SYMMETRIX     5977 3200055000   104858880
\\.\PHYSICALDRIVE18               EMC     SYMMETRIX     5977 3200056000   104858880
\\.\PHYSICALDRIVE19               EMC     SYMMETRIX     5977 3200057000   104858880
\\.\PHYSICALDRIVE20               EMC     SYMMETRIX     5977 3200058000   104858880
\\.\PHYSICALDRIVE21               DGC     VRAID         0533 2756F15F      52428800
\\.\PHYSICALDRIVE22               DGC     VRAID         0533 2856F15F      52428800
\\.\PHYSICALDRIVE23               DGC     VRAID         0533 2956F15F      52428800
\\.\PHYSICALDRIVE24               DGC     VRAID         0533 2A56F15F      52428800

File systems have been created on the four volumes, which are mounted and have been assigned drive letters. Data has been written to each of the volumes.

After writing the host data, the VNX devices have been unmasked from the host and presented as external LUNs to the DX directors on the VMAX3.

C:\>symsan list -sanports -DX all -port all -sid 32

Symmetrix ID: 000196701632

        Flags                                   Num
DIR:P   I      Vendor  Array            LUNs    Remote Port WWN
------  -----  ------  ---------------  ----    ----------------
01H:07  .      EMC     CLARiiON APM00144513926    4  5006016436E00812
01H:31  .      EMC     CLARiiON APM00144513926    4  5006016C36E00812
02H:07  .      EMC     CLARiiON APM00144513926    4  5006016836E40812
02H:31  .      EMC     CLARiiON APM00144513926    4  5006016036E40812

Legend:
  Flags:
    (I)ncomplete : X = record is incomplete, . = record is complete.

The external LUNs from the VNX are available and can be incorporated.

C:\>symsan list -dir 1H -p 7 -sanluns -wwn 5006016436E00812 -sid 32

Symmetrix ID: 000196701632
Remote Port WWN: 5006016436E00812

        STATE  Flags    Block   Capacity    LUN    Dev
DIR:P          ICR THS   Size       (MB)    Num    Num   LUN WWN
------  -----  -------  -----  ---------  -----  -----  --------------------------------

01H:07  RW     ... F..    512      51200      0  00040  6006016037903A00CCE619EE3FF0E511
01H:07  RW     ... F..    512      51200      1  00042  6006016037903A00D0E619EE3FF0E511
01H:07  RW     ... F..    512      51200      2  00039  6006016037903A00CAE619EE3FF0E511
01H:07  RW     ... F..    512      51200      3  00041  6006016037903A00CEE619EE3FF0E511

Legend:
  Flags:
    (I)ncomplete : X = record is incomplete, . = record is complete.
    (C)ontroller : X = record is controller, . = record is not controller.
    (R)eserved   : X = record is reserved, . = record is not reserved.
    (T)ype       : A = AS400, F = FBA, C = CKD, . = Unknown
    t(h)in       : X = record is a thin dev, . = record is not a thin dev.
    (S)ymmetrix  : X = Symmetrix device, . = not Symmetrix device.

The symconfigure command to incorporate the external LUNs is run either from the command line or by calling a command file:

C:\>type ext_wwns.txt
add external_disk wwn=6006016037903a00cce619ee3ff0e511, encapsulate_data=no keep_data=yes;
add external_disk wwn=6006016037903a00d0e619ee3ff0e511, encapsulate_data=no keep_data=yes;
add external_disk wwn=6006016037903a00cae619ee3ff0e511, encapsulate_data=no keep_data=yes;
add external_disk wwn=6006016037903a00cee619ee3ff0e511, encapsulate_data=no keep_data=yes;

C:\>symconfigure -sid 32 -f c:\ext_wwns.txt commit -nop

A Configuration Change operation is in progress. Please wait...

    Establishing a configuration change session...established.
    Processing symmetrix 000196701632
    Performing Access checks...allowed.
    Checking Device Reservations...Allowed.
    Initiating COMMIT of configuration changes...queued.
    COMMIT requesting required resources...obtained.
    Step 009 of 075 steps...executing.
    Step 013 of 075 steps...executing.
    Step 018 of 080 steps...executing.
    Step 022 of 080 steps...executing.
    Step 023 of 080 steps...executing.
    Step 031 of 080 steps...executing.
    Step 223 of 239 steps...executing.
    Step 228 of 239 steps...executing.
    Step 233 of 239 steps...executing.
    Step 236 of 239 steps...executing.
    Step 239 of 239 steps...executing.
    Local: COMMIT...Done.
    New symdevs: 00059:0005C [TDEVs]
    New symdevs: FFF6C:FFF6F [DATA devices]
    Terminating the configuration change session...done.

The configuration change session has successfully completed.
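Command files like ext_wwns.txt are repetitive and easy to generate rather than type. The sketch below builds one from a list of WWNs; the helper name is hypothetical, and the entry format simply follows the example above (incorporation with keep_data=yes).

```python
# Sketch: build a symconfigure command file, one "add external_disk" entry
# per WWN, using the incorporation syntax shown above.
wwns = [
    "6006016037903a00cce619ee3ff0e511",
    "6006016037903a00d0e619ee3ff0e511",
]

def incorporation_commands(wwns):
    return [
        "add external_disk wwn=%s, encapsulate_data=no keep_data=yes;" % w
        for w in wwns
    ]

# Write the file that symconfigure -f will consume.
with open("ext_wwns.txt", "w") as f:
    f.write("\n".join(incorporation_commands(wwns)) + "\n")
```

For non-incorporation use (no data to preserve), the same generator would emit keep_data=no or an srp= clause instead, as in the later examples.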

There are now four new thin VMAX3 devices (00059:0005C) that allow the host to access the data on the external VNX LUNs. The VMAX3 thin devices are masked to the host, and the native VNX devices have been removed.

C:\Program Files (x86)\emc\symcli\bin>syminq

Device                            Product                               Device
--------------------------------  ------------------------------------  -----------
Name                   Type       Vendor  ID            Rev  Ser Num       Cap (KB)
--------------------------------  ------------------------------------  -----------
\\.\PHYSICALDRIVE0                VMware  Virtual disk  1.0  N/A          157286400
\\.\PHYSICALDRIVE1     GK         EMC     SYMMETRIX     5876 840002A000        2880
\\.\PHYSICALDRIVE2     GK         EMC     SYMMETRIX     5876 840002B000        2880
\\.\PHYSICALDRIVE3     GK         EMC     SYMMETRIX     5876 840002C000        2880
\\.\PHYSICALDRIVE4     GK         EMC     SYMMETRIX     5876 840002D000        2880
\\.\PHYSICALDRIVE5     GK         EMC     SYMMETRIX     5876 840002E000        2880
\\.\PHYSICALDRIVE6     GK         EMC     SYMMETRIX     5876 840002F000        2880
\\.\PHYSICALDRIVE7     GK         EMC     SYMMETRIX     5977 320002A000        5760
\\.\PHYSICALDRIVE8     GK         EMC     SYMMETRIX     5977 320002B000        5760
\\.\PHYSICALDRIVE9     GK         EMC     SYMMETRIX     5977 3200024000        5760
\\.\PHYSICALDRIVE10    GK         EMC     SYMMETRIX     5977 3200025000        5760
\\.\PHYSICALDRIVE11    GK         EMC     SYMMETRIX     5977 3200026000        5760
\\.\PHYSICALDRIVE12    GK         EMC     SYMMETRIX     5977 3200027000        5760
\\.\PHYSICALDRIVE13               EMC     SYMMETRIX     5876 8400150000   104857920
\\.\PHYSICALDRIVE14               EMC     SYMMETRIX     5876 8400151000   104857920
\\.\PHYSICALDRIVE15               EMC     SYMMETRIX     5876 8400152000   104857920
\\.\PHYSICALDRIVE16               EMC     SYMMETRIX     5876 8400153000   104857920
\\.\PHYSICALDRIVE17               EMC     SYMMETRIX     5977 3200055000   104858880
\\.\PHYSICALDRIVE18               EMC     SYMMETRIX     5977 3200056000   104858880
\\.\PHYSICALDRIVE19               EMC     SYMMETRIX     5977 3200057000   104858880
\\.\PHYSICALDRIVE20               EMC     SYMMETRIX     5977 3200058000   104858880
\\.\PHYSICALDRIVE21               EMC     SYMMETRIX     5977 3200059000    52428800
\\.\PHYSICALDRIVE22               EMC     SYMMETRIX     5977 320005A000    52428800
\\.\PHYSICALDRIVE23               EMC     SYMMETRIX     5977 320005B000    52428800
\\.\PHYSICALDRIVE24               EMC     SYMMETRIX     5977 320005C000    52428800

The devices are available and simply need to be brought online by right-clicking each disk and choosing Online.
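Device ranges such as 00059:0005C, reported by symconfigure as "New symdevs", are hexadecimal. A small helper (hypothetical, for illustration) can expand a range into the individual device IDs when scripting against the output:

```python
# Sketch: expand a Symmetrix device range like "00059:0005C" into the
# individual hexadecimal device IDs it covers.
def expand_range(devrange):
    start, end = devrange.split(":")
    width = len(start)
    return ["%0*X" % (width, n) for n in range(int(start, 16), int(end, 16) + 1)]

print(expand_range("00059:0005C"))  # → ['00059', '0005A', '0005B', '0005C']
```

Note that the four expanded IDs match the serial-number suffixes (3200059000 through 320005C000) in the syminq output above.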

When all four volumes are brought online, they are available with the same volume names and drive letters as when the host was accessing the VNX volumes natively through the VNX storage ports. The data written to the devices while they were directly accessible to the host has been preserved and is available through the VMAX3 thin devices.

The incorporated VMAX3 thin devices can be used with all local and remote HYPERMAX OS replication features and can take advantage of all capabilities of the VMAX3 array.

Creating a Storage Group to Assign Volumes to the Default SRP

In the VMAX3, storage groups are used to mask devices to hosts. They also assign volumes to an SRP and assign SLOs and workload types to devices. When creating thin volumes for host use, volumes can be created for later use and left unassigned to a storage group, or they can be assigned to a storage group that has already been created. In the test environment in use here, a storage group called lcseb149_sg is created for host lcseb149 in the default SRP. It is then added to an existing parent storage group (BETA_CLUSTER) as a child group. This masks the volumes created in the next step when they are added to that storage group and allows the host to discover the devices. Because no SLO is explicitly chosen, the Optimized SLO is assigned by default. The SLO name appears as <none> when the Optimized SLO is not specifically assigned, but the Optimized SLO is in use.

Note: Different environments may require different masking steps.

# symsg -sid 41 create lcseb149_sg -srp DEFAULT_SRP
# symsg show lcseb149_sg -sid 41

Name: lcseb149_sg

    Symmetrix ID             : 000197200041
    Last updated at          : Tue Jul 28 19:22:49 2015
    Masking Views            : No
    FAST Managed             : Yes
    SLO Name                 : <none>
    Workload                 : <none>
    SRP Name                 : DEFAULT_SRP
    Host I/O Limit           : None
    Host I/O Limit MB/Sec
    Host I/O Limit IO/Sec
    Dynamic Distribution
    Number of Storage Groups : 0
    Storage Group Names
    Number of Gatekeepers    : 0

# symsg -sg BETA_CLUSTER -sid 41 add sg lcseb149_sg
# symsg show BETA_CLUSTER -sid 41

Name: BETA_CLUSTER

    Symmetrix ID             : 000197200041
    Last updated at          : Tue Jul 28 19:27:58 2015
    Masking Views            : Yes
    FAST Managed             : No
    SLO Name                 : <none>
    Workload                 : <none>
    SRP Name                 : <none>
    Host I/O Limit           : None
    Host I/O Limit MB/Sec
    Host I/O Limit IO/Sec
    Dynamic Distribution
    Number of Storage Groups : 6
    Storage Group Names      : bc_gks (IsChild)
                               Rich_B137 (IsChild)
                               Andy_B149 (IsChild)
                               b127_gks (IsChild)
                               lcseb149_sg (IsChild)
    Number of Gatekeepers    : 18

    Devices (18):
    {
    ----------------------------------------------------------------
    Sym                                                    Cap
    Dev    Pdev Name            Config      Attr  Sts      (MB)
    ----------------------------------------------------------------
    00013  N/A                  TDEV (GK)         RW          6
    00014  N/A                  TDEV (GK)         RW          6
    00015  N/A                  TDEV (GK)         RW          6
    00016  N/A                  TDEV (GK)         RW          6
    00017  N/A                  TDEV (GK)         RW          6
    00018  N/A                  TDEV (GK)         RW          6
    00019  \\.\PHYSICALDRIVE1   TDEV (GK)         RW          6
    0001A  \\.\PHYSICALDRIVE2   TDEV (GK)         RW          6
    0001B  \\.\PHYSICALDRIVE3   TDEV (GK)         RW          6

    0001C  N/A                  TDEV (GK)         RW          6
    0001D  N/A                  TDEV (GK)         RW          6
    0001E  N/A                  TDEV (GK)         RW          6
    0001F  N/A                  TDEV (GK)         RW          6
    00020  N/A                  TDEV (GK)         RW          6
    00086  N/A                  TDEV (GK)         RW          6
    00087  N/A                  TDEV (GK)         RW          6
    00088  N/A                  TDEV (GK)         RW          6
    00089  N/A                  TDEV (GK)         RW          6
    }

Creating Thin Volumes for the Default SRP

The symconfigure command creates the devices and adds them to the storage group. In this example, two 200 GB devices (00133 and 00134) are created and added to lcseb149_sg.

# symconfigure -sid 41 -cmd "create dev count=2, size=200 GB, emulation=fba, config=tdev, sg=lcseb149_sg;" commit -nop

A Configuration Change operation is in progress. Please wait...

    Establishing a configuration change session...established.
    Processing symmetrix 000197200041
    Performing Access checks...allowed.
    Checking Device Reservations...Allowed.
    Initiating COMMIT of configuration changes...started.
    Committing configuration changes...queued.
    COMMIT requesting required resources...obtained.
    Step 006 of 009 steps...executing.
    Step 009 of 009 steps...executing.
    Local: COMMIT...Done.
    Adding devices to Storage Group...Done.
    New symdevs: 00133:00134 [TDEVs]
    Terminating the configuration change session...done.

The configuration change session has successfully completed.

# symsg show lcseb149_sg -sid 41

Name: lcseb149_sg

    Symmetrix ID             : 000197200041
    Last updated at          : Thu Jul 23 16:38:47 2015
    Masking Views            : Yes
    FAST Managed             : Yes
    SLO Name                 : <none>
    Workload                 : <none>
    SRP Name                 : DEFAULT_SRP
    Host I/O Limit           : None
    Host I/O Limit MB/Sec
    Host I/O Limit IO/Sec
    Dynamic Distribution
    Number of Storage Groups : 1
    Storage Group Names      : BETA_CLUSTER (IsParent)
    Number of Gatekeepers    : 0

    Devices (2):
    {

    ----------------------------------------------------------------
    Sym                                                    Cap
    Dev    Pdev Name            Config      Attr  Sts      (MB)
    ----------------------------------------------------------------
    00133  N/A                  TDEV              RW     204801
    00134  N/A                  TDEV              RW     204801
    }

The host can now discover devices 00133 and 00134.

Diagram of the Configured Environment

The following diagram shows the FAST.X environment after the commands run in the previous examples. It shows the FAST.X entities and their relationship to each other and to the arrays in general.

Note: Disk Group 513, which appears in the previous CLI output, is in its own SRP for FAST.X with CloudArray and is not shown in the diagram.

Figure 8. FAST.X Environment
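The 200 GB devices report a capacity of 204801 MB rather than 204800 MB. The arithmetic below shows why, assuming the VMAX3 FBA geometry of 128 KB tracks and 15 tracks per cylinder (an assumption here, not stated in this document): devices are created on whole-cylinder boundaries, so the requested size rounds up slightly.

```python
# Sketch: 200 GB rounded up to a whole number of cylinders, assuming
# 128 KB FBA tracks and 15 tracks per cylinder (1920 KB per cylinder).
import math

TRACK_KB = 128
TRACKS_PER_CYL = 15
CYL_KB = TRACK_KB * TRACKS_PER_CYL        # 1920 KB per cylinder

requested_kb = 200 * 1024 * 1024          # 200 GB expressed in KB
cylinders = math.ceil(requested_kb / CYL_KB)
actual_kb = cylinders * CYL_KB

print(math.ceil(actual_kb / 1024))        # → 204801 (MB, as symsg reports)
```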

Moving Volumes to an External SRP with EFD Storage Only

In certain conditions, all storage tiers in an SRP are available to be used for any data in that SRP, regardless of the chosen SLO. In the DEFAULT_SRP in this configuration, there are SATA, Fibre Channel, and EFD drives, along with external EFDs from an XtremIO array. Restricting the storage that a storage group uses to only EFD devices in all conditions is not possible. For example, the Diamond SLO restricts data to EFD devices only, but only as long as there is free capacity in those disk groups. If there is no capacity left in any EFD disk group in the SRP, but there is capacity in other drive pools (SATA or FC), data is placed on spinning disks rather than allowing a write to fail. The other consideration is that any EFD pool may be used when the Diamond SLO is chosen, which means that data will likely be placed on both internal and external EFD devices.

To restrict data to external storage only, place the external devices in their own SRP. Here, storage from an all-EFD array (XtremIO) is used. The symsg command moves volumes simply and easily between SRPs by moving thin volumes between storage groups. An empty, external SRP named XtremIO_SRP has been added to the array with a bin file change, which is required to create additional SRPs.

# symcfg list -srp -sid 41 -detail

STORAGE RESOURCE POOLS

Symmetrix ID : 000197200041

                                       C A P A C I T Y
--------------------------------  ---  ------------------------------------------------
                                  Flg      Usable   Allocated        Free  Subscribed
Name                              DR         (GB)        (GB)        (GB)        (GB)  (%)
--------------------------------  ---  ----------  ----------  ----------  ----------  ----
CloudArray_SRP                    ..      16384.0         0.0     16384.0         0.0     0
DEFAULT_SRP                       FX      74357.5      1441.5     72916.0    234351.3   315
XtremIO_SRP                       ..          0.0         0.0         0.0         0.0     0
                                       ----------  ----------  ----------  ----------  ----
Total                                     90741.5      1441.5     89300.0    234351.3   258

Legend:
  Flags:
    (D)efault SRP : F = FBA Default, . = N/A
    (R)DFA DSE    : X = Usable, . = Not Used

The SRP can be populated in the same way as in the previous tests. In this case, five new entries are added to the original command file and the earlier entries are commented out. The new entries add the edisks to the new SRP created for XtremIO only.

# more /cmd_files/edisk_wwns
#add external_disk wwn=514f0c55eba00001, encapsulate_data=no srp=default_srp;
#add external_disk wwn=514f0c55eba00002, encapsulate_data=no srp=default_srp;
#add external_disk wwn=514f0c55eba00003, encapsulate_data=no srp=default_srp;
#add external_disk wwn=514f0c55eba00004, encapsulate_data=no srp=default_srp;
#add external_disk wwn=514f0c55eba00005, encapsulate_data=no srp=default_srp;

add external_disk wwn=514f0c55eba00007, encapsulate_data=no srp=xtremio_srp;
add external_disk wwn=514f0c55eba00008, encapsulate_data=no srp=xtremio_srp;
add external_disk wwn=514f0c55eba00009, encapsulate_data=no srp=xtremio_srp;
add external_disk wwn=514f0c55eba0000a, encapsulate_data=no srp=xtremio_srp;
add external_disk wwn=514f0c55eba0000b, encapsulate_data=no srp=xtremio_srp;

# symconfigure -sid 41 -f /cmd_files/edisk_wwns commit -nop

A Configuration Change operation is in progress. Please wait...

    Establishing a configuration change session...established.
    Processing symmetrix 000197200041
    Performing Access checks...allowed.
    Checking Device Reservations...Allowed.
    Initiating COMMIT of configuration changes...queued.
    COMMIT requesting required resources...obtained.
    Step 005 of 070 steps...executing.
    Step 009 of 070 steps...executing.
    Step 014 of 070 steps...executing.
    Step 017 of 073 steps...executing.
    Step 020 of 073 steps...executing.
    Step 020 of 073 steps...executing.
    Step 032 of 073 steps...executing.
    Step 043 of 073 steps...executing.
    Step 044 of 073 steps...executing.
    Step 044 of 073 steps...executing.
    Step 046 of 209 steps...executing.
    Step 049 of 209 steps...executing.
    Step 049 of 209 steps...executing.
    Step 193 of 220 steps...executing.
    Step 194 of 220 steps...executing.
    Step 202 of 220 steps...executing.
    Step 210 of 220 steps...executing.
    Step 216 of 220 steps...executing.
    Step 217 of 220 steps...executing.
    Step 217 of 220 steps...executing.
    Local: COMMIT...Done.
    New symdevs: FF8D2:FF8D6 [DATA devices]
    Terminating the configuration change session...done.

The configuration change session has successfully completed.

The XtremIO SRP, disk group, and pool are now populated.

# symcfg list -srp -sid 41 -detail
STORAGE RESOURCE POOLS

Symmetrix ID : 000197200041

                                       C A P A C I T Y
--------------------------------  ---  ------------------------------------------------
                                  Flg      Usable   Allocated        Free  Subscribed
Name                              DR         (GB)        (GB)        (GB)        (GB)  (%)
--------------------------------  ---  ----------  ----------  ----------  ----------  ----
CloudArray_SRP                    ..      16384.0       334.8     16049.2      1024.0     6
DEFAULT_SRP                       FX      74357.5      1117.4     73240.1    233327.3   313
XtremIO_SRP                       ..        499.9         0.0       499.9         0.0     0
                                       ----------  ----------  ----------  ----------  ----
Total                                     91241.4      1452.2     89789.2    234351.3   256

Legend:

  Flags:
    (D)efault SRP : F = FBA Default, . = N/A
    (R)DFA DSE    : X = Usable, . = Not Used

# symdisk list -dskgrp_summary -sid 41

Symmetrix ID: 000197200041

                                 Disk               Hyper        Capacity
-----------------------------  ---------------------------  ------  ---------------------
                                    Flgs  Speed      Size     Size       Total       Free
Num  Name                      Cnt  LT    (RPM)      (MB)     (MB)        (MB)       (MB)
-----------------------------  ---------------------------  ------  ---------------------
  1  DISK_GROUP_001            207  IF    15000    278972    17436    57747178          0
  2  DISK_GROUP_002             78  IS     7200    953367    59585    74362641          0
  3  DISK_GROUP_003             33  IE        0    190673    11917     6292223          0
512  *ENCAPSDG*                  0  --      N/A       N/A      N/A           0          0
514  EXT_GROUP_514               5  X-      N/A       N/A      Any      512000        125
515  EXT_GROUP_515               4  X-      N/A       N/A      Any    16777216          1
516  EXT_GROUP_516               5  X-      N/A       N/A      Any      512000        125
                                                                    ----------  ---------
Total                                                                156203258        251

Legend:
  Disk (L)ocation: I = Internal, X = External, - = N/A
  (T)echnology: S = SATA, F = Fibre Channel, E = Enterprise Flash Drive, - = N/A

# symcfg list -pool -sid 41

Symmetrix ID: 000197200041

                       S Y M M E T R I X   P O O L S
---------------------------------------------------------------------------
Pool          Flags    Dev           Usable      Free        Used     Full  Comp
Name          PTECSL   Config        Tracks      Tracks      Tracks    (%)   (%)
------------  ------   ------------  ----------  ----------  ----------  ----  ----
DG1_FBA15K    TFF-EI   2-Way Mir      217546560   208845552     8701008     3     0
DG2_FBA7_2    TSF-EI   RAID-6(6+2)    360460800   360460800           0     0     0
DG3_FBA_F     TEF-EI   RAID-5(3+1)     27034560    27034539          21     0     0
DG515_FBA     T-F-EX   Unprotected    134217720   131053198     3164522     2     0
*ENCAPSPOOL*  T---D-   Unknown                0           0           0     0     0
DG514_FBA     T-F-EX   Unprotected      4095000     4095000           0     0     0
DG516_FBA     T-F-EX   Unprotected      4095000     4095000           0     0     0
Total                                ----------  ----------  ----------  ----  ----
  Tracks                              747449640   735584089    11865551     1     0

Legend:
  (P)ool Type: S = Snap, R = Rdfa DSE, T = Thin
  (T)echnology: S = SATA, F = Fibre Channel, E = Enterprise Flash Drive, M = Mixed, - = N/A
  Dev (E)mulation: F = FBA, A = AS400, 8 = CKD3380, 9 = CKD3390, - = N/A
  (C)ompression: E = Enabled, D = Disabled, N = Enabling, S = Disabling, - = N/A
  (S)tate: E = Enabled, D = Disabled, B = Balancing
  Disk (L)ocation: I = Internal, X = External, M = Mixed, - = N/A
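The track counts in this output convert directly to the gigabyte figures reported by symcfg list -srp. Assuming the VMAX3 FBA track size of 128 KB (an assumption; this document does not state it), the 4,095,000 usable tracks in DG516_FBA are the ~499.9 GB of usable capacity shown for XtremIO_SRP:

```python
# Sketch: convert symdisk/symcfg track counts to GB, assuming 128 KB tracks.
TRACK_KB = 128

def tracks_to_gb(tracks):
    return tracks * TRACK_KB / 1024.0 / 1024.0

print(round(tracks_to_gb(4095000), 1))  # → 499.9
```

The same conversion applied to the five 819,000-track edisks gives roughly 100 GB each, matching the "Total Disk Capacity (MB) : 102400" shown for the XtremIO edisks earlier.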

# symcfg show -pool DG516_FBA -detail -thin -sid 41

Symmetrix ID: 000197200041

Symmetrix ID                     : 000197200041

Pool Name                        : DG516_FBA
Pool Type                        : Thin
Disk Location                    : External
Technology
Dev Emulation                    : FBA
Dev Configuration                : Unprotected
Pool State                       : Enabled
Compression State
# of Devices in Pool             : 5
# of Enabled Devices in Pool     : 5
# of Usable Tracks in Pool       : 4095000
# of Allocated Tracks in Pool    : 0
# of Thin Device Tracks          : 0
# of DSE Tracks                  : 0
# of Local Replication Tracks    : 0
# of Tracks saved by compression : 0
# of Shared Tracks in Pool
Pool Utilization (%)             : 0
Pool Compression Ratio (%)       : 0
Max. Subscription Percent
Rebalance Variance
Max devs per rebalance scan
Pool Reserved Capacity

Enabled Devices(5):
{
----------------------------------------------------------
Sym       Usable      Alloc       Free  Full  FLG  Device
Dev       Tracks     Tracks     Tracks   (%)    S  State
----------------------------------------------------------
FF8D2     819000          0     819000     0    .  Enabled
FF8D3     819000          0     819000     0    .  Enabled
FF8D4     819000          0     819000     0    .  Enabled
FF8D5     819000          0     819000     0    .  Enabled
FF8D6     819000          0     819000     0    .  Enabled
      ----------  ---------  ---------  ----
Tracks   4095000          0    4095000     0
}

No Thin Devices Bound to Device Pool DG516_FBA

No Other-Pool Bound Thin Devices have allocations in Device Pool DG516_FBA

Legend:
  Enabled devices FLG:
    (S)hared Tracks : X = Shared Tracks, . = No Shared Tracks
  Bound Devices FLG:
    S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, D = Deallocating,
               R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll, . = Unbound

An empty storage group can be created just as in the previous section but is assigned to the XtremIO_SRP instead of the DEFAULT_SRP.

# symsg -sid 41 create lcseb149_xio_sg -srp XtremIO_SRP
# symsg -sid 41 -sg BETA_CLUSTER add sg lcseb149_xio_sg
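The Subscribed (%) column in the earlier symcfg list -srp output is simply subscribed capacity as a percentage of usable capacity, which is why a thinly provisioned SRP can show well over 100%. Using the DEFAULT_SRP figures shown earlier (74357.5 GB usable, 234351.3 GB subscribed):

```python
# Sketch: reproduce the Subscribed (%) figure from symcfg list -srp.
usable_gb = 74357.5
subscribed_gb = 234351.3

pct = int(subscribed_gb / usable_gb * 100)  # whole percent, as the CLI displays
print(pct)  # → 315
```

A value this far above 100% is normal for thin provisioning, but it is worth watching the Allocated column against Usable as hosts write data.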

# symaccess -sid 41 show lcseb149_xio_sg -type storage

Symmetrix ID                 : 000197200041

Storage Group Name           : lcseb149_xio_sg
Last update time             : 01:47:15 PM on Tue Jul 28,2015
Group last update time       : 01:47:15 PM on Tue Jul 28,2015
Number of Storage Groups     : 1
Storage Group Names          : BETA_CLUSTER (IsParent)

Devices                      : None

Masking View Names
  {
    BETA_CLUSTER *
  }

* Denotes Masking Views through a cascaded group

# symaccess -sid 41 show BETA_CLUSTER -type storage

Symmetrix ID                 : 000197200041

Storage Group Name           : BETA_CLUSTER
Last update time             : 01:47:15 PM on Tue Jul 28,2015
Group last update time       : 01:47:15 PM on Tue Jul 28,2015
Number of Storage Groups     : 7
Storage Group Names          : bc_gks (IsChild)
                               Rich_B137 (IsChild)
                               Andy_B149 (IsChild)
                               b127_gks (IsChild)
                               lcseb149_sg (IsChild)
                               lcseb149_xio_sg (IsChild)

Devices                      : 00013:00020 00086:00089 00133:00134

Masking View Names
  {
    BETA_CLUSTER
  }

At this point, there are two thin devices from the previous test mapped to host lcseb149 by assigning them to the lcseb149_sg storage group.

# symsg show lcseb149_sg -sid 41

Name: lcseb149_sg

    Symmetrix ID             : 000197200041
    Last updated at          : Tue Jul 28 19:34:50 2015
    Masking Views            : Yes
    FAST Managed             : Yes
    SLO Name                 : <none>
    Workload                 : <none>
    SRP Name                 : DEFAULT_SRP
    Host I/O Limit           : None
    Host I/O Limit MB/Sec
    Host I/O Limit IO/Sec
    Dynamic Distribution
    Number of Storage Groups : 1

    Storage Group Names      : BETA_CLUSTER (IsParent)
    Number of Gatekeepers    : 0

    Devices (2):
    {
    ----------------------------------------------------------------
    Sym                                                    Cap
    Dev    Pdev Name            Config      Attr  Sts      (MB)
    ----------------------------------------------------------------
    00133  N/A                  TDEV              RW     204801
    00134  N/A                  TDEV              RW     204801
    }

Host data has been written to the two devices, which has allocated tracks in the thin pool for external disk group 514.

# symcfg show -pool DG514_FBA -thin -detail -all -sid 41

Symmetrix ID: 000197200041

Symmetrix ID                     : 000197200041

Pool Name                        : DG514_FBA
Pool Type                        : Thin
Disk Location                    : External
Technology
Dev Emulation                    : FBA
Dev Configuration                : Unprotected
Pool State                       : Enabled
Compression State
# of Devices in Pool             : 5
# of Enabled Devices in Pool     : 5
# of Usable Tracks in Pool       : 4095000
# of Allocated Tracks in Pool    : 2372630
# of Thin Device Tracks          : 2372630
# of DSE Tracks                  : 0
# of Local Replication Tracks    : 0
# of Tracks saved by compression : 0
# of Shared Tracks in Pool
Pool Utilization (%)             : 57
Pool Compression Ratio (%)       : 0
Max. Subscription Percent
Rebalance Variance
Max devs per rebalance scan
Pool Reserved Capacity

Enabled Devices(5):
{
----------------------------------------------------------
Sym       Usable      Alloc       Free  Full  FLG  Device
Dev       Tracks     Tracks     Tracks   (%)    S  State
----------------------------------------------------------
FF8D7     819000     299568     519432    36    .  Enabled
FF8D8     819000     605021     213979    73    .  Enabled
FF8D9     819000     299672     519328    36    .  Enabled
FF8DA     819000     584589     234411    71    .  Enabled
FF8DB     819000     583780     235220    71    .  Enabled
      ----------  ---------  ---------  ----
Tracks   4095000    2372630    1722370    57
}

No Thin Devices Bound to Device Pool DG514_FBA

Other Thin Devices with Allocations in this Pool (2):

{
-----------------------------------------------------------
                Pool        Total            Compressed
       Pool     Bound       Allocated        Size/Ratio
Sym    Name     Tracks      Tracks     (%)   Tracks     (%)
-----------------------------------------------------------
00133  -        1638405     1167206     72   1167206      1
00134  -        1638405     1205424     74   1205424      1
             ----------  ----------  -----  ---------  ----
Tracks          3276810     2372630     72   2372630      1
}

Legend:
  Enabled devices FLG:
    (S)hared Tracks : X = Shared Tracks, . = No Shared Tracks
  Bound Devices FLG:
    S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, D = Deallocating,
               R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll, . = Unbound

Because the Optimized SLO was used, data on devices 00133 and 00134 can exist on any of the storage in the default SRP. If the goal is to have the extents for devices 00133 and 00134 on XtremIO storage only, the thin volumes and all their extents must be moved to the XtremIO_SRP that contains only edisks created from XtremIO volumes. Before moving the devices, the lcseb149_xio_sg storage group is empty.

# symsg show lcseb149_xio_sg -sid 41

Name: lcseb149_xio_sg

    Symmetrix ID             : 000197200041
    Last updated at          : Tue Jul 28 19:40:22 2015
    Masking Views            : Yes
    FAST Managed             : Yes
    SLO Name                 : <none>
    Workload                 : <none>
    SRP Name                 : XtremIO_SRP
    Host I/O Limit           : None
    Host I/O Limit MB/Sec
    Host I/O Limit IO/Sec
    Dynamic Distribution
    Number of Storage Groups : 1
    Storage Group Names      : BETA_CLUSTER (IsParent)
    Number of Gatekeepers    : 0

The pool for the external disk group 516 has no thin devices and, therefore, no track allocations.

# symcfg show -pool DG516_FBA -thin -detail -all -sid 41

Symmetrix ID: 000197200041

Symmetrix ID                     : 000197200041

Pool Name                        : DG516_FBA
Pool Type                        : Thin
Disk Location                    : External
Technology
Dev Emulation                    : FBA
Dev Configuration                : Unprotected
Pool State                       : Enabled
Compression State
# of Devices in Pool             : 5

# of Enabled Devices in Pool     : 5
# of Usable Tracks in Pool       : 4095000
# of Allocated Tracks in Pool    : 0
# of Thin Device Tracks          : 0
# of DSE Tracks                  : 0
# of Local Replication Tracks    : 0
# of Tracks saved by compression : 0
# of Shared Tracks in Pool
Pool Utilization (%)             : 0
Pool Compression Ratio (%)       : 0
Max. Subscription Percent
Rebalance Variance
Max devs per rebalance scan
Pool Reserved Capacity

Enabled Devices(5):
{
----------------------------------------------------------
Sym       Usable      Alloc       Free  Full  FLG  Device
Dev       Tracks     Tracks     Tracks   (%)    S  State
----------------------------------------------------------
FF8D2     819000          0     819000     0    .  Enabled
FF8D3     819000          0     819000     0    .  Enabled
FF8D4     819000          0     819000     0    .  Enabled
FF8D5     819000          0     819000     0    .  Enabled
FF8D6     819000          0     819000     0    .  Enabled
      ----------  ---------  ---------  ----
Tracks   4095000          0    4095000     0
}

No Thin Devices Bound to Device Pool DG516_FBA

No Other-Pool Bound Thin Devices have allocations in Device Pool DG516_FBA

Legend:
  Enabled devices FLG:
    (S)hared Tracks : X = Shared Tracks, . = No Shared Tracks
  Bound Devices FLG:
    S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, D = Deallocating,
               R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll, . = Unbound

The symsg move or moveall command is used to move the devices from the storage group using the DEFAULT_SRP to the storage group using the XtremIO_SRP.

# symsg -sg lcseb149_sg -sid 41 moveall lcseb149_xio_sg

Volumes 00133 and 00134 are no longer in the lcseb149_sg storage group. They have been moved to lcseb149_xio_sg.

# symsg show lcseb149_sg -sid 41

Name: lcseb149_sg

    Symmetrix ID             : 000197200041
    Last updated at          : Tue Jul 28 20:22:14 2015
    Masking Views            : Yes
    FAST Managed             : Yes
    SLO Name                 : <none>
    Workload                 : <none>
    SRP Name                 : DEFAULT_SRP
    Host I/O Limit           : None

    Host I/O Limit MB/Sec
    Host I/O Limit IO/Sec
    Dynamic Distribution
    Number of Storage Groups : 1
    Storage Group Names      : BETA_CLUSTER (IsParent)
    Number of Gatekeepers    : 0

# symsg show lcseb149_xio_sg -sid 41

Name: lcseb149_xio_sg

    Symmetrix ID             : 000197200041
    Last updated at          : Tue Jul 28 20:22:14 2015
    Masking Views            : Yes
    FAST Managed             : Yes
    SLO Name                 : <none>
    Workload                 : <none>
    SRP Name                 : XtremIO_SRP
    Host I/O Limit           : None
    Host I/O Limit MB/Sec
    Host I/O Limit IO/Sec
    Dynamic Distribution
    Number of Storage Groups : 1
    Storage Group Names      : BETA_CLUSTER (IsParent)
    Number of Gatekeepers    : 0

    Devices (2):
    {
    ----------------------------------------------------------------
    Sym                                                    Cap
    Dev    Pdev Name            Config      Attr  Sts      (MB)
    ----------------------------------------------------------------
    00133  N/A                  TDEV              RW     204801
    00134  N/A                  TDEV              RW     204801
    }

Once the volumes are reassigned to the XtremIO storage group, FAST begins to move the data to the XtremIO_SRP. The data movement can be observed by monitoring the tracks in the thin pools. The number of tracks that remain in the DEFAULT_SRP (DG514_FBA) shows what is left to be moved, while the number in the XtremIO_SRP (DG516_FBA) shows what has already moved.

# symcfg show -pool DG514_FBA -thin -detail -all -sid 41

Symmetrix ID: 000197200041

Symmetrix ID                     : 000197200041

Pool Name                        : DG514_FBA
Pool Type                        : Thin
Disk Location                    : External
Technology
Dev Emulation                    : FBA
Dev Configuration                : Unprotected
Pool State                       : Enabled
Compression State
# of Devices in Pool             : 5
# of Enabled Devices in Pool     : 5
# of Usable Tracks in Pool       : 4095000
# of Allocated Tracks in Pool    : 2337576
# of Thin Device Tracks          : 2336339

# of DSE Tracks                  : 0
# of Local Replication Tracks    : 0
# of Tracks saved by compression : 0
# of Shared Tracks in Pool
Pool Utilization (%)             : 57
Pool Compression Ratio (%)       : 0
Max. Subscription Percent
Rebalance Variance
Max devs per rebalance scan
Pool Reserved Capacity

Enabled Devices(5):
{
----------------------------------------------------------
Sym       Usable      Alloc       Free  Full  FLG  Device
Dev       Tracks     Tracks     Tracks   (%)    S  State
----------------------------------------------------------
FF8D7     819000     295208     523792    36    .  Enabled
FF8D8     819000     595970     223030    72    .  Enabled
FF8D9     819000     294744     524256    35    .  Enabled
FF8DA     819000     576594     242406    70    .  Enabled
FF8DB     819000     575060     243940    70    .  Enabled
      ----------  ---------  ---------  ----
Tracks   4095000    2337576    1757424    57
}

No Thin Devices Bound to Device Pool DG514_FBA

Other Thin Devices with Allocations in this Pool (2):
{
-----------------------------------------------------------
                Pool        Total            Compressed
       Pool     Bound       Allocated        Size/Ratio
Sym    Name     Tracks      Tracks     (%)   Tracks     (%)
-----------------------------------------------------------
00133  -        1638405     1149754     71   1149754      1
00134  -        1638405     1186585     73   1186585      1
             ----------  ----------  -----  ---------  ----
Tracks          3276810     2336339     71   2336339      1
}

Legend:
  Enabled devices FLG:
    (S)hared Tracks : X = Shared Tracks, . = No Shared Tracks
  Bound Devices FLG:
    S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, D = Deallocating,
               R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll, . = Unbound

# symcfg show -pool DG516_FBA -thin -detail -all -sid 41

Symmetrix ID: 000197200041

Symmetrix ID                     : 000197200041

Pool Name                        : DG516_FBA
Pool Type                        : Thin
Disk Location                    : External
Technology
Dev Emulation                    : FBA
Dev Configuration                : Unprotected
Pool State                       : Enabled
Compression State
# of Devices in Pool             : 5
# of Enabled Devices in Pool     : 5

# of Usable Tracks in Pool : 4095000 # of Allocated Tracks in Pool : 123102 # of Thin Device Tracks : 119262 # of DSE Tracks : 0 # of Local Replication Tracks : 0 # of Tracks saved by compression : 0 # of Shared Tracks in Pool Pool Utilization (%) : 3 Pool Compression Ratio (%) : 0 Max. Subscription Percent Rebalance Variance Max devs per rebalance scan Pool Reserved Capacity Enabled Devices(5): { ---------------------------------------------------------- Sym Usable Alloc Free Full FLG Device Dev Tracks Tracks Tracks (%) S State ---------------------------------------------------------- FF8D2 819000 14112 804888 1. Enabled FF8D3 819000 32928 786072 4. Enabled FF8D4 819000 32214 786786 3. Enabled FF8D5 819000 29736 789264 3. Enabled FF8D6 819000 14112 804888 1. Enabled ---------- ---------- ---------- ---- Tracks 4095000 123102 3971898 3 } No Thin Devices Bound to Device Pool DG516_FBA Other Thin Devices with Allocations in this Pool (2): { ----------------------------------------------------------- Pool Compressed Bound Total Allocated Size/Ratio Sym Pool Name Tracks Tracks (%) Tracks (%) ----------------------------------------------------------- 00133-1638405 60692 4 60692 1 00134-1638405 58570 4 58570 1 ---------- ---------- --- ---------- --- Tracks 3276810 119262 4 119262 1 } Legend: Enabled devices FLG: (S)hared Tracks : X = Shared Tracks,. = No Shared Tracks Bound Devices FLG: S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, D = Deallocating, R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll,. = Unbound FAST continues to move the data in the background. The devices are still available for host reads and writes as the data moves. New host writes all go to the XtremIO_SRP. # symcfg show -pool DG514_FBA -thin -detail -thin -sid 41 Symmetrix ID: 000197200041 Symmetrix ID : 000197200041 Pool Name : DG514_FBA Pool Type : Thin 55

Disk Location : External Technology Dev Emulation : FBA Dev Configuration : Unprotected Pool State : Enabled Compression State # of Devices in Pool : 5 # of Enabled Devices in Pool : 5 # of Usable Tracks in Pool : 4095000 # of Allocated Tracks in Pool : 2021107 # of Thin Device Tracks : 2020688 # of DSE Tracks : 0 # of Local Replication Tracks : 0 # of Tracks saved by compression : 0 # of Shared Tracks in Pool Pool Utilization (%) : 49 Pool Compression Ratio (%) : 0 Max. Subscription Percent Rebalance Variance Max devs per rebalance scan Pool Reserved Capacity Enabled Devices(5): { ---------------------------------------------------------- Sym Usable Alloc Free Full FLG Device Dev Tracks Tracks Tracks (%) S State ---------------------------------------------------------- FF8D7 819000 256527 562473 31. Enabled FF8D8 819000 514177 304823 62. Enabled FF8D9 819000 255396 563604 31. Enabled FF8DA 819000 498305 320695 60. Enabled FF8DB 819000 496702 322298 60. Enabled ---------- ---------- ---------- ---- Tracks 4095000 2021107 2073893 49 } No Thin Devices Bound to Device Pool DG514_FBA Other Thin Devices with Allocations in this Pool (2): { ----------------------------------------------------------- Pool Compressed Bound Total Allocated Size/Ratio Sym Pool Name Tracks Tracks (%) Tracks (%) ----------------------------------------------------------- 00133-1638405 990726 61 990726 1 00134-1638405 1029962 63 1029962 1 ---------- ---------- --- ---------- --- Tracks 3276810 2020688 62 2020688 1 } Legend: Enabled devices FLG: (S)hared Tracks : X = Shared Tracks,. = No Shared Tracks Bound Devices FLG: S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, D = Deallocating, R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll,. = Unbound # symcfg show -pool DG516_FBA -thin -detail -thin -sid 41 Symmetrix ID: 000197200041 56

Symmetrix ID : 000197200041 Pool Name : DG516_FBA Pool Type : Thin Disk Location : External Technology Dev Emulation : FBA Dev Configuration : Unprotected Pool State : Enabled Compression State # of Devices in Pool : 5 # of Enabled Devices in Pool : 5 # of Usable Tracks in Pool : 4095000 # of Allocated Tracks in Pool : 731706 # of Thin Device Tracks : 728059 # of DSE Tracks : 0 # of Local Replication Tracks : 0 # of Tracks saved by compression : 0 # of Shared Tracks in Pool Pool Utilization (%) : 17 Pool Compression Ratio (%) : 0 Max. Subscription Percent Rebalance Variance Max devs per rebalance scan Pool Reserved Capacity Enabled Devices(5): { ---------------------------------------------------------- Sym Usable Alloc Free Full FLG Device Dev Tracks Tracks Tracks (%) S State ---------------------------------------------------------- FF8D2 819000 89208 729792 10. Enabled FF8D3 819000 185640 633360 22. Enabled FF8D4 819000 186774 632226 22. Enabled FF8D5 819000 180876 638124 22. Enabled FF8D6 819000 89208 729792 10. Enabled ---------- ---------- ---------- ---- Tracks 4095000 731706 3363294 17 } No Thin Devices Bound to Device Pool DG516_FBA Other Thin Devices with Allocations in this Pool (2): { ----------------------------------------------------------- Pool Compressed Bound Total Allocated Size/Ratio Sym Pool Name Tracks Tracks (%) Tracks (%) ----------------------------------------------------------- 00133-1638405 375984 23 375984 1 00134-1638405 352075 22 352075 1 ---------- ---------- --- ---------- --- Tracks 3276810 728059 22 728059 1 } Legend: Enabled devices FLG: (S)hared Tracks : X = Shared Tracks,. = No Shared Tracks Bound Devices FLG: S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, D = Deallocating, R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll,. = Unbound 57
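The pool listings above lend themselves to simple scripting. The sketch below is a hypothetical helper (not part of Solutions Enabler): it pulls the "# of Thin Device Tracks" field out of saved `symcfg show -pool ... -thin -detail` output for the source and target pools, reports how far the migration has progressed, and converts track counts to megabytes using the 128 KB VMAX3 track size noted in Appendix A.

```python
import re

TRACK_KB = 128  # VMAX3 track size in KB (see Appendix A)

def thin_device_tracks(symcfg_output: str) -> int:
    """Extract the '# of Thin Device Tracks' value from symcfg pool output."""
    m = re.search(r"# of Thin Device Tracks\s*:\s*(\d+)", symcfg_output)
    if m is None:
        raise ValueError("'# of Thin Device Tracks' field not found")
    return int(m.group(1))

def tracks_to_mb(tracks: int) -> float:
    """Convert a track count to MB (128 KB per track)."""
    return tracks * TRACK_KB / 1024

def migration_progress(source_output: str, target_output: str) -> float:
    """Percent of allocated thin device tracks already moved to the target pool."""
    remaining = thin_device_tracks(source_output)  # still in DG514_FBA
    moved = thin_device_tracks(target_output)      # already in DG516_FBA
    total = remaining + moved
    return 100.0 * moved / total if total else 100.0

# Values taken from the two pool listings above, partway through the move
src = "# of Thin Device Tracks : 2020688"
tgt = "# of Thin Device Tracks : 728059"
print(f"{migration_progress(src, tgt):.1f}% of allocated tracks moved")
```

Run against the two symcfg outputs shown above, this reports roughly a quarter of the allocated tracks moved at this point in the example; `tracks_to_mb(1638405)` likewise recovers the 204801 MB device size shown in the symsg listing.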

When all tracks have moved, the migration is complete and all data on 00133 and 00134 uses external storage from the XtremIO array. The thin devices no longer appear in the DG514_FBA thin pool and have 100% of their tracks in DG516_FBA. # symcfg show -pool DG514_FBA -thin -detail -all -sid 41 Symmetrix ID: 000197200041 Symmetrix ID : 000197200041 Pool Name : DG514_FBA Pool Type : Thin Disk Location : External Technology Dev Emulation : FBA Dev Configuration : Unprotected Pool State : Enabled Compression State # of Devices in Pool : 5 # of Enabled Devices in Pool : 5 # of Usable Tracks in Pool : 4095000 # of Allocated Tracks in Pool : 0 # of Thin Device Tracks : 0 # of DSE Tracks : 0 # of Local Replication Tracks : 0 # of Tracks saved by compression : 0 # of Shared Tracks in Pool Pool Utilization (%) : 0 Pool Compression Ratio (%) : 0 Max. Subscription Percent Rebalance Variance Max devs per rebalance scan Pool Reserved Capacity Enabled Devices(5): { ---------------------------------------------------------- Sym Usable Alloc Free Full FLG Device Dev Tracks Tracks Tracks (%) S State ---------------------------------------------------------- FF8D7 819000 0 819000 0. Enabled FF8D8 819000 0 819000 0. Enabled FF8D9 819000 0 819000 0. Enabled FF8DA 819000 0 819000 0. Enabled FF8DB 819000 0 819000 0. Enabled ---------- ---------- ---------- ---- Tracks 4095000 0 4095000 0 } 58 No Thin Devices Bound to Device Pool DG514_FBA No Other-Pool Bound Thin Devices have allocations in Device Pool DG514_FBA Legend: Enabled devices FLG: (S)hared Tracks : X = Shared Tracks,. = No Shared Tracks Bound Devices FLG: S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, D = Deallocating, R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll,. = Unbound # symcfg show -pool DG516_FBA -thin -detail -all -sid 41 Symmetrix ID: 000197200041

Symmetrix ID : 000197200041 Pool Name : DG516_FBA Pool Type : Thin Disk Location : External Technology Dev Emulation : FBA Dev Configuration : Unprotected Pool State : Enabled Compression State # of Devices in Pool : 5 # of Enabled Devices in Pool : 5 # of Usable Tracks in Pool : 4095000 # of Allocated Tracks in Pool : 3275932 # of Thin Device Tracks : 3275932 # of DSE Tracks : 0 # of Local Replication Tracks : 0 # of Tracks saved by compression : 0 # of Shared Tracks in Pool Pool Utilization (%) : 79 Pool Compression Ratio (%) : 0 Max. Subscription Percent Rebalance Variance Max devs per rebalance scan Pool Reserved Capacity Enabled Devices(5): { ---------------------------------------------------------- Sym Usable Alloc Free Full FLG Device Dev Tracks Tracks Tracks (%) S State ---------------------------------------------------------- FF8D2 819000 424892 394108 51. Enabled FF8D3 819000 808030 10970 98. Enabled FF8D4 819000 798152 20848 97. Enabled FF8D5 819000 818446 554 99. Enabled FF8D6 819000 426412 392588 52. Enabled ---------- ---------- ---------- ---- Tracks 4095000 3275932 819068 79 } No Thin Devices Bound to Device Pool DG516_FBA Other Thin Devices with Allocations in this Pool (2): { ----------------------------------------------------------- Pool Compressed Bound Total Allocated Size/Ratio Sym Pool Name Tracks Tracks (%) Tracks (%) ----------------------------------------------------------- 00133-1638405 1637966 100 1637966 0 00134-1638405 1637966 100 1637966 0 ---------- ---------- --- ---------- --- Tracks 3276810 3275932 100 3275932 0 } Legend: Enabled devices FLG: (S)hared Tracks : X = Shared Tracks,. = No Shared Tracks Bound Devices FLG: S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, 59

D = Deallocating, R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll,. = Unbound

Local Replication and FAST.X

Performing local replication against externally provisioned storage is no different than performing replication against VMAX3 volumes using internal storage. SnapVX can create point in time snapshots that do not require target volumes and only consume additional space when the source volume is updated. These snapshots share backend track allocation with the source volumes, meaning that a regular, target-less snapshot only consumes space from the SRP that the source volumes belong to. In this example, taking a snap of the lcseb149_xio_sg storage group creates a point in time copy that uses space for changed tracks from the XtremIO_SRP only. Once a point in time copy is taken, it can be linked to and copied to other devices or the SnapVX session can be terminated if the point in time snap is no longer needed.

# symsnapvx -sid 41 -sg lcseb149_xio_sg -name XIO_Only_Snap establish -nop Establish operation execution is in progress for the storage group lcseb149_xio_sg. Please wait... Polling for Establish...Started. Polling for Establish...Done. Polling for Activate...Started. Polling for Activate...Done. Establish operation successfully executed for the storage group lcseb149_xio_sg

# symsnapvx list -sg lcseb149_xio_sg -sid 41 Storage Group (SG) Name : lcseb149_xio_sg SG's Symmetrix ID : 000197200041 (Microcode Version: 5977) ------------------------------------------------------------------------- Sym Num Flgs Dev Snapshot Name Gens FLRG Last Snapshot Timestamp ----- -------------------------------- ---- ---- ------------------------ 00133 XIO_Only_Snap 1... Wed Jul 29 14:48:28 2015 00134 XIO_Only_Snap 1... Wed Jul 29 14:48:28 2015 Flgs: (F)ailed : X = Failed,. = No Failure (L)ink : X = Link Exists,. = No Link Exists (R)estore : X = Restore Active,. = No Restore Active (G)CM : X = GCM,.
= Non-GCM # symsnapvx -sid 41 -sg lcseb149_xio_sg -snapshot_name XIO_Only_Snap terminate -nop Terminate operation execution is in progress for the storage group lcseb149_xio_sg. Please wait... Polling for Terminate...Started. Polling for Terminate...Done. Terminate operation successfully executed for the storage group lcseb149_xio_sg

Clones can also be created by linking and copying source devices to target devices. The target devices can be in the same SRP or a different SRP from the source devices.
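The clone workflow in this example (establish a snapshot, then link it with -copy) is easy to wrap in a script. The sketch below is a hypothetical convenience wrapper, not part of Solutions Enabler: it builds the same symsnapvx command lines used in this section and accepts an injectable runner so the sequence can be exercised without an array.

```python
import subprocess

def snapvx_clone(sid, src_sg, tgt_sg, snap_name, run=None):
    """Establish a snapshot of src_sg and link-copy it to tgt_sg.

    Returns the command lines issued; unlink/terminate are left to the
    caller once the copy completes. `run` defaults to executing each
    command with subprocess, but any callable can be injected for testing.
    """
    if run is None:
        run = lambda cmd: subprocess.run(cmd, check=True)
    steps = [
        ["symsnapvx", "-sid", sid, "-sg", src_sg,
         "-name", snap_name, "establish", "-nop"],
        ["symsnapvx", "-sid", sid, "-sg", src_sg, "link",
         "-snapshot_name", snap_name, "-copy", "-lnsg", tgt_sg, "-nop"],
    ]
    for cmd in steps:
        run(cmd)
    return steps

# Record the command lines instead of executing them
issued = []
snapvx_clone("41", "lcseb149_xio_sg", "lcseb149_sg", "Copy_to_DEFAULT_SRP",
             run=issued.append)
print(len(issued), "commands issued")
```

Passing run=issued.append, as shown, only records the command lines; with the default runner the commands would execute against the array, and the unlink and terminate steps shown later in this section would follow once the copy completes.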

In this example two new devices (0012E and 0012F) are created and placed in the lcseb149_sg storage group, which uses an Optimized SLO in the DEFAULT_SRP. This means that their allocations can exist on any disk group in that SRP. Performing a link and copy between the devices in lcseb149_xio_sg (00133 and 00134) and lcseb149_sg (0012E and 0012F) copies the data from the XtremIO_SRP to the DEFAULT_SRP.

# symconfigure -sid 41 -cmd "create dev count=2 size=200 GB, emulation=fba, config=tdev, sg=lcseb149_sg;" commit -nop A Configuration Change operation is in progress. Please wait... Establishing a configuration change session...established. Processing symmetrix 000197200041 Performing Access checks...allowed. Checking Device Reservations...Allowed. Initiating COMMIT of configuration changes...started. Committing configuration changes...queued. COMMIT requesting required resources...obtained. Step 005 of 022 steps...executing. Step 007 of 022 steps...executing. Step 007 of 022 steps...executing. Step 011 of 022 steps...executing. Step 016 of 022 steps...executing. Step 016 of 022 steps...executing. Step 017 of 022 steps...executing. Step 019 of 022 steps...executing. Step 022 of 022 steps...executing. Local: COMMIT...Done. Adding devices to Storage Group...Done. New symdevs: 0012E:0012F [TDEVs] Terminating the configuration change session...done. The configuration change session has successfully completed.
# symaccess show lcseb149_sg -type storage -sid 41 Symmetrix ID : 000197200041 Storage Group Name : lcseb149_sg Last update time : 02:40:59 PM on Wed Jul 29,2015 Group last update time : 02:40:59 PM on Wed Jul 29,2015 Number of Storage Groups : 1 Storage Group Names : BETA_CLUSTER (IsParent) Devices : 0012E:0012F Masking View Names { BETA_CLUSTER * } * Denotes Masking Views through a cascaded group # symaccess show lcseb149_xio_sg -type storage -sid 41 Symmetrix ID : 000197200041 Storage Group Name : lcseb149_xio_sg Last update time : 08:22:14 PM on Tue Jul 28,2015 Group last update time : 08:22:14 PM on Tue Jul 28,2015 61

Number of Storage Groups : 1 Storage Group Names : BETA_CLUSTER (IsParent) Devices : 00133:00134 Masking View Names { BETA_CLUSTER * } * Denotes Masking Views through a cascaded group # symsnapvx -sid 41 -sg lcseb149_xio_sg -name Copy_to_DEFAULT_SRP establish -nop Establish operation execution is in progress for the storage group lcseb149_xio_sg. Please wait... Polling for Establish...Started. Polling for Establish...Done. Polling for Activate...Started. Polling for Activate...Done. Establish operation successfully executed for the storage group lcseb149_xio_sg # symsnapvx -sid 41 -sg lcseb149_xio_sg link -snapshot_name Copy_to_DEFAULT_SRP -copy -lnsg lcseb149_sg -nop Link operation execution is in progress for the storage group lcseb149_xio_sg. Please wait... Polling for Link...Started. Polling for Link...Done. Link operation successfully executed for the storage group lcseb149_xio_sg After performing the link with the -copy option, the data begins copying from 00133 and 00134 to 0012E and 0012F. # symsnapvx list -sid 41 -sg lcseb149_xio_sg -linked -detail Storage Group (SG) Name : lcseb149_xio_sg SG's Symmetrix ID : 000197200041 (Microcode Version: 5977) ----------------------------------------------------------------------------------------------- Sym Link Flgs Remaining Done Dev Snapshot Name Gen Dev FCMD Snapshot Timestamp (Tracks) (%) ----- -------------------------------- ---- ----- ---- ------------------------ ---------- ---- 00133 Copy_to_DEFAULT_SRP 0 0012E.I.. Wed Jul 29 18:12:33 2015 611852 62 00134 Copy_to_DEFAULT_SRP 0 0012F.I.. Wed Jul 29 18:12:33 2015 602664 63 ---------- 1214516 Flgs: (F)ailed : F = Force Failed, X = Failed,. = No Failure (C)opy : I = CopyInProg, C = Copied, D = Copied/Destaged,. = NoCopy Link (M)odified : X = Modified Target Data,. = Not Modified (D)efined : X = All Tracks Defined,. = Define in progress The copy is now complete and the data has been copied to the devices in the DEFAULT_SRP. 62
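Before unlinking, the copy must reach 100 percent on every device. The sketch below is a hypothetical parser for the `symsnapvx list -linked -detail` output shown above; it assumes, as in that listing, that each device line begins with a five-character Sym device number and ends with the Done (%) column.

```python
import re

def copy_complete(listing: str) -> bool:
    """True when every linked device line reports 100 in the Done (%) column.

    Device lines are recognized by their leading 5-character hex Sym device
    number; the Done percentage is the last number on the line.
    """
    done = []
    for line in listing.splitlines():
        if re.match(r"\s*[0-9A-F]{5}\s", line):
            nums = re.findall(r"\d+", line)
            if nums:
                done.append(int(nums[-1]) == 100)
    return bool(done) and all(done)

# Device lines taken from the two listings in this section
in_progress = """00133 Copy_to_DEFAULT_SRP 0 0012E .I.. Wed Jul 29 18:12:33 2015 611852 62
00134 Copy_to_DEFAULT_SRP 0 0012F .I.. Wed Jul 29 18:12:33 2015 602664 63"""
finished = """00133 Copy_to_DEFAULT_SRP 0 0012E .D.X Wed Jul 29 18:12:34 2015 0 100
00134 Copy_to_DEFAULT_SRP 0 0012F .D.X Wed Jul 29 18:12:34 2015 0 100"""
print(copy_complete(in_progress), copy_complete(finished))
```

A polling script would rerun the symsnapvx list command and call copy_complete on its output until it returns True, then issue the unlink and terminate commands shown below.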

# symsnapvx list -sid 41 -sg lcseb149_xio_sg -linked -detail Storage Group (SG) Name : lcseb149_xio_sg SG's Symmetrix ID : 000197200041 (Microcode Version: 5977) ----------------------------------------------------------------------------------------------- Sym Link Flgs Remaining Done Dev Snapshot Name Gen Dev FCMD Snapshot Timestamp (Tracks) (%) ----- -------------------------------- ---- ----- ---- ------------------------ ---------- ---- 00133 Copy_to_DEFAULT_SRP 0 0012E.D.X Wed Jul 29 18:12:34 2015 0 100 00134 Copy_to_DEFAULT_SRP 0 0012F.D.X Wed Jul 29 18:12:34 2015 0 100 ---------- 0 Flgs: (F)ailed : F = Force Failed, X = Failed,. = No Failure (C)opy : I = CopyInProg, C = Copied, D = Copied/Destaged,. = NoCopy Link (M)odified : X = Modified Target Data,. = Not Modified (D)efined : X = All Tracks Defined,. = Define in progress

After the copy is complete, the devices are unlinked and the session terminated.

# symsnapvx -sid 41 -sg lcseb149_xio_sg -snapshot_name Copy_to_DEFAULT_SRP unlink -lnsg lcseb149_sg -nop Unlink operation execution is in progress for the storage group lcseb149_xio_sg. Please wait... Polling for Unlink...Started. Polling for Unlink...Done. Unlink operation successfully executed for the storage group lcseb149_xio_sg

# symsnapvx -sid 41 -sg lcseb149_xio_sg -snapshot_name Copy_to_DEFAULT_SRP terminate -nop Terminate operation execution is in progress for the storage group lcseb149_xio_sg. Please wait... Polling for Terminate...Started. Polling for Terminate...Done. Terminate operation successfully executed for the storage group lcseb149_xio_sg

For more information on local replication operations, see the VMAX3 Local Replication Technical Notes available on emc.com: http://www.emc.com/collateral/technical-documentation/h13697-emc-vmax3-localreplication.pdf

Removing FAST.X Components from an Empty SRP

This section describes removing the FAST.X components from the XtremIO_SRP following the data migration.
Before removing the FAST.X entities, the thin devices are removed from the storage group and the allocated tracks belonging to them are freed.

# symsg -sg lcseb149_xio_sg -sid 41 rmall
# symdev -sid 41 -devs 00133:00134 free -nop

'Free Start' operation succeeded for devices in set of ranges. The tracks being freed are watched by viewing the thin pool details. The number of pool allocated tracks declines until there are no more remaining. # symcfg show -pool DG516_FBA -thin -detail -all -sid 41 Symmetrix ID: 000197200041 Symmetrix ID : 000197200041 Pool Name : DG516_FBA Pool Type : Thin Disk Location : External Technology Dev Emulation : FBA Dev Configuration : Unprotected Pool State : Enabled Compression State # of Devices in Pool : 5 # of Enabled Devices in Pool : 5 # of Usable Tracks in Pool : 4095000 # of Allocated Tracks in Pool : 2425930 # of Thin Device Tracks : 2422246 # of DSE Tracks : 0 # of Local Replication Tracks : 0 # of Tracks saved by compression : 0 # of Shared Tracks in Pool Pool Utilization (%) : 59 Pool Compression Ratio (%) : 0 Max. Subscription Percent Rebalance Variance Max devs per rebalance scan Pool Reserved Capacity Enabled Devices(5): { ---------------------------------------------------------- Sym Usable Alloc Free Full FLG Device Dev Tracks Tracks Tracks (%) S State ---------------------------------------------------------- FF8D2 819000 321398 497602 39. Enabled FF8D3 819000 590648 228352 72. Enabled FF8D4 819000 583291 235709 71. Enabled FF8D5 819000 607543 211457 74. Enabled FF8D6 819000 323050 495950 39. Enabled ---------- ---------- ---------- ---- Tracks 4095000 2425930 1669070 59 } 64 No Thin Devices Bound to Device Pool DG516_FBA Other Thin Devices with Allocations in this Pool (2): { ----------------------------------------------------------- Pool Compressed Bound Total Allocated Size/Ratio Sym Pool Name Tracks Tracks (%) Tracks (%) ----------------------------------------------------------- 00133-1638405 1204262 74 1204262 0 00134-1638405 1217984 75 1217984 0 ---------- ---------- --- ---------- --- Tracks 3276810 2422246 74 2422246 0

} Legend: Enabled devices FLG: (S)hared Tracks : X = Shared Tracks,. = No Shared Tracks Bound Devices FLG: S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, D = Deallocating, R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll,. = Unbound When the free operation on the TDEVs completes, the thin pool contains no allocated tracks and the thin devices have been removed. # symcfg show -pool DG516_FBA -thin -detail -all -sid 41 Symmetrix ID: 000197200041 Symmetrix ID : 000197200041 Pool Name : DG516_FBA Pool Type : Thin Disk Location : External Technology Dev Emulation : FBA Dev Configuration : Unprotected Pool State : Enabled Compression State # of Devices in Pool : 5 # of Enabled Devices in Pool : 5 # of Usable Tracks in Pool : 4095000 # of Allocated Tracks in Pool : 0 # of Thin Device Tracks : 0 # of DSE Tracks : 0 # of Local Replication Tracks : 0 # of Tracks saved by compression : 0 # of Shared Tracks in Pool Pool Utilization (%) : 0 Pool Compression Ratio (%) : 0 Max. Subscription Percent Rebalance Variance Max devs per rebalance scan Pool Reserved Capacity Enabled Devices(5): { ---------------------------------------------------------- Sym Usable Alloc Free Full FLG Device Dev Tracks Tracks Tracks (%) S State ---------------------------------------------------------- FF8D2 819000 0 819000 0. Enabled FF8D3 819000 0 819000 0. Enabled FF8D4 819000 0 819000 0. Enabled FF8D5 819000 0 819000 0. Enabled FF8D6 819000 0 819000 0. Enabled ---------- ---------- ---------- ---- Tracks 4095000 0 4095000 0 } No Thin Devices Bound to Device Pool DG516_FBA No Other-Pool Bound Thin Devices have allocations in Device Pool DG516_FBA Legend: 65

Enabled devices FLG: (S)hared Tracks : X = Shared Tracks,. = No Shared Tracks Bound Devices FLG: S(T)atus : B = Bound, I = Binding, U = Unbinding, A = Allocating, D = Deallocating, R = Reclaiming, C = Compressing, N = Uncompressing, F = FreeingAll,. = Unbound The edisks can now be removed from disk group 516. Use symconfigure to drain the devices by adding the drain commands to a device file or drain them individually from the command line. Note: All edisks may need to be drained depending on how the thin pool was used. If the drain operation against all of the devices fails, drain them individually. Devices that are already drained return an error that they are already in the requested state. # symdisk list -spindle -external -sid 41 Symmetrix ID : 000197200041 Disks Selected : 14 Disk Capacity(MB) Spindle Grp Dir Vendor Type Hypr Total Free -------- ---- --- ---------- ---------- ---- ---------- ---------- 8000 515 03H EMC N/A 1 4194304 0 8001 515 04H EMC N/A 1 4194304 0 8002 515 03H EMC N/A 1 4194304 0 8003 515 04H EMC N/A 1 4194304 0 8004 514 01H EMC N/A 1 102400 25 8005 514 02H EMC N/A 1 102400 25 8006 514 01H EMC N/A 1 102400 25 8007 514 02H EMC N/A 1 102400 25 8008 514 01H EMC N/A 1 102400 25 8009 516 02H EMC N/A 1 102400 25 800A 516 01H EMC N/A 1 102400 25 800B 516 02H EMC N/A 1 102400 25 800C 516 01H EMC N/A 1 102400 25 800D 516 02H EMC N/A 1 102400 25 ---------- ---------- Totals 17801216 251 # cat /cmd_files/drain start drain on external_disk spid=8009; start drain on external_disk spid=800a; start drain on external_disk spid=800b; start drain on external_disk spid=800c; start drain on external_disk spid=800d; # symconfigure -sid 41 -f /cmd_files/drain commit -nop A Configuration Change operation is in progress. Please wait... Establishing a configuration change session...established. Processing symmetrix 000197200041 Performing Access checks...allowed. Checking Device Reservations...Allowed. Committing configuration changes...started. 
Committing configuration changes...committed. Terminating the configuration change session...done. The configuration change session has successfully completed. 66
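The drain command file shown above can be generated rather than typed by hand. The sketch below is a hypothetical generator that emits the same "start drain on external_disk spid=...;" syntax used in this section for a list of spindle IDs.

```python
def drain_commands(spindle_ids):
    """Build symconfigure command-file content that drains each edisk,
    one 'start drain on external_disk spid=...;' line per spindle."""
    return "\n".join(
        f"start drain on external_disk spid={s};" for s in spindle_ids
    ) + "\n"

# Spindle IDs of the disk group 516 edisks listed above
content = drain_commands(["8009", "800a", "800b", "800c", "800d"])
print(content, end="")
```

The resulting text can be written to a file such as /cmd_files/drain and committed with symconfigure -f, exactly as in the transcript above; an analogous generator would produce the remove command file used later.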

If the drain command fails, drain the devices that require it by specifying the individual device or devices at the command line. # symconfigure -sid 41 -cmd "start drain on external_disk spid=800d;" commit -nop A Configuration Change operation is in progress. Please wait... Establishing a configuration change session...established. Processing symmetrix 000197200041 Performing Access checks...allowed. Checking Device Reservations...Allowed. Committing configuration changes...started. Committing configuration changes...committed. Terminating the configuration change session...done. The configuration change session has successfully completed. # cat remove remove external_disk spid=8009; remove external_disk spid=800a; remove external_disk spid=800b; remove external_disk spid=800c; remove external_disk spid=800d; Remove the edisks after draining all of the devices that require it. # symconfigure -sid 41 -f /cmd_files/remove commit -nop A Configuration Change operation is in progress. Please wait... Establishing a configuration change session...established. Processing symmetrix 000197200041 Performing Access checks...allowed. Checking Device Reservations...Allowed. Initiating COMMIT of configuration changes...queued. COMMIT requesting required resources...obtained. Step 008 of 070 steps...executing. Step 011 of 070 steps...executing. Step 014 of 070 steps...executing. Step 016 of 069 steps...executing. Step 017 of 069 steps...executing. Step 025 of 069 steps...executing. Step 026 of 069 steps...executing. Step 028 of 069 steps...executing. Step 166 of 190 steps...executing. Step 166 of 190 steps...executing. Step 169 of 190 steps...executing. Step 171 of 190 steps...executing. Step 172 of 190 steps...executing. Step 173 of 190 steps...executing. Step 176 of 190 steps...executing. Step 178 of 190 steps...executing. Step 181 of 190 steps...executing. Step 187 of 190 steps...executing. Step 187 of 190 steps...executing. Step 190 of 190 steps...executing. 
Local: COMMIT...Done. Terminating the configuration change session...done. The configuration change session has successfully completed. 67

68 Removing the edisks also removes the pool and the disk group. # symcfg show -pool DG516_FBA -thin -detail -all -sid 41 Symmetrix ID: 000197200041 The requested thin pool does not exist -- cannot perform the operation # symdisk list -sid 41 -dskgrp_summary Symmetrix ID: 000197200041 Disk Group Disk Hyper Capacity ----------------------- ---------------------- ------ --------------------- Flgs Speed Size Size Total Free Num Name Cnt LT (RPM) (MB) (MB) (MB) (MB) ----------------------- ---------------------- ------ --------------------- 1 DISK_GROUP_001 207 IF 15000 278972 17436 57747178 0 2 DISK_GROUP_002 78 IS 7200 953367 59585 74362641 0 3 DISK_GROUP_003 33 IE 0 190673 11917 6292223 0 512 *ENCAPSDG* 0 -- N/A N/A N/A 0 0 515 EXT_GROUP_515 4 X- N/A N/A Any 16777216 1 ---------- ---------- Total 155179258 1 Legend: Disk (L)ocation: I = Internal, X = External, - = N/A (T)echnology: S = SATA, F = Fibre Channel, E = Enterprise Flash Drive, - = N/A Removing FAST.X Components from an SRP Containing Volumes edisks can be drained and removed from an SRP without moving or deleting volumes if there is enough free capacity in the SRP to accept the tracks allocated to those edisks. If that is the case, the first step in removing the edisks is to drain them, which moves all of the allocated tracks to other disks in the SRP. In the case of the edisks in DEFAULT_SRP, which are in disk group 514, only one requires draining. # symconfigure -sid 41 -cmd "start drain on external_disk spid=8008;" commit -nop A Configuration Change operation is in progress. Please wait... Establishing a configuration change session...established. Processing symmetrix 000197200041 Performing Access checks...allowed. Checking Device Reservations...Allowed. Committing configuration changes...started. Committing configuration changes...committed. Terminating the configuration change session...done. The configuration change session has successfully completed. 
Once that device is drained, remove the edisks:

# cat /cmd_files/remove remove external_disk spid=8004; remove external_disk spid=8005;

remove external_disk spid=8006; remove external_disk spid=8007; remove external_disk spid=8008;

# symconfigure -sid 41 -f /cmd_files/remove commit -nop A Configuration Change operation is in progress. Please wait... Establishing a configuration change session...established. Processing symmetrix 000197200041 Performing Access checks...allowed. Checking Device Reservations...Allowed. Initiating COMMIT of configuration changes...queued. COMMIT requesting required resources...obtained. Step 009 of 070 steps...executing. Step 012 of 070 steps...executing. Step 014 of 070 steps...executing. Step 016 of 069 steps...executing. Step 019 of 069 steps...executing. Step 026 of 069 steps...executing. Step 028 of 069 steps...executing. Step 030 of 069 steps...executing. Step 039 of 069 steps...executing. Step 040 of 069 steps...executing. Step 040 of 069 steps...executing. Step 170 of 190 steps...executing. Step 172 of 190 steps...executing. Step 173 of 190 steps...executing. Step 175 of 190 steps...executing. Step 177 of 190 steps...executing. Step 180 of 190 steps...executing. Step 185 of 190 steps...executing. Step 187 of 190 steps...executing. Step 187 of 190 steps...executing. Step 190 of 190 steps...executing. Local: COMMIT...Done. Terminating the configuration change session...done. The configuration change session has successfully completed.

When all of the edisks are deleted from a disk group, HYPERMAX OS removes the thin pool and the disk group itself. Both the DG514_FBA thin pool and EXT_GROUP_514 have been deleted.

# symcfg show -pool DG514_FBA -thin -detail -all -sid 41 Symmetrix ID: 000197200041 The requested thin pool does not exist -- cannot perform the operation

# symdisk list -sid 41 -dskgrp_summary Symmetrix ID: 000197200041 Disk Group Disk Hyper Capacity ----------------------- ---------------------- ------ --------------------- Flgs Speed Size Size Total Free Num Name Cnt LT (RPM) (MB) (MB) (MB) (MB) ----------------------- ---------------------- ------ --------------------- 1 DISK_GROUP_001 207 IF 15000 278972 17436 57747178 0 2 DISK_GROUP_002 78 IS 7200 953367 59585 74362641 0 3 DISK_GROUP_003 33 IE 0 190673 11917 6292223 0 512 *ENCAPSDG* 0 -- N/A N/A N/A 0 0 515 EXT_GROUP_515 4 X- N/A N/A Any 16777216 1 516 EXT_GROUP_516 5 X- N/A N/A Any 512000 125 ---------- ---------- Total 155691258 126 Legend: Disk (L)ocation: I = Internal, X = External, - = N/A (T)echnology: S = SATA, F = Fibre Channel, E = Enterprise Flash Drive, - = N/A 70

Appendix A: Terminology and Acronyms

Table 1. Terminology

Device: LU, logical volume
Volume: LU, logical volume
LU: A logical unit or logical volume
LUN: Logical Unit Number assigned to a LU
VMAX3 device: A LU on the VMAX3 array that uses internal or external storage.
Thin device (TDEV): Virtually provisioned device where storage capacity is supplied from a specified thin pool of storage
Data device (TDAT): An internal device that provides storage capacity used by thin devices.
Thin pool: A pool of storage from which thin extents are allocated to thin devices
FAST policy: Specifies a set of standard tiers, or thin tiers, used by FAST or FAST VP. Specifies, in percentage values, the permitted storage group capacities associated with each tier.
Unisphere: VMAX3 GUI management interface
Drive: Physical disk
Disk: Physical disk
Disk group: A numbered and named group of internal physical disks attached to DAs or external LUs, available through DX directors.
Storage group: A collection of devices grouped together for common management.
Thin device extent or chunk: The minimum storage capacity allocated from a pool to a thin device. The size of a thin device extent is 1 VMAX3 track (128 KB).
Extent Group: Group of 42 contiguous thin device extents.
External array: A supported storage array attached to DX directors.
External device: A device that is exported from a virtualized external array.
External WWN: The WWN of a device exported from a virtualized external array.
DX: A director meant for connecting a VMAX3 array to virtualized external arrays.
edisk or External spindle: A virtual external disk that is created when an external device is imported.
Tier: A collection of physical disks of the same drive technology, combined with a RAID protection type.
Virtual RAID Group: Unprotected RAID group created for edisks.
SLO: Service Level Objective. Defines an expected average response time target for an application.
SLE: Service Level Expectation. Rank of the response time capabilities of a particular type of drive.

Table 2. Acronyms and abbreviations

Acronym or abbreviation  Definition
LU                       Logical Unit
LUN                      Logical Unit Number
VP                       Virtual Provisioning
SRDF                     Symmetrix Remote Data Facility
FAST                     Fully Automated Storage Tiering
FAST.X                   Fully Automated Storage Tiering - External
SR                       Service Release
SG                       Storage Group
SRP                      Storage Resource Pool
DG                       Disk Group
DX                       DA external
SLO                      Service Level Objective
SLE                      Service Level Expectation
EFD                      Enterprise Flash Drive

Appendix B: VMAX3 and External EMC Array Configuration

Before configuring FAST.X, update the external array with the latest management software or firmware. For details on the external arrays that are supported, see the FAST.X Simple Support Matrix on the E-Lab Interoperability Navigator page: https://elabnavigator.emc.com/eln/elnhome. Speak with an EMC customer representative to request support for arrays that do not currently appear on the matrix.

Confirming the Solutions Enabler and HYPERMAX OS versions

Before beginning to configure FAST.X, the VMAX3 array must be running a GA version of HYPERMAX OS that supports FAST.X; support was introduced with the Q3 2015 HYPERMAX OS Service Release. If a HYPERMAX OS upgrade is required, follow the appropriate process for loading the latest GA version of 5977 before proceeding. Executing FAST.X commands from the CLI requires Solutions Enabler 8.1 or later. If necessary, install or upgrade to the required version of software.

To check the version of Solutions Enabler running on the management host and of HYPERMAX OS running on the array, run symconfigure -version -v from the management host:

# symconfigure -version -v -sid 74

Symmetrix CLI (SYMCLI) Version                   : X8.1.0.317 (Edit Level: 2050)
Built with SYMAPI Version                        : X8.1.0.317 (Edit Level: 2050)
SYMAPI Run Time Version                          : X8.1.0.317 (Edit Level: 2050)
Built with Configuration Server Protocol Version : 0x27

Symmetrix ID                  : 000196800174
Configuration Server Version  : 5977.653.644
Configuration Server Protocol : 0xD05
Configuration Server Date     : 06.12.2015

Configuring DX directors

Before setting up a FAST.X environment, an EMC field technical resource needs to configure DX emulation and assign Fibre Channel ports for use as DX ports. If the array is newly deployed, EMC personnel will ensure proper sizing of the cache resources, as well as the port-layout configuration. For arrays that have already been deployed and are currently in use, it is necessary to closely examine the existing layout of the array and the port connections being used. Because configuring DX ports requires VMAX Dual Initiator director pairs and four available Fibre Channel ports, certain configuration changes may be necessary before implementing FAST.X.

Prior to EMC configuring DX emulation and port assignment, perform the following tasks:

- Unmask all VMAX3 devices from any ports that are to be assigned to DX emulation by removing them from any storage groups they are members of.
- For ports that are part of an RDF configuration, remove the RDF devices from the ports, or remove the RDF relationship for any devices on the ports.
- Remove any masking entries related to the director port, including removing the WWNs of the ports from any port groups.

After the DX emulation and Fibre Channel ports have been assigned to the directors, list them using symcfg list.
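The version prerequisites above can also be checked programmatically. The following is a minimal illustrative sketch (Python, not part of Solutions Enabler) that extracts the SYMCLI and HYPERMAX OS versions from the symconfigure output format shown earlier; the exact minimum 5977 build is an assumption to confirm against the Simple Support Matrix:

```python
import re

# Abbreviated sample of `symconfigure -version -v` output, as shown above.
OUTPUT = """\
Symmetrix CLI (SYMCLI) Version : X8.1.0.317 (Edit Level: 2050)
SYMAPI Run Time Version : X8.1.0.317 (Edit Level: 2050)
Symmetrix ID : 000196800174
Configuration Server Version : 5977.653.644
"""

def parse_versions(text):
    """Pull the SYMCLI version and the HYPERMAX OS (5977) version from the output."""
    symcli = re.search(r"SYMCLI\) Version\s*:\s*X?([\d.]+)", text).group(1)
    hypermax = re.search(r"Configuration Server Version\s*:\s*([\d.]+)", text).group(1)
    return symcli, hypermax

def meets_minimums(symcli, hypermax):
    """FAST.X requires Solutions Enabler 8.1 or later and a 5977 (HYPERMAX OS)
    release at or after the Q3 2015 Service Release; the specific minimum 5977
    build should be confirmed with EMC support."""
    major, minor = (int(p) for p in symcli.split(".")[:2])
    return (major, minor) >= (8, 1) and hypermax.startswith("5977.")
```

Run against the sample above, parse_versions returns ("8.1.0.317", "5977.653.644"), both of which satisfy the stated minimums.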
In this example, DX emulation is available on four DX directors, each with two ports assigned. The two dual-initiator pairs are DX-1H with DX-2H, and DX-3H with DX-4H:

# symcfg -sid 74 list -DX all

Symmetrix ID: 000196800174 (Local)

                S Y M M E T R I X   D I R E C T O R S

Ident  Type   Engine  Cores  Ports  Status
-----  -----  ------  -----  -----  ------
DX-1H  EDISK       1      2      2  Online
DX-2H  EDISK       1      2      2  Online
DX-3H  EDISK       2      2      2  Online
DX-4H  EDISK       2      2      2  Online

EMC Symmetrix DMX, VMAX, VMAX2

The following procedures apply to all supported Symmetrix arrays other than VMAX3. All Symmetrix arrays are symmetric (active/active) storage arrays.

1) Set the correct FA port flags on the external VMAX, VMAX2, or DMX array:
   - DQRS: Disable I/O Queue Reset on SCSI reset
   - SPC2: SPC-2 protocol version support
   - OS07: SCSI-3 with SCSI OS-2007 amendment
   - CMSN: Common LUN ID across all initiators
   - UWN: Unique World Wide Name
   - PP: Point-to-point (set for switched fabric connectivity)
   - EAN: Enable Fibre Channel auto-link speed negotiation
2) Run the fibre cables from the DX ports to the switch, and from the switch to the FA ports on the external array.
3) Map the external volumes to the FA ports on the external array. If the VCM flag is set, the LUNs must be masked to the initiator's WWN, which, in the case of FAST.X, is the DX port.
4) Zone the DX ports to the FA ports on which the external volumes are available. Create the zones between the DX ports and the FA ports, and activate them.
5) Use the symsan command from Solutions Enabler to confirm that the DX can access the external LUNs on the correct number of paths. EMC support personnel can also generate the symsan report, or the DxSan report, from the main screen of SymmWin (the Configuration Tools menu).

   # symsan list -sanports -sid XXXX -DX all -port all
   # symsan -sid XX -dir 1H -p 9 list -sanluns -wwn xxxxxxxxxxxxxxxx

   (xxxxxxxxxxxxxxxx is the WWN of an external storage port returned by the first command.)
6) The edisks are now ready to configure using either Solutions Enabler or Unisphere for VMAX.
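Returning to the symcfg list -DX all output above: a quick illustrative check (Python, not part of SYMCLI) can confirm that every DX director is online and that each engine hosts a complete dual-initiator pair:

```python
# Sample `symcfg list -DX all` director listing, taken from the output above.
LISTING = """\
Ident  Type   Engine  Cores  Ports  Status
-----  -----  ------  -----  -----  ------
DX-1H  EDISK       1      2      2  Online
DX-2H  EDISK       1      2      2  Online
DX-3H  EDISK       2      2      2  Online
DX-4H  EDISK       2      2      2  Online
"""

def parse_directors(text):
    """Parse each DX-* row into ident, engine, port count, and status."""
    rows = []
    for line in text.splitlines():
        parts = line.split()
        if parts and parts[0].startswith("DX-"):
            rows.append({"ident": parts[0], "engine": int(parts[2]),
                         "ports": int(parts[4]), "status": parts[5]})
    return rows

def check_pairs(rows):
    """All directors online, and each engine hosting exactly two DX directors
    (one dual-initiator pair)."""
    if any(r["status"] != "Online" for r in rows):
        return False
    per_engine = {}
    for r in rows:
        per_engine.setdefault(r["engine"], []).append(r["ident"])
    return all(len(idents) == 2 for idents in per_engine.values())
```

For the listing above, engine 1 holds DX-1H/DX-2H and engine 2 holds DX-3H/DX-4H, so the check passes.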

EMC XtremIO

To the storage controllers and the operating system of the XtremIO array, the VMAX3 appears as an open-systems host. Because of this, and because XtremIO has very few prerequisite settings for host connections or volume properties, no settings need to be modified on the XtremIO storage controllers or XtremIO volumes for FAST.X.

Note: When creating host-accessible devices, the logical block size setting is normally modified for Solaris and Linux hosts running applications with a 4 KB block size. This applies only when those hosts access XtremIO volumes directly. The setting is not changed for any FAST.X volume, regardless of what hosts or applications access the XtremIO through the VMAX3 and FAST.X; the logical block size must be left at the default of 512 bytes.

The following procedure presents XtremIO storage to the DX directors.

1) Choose the volumes that will be mapped for DX access.
2) Create an initiator group on the XtremIO for the DX initiators. Click Add in the Initiator Groups pane. Fill in an appropriate initiator group name and click Add. In the Add Initiator dialog box, give the first initiator an appropriate name (in this case, the DX director and port number), and select the corresponding DX WWN from the pull-down menu. Click OK.

Complete this for all DX initiators in the configuration. After adding all initiators, click Finish. Next, map the volumes: click the first volume, hold the Shift key, and click the last volume to select all volumes. This adds the volumes to the Volumes list in the LUN Mapping Configuration pane. Click Map All, and then click Apply.

The volumes have been assigned LUNs and are mapped to the storage controllers on the XtremIO. The devices are now available to the VMAX3 through the DX ports.

# symsan list -sanports -DX all -port all -sid 41

Symmetrix ID: 000197200041

        Flags                                      Num
DIR:P   I      Vendor       Array            LUNs  Remote Port WWN
------  -----  -----------  ---------------  ----  ----------------
01H:07  .      EMC XtremIO  FNM00151501047      5  21000024FF3D2743
01H:29  .      EMC XtremIO  FNM00151501047      5  21000024FF3D2742
02H:07  .      EMC XtremIO  FNM00151501047      1  21000024FF5D55AD
02H:29  .      EMC XtremIO  FNM00151501047      5  21000024FF5D55AC

Legend:
  Flags:
    (I)ncomplete : X = record is incomplete, . = record is complete.
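A useful sanity check on the -sanports output is that every path to a given array presents the same number of LUNs; an uneven count usually points at a zoning or masking gap. The following is an illustrative sketch (Python, not part of SYMCLI) run against the output above, where path 02H:07 reports only 1 LUN:

```python
# Data rows from the `symsan list -sanports` output above:
# DIR:P, I flag, Vendor (two words), serial, LUN count, remote port WWN.
SANPORTS = """\
01H:07 . EMC XtremIO FNM00151501047 5 21000024FF3D2743
01H:29 . EMC XtremIO FNM00151501047 5 21000024FF3D2742
02H:07 . EMC XtremIO FNM00151501047 1 21000024FF5D55AD
02H:29 . EMC XtremIO FNM00151501047 5 21000024FF5D55AC
"""

def lun_counts(text):
    """Map each DIR:P path to the LUN count it reports."""
    counts = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 7 and ":" in parts[0]:
            counts[parts[0]] = int(parts[5])
    return counts

def uneven_paths(counts):
    """Paths reporting fewer LUNs than the maximum seen across all paths."""
    expected = max(counts.values())
    return sorted(p for p, n in counts.items() if n != expected)
```

Against the sample above, uneven_paths flags 02H:07, which would warrant rechecking the zoning and LUN mapping for that DX port before configuring edisks.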

# symsan list -dir 1H -p 7 -sanluns -wwn 21000024FF3D2743 -sid 41

Symmetrix ID: 000197200041

Remote Port WWN: 21000024FF3D2743

               Flags    Block  Capacity  LUN  Dev
DIR:P   State  ICR THS  Size   (MB)      Num  Num  LUN WWN
------  -----  -------  -----  --------  ---  ---  ----------------
01H:07  RW     ... F..    512    512000    0  N/A  514F0C55EBA0000C
01H:07  RW     ... F..    512    512000    1  N/A  514F0C55EBA0000D
01H:07  RW     ... F..    512    512000    2  N/A  514F0C55EBA0000E
01H:07  RW     ... F..    512    512000    3  N/A  514F0C55EBA0000F
01H:07  RW     ... F..    512    512000    4  N/A  514F0C55EBA00010

Legend:
  Flags:
    (I)ncomplete : X = record is incomplete, . = record is complete.
    (C)ontroller : X = record is controller, . = record is not controller.
    (R)eserved   : X = record is reserved, . = record is not reserved.
    (T)ype       : A = AS400, F = FBA, C = CKD, . = Unknown
    t(h)in       : X = record is a thin dev, . = record is not a thin dev.
    (S)ymmetrix  : X = Symmetrix device, . = not Symmetrix device.

EMC VNX

A few simple steps are necessary to present VNX LUNs to a host or, in this case, to a VMAX3 array configured for FAST.X.

1) Register the DX initiators in the VNX. Open Unisphere on the array, and select Hosts, then Initiators, to open the Initiators screen. The WWNs of the DX ports should appear, ready to be registered as initiators.

Select the first WWN and click Register. Choose CLARiiON/VNX for the Initiator Type and ALUA for the Failover Mode. Enter a name for the VMAX3 array and add the array's IP address. Click OK, then click Yes when prompted to confirm, and OK for the remaining prompts. Repeat this process for the remaining initiators.

2) After registering the initiators, add them to the appropriate storage group. If they do not already exist, create the volumes and the storage group. Click Hosts, and then Storage Groups.

In the Storage Groups screen, select the applicable storage group. This group is called FAST.X and contains five volumes, FAST.X_0 through FAST.X_4. Associate the DX initiators with the storage group: click Connect Hosts.

From the Available Hosts list, select the host name that was assigned to the VMAX3 when the DX initiators were registered. Click the purple arrow to add the host to the Hosts to be Connected list, and click OK. Click Yes and OK to confirm.

The LUNs are now visible in the FAST.X environment. The symsan command, run from the management host attached to the VMAX3 array, confirms that the edisks are ready to be configured. The VNX is visible (listed as CLARiiON) along with an XtremIO array that is also connected to the DX directors. Both arrays have 5 LUNs available on all 4 paths. To view the individual VNX LUNs, use the symsan command with the -sanluns option against any of the DX directors and ports. The details of the 5 VNX LUNs are shown, including the LUN WWN.
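The -sanluns output (formatted like the XtremIO example earlier in this appendix) can be totaled with a short illustrative parser (Python, not part of SYMCLI) to confirm the expected LUN count and aggregate capacity before creating edisks:

```python
# Data rows in the layout of `symsan ... -sanluns` output:
# DIR:P, State, ICR flags, THS flags, block size, capacity (MB), LUN num, dev num, WWN.
SANLUNS = """\
01H:07 RW ... F.. 512 512000 0 N/A 514F0C55EBA0000C
01H:07 RW ... F.. 512 512000 1 N/A 514F0C55EBA0000D
01H:07 RW ... F.. 512 512000 2 N/A 514F0C55EBA0000E
01H:07 RW ... F.. 512 512000 3 N/A 514F0C55EBA0000F
01H:07 RW ... F.. 512 512000 4 N/A 514F0C55EBA00010
"""

def lun_capacities_mb(text):
    """Collect the Capacity (MB) column from each read-write LUN row."""
    caps = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 9 and parts[1] == "RW":
            caps.append(int(parts[5]))
    return caps

caps = lun_capacities_mb(SANLUNS)
total_gb = sum(caps) / 1024  # 5 x 512000 MB = 2500.0 GB
```

For the five 512,000 MB LUNs shown, this reports 2,560,000 MB (2500 GB) of external capacity available to FAST.X.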

The VNX LUNs are now ready to configure as edisks.

Copyright © 2016 EMC Corporation. All rights reserved. Published in the USA. Published March 2016.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com).