Dell/EMC CX4-Series Fibre Channel Storage Arrays With Microsoft® Windows Server® Failover Clusters

Hardware Installation and Troubleshooting Guide
Notes, Cautions, and Warnings

NOTE: A NOTE indicates important information that helps you make better use of your computer.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
Contents

1 Introduction, 7
   Cluster Solution, 8
   Cluster Hardware Requirements, 8
   Cluster Nodes, 9
2 Cabling Your Cluster Hardware, 15
3 Preparing Your Systems for Clustering, 39
   Cluster Configuration Overview, 39
   Installation Overview, 41
   Installing the Fibre Channel HBAs, 42
A Troubleshooting, 55
B Zoning Configuration Form, 61
C Cluster Data Form, 63
Introduction

A Dell™ Failover Cluster combines specific hardware and software components to provide enhanced availability for applications and services that are run on the cluster.
Cluster Solution

Your cluster implements a minimum of two nodes up to a maximum of either eight nodes (for Windows Server 2003) or sixteen nodes (for Windows Server 2008).
Cluster Nodes

Table 1-1 lists the hardware requirements for the cluster nodes.

NOTE: For more information about supported systems, HBAs, and operating system variants, see the Dell Cluster Configuration Support Matrix on the Dell High Availability website.
Cluster Storage

Table 1-2 lists supported storage systems and the configuration requirements for the cluster nodes and stand-alone systems connected to the storage systems. Table 1-3 lists hardware requirements for the storage processor enclosures (SPE), disk array enclosures (DAE), and standby power supplies (SPS).
Each storage system in the cluster is centrally managed by one host system (also called a management station) running EMC Navisphere® Manager, a centralized storage management application used to configure Dell/EMC storage systems.
Supported Cluster Configurations

The following sections describe the supported cluster configurations.

Direct-Attached Cluster

In a direct-attached cluster, all the nodes of the cluster are directly attached to a single storage system.
SAN-Attached Cluster

In a SAN-attached cluster, all nodes are attached to a single storage system or to multiple storage systems through a SAN using redundant switch fabrics. SAN-attached clusters are superior to direct-attached clusters in configuration flexibility, expandability, and performance.
• The Getting Started Guide provides an overview of initially setting up your system.
• For more information on deploying your cluster with Windows Server 2003 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide.
Cabling Your Cluster Hardware

NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document on the Dell Support website.
Figure 2-1. Power Cabling Example With One Power Supply in the PowerEdge Systems (the illustration shows redundant power supplies on one AC power strip, or on one AC PDU, not shown)

NOTE: This illustration is intended only to demonstrate the power distribution of the components.
Figure 2-2. Power Cabling Example With Two Power Supplies in the PowerEdge Systems

Cabling Your Cluster for Public and Private Networks

The network adapters in the cluster nodes provide at least two network connections for each node, as described in Table 2-1.
Figure 2-3 shows an example of cabling in which dedicated network adapters in each node are connected to each other (for the private network) and the remaining network adapters are connected to the public network.
Cabling the Private Network

The private network connection to the nodes is provided by a different network adapter in each node. This network is used for intra-cluster communications. Table 2-2 describes three possible private network configurations.
Cabling Storage for Your Direct-Attached Cluster

A direct-attached cluster configuration consists of redundant Fibre Channel host bus adapter (HBA) ports cabled directly to a Dell/EMC storage system.
Cabling a Cluster to a Dell/EMC Storage System

Each cluster node attaches to the storage system using two fibre-optic cables with duplex local connector (LC) multimode connectors that attach to the HBA ports in the cluster nodes and the storage processor (SP) ports in the Dell/EMC storage system.
Figure 2-5. Cabling a Two-Node Cluster to a CX4-120 or CX4-240 Storage System

Figure 2-6. Cabling a Two-Node Cluster to a CX4-480 Storage System
Figure 2-7. Cabling a Two-Node Cluster to a CX4-960 Storage System

Cabling a Multi-Node Cluster to a Dell/EMC Storage System

You can configure a cluster with more than two nodes.
2 Connect cluster node 2 to the storage system:
   a Install a cable from cluster node 2 HBA port 0 to the second front-end fibre channel port on SP-A.
   b Install a cable from cluster node 2 HBA port 1 to the second front-end fibre channel port on SP-B.
Cabling Two Two-Node Clusters to a Dell/EMC Storage System

The following steps are an example of how to cable two two-node clusters. The Dell/EMC storage system needs at least four front-end fibre channel ports available on each storage processor.
Figure 2-8 shows an example of a two-node SAN-attached cluster. Figure 2-9 shows an example of an eight-node SAN-attached cluster. Similar cabling concepts can be applied to clusters that contain a different number of nodes.
Figure 2-9. Eight-Node SAN-Attached Cluster (the illustration shows cluster nodes [2-8] connected through two Fibre Channel switches to the storage system, with separate public and private networks)
Cabling a SAN-Attached Cluster to a Dell/EMC Storage System

The cluster nodes attach to the storage system using a redundant switch fabric and fibre-optic cables with duplex LC multimode connectors.
Cabling a SAN-Attached Cluster to a Dell/EMC CX4-120 or CX4-240 Storage System

1 Connect cluster node 1 to the SAN:
   a Connect a cable from HBA port 0 to Fibre Channel switch 0 (sw0).
   b Connect a cable from HBA port 1 to Fibre Channel switch 1 (sw1).
Figure 2-10. Cabling a SAN-Attached Cluster to the Dell/EMC CX4-120 or CX4-240

Cabling a SAN-Attached Cluster to the Dell/EMC CX4-480 or CX4-960 Storage System

1 Connect cluster node 1 to the SAN:
   a Connect a cable from HBA port 0 to Fibre Channel switch 0 (sw0).
   d Connect a cable from Fibre Channel switch 0 (sw0) to the second front-end fibre channel port on SP-B.
   e Connect a cable from Fibre Channel switch 1 (sw1) to the third front-end fibre channel port on SP-A.
Figure 2-12. Cabling a SAN-Attached Cluster to the Dell/EMC CX4-960

Cabling Multiple SAN-Attached Clusters to a Dell/EMC Storage System
Cabling Multiple SAN-Attached Clusters to the CX4-120 or CX4-240 Storage System

1 In the first cluster, connect cluster node 1 to the SAN:
   a Connect a cable from HBA port 0 to Fibre Channel switch 0 (sw0).
   b Connect a cable from HBA port 1 to Fibre Channel switch 1 (sw1).
   c Connect a cable from Fibre Channel switch 0 (sw0) to the second front-end fibre channel port on SP-A.
   d Connect a cable from Fibre Channel switch 0 (sw0) to the second front-end fibre channel port on SP-B.
• MSCS is limited to 22 drive letters. Because drive letters A through D are reserved for local disks, a maximum of 22 drive letters (E to Z) can be used for your storage system disks.
• Windows Server 2003 and 2008 support mount points, allowing greater than 22 drives per cluster. A volume can be mounted to an empty NTFS folder instead of a drive letter, as shown in the sketch after this list.
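The following is a minimal sketch of creating a mount point with the Windows mountvol utility; the folder path and volume GUID are placeholders, and you would substitute the values that mountvol reports on your own system:

   rem List volumes and their \\?\Volume{GUID}\ names
   mountvol

   rem Mount a volume to an empty NTFS folder instead of a drive letter
   md D:\Mounts\ClusterDisk1
   mountvol D:\Mounts\ClusterDisk1 \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\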
NOTE: While tape libraries can be connected to multiple fabrics, they do not provide path failover.

Figure 2-14. Cabling a Storage System and a Tape Library

Obtaining More Information

See the storage and tape backup documentation for more information on configuring these components.
Figure 2-15. Cluster Configuration Using SAN-Based Backup (the illustration shows two clusters connected through two Fibre Channel switches to the storage systems and a tape library)
Preparing Your Systems for Clustering

WARNING: Only trained service technicians are authorized to remove and access any of the components inside the system.
5 Configure each cluster node as a member in the same Windows Active Directory Domain (see the sketch following this step).

NOTE: You can configure the cluster nodes as Domain Controllers.
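One hedged way to script the domain join uses netdom, which ships with the Windows Server 2003 Support Tools; the node, domain, and account names below are placeholders:

   rem Join this node to the cluster's domain, then reboot
   netdom join NODE1 /domain:cluster.example.com /userd:EXAMPLE\admin /passwordd:*
   shutdown /r /t 0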
12 Configure highly available applications and services on your Failover Cluster. Depending on your configuration, this may also require providing additional LUNs to the cluster or creating new cluster resource groups.
Installing the Fibre Channel HBAs

For dual-HBA configurations, it is recommended that you install the Fibre Channel HBAs on separate peripheral component interconnect (PCI) buses. Placing the adapters on separate buses improves availability and performance.
Zoning automatically and transparently enforces access of information to the zone devices. More than one PowerEdge cluster configuration can share Dell/EMC storage systems in a switched fabric by using Fibre Channel switch zoning with Access Control enabled.
CAUTION: When you replace a Fibre Channel HBA in a PowerEdge server, reconfigure your zones to provide continuous client data access. Additionally, when you replace a switch module, reconfigure your zones to prevent data loss or corruption.
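The guide does not show switch commands, but as an illustration, worldwide-port-name zoning might look like the following Brocade-style sketch; all alias names, zone names, and WWPNs are placeholder values:

   alicreate "node1_hba0", "10:00:00:00:c9:aa:bb:01"
   alicreate "cx4_spa_p0", "50:06:01:60:41:e0:aa:01"
   zonecreate "z_node1_hba0_spa_p0", "node1_hba0; cx4_spa_p0"
   cfgcreate "cluster_cfg", "z_node1_hba0_spa_p0"
   cfgenable "cluster_cfg"

Creating one zone per HBA port keeps each initiator isolated, which matches the single-initiator zoning this guide refers to elsewhere.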
Installing and Configuring the Shared Storage System

See "Cluster Hardware Requirements" on page 8 for a list of supported Dell/EMC storage systems.
Access Control is enabled using Navisphere Manager. After you enable Access Control and connect to the storage system from a management station, Access Control appears in the Storage System Properties window of Navisphere Manager.
Table 3-2. Storage Group Properties

Unique ID: A unique identifier that is automatically assigned to the storage group; it cannot be changed.
Storage group name: The name of the storage group.
Navisphere Manager

Navisphere Manager provides centralized storage management and configuration from a single management console. Using a graphical user interface (GUI), Navisphere Manager allows you to configure and manage the disks and components in one or more shared storage systems.
2 Add the following two separate lines to the agentID.txt file, with no special formatting (a sketch of the finished file follows):
   • First line: Fully qualified hostname. For example, enter node1.domain1.com if the host name is node1 and the domain name is domain1.
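The extraction cuts off the description of the second line; in this sketch it is assumed to be the node's IP address, so the resulting agentID.txt would look like the following (both values are placeholders):

   node1.domain1.com
   10.10.10.1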
3 Enter the IP address of the storage management server on your storage system and then press <Enter>.

NOTE: The storage management server is usually one of the SPs on your storage system.

4 In the Enterprise Storage window, click the Storage tab.
   d Repeat step b and step c to add additional hosts.
   e Click Apply.
16 Click OK to exit the Storage Group Properties dialog box.

Configuring the Hard Drives on the Shared Storage System(s)

This section provides information for configuring the hard drives on the shared storage systems.
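The guide performs these steps in the Navisphere Manager GUI; the classic Navisphere CLI exposes an equivalent bind operation. A hedged sketch follows, with the SP address, RAID group, LUN number, and capacity all placeholders (verify the option names against your CLI version):

   rem Bind a 100-GB RAID 5 LUN as LUN 0 on RAID group 0
   navicli -h 10.10.10.2 bind r5 0 -rg 0 -cap 100 -sq gb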
Assigning LUNs to Hosts

If you have Access Control enabled in Navisphere Manager, you must create storage groups and assign LUNs to the proper host systems.
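As with binding, the documented procedure uses Navisphere Manager, but a classic Navisphere CLI sketch of the same assignment might look like this (the SP address, group name, host name, and LUN numbers are placeholders):

   rem Create a storage group, connect a host to it, and map array LUN 0 as host LUN 0
   navicli -h 10.10.10.2 storagegroup -create -gname Cluster1
   navicli -h 10.10.10.2 storagegroup -connecthost -host node1 -gname Cluster1 -o
   navicli -h 10.10.10.2 storagegroup -addhlu -gname Cluster1 -hlu 0 -alu 0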
Updating a Dell/EMC Storage System for Clustering

If you are updating an existing Dell/EMC storage system to meet the cluster requirements for the shared storage subsystem, you may need to install additional Fibre Channel disk drives in the shared storage system.
Troubleshooting

This appendix provides troubleshooting information for your cluster configuration. Table A-1 describes general cluster problems you may encounter and the probable causes and solutions for each problem.
Problem: One of the nodes takes a long time to join the cluster, or one of the nodes fails to join the cluster.
Probable cause: The node-to-node network has failed due to a cabling or hardware failure.
Corrective action: Check the network cabling. Ensure that the node-to-node interconnection and the public network are connected to the correct NICs.
Problem: Attempts to connect to a cluster using Cluster Administrator fail.
Probable cause: The Cluster Service has not been started; a cluster has not been formed on the system; or the system has just been booted and services are still starting.
Corrective action: Verify that the Cluster Service is running and that a cluster has been formed.
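A quick way to perform that check from a command prompt, assuming the Windows Server 2003 service name ClusSvc:

   rem Check the Cluster Service state, and start it if it is stopped
   sc query clussvc
   net start clussvc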
Problem: You are prompted to configure one network instead of two during MSCS installation.
Probable cause: The TCP/IP configuration is incorrect.
Corrective action: The node-to-node network and public network must be assigned static IP addresses on different subnets.
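For illustration, static addresses on different subnets could be assigned with netsh; the connection names and addresses below are placeholders:

   rem Public network: 192.168.1.0/24 with a default gateway and metric 1
   netsh interface ip set address name="Public" static 192.168.1.11 255.255.255.0 192.168.1.1 1

   rem Private (node-to-node) network: 10.0.0.0/24, no gateway
   netsh interface ip set address name="Private" static 10.0.0.1 255.255.255.0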
Problem: Unable to add a node to the cluster.
Probable cause: The new node cannot access the shared disks, or the shared disks are enumerated by the operating system differently on the cluster nodes.
Corrective action: Ensure that the new cluster node can enumerate the cluster disks using Windows Disk Administration.
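The built-in diskpart utility offers one way to confirm that enumeration from a command prompt; run it on the new node and compare the output against an existing node:

   diskpart
   DISKPART> list disk
   DISKPART> list volume
   DISKPART> exit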
Problem: Cluster Services does not operate correctly on a cluster running Windows Server 2003 with the Internet Connection Firewall enabled.
Probable cause: The Windows Internet Connection Firewall is enabled, which may conflict with Cluster Services.
Corrective action: Perform the following steps:
1 On the Windows desktop, right-click My Computer and click Manage.
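The firewall state can also be inspected from a command prompt; the netsh firewall context is available with Windows Server 2003 Service Pack 1 and later:

   rem Show the current firewall state and configuration
   netsh firewall show state
   netsh firewall show config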
Zoning Configuration Form

Node | HBA WWPNs or Alias Names | Storage WWPNs or Alias Names | Zone Name | Zone Set for Configuration Name
Cluster Data Form

You can attach the following form in a convenient location near each cluster node or rack to record information about the cluster.
Additional Networks

Table C-3. Storage Array Information

Array | Array xPE Type | Array Service Tag Number or World Wide Name Seed | Number of Attached DAEs
(rows for arrays 1 through 4)
Index

A
Access Control: about, 45

C
cable configurations: cluster interconnect, 19; for client networks, 18; for mouse, keyboard, and monitor, 15; for power supplies, 15

H
HBA drivers: installing and configuring, 42
host bus adapter: configuring the Fibre Channel HBA, 42

K
keyboard: cabling, 15

L
LUNs: assigning to hosts, 52; configuring and managing

S
SAN: configuring SAN backup in your cluster, 36
SAN-attached cluster: about, 13, 25; configurations, 12
shared storage: assigning LUNs to hosts, 52
single-initiator zoning