<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /><title>Cluster Administration</title><link rel="stylesheet" type="text/css" href="Common_Content/css/default.css" /><link rel="stylesheet" media="print" href="Common_Content/css/print.css" type="text/css" /><meta name="generator" content="publican 2.8" /><meta name="package" content="Red_Hat_Enterprise_Linux-Cluster_Administration-5-fr-FR-5-50" /><meta name="description" content="Configuring and Managing a Red Hat Cluster describes the configuration and management of Red Hat cluster systems for Red Hat Enterprise Linux 5. It does not include information about Red Hat Linux Virtual Servers (LVS). Information about installing and configuring LVS is in a separate document." /></head><body class="desktop "><p id="title"><a class="left" href="http://www.redhat.com"><img src="Common_Content/images/image_left.png" alt="Product Site" /></a><a class="right" href="http://docs.redhat.com"><img src="Common_Content/images/image_right.png" alt="Documentation Site" /></a></p><div xml:lang="fr-FR" class="book" id="index-cluster-administration" lang="fr-FR"><div class="titlepage"><div><div class="producttitle"><span class="productname">Red Hat Enterprise Linux</span> <span class="productnumber">5</span></div><div><h1 class="title">Cluster Administration</h1></div><div><h2 class="subtitle">Configuring and Managing a Red Hat Cluster</h2></div><p class="edition">Édition 5</p><div><h3 class="corpauthor">
		<span class="inlinemediaobject"><object data="Common_Content/images/title_logo.svg" type="image/svg+xml"> Logo</object></span>

	</h3></div><hr /><div><div id="id778762" class="legalnotice"><h1 class="legalnotice">Note légale</h1><div class="para">
		Copyright <span class="trademark"></span>© 2012 Red Hat, Inc.
	</div><div class="para">
		The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at <a href="http://creativecommons.org/licenses/by-sa/3.0/">http://creativecommons.org/licenses/by-sa/3.0/</a>. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
	</div><div class="para">
		Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
	</div><div class="para">
		Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
	</div><div class="para">
		<span class="trademark">Linux</span>® is the registered trademark of Linus Torvalds in the United States and other countries.
	</div><div class="para">
		<span class="trademark">Java</span>® is a registered trademark of Oracle and/or its affiliates.
	</div><div class="para">
		<span class="trademark">XFS</span>® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
	</div><div class="para">
		<span class="trademark">MySQL</span>® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
	</div><div class="para">
		All other trademarks are the property of their respective owners.
	</div><div class="para">
		
<div class="address"><p><br />
			<span class="street">1801 Varsity Drive</span><br />
			 <span class="city">Raleigh</span>, <span class="state">NC</span> <span class="postcode">27606-2072</span> <span class="country">USA</span><br />
			 <span class="phone">Phone: +1 919 754 3700</span><br />
			 <span class="phone">Phone: 888 733 4281</span><br />
			 <span class="fax">Fax: +1 919 754 3701</span><br />
<br />
</p></div>

	</div></div></div><div><div class="abstract"><h6>Résumé</h6><div class="para">
			<em class="citetitle">Configuring and Managing a Red Hat Cluster </em> describes the configuration and management of Red Hat cluster systems for Red Hat Enterprise Linux 5. It does not include information about Red Hat Linux Virtual Servers (LVS). Information about installing and configuring LVS is in a separate document.
		</div></div></div></div><hr /></div><div class="toc"><dl><dt><span class="preface"><a href="#ch-intro-CA">Introduction</a></span></dt><dd><dl><dt><span class="section"><a href="#id894860">1. Conventions d'écriture</a></span></dt><dd><dl><dt><span class="section"><a href="#id835955">1.1. Conventions typographiques</a></span></dt><dt><span class="section"><a href="#id859669">1.2. Conventions pour citations mises en avant</a></span></dt><dt><span class="section"><a href="#id861394">1.3. Notes et avertissements</a></span></dt></dl></dd><dt><span class="section"><a href="#s1-intro-feedback-CA">2. Feedback</a></span></dt></dl></dd><dt><span class="chapter"><a href="#ch-overview-CA">1. Red Hat Cluster Configuration and Management Overview</a></span></dt><dd><dl><dt><span class="section"><a href="#s1-clust-config-basics-CA">1.1. Configuration Basics</a></span></dt><dd><dl><dt><span class="section"><a href="#s2-hw-setup-CA">1.1.1. Setting Up Hardware</a></span></dt><dt><span class="section"><a href="#s2-install-clust-sw-CA">1.1.2. Installing Red Hat Cluster software</a></span></dt><dt><span class="section"><a href="#s2-config-cluster-CA">1.1.3. Configuring Red Hat Cluster Software</a></span></dt></dl></dd><dt><span class="section"><a href="#s1-conga-overview-CA">1.2. Conga</a></span></dt><dt><span class="section"><a href="#s1-clumgmttools-overview-CA">1.3. <code class="command">system-config-cluster</code> Cluster Administration GUI</a></span></dt><dd><dl><dt><span class="section"><a href="#s2-cluconfig-tool-CA">1.3.1. <span class="application"><strong>Cluster Configuration Tool</strong></span></a></span></dt><dt><span class="section"><a href="#s2-admin-overview-CA">1.3.2. <span class="application"><strong>Cluster Status Tool</strong></span></a></span></dt></dl></dd><dt><span class="section"><a href="#s1-cmdlinetools-overview-CA">1.4. Command Line Administration Tools</a></span></dt></dl></dd><dt><span class="chapter"><a href="#ch-before-config-CA">2. Before Configuring a Red Hat Cluster</a></span></dt><dd><dl><dt><span class="section"><a href="#s1-clust-config-considerations-CA">2.1. General Configuration Considerations</a></span></dt><dt><span class="section"><a href="#s1-hw-compat-CA">2.2. Compatible Hardware</a></span></dt><dt><span class="section"><a href="#s1-iptables-CA">2.3. Enabling IP Ports</a></span></dt><dd><dl><dt><span class="section"><a href="#s2-iptables-clnodes-CA">2.3.1. Enabling IP Ports on Cluster Nodes</a></span></dt><dt><span class="section"><a href="#s2-iptables-conga-CA">2.3.2. Enabling IP Ports on Computers That Run <span class="application"><strong>luci</strong></span></a></span></dt></dl></dd><dt><span class="section"><a href="#s1-acpi-CA">2.4. Configuring ACPI For Use with Integrated Fence Devices</a></span></dt><dd><dl><dt><span class="section"><a href="#s2-acpi-disable-chkconfig-CA">2.4.1. Disabling ACPI Soft-Off with <code class="command">chkconfig</code> Management</a></span></dt><dt><span class="section"><a href="#s2-bios-setting-CA">2.4.2. Disabling ACPI Soft-Off with the BIOS</a></span></dt><dt><span class="section"><a href="#s2-acpi-disable-boot-CA">2.4.3. Disabling ACPI Completely in the <code class="filename">grub.conf</code> File</a></span></dt></dl></dd><dt><span class="section"><a href="#s1-clust-svc-ov-CA">2.5. Considerations for Configuring HA Services</a></span></dt><dt><span class="section"><a href="#s1-max-luns-CA">2.6. Configuring max_luns</a></span></dt><dt><span class="section"><a href="#s1-qdisk-considerations-CA">2.7. 
Considerations for Using Quorum Disk</a></span></dt><dt><span class="section"><a href="#s1-selinux-CA">2.8. Red Hat Cluster Suite and SELinux</a></span></dt><dt><span class="section"><a href="#s1-multicast-considerations-CA">2.9. Multicast Addresses</a></span></dt><dt><span class="section"><a href="#s1-iptables_firewall-CA">2.10. Configuring the iptables Firewall to Allow Cluster Components</a></span></dt><dt><span class="section"><a href="#s1-conga-considerations-CA">2.11. Considerations for Using <span class="application"><strong>Conga</strong></span></a></span></dt><dt><span class="section"><a href="#s1-vm-considerations-CA">2.12. Configuring Virtual Machines in a Clustered Environment</a></span></dt></dl></dd><dt><span class="chapter"><a href="#ch-config-conga-CA">3. Configuring Red Hat Cluster With <span class="application"><strong>Conga</strong></span></a></span></dt><dd><dl><dt><span class="section"><a href="#s1-config-tasks-conga-CA">3.1. Configuration Tasks</a></span></dt><dt><span class="section"><a href="#s1-start-luci-ricci-conga-CA">3.2. Starting <span class="application"><strong>luci</strong></span> and <span class="application"><strong>ricci</strong></span></a></span></dt><dt><span class="section"><a href="#s1-creating-cluster-conga-CA">3.3. Creating A Cluster</a></span></dt><dt><span class="section"><a href="#s1-general-prop-conga-CA">3.4. Global Cluster Properties</a></span></dt><dt><span class="section"><a href="#s1-config-fence-devices-conga-CA">3.5. Configuring Fence Devices</a></span></dt><dd><dl><dt><span class="section"><a href="#s2-create-fence-devices-conga-CA">3.5.1. Creating a Shared Fence Device</a></span></dt><dt><span class="section"><a href="#s2-modify-delete-fence-devices-conga-CA">3.5.2. Modifying or Deleting a Fence Device</a></span></dt></dl></dd><dt><span class="section"><a href="#s1-config-member-conga-CA">3.6. Configuring Cluster Members</a></span></dt><dd><dl><dt><span class="section"><a href="#s2-init-member-conga-CA">3.6.1. Initially Configuring Members</a></span></dt><dt><span class="section"><a href="#s2-add-member-running-conga-CA">3.6.2. Adding a Member to a Running Cluster</a></span></dt><dt><span class="section"><a href="#s2-delete-member-conga-CA">3.6.3. Deleting a Member from a Cluster</a></span></dt></dl></dd><dt><span class="section"><a href="#s1-config-failover-domain-conga-CA">3.7. Configuring a Failover Domain</a></span></dt><dd><dl><dt><span class="section"><a href="#s2-config-add-failoverdm-conga-CA">3.7.1. Adding a Failover Domain</a></span></dt><dt><span class="section"><a href="#s2-config-modify-failoverdm-conga-CA">3.7.2. Modifying a Failover Domain</a></span></dt></dl></dd><dt><span class="section"><a href="#s1-config-add-resource-conga-CA">3.8. Adding Cluster Resources</a></span></dt><dt><span class="section"><a href="#s1-add-service-conga-CA">3.9. Adding a Cluster Service to the Cluster</a></span></dt><dt><span class="section"><a href="#s1-config-storage-conga-CA">3.10. Configuring Cluster Storage</a></span></dt></dl></dd><dt><span class="chapter"><a href="#ch-mgmt-conga-CA">4. Managing Red Hat Cluster With <span class="application"><strong>Conga</strong></span></a></span></dt><dd><dl><dt><span class="section"><a href="#s1-admin-start-conga-CA">4.1. Starting, Stopping, and Deleting Clusters</a></span></dt><dt><span class="section"><a href="#s1-admin-manage-nodes-conga-CA">4.2. Managing Cluster Nodes</a></span></dt><dt><span class="section"><a href="#s1-admin-manage-ha-services-conga-CA">4.3. 
Managing High-Availability Services</a></span></dt><dt><span class="section"><a href="#s1-admin-problems-conga-CA">4.4. Diagnosing and Correcting Problems in a Cluster</a></span></dt></dl></dd><dt><span class="chapter"><a href="#ch-config-scc-CA">5. Configuring Red Hat Cluster With <code class="command">system-config-cluster</code></a></span></dt><dd><dl><dt><span class="section"><a href="#s1-config-tasks-CA">5.1. Configuration Tasks</a></span></dt><dt><span class="section"><a href="#s1-start-clustertool-CA">5.2. Starting the <span class="application"><strong>Cluster Configuration Tool</strong></span></a></span></dt><dt><span class="section"><a href="#s1-naming-cluster-CA">5.3. Configuring Cluster Properties</a></span></dt><dt><span class="section"><a href="#s1-config-fence-devices-CA">5.4. Configuring Fence Devices</a></span></dt><dt><span class="section"><a href="#s1-add-delete-member-CA">5.5. Adding and Deleting Members</a></span></dt><dd><dl><dt><span class="section"><a href="#s2-add-member-new-CA">5.5.1. Adding a Member to a Cluster</a></span></dt><dt><span class="section"><a href="#s2-add-member-running-CA">5.5.2. Adding a Member to a Running Cluster</a></span></dt><dt><span class="section"><a href="#s2-delete-member-CA">5.5.3. Deleting a Member from a Cluster</a></span></dt></dl></dd><dt><span class="section"><a href="#s1-config-failover-domain-CA">5.6. Configuring a Failover Domain</a></span></dt><dd><dl><dt><span class="section"><a href="#s2-config-add-failoverdm-CA">5.6.1. Adding a Failover Domain</a></span></dt><dt><span class="section"><a href="#s2-config-remove-failoverdm-CA">5.6.2. Removing a Failover Domain</a></span></dt><dt><span class="section"><a href="#s2-config-remove-member-failoverdm-CA">5.6.3. Removing a Member from a Failover Domain</a></span></dt></dl></dd><dt><span class="section"><a href="#s1-config-service-dev-CA">5.7. Adding Cluster Resources</a></span></dt><dt><span class="section"><a href="#s1-add-service-CA">5.8. Adding a Cluster Service to the Cluster</a></span></dt><dd><dl><dt><span class="section"><a href="#s2-add-service-CA-relocate">5.8.1. Relocating a Service in a Cluster</a></span></dt></dl></dd><dt><span class="section"><a href="#s1-propagate-config-CA">5.9. Propagating The Configuration File: New Cluster</a></span></dt><dt><span class="section"><a href="#s1-starting-cluster-CA">5.10. Starting the Cluster Software</a></span></dt></dl></dd><dt><span class="chapter"><a href="#ch-mgmt-scc-CA">6. Managing Red Hat Cluster With <code class="command">system-config-cluster</code></a></span></dt><dd><dl><dt><span class="section"><a href="#s1-admin-start-CA">6.1. Starting and Stopping the Cluster Software</a></span></dt><dt><span class="section"><a href="#s1-admin-service-CA">6.2. Managing High-Availability Services</a></span></dt><dt><span class="section"><a href="#s1-admin-modify-CA">6.3. Modifying the Cluster Configuration</a></span></dt><dt><span class="section"><a href="#s1-admin-backup-CA">6.4. Backing Up and Restoring the Cluster Database</a></span></dt><dt><span class="section"><a href="#s1-admin-disable-resource-CA">6.5. Disabling Resources of a Clustered Service for Maintenance</a></span></dt><dt><span class="section"><a href="#s1-admin-disable-CA">6.6. Disabling the Cluster Software</a></span></dt><dt><span class="section"><a href="#s1-admin-problems-CA">6.7. Diagnosing and Correcting Problems in a Cluster</a></span></dt></dl></dd><dt><span class="appendix"><a href="#ap-httpd-service-CA">A. 
Example of Setting Up Apache HTTP Server</a></span></dt><dd><dl><dt><span class="section"><a href="#s1-apache-setup-CA">A.1. Apache HTTP Server Setup Overview</a></span></dt><dt><span class="section"><a href="#s1-apache-sharedfs-CA">A.2. Configuring Shared Storage</a></span></dt><dt><span class="section"><a href="#s1-apache-inshttpd-CA">A.3. Installing and Configuring the Apache HTTP Server</a></span></dt></dl></dd><dt><span class="appendix"><a href="#ap-fence-device-param-CA">B. Fence Device Parameters</a></span></dt><dt><span class="appendix"><a href="#ap-ha-resource-params-CA">C. HA Resource Parameters</a></span></dt><dt><span class="appendix"><a href="#ap-ha-resource-behavior-CA">D. HA Resource Behavior</a></span></dt><dd><dl><dt><span class="section"><a href="#s1-clust-rsc-desc-CA">D.1. Parent, Child, and Sibling Relationships Among Resources</a></span></dt><dt><span class="section"><a href="#s1-clust-rsc-sibling-starting-order-CA">D.2. Sibling Start Ordering and Resource Child Ordering</a></span></dt><dd><dl><dt><span class="section"><a href="#s2-clust-rsc-typed-resources-CA">D.2.1. Typed Child Resource Start and Stop Ordering</a></span></dt><dt><span class="section"><a href="#s2-clust-rsc-non-typed-resources-CA">D.2.2. Non-typed Child Resource Start and Stop Ordering</a></span></dt></dl></dd><dt><span class="section"><a href="#s1-clust-rsc-inherit-resc-reuse-CA">D.3. Inheritance, the &lt;resources&gt; Block, and Reusing Resources</a></span></dt><dt><span class="section"><a href="#s1-clust-rsc-failure-rec-CA">D.4. Failure Recovery and Independent Subtrees</a></span></dt><dt><span class="section"><a href="#s1-clust-rsc-testing-config-CA">D.5. Debugging and Testing Services and Resource Ordering</a></span></dt></dl></dd><dt><span class="appendix"><a href="#ap-status-check-CA">E. Cluster Service Resource Check and Failover Timeout</a></span></dt><dd><dl><dt><span class="section"><a href="#resource-status-check-CA">E.1. Modifying the Resource Status Check Interval</a></span></dt><dt><span class="section"><a href="#resource-timeout-CA">E.2. Enforcing Resource Timeouts</a></span></dt><dt><span class="section"><a href="#concensus-timeout-CA">E.3. Changing Consensus Timeout</a></span></dt></dl></dd><dt><span class="appendix"><a href="#ap-upgrade-rhel4-to-rhel5-CA">F. Upgrading A Red Hat Cluster from RHEL 4 to RHEL 5</a></span></dt><dt><span class="appendix"><a href="#appe-Publican-Revision_History">G. Revision History</a></span></dt><dt><span class="index"><a href="#id625066">Index</a></span></dt></dl></div><div xml:lang="fr-FR" class="preface" id="ch-intro-CA" lang="fr-FR"><div class="titlepage"><div><div><h1 class="title">Introduction</h1></div></div></div><a id="id831309" class="indexterm"></a><div class="para">
		This document provides information about installing, configuring and managing Red Hat Cluster components. Red Hat Cluster components are part of Red Hat Cluster Suite and allow you to connect a group of computers (called <em class="firstterm">nodes</em> or <em class="firstterm">members</em>) to work together as a cluster. This document does not include information about installing, configuring, and managing Linux Virtual Server (LVS) software. Information about that is in a separate document.
	</div><div class="para">
		The audience of this document should have advanced working knowledge of Red Hat Enterprise Linux and understand the concepts of clusters, storage, and server computing.
	</div><div class="para">
		This document is organized as follows:
	</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
				<a class="xref" href="#ch-overview-CA">Chapitre 1, <em>Red Hat Cluster Configuration and Management Overview</em></a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#ch-before-config-CA">Chapitre 2, <em>Before Configuring a Red Hat Cluster</em></a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#ch-config-conga-CA">Chapitre 3, <em>Configuring Red Hat Cluster With <span class="application"><strong>Conga</strong></span></em></a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#ch-mgmt-conga-CA">Chapitre 4, <em>Managing Red Hat Cluster With <span class="application"><strong>Conga</strong></span></em></a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#ch-config-scc-CA">Chapitre 5, <em>Configuring Red Hat Cluster With <code class="command">system-config-cluster</code></em></a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#ch-mgmt-scc-CA">Chapitre 6, <em>Managing Red Hat Cluster With <code class="command">system-config-cluster</code></em></a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#ap-httpd-service-CA">Annexe A, <em>Example of Setting Up Apache HTTP Server</em></a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#ap-fence-device-param-CA">Annexe B, <em>Fence Device Parameters</em></a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#ap-ha-resource-params-CA">Annexe C, <em>HA Resource Parameters</em></a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#ap-ha-resource-behavior-CA">Annexe D, <em>HA Resource Behavior</em></a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#ap-upgrade-rhel4-to-rhel5-CA">Annexe F, <em>Upgrading A Red Hat Cluster from RHEL 4 to RHEL 5</em></a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#appe-Publican-Revision_History">Annexe G, <em>Revision History</em></a>
			</div></li></ul></div><div class="para">
		For more information about Red Hat Enterprise Linux 5, refer to the following resources:
	</div><a id="id805251" class="indexterm"></a><div class="itemizedlist"><ul><li class="listitem"><div class="para">
				<em class="citetitle">Red Hat Enterprise Linux Installation Guide</em> — Provides information regarding installation of Red Hat Enterprise Linux 5.
			</div></li><li class="listitem"><div class="para">
				<em class="citetitle">Red Hat Enterprise Linux Deployment Guide</em> — Provides information regarding the deployment, configuration and administration of Red Hat Enterprise Linux 5.
			</div></li></ul></div><div class="para">
		For more information about Red Hat Cluster Suite for Red Hat Enterprise Linux 5, refer to the following resources:
	</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
				<em class="citetitle">Red Hat Cluster Suite Overview</em> — Provides a high level overview of the Red Hat Cluster Suite.
			</div></li><li class="listitem"><div class="para">
				<em class="citetitle">Logical Volume Manager Administration</em> — Provides a description of the Logical Volume Manager (LVM), including information on running LVM in a clustered environment.
			</div></li><li class="listitem"><div class="para">
				<em class="citetitle">Global File System: Configuration and Administration</em> — Provides information about installing, configuring, and maintaining Red Hat GFS (Red Hat Global File System).
			</div></li><li class="listitem"><div class="para">
				<em class="citetitle">Global File System 2: Configuration and Administration</em> — Provides information about installing, configuring, and maintaining Red Hat GFS2 (Red Hat Global File System 2).
			</div></li><li class="listitem"><div class="para">
				<em class="citetitle">Using Device-Mapper Multipath</em> — Provides information about using the Device-Mapper Multipath feature of Red Hat Enterprise Linux 5.
			</div></li><li class="listitem"><div class="para">
				<em class="citetitle">Using GNBD with Global File System</em> — Provides an overview on using Global Network Block Device (GNBD) with Red Hat GFS.
			</div></li><li class="listitem"><div class="para">
				<em class="citetitle">Linux Virtual Server Administration</em> — Provides information on configuring high-performance systems and services with the Linux Virtual Server (LVS).
			</div></li><li class="listitem"><div class="para">
				<em class="citetitle">Red Hat Cluster Suite Release Notes</em> — Provides information about the current release of Red Hat Cluster Suite.
			</div></li></ul></div><div class="para">
		Red Hat Cluster Suite documentation and other Red Hat documents are available in HTML, PDF, and RPM versions on the Red Hat Enterprise Linux Documentation CD and <a href="http://docs.redhat.com/docs/en-US/index.html">http://docs.redhat.com/docs/en-US/index.html</a>.
	</div><div xml:lang="fr-FR" class="section" lang="fr-FR"><div class="titlepage"><div><div><h2 class="title" id="id894860">1. Conventions d'écriture</h2></div></div></div><div class="para">
		Ce manuel utilise plusieurs conventions pour souligner l'importance de certains mots ou expressions, mais aussi en vue d'attirer l'attention sur certains passages d'informations précis.
	</div><div class="para">
		Pour les éditions sur support papier et numérique (PDF), ce manuel utilise des caractères issus de <a href="https://fedorahosted.org/liberation-fonts/">Liberation Fonts</a>. La police de caractères Liberation Fonts est également utilisée pour les éditions HTML si elle est installée sur votre système. Sinon, des polices de caractères alternatives équivalentes sont utilisées. Notez que Red Hat Enterprise Linux 5 et versions supérieures contiennent la police Liberation Fonts par défaut.
	</div><div class="section"><div class="titlepage"><div><div><h3 class="title" id="id835955">1.1. Conventions typographiques</h3></div></div></div><div class="para">
			Quatre conventions typographiques sont utilisées pour attirer l'attention sur certains mots et expressions. Ces conventions et les circonstances auxquelles elles s'appliquent sont les suivantes.
		</div><div class="para">
			<code class="literal">Caractères gras à espacement fixe</code>
		</div><div class="para">
			Utilisée pour surligner certaines entrées du système, comme les commandes de console, les noms de fichiers et les chemins d'accès. Également utilisée pour surligner les touches et les combinaisons de touches. Par exemple :
		</div><div class="blockquote"><blockquote class="blockquote"><div class="para">
				Pour consulter le contenu du fichier <code class="filename">mon_nouvel_ouvrage_littéraire</code> qui se situe dans votre dossier courant, saisissez la commande <code class="command">cat mon_nouvel_ouvrage_littéraire</code> à l'invite du terminal et appuyez sur <span class="keycap"><strong>Entrée</strong></span> pour exécuter la commande.
			</div></blockquote></div><div class="para">
			L'exemple ci-dessus contient un nom de fichier, une commande-console et un nom de touche, tous présentés sous forme de caractères gras à espacement fixe et tous bien distincts grâce au contexte.
		</div><div class="para">
			Les combinaisons de touches sont différenciées des noms de touches par le caractère « plus » (« + ») qui fait partie de chaque combinaison de touches. Ainsi :
		</div><div class="blockquote"><blockquote class="blockquote"><div class="para">
				Appuyez sur <span class="keycap"><strong>Entrée</strong></span> pour exécuter la commande.
			</div><div class="para">
				Appuyez sur <span class="keycap"><strong>Ctrl</strong></span>+<span class="keycap"><strong>Alt</strong></span>+<span class="keycap"><strong>F2</strong></span> pour passer au premier terminal virtuel. Appuyez sur <span class="keycap"><strong>Ctrl</strong></span>+<span class="keycap"><strong>Alt</strong></span>+<span class="keycap"><strong>F1</strong></span> pour retourner à votre session X-Windows.
			</div></blockquote></div><div class="para">
			Le premier paragraphe surligne la touche précise sur laquelle il faut appuyer. Le second surligne deux combinaisons de touches (chacune étant un ensemble de trois touches à presser simultanément).
		</div><div class="para">
			Si le code source est mentionné, les noms de classes, les méthodes, les fonctions, les noms de variables et les valeurs de retour citées dans un paragraphe seront présentées comme ci-dessus, en <code class="literal">caractères gras à espacement fixe</code>. Par exemple :
		</div><div class="blockquote"><blockquote class="blockquote"><div class="para">
				Les classes de fichiers comprennent le nom de classe <code class="classname">filesystem</code> pour les noms de fichier, <code class="classname">file</code> pour les fichiers et <code class="classname">dir</code> pour les dossiers. Chaque classe correspond à un ensemble de permissions associées.
			</div></blockquote></div><div class="para">
			<span class="application"><strong>Caractères gras proportionnels</strong></span>
		</div><div class="para">
			Cette convention marque le surlignage des mots ou phrases que l'on rencontre sur un système, comprenant des noms d'application, du texte de boîtes de dialogue, des boutons étiquetés, des cases à cocher et des boutons d'options, mais aussi des intitulés de menus et de sous-menus. Par exemple :
		</div><div class="blockquote"><blockquote class="blockquote"><div class="para">
				Sélectionnez <span class="guimenu"><strong>Système</strong></span> → <span class="guisubmenu"><strong>Préférences</strong></span> → <span class="guimenuitem"><strong>Souris</strong></span> à partir de la barre du menu principal pour lancer les <span class="application"><strong>Préférences de la souris</strong></span>. À partir de l'onglet <span class="guilabel"><strong>Boutons</strong></span>, cliquez sur la case à cocher <span class="guilabel"><strong>Pour gaucher</strong></span> puis cliquez sur <span class="guibutton"><strong>Fermer</strong></span> pour faire passer le bouton principal de la souris de la gauche vers la droite (ce qui permet l'utilisation de la souris par la main gauche).
			</div><div class="para">
				Pour insérer un caractère spécial dans un fichier <span class="application"><strong>gedit</strong></span>, choisissez <span class="guimenu"><strong>Applications</strong></span> → <span class="guisubmenu"><strong>Accessoires</strong></span> → <span class="guimenuitem"><strong>Table de caractères</strong></span> à partir de la barre du menu principal. Ensuite, sélectionnez <span class="guimenu"><strong>Rechercher</strong></span> → <span class="guimenuitem"><strong> Rechercher…</strong></span> à partir de la barre de menu de <span class="application"><strong>Table de caractères</strong></span>, saisissez le nom du caractère dans le champ <span class="guilabel"><strong>Rechercher</strong></span> puis cliquez sur <span class="guibutton"><strong>Suivant</strong></span>. Le caractère que vous recherchez sera surligné dans la <span class="guilabel"><strong>Table de caractères</strong></span>. Double-cliquez sur le caractère surligné pour l'insérer dans le champ <span class="guilabel"><strong>Texte à copier</strong></span>, puis cliquez sur le bouton <span class="guibutton"><strong>Copier</strong></span>. Maintenant, revenez à votre document et sélectionnez <span class="guimenu"><strong>Édition</strong></span> → <span class="guimenuitem"><strong>Coller</strong></span> à partir de la barre de menu de <span class="application"><strong>gedit</strong></span>.
			</div></blockquote></div><div class="para">
			Le texte ci-dessus contient des noms d'applications, des noms de menus et d'autres éléments s'appliquant à l'ensemble du système, des boutons et textes que l'on trouve dans une interface graphique. Ils sont tous présentés sous la forme gras proportionnel et identifiables en fonction du contexte.
		</div><div class="para">
			<code class="command"><em class="replaceable"><code>Italique gras à espacement fixe </code></em></code> ou <span class="application"><strong><em class="replaceable"><code>Italique gras proportionnel</code></em></strong></span>
		</div><div class="para">
			Qu'ils soient en caractères gras à espacement fixe ou à caractères gras proportionnels, l'ajout de l'italique indique la présence de texte remplaçable ou variable. Les caractères en italique indiquent la présence de texte que vous ne saisissez pas littéralement ou de texte affiché qui change en fonction des circonstances. Par exemple :
		</div><div class="blockquote"><blockquote class="blockquote"><div class="para">
				Pour se connecter à une machine distante en utilisant ssh, saisissez <code class="command">ssh <em class="replaceable"><code>nom d'utilisateur</code></em>@<em class="replaceable"><code>domain.name (nom.domaine)</code></em></code> après l'invite de commande de la console. Si la machine distante est <code class="filename">example.com</code> et que votre nom d'utilisateur pour cette machine est john, saisissez <code class="command">ssh john@example.com</code>.
			</div><div class="para">
				La commande <code class="command">mount -o remount <em class="replaceable"><code>système de fichiers</code></em></code> remonte le système de fichiers nommé. Ainsi, pour remonter le système de fichiers <code class="filename">/home</code>, la commande est <code class="command">mount -o remount /home</code>.
			</div><div class="para">
				Pour connaître la version d'un paquet actuellement installé, utilisez la commande <code class="command">rpm -q <em class="replaceable"><code>paquet</code></em></code>. Elle retournera un résultat de la forme : <code class="command"><em class="replaceable"><code>version-de-paquet</code></em></code>.
			</div></blockquote></div><div class="para">
			Notez les mots en caractères italiques et gras ci-dessus : nom d'utilisateur, domain.name, système de fichiers, paquet, version et mise à jour. Chaque mot est un paramètre substituable de la ligne de commande, soit pour le texte que vous saisissez en exécutant une commande, soit pour le texte affiché par le système.
		</div><div class="para">
			Outre leur utilisation habituelle pour présenter le titre d'un ouvrage, les caractères italiques indiquent la première occurrence d'un terme nouveau et important. Ainsi :
		</div><div class="blockquote"><blockquote class="blockquote"><div class="para">
				Publican est un système de publication <em class="firstterm">DocBook</em>.
			</div></blockquote></div></div><div class="section"><div class="titlepage"><div><div><h3 class="title" id="id859669">1.2. Conventions pour citations mises en avant</h3></div></div></div><div class="para">
			Les sorties de terminaux et les citations de code source sont mises en avant par rapport au texte avoisinant.
		</div><div class="para">
			Les sorties envoyées vers un terminal sont en caractères <code class="computeroutput">Romains à espacement fixe</code> et présentées ainsi :
		</div><pre class="screen">books        Desktop   documentation  drafts  mss    photos   stuff  svn
books_tests  Desktop1  downloads      images  notes  scripts  svgs</pre><div class="para">
			Les citations de code source sont également présentées en <code class="computeroutput">romains à espacement fixe</code> mais sont présentées et surlignées comme suit :
		</div><pre class="programlisting">package org.<span class="perl_Function">jboss</span>.<span class="perl_Function">book</span>.<span class="perl_Function">jca</span>.<span class="perl_Function">ex1</span>;

<span class="perl_Keyword">import</span> javax.naming.InitialContext;

<span class="perl_Keyword">public</span> <span class="perl_Keyword">class</span> ExClient
{
   <span class="perl_Keyword">public</span> <span class="perl_DataType">static</span> <span class="perl_DataType">void</span> <span class="perl_Function">main</span>(String args[]) 
       <span class="perl_Keyword">throws</span> Exception
   {
      InitialContext iniCtx = <span class="perl_Keyword">new</span> InitialContext();
      Object         ref    = iniCtx.<span class="perl_Function">lookup</span>(<span class="perl_String">"EchoBean"</span>);
      EchoHome       home   = (EchoHome) ref;
      Echo           echo   = home.<span class="perl_Function">create</span>();

      System.<span class="perl_Function">out</span>.<span class="perl_Function">println</span>(<span class="perl_String">"Created Echo"</span>);

      System.<span class="perl_Function">out</span>.<span class="perl_Function">println</span>(<span class="perl_String">"Echo.echo('Hello') = "</span> + echo.<span class="perl_Function">echo</span>(<span class="perl_String">"Hello"</span>));
   }
}</pre></div><div class="section"><div class="titlepage"><div><div><h3 class="title" id="id861394">1.3. Notes et avertissements</h3></div></div></div><div class="para">
			Enfin, nous utilisons trois styles visuels pour attirer l'attention sur des informations qui auraient pu être normalement négligées :
		</div><div class="note"><div class="admonition_header"><h2>Remarque</h2></div><div class="admonition"><div class="para">
				Une remarque est une forme de conseil, un raccourci ou une approche alternative par rapport à une tâche à entreprendre. L'ignorer ne devrait pas provoquer de conséquences négatives, mais vous pourriez passer à côté d'une astuce qui vous aurait simplifié la vie.
			</div></div></div><div class="important"><div class="admonition_header"><h2>Important</h2></div><div class="admonition"><div class="para">
				Les blocs d'informations importantes détaillent des éléments qui pourraient être facilement négligés : des modifications de configurations qui s'appliquent uniquement à la session actuelle ou des services qui ont besoin d'être redémarrés avant toute mise à jour. Si vous ignorez une case étiquetée « Important », vous ne perdrez aucune donnée mais cela pourrait être source de frustration et d'irritation.
			</div></div></div><div class="warning"><div class="admonition_header"><h2>Avertissement</h2></div><div class="admonition"><div class="para">
				Un avertissement ne devrait pas être ignoré. Ignorer des avertissements risque fortement d'entraîner des pertes de données.
			</div></div></div></div></div><div class="section" id="s1-intro-feedback-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-intro-feedback-CA">2. Feedback</h2></div></div></div><a id="id850691" class="indexterm"></a><a id="id803490" class="indexterm"></a><div class="para">
			If you spot a typo, or if you have thought of a way to make this manual better, we would love to hear from you. Please submit a report in Bugzilla (<a href="http://bugzilla.redhat.com/bugzilla/">http://bugzilla.redhat.com/bugzilla/</a>) against the component <span class="guimenuitem"><strong>Documentation-cluster</strong></span>.
		</div><div class="para">
			Be sure to mention the manual identifier:
		</div><pre class="screen">
Cluster_Administration(EN)-5 (2012-1-25T15:52)
</pre><div class="para">
			By mentioning this manual's identifier, we know exactly which version of the guide you have.
		</div><div class="para">
			If you have a suggestion for improving the documentation, try to be as specific as possible. If you have found an error, please include the section number and some of the surrounding text so we can find it easily.
		</div></div></div><div xml:lang="fr-FR" class="chapter" id="ch-overview-CA" lang="fr-FR"><div class="titlepage"><div><div><h2 class="title">Chapitre 1. Red Hat Cluster Configuration and Management Overview</h2></div></div></div><div class="toc"><dl><dt><span class="section"><a href="#s1-clust-config-basics-CA">1.1. Configuration Basics</a></span></dt><dd><dl><dt><span class="section"><a href="#s2-hw-setup-CA">1.1.1. Setting Up Hardware</a></span></dt><dt><span class="section"><a href="#s2-install-clust-sw-CA">1.1.2. Installing Red Hat Cluster software</a></span></dt><dt><span class="section"><a href="#s2-config-cluster-CA">1.1.3. Configuring Red Hat Cluster Software</a></span></dt></dl></dd><dt><span class="section"><a href="#s1-conga-overview-CA">1.2. Conga</a></span></dt><dt><span class="section"><a href="#s1-clumgmttools-overview-CA">1.3. <code class="command">system-config-cluster</code> Cluster Administration GUI</a></span></dt><dd><dl><dt><span class="section"><a href="#s2-cluconfig-tool-CA">1.3.1. <span class="application"><strong>Cluster Configuration Tool</strong></span></a></span></dt><dt><span class="section"><a href="#s2-admin-overview-CA">1.3.2. <span class="application"><strong>Cluster Status Tool</strong></span></a></span></dt></dl></dd><dt><span class="section"><a href="#s1-cmdlinetools-overview-CA">1.4. Command Line Administration Tools</a></span></dt></dl></div><div class="para">
		Red Hat Cluster allows you to connect a group of computers (called <em class="firstterm">nodes</em> or <em class="firstterm">members</em>) to work together as a cluster. It provides a wide variety of ways to configure hardware and software to suit your clustering needs (for example, a cluster for sharing files on a GFS file system or a cluster with high-availability service failover). This book provides information about how to use configuration tools to configure your cluster and provides considerations to take into account before deploying a Red Hat Cluster. To ensure that your deployment of Red Hat Cluster fully meets your needs and can be supported, consult with an authorized Red Hat representative before you deploy it.
	</div><div class="section" id="s1-clust-config-basics-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-clust-config-basics-CA">1.1. Configuration Basics</h2></div></div></div><div class="para">
			To set up a cluster, you must connect the nodes to certain cluster hardware and configure the nodes into the cluster environment. This chapter provides an overview of cluster configuration and management, and tools available for configuring and managing a Red Hat Cluster.
		</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
				For information on best practices for deploying and upgrading Red Hat Enterprise Linux 5 Advanced Platform (Clustering and GFS/GFS2), refer to the article "Red Hat Enterprise Linux Cluster, High Availability, and GFS Deployment Best Practices" on the Red Hat Customer Portal at <a href="https://access.redhat.com/kb/docs/DOC-40821">https://access.redhat.com/kb/docs/DOC-40821</a>.
			</div></div></div><div class="para">
			Configuring and managing a Red Hat Cluster consists of the following basic steps:
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					Setting up hardware. Refer to <a class="xref" href="#s2-hw-setup-CA">Section 1.1.1, « Setting Up Hardware »</a>.
				</div></li><li class="listitem"><div class="para">
					Installing Red Hat Cluster software. Refer to <a class="xref" href="#s2-install-clust-sw-CA">Section 1.1.2, « Installing Red Hat Cluster software »</a>.
				</div></li><li class="listitem"><div class="para">
					Configuring Red Hat Cluster Software. Refer to <a class="xref" href="#s2-config-cluster-CA">Section 1.1.3, « Configuring Red Hat Cluster Software »</a>.
				</div></li></ol></div><div class="section" id="s2-hw-setup-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-hw-setup-CA">1.1.1. Setting Up Hardware</h3></div></div></div><div class="para">
				Setting up hardware consists of connecting cluster nodes to other hardware required to run a Red Hat Cluster. The amount and type of hardware vary according to the purpose and availability requirements of the cluster. Typically, an enterprise-level cluster requires the following type of hardware (refer to <a class="xref" href="#fig-clust-hw-ov-CA">Figure 1.1, « Red Hat Cluster Hardware Overview »</a>). For considerations about hardware and other cluster configuration concerns, refer to "Before Configuring a Red Hat Cluster" or check with an authorized Red Hat representative.
			</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
						Cluster nodes — Computers that are capable of running Red Hat Enterprise Linux 5 software, with at least 1GB of RAM. The maximum number of nodes supported in a Red Hat Cluster is 16.
					</div></li><li class="listitem"><div class="para">
						Ethernet switch or hub for public network — This is required for client access to the cluster.
					</div></li><li class="listitem"><div class="para">
						Ethernet switch or hub for private network — This is required for communication among the cluster nodes and other cluster hardware such as network power switches and Fibre Channel switches.
					</div></li><li class="listitem"><div class="para">
						Network power switch — A network power switch is recommended to perform fencing in an enterprise-level cluster.
					</div></li><li class="listitem"><div class="para">
						Fibre Channel switch — A Fibre Channel switch provides access to Fibre Channel storage. Other options are available for storage according to the type of storage interface; for example, iSCSI or GNBD. A Fibre Channel switch can be configured to perform fencing.
					</div></li><li class="listitem"><div class="para">
						Storage — Some type of storage is required for a cluster. The type required depends on the purpose of the cluster.
					</div></li></ul></div><div class="figure" id="fig-clust-hw-ov-CA"><div class="figure-contents"><div class="mediaobject"><img src="./images/9159.png" width="444" alt="Red Hat Cluster Hardware Overview" /><div class="longdesc"><div class="para">
							cluster hardware
						</div></div></div></div><h6>Figure 1.1. Red Hat Cluster Hardware Overview</h6></div><br class="figure-break" /></div><div class="section" id="s2-install-clust-sw-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-install-clust-sw-CA">1.1.2. Installing Red Hat Cluster software</h3></div></div></div><div class="para">
				To install Red Hat Cluster software, you must have entitlements for the software. If you are using the <span class="application"><strong>Conga</strong></span> configuration GUI, you can let it install the cluster software. If you are using other tools to configure the cluster, secure and install the software as you would with Red Hat Enterprise Linux software.
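			</div><div class="para">
				If you are installing the software manually with <code class="command">yum</code>, the command below is a minimal sketch based on the package names used elsewhere in this document; the exact package set depends on your entitlements and on whether you use CLVM, GFS, or GFS2:
			</div><pre class="screen">
yum install -y cman rgmanager system-config-cluster lvm2-cluster gfs2-utils
</pre><div class="para">
				Run the command as root on each node that will be a cluster member.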
			</div><div class="section" id="s3-install-clust-sw-CA-upgr"><div class="titlepage"><div><div><h4 class="title" id="s3-install-clust-sw-CA-upgr">1.1.2.1. Upgrading the Cluster Software</h4></div></div></div><div class="para">
					It is possible to upgrade the cluster software on a given major release of Red Hat Enterprise Linux without taking the cluster out of production. Doing so requires disabling the cluster software on one host at a time, upgrading the software, and restarting the cluster software on that host.
				</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
							Shut down all cluster services on a single cluster node. For instructions on stopping cluster software on a node, refer to <a class="xref" href="#s1-admin-start-CA">Section 6.1, « Starting and Stopping the Cluster Software »</a>. It may be desirable to manually relocate cluster-managed services and virtual machines off of the host prior to stopping rgmanager.
						</div></li><li class="listitem"><div class="para">
							Execute the <code class="command">yum update</code> command to install the new RPMs. For example:
						</div><pre class="screen">
yum update -y openais cman rgmanager lvm2-cluster gfs2-utils
</pre></li><li class="listitem"><div class="para">
							Reboot the cluster node or restart the cluster services manually. For instructions on starting cluster software on a node, refer to <a class="xref" href="#s1-admin-start-CA">Section 6.1, « Starting and Stopping the Cluster Software »</a>.
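						</div><div class="para">
							For example, assuming the standard Red Hat Enterprise Linux 5 init scripts for the cluster services, the software can be restarted on the node in the following order (reverse the order, stopping <code class="command">rgmanager</code> first, when shutting the services down):
						</div><pre class="screen">
service cman start
service clvmd start
service gfs start
service rgmanager start
</pre><div class="para">
							Adjust the list to the services actually in use on the node; for instance, omit <code class="command">clvmd</code> and <code class="command">gfs</code> if the node does not use clustered LVM or GFS.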
						</div></li></ol></div></div></div><div class="section" id="s2-config-cluster-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-config-cluster-CA">1.1.3. Configuring Red Hat Cluster Software</h3></div></div></div><a id="id569644" class="indexterm"></a><div class="para">
				Configuring Red Hat Cluster software consists of using configuration tools to specify the relationship among the cluster components. <a class="xref" href="#fig-software-flow-conga-CA">Figure 1.2, « Cluster Configuration Structure »</a> shows an example of the hierarchical relationship among cluster nodes, high-availability services, and resources. The cluster nodes are connected to one or more fencing devices. Nodes can be grouped into a failover domain for a cluster service. The services comprise resources such as NFS exports, IP addresses, and shared GFS partitions.
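			</div><div class="para">
				As an illustration of that hierarchy, the abbreviated <code class="filename">/etc/cluster/cluster.conf</code> fragment below sketches a two-node cluster with one fence device, a failover domain, and a service holding an IP address resource. The node names, device names, and attribute values are hypothetical, and the fragment is not a complete, working configuration:
			</div><pre class="programlisting">
&lt;cluster name="example_cluster" config_version="1"&gt;
  &lt;clusternodes&gt;
    &lt;clusternode name="node1.example.com" nodeid="1"&gt;
      &lt;fence&gt;
        &lt;method name="1"&gt;
          &lt;device name="apc_switch" port="1"/&gt;
        &lt;/method&gt;
      &lt;/fence&gt;
    &lt;/clusternode&gt;
    &lt;clusternode name="node2.example.com" nodeid="2"/&gt;
  &lt;/clusternodes&gt;
  &lt;fencedevices&gt;
    &lt;fencedevice name="apc_switch" agent="fence_apc" ipaddr="10.0.0.1"/&gt;
  &lt;/fencedevices&gt;
  &lt;rm&gt;
    &lt;failoverdomains&gt;
      &lt;failoverdomain name="example_domain"/&gt;
    &lt;/failoverdomains&gt;
    &lt;service name="example_service" domain="example_domain"&gt;
      &lt;ip address="10.0.0.100"/&gt;
    &lt;/service&gt;
  &lt;/rm&gt;
&lt;/cluster&gt;
</pre><div class="para">
				The configuration tools described below generate and maintain this file for you; the fragment is shown only to make the node, fence device, failover domain, and resource relationships concrete.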
			</div><div class="figure" id="fig-software-flow-conga-CA"><div class="figure-contents"><div class="mediaobject"><img src="./images/clust-config-struct.png" width="444" alt="Cluster Configuration Structure" /><div class="longdesc"><div class="para">
							cluster config flowchart
						</div></div></div></div><h6>Figure 1.2. Cluster Configuration Structure</h6></div><br class="figure-break" /><div class="para">
				The following cluster configuration tools are available with Red Hat Cluster:
			</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
						<span class="application"><strong>Conga</strong></span> — This is a comprehensive user interface for installing, configuring, and managing Red Hat clusters, computers, and storage attached to clusters and computers.
					</div></li><li class="listitem"><div class="para">
						<code class="command">system-config-cluster</code> — This is a user interface for configuring and managing a Red Hat cluster.
					</div></li><li class="listitem"><div class="para">
						Command line tools — This is a set of command line tools for configuring and managing a Red Hat cluster.
					</div></li></ul></div><div class="para">
				A brief overview of each configuration tool is provided in the following sections:
			</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
						<a class="xref" href="#s1-conga-overview-CA">Section 1.2, « Conga »</a>
					</div></li><li class="listitem"><div class="para">
						<a class="xref" href="#s1-clumgmttools-overview-CA">Section 1.3, « <code class="command">system-config-cluster</code> Cluster Administration GUI »</a>
					</div></li><li class="listitem"><div class="para">
						<a class="xref" href="#s1-cmdlinetools-overview-CA">Section 1.4, « Command Line Administration Tools »</a>
					</div></li></ul></div><div class="para">
				In addition, information about using <span class="application"><strong>Conga</strong></span> and <code class="command">system-config-cluster</code> is provided in subsequent chapters of this document. Information about the command line tools is available in the man pages for the tools.
			</div></div></div><div class="section" id="s1-conga-overview-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-conga-overview-CA">1.2. Conga</h2></div></div></div><a id="id620965" class="indexterm"></a><a id="id620977" class="indexterm"></a><div class="para">
			<span class="application"><strong>Conga</strong></span> is an integrated set of software components that provides centralized configuration and management of Red Hat clusters and storage. <span class="application"><strong>Conga</strong></span> provides the following major features:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					One Web interface for managing cluster and storage
				</div></li><li class="listitem"><div class="para">
					Automated Deployment of Cluster Data and Supporting Packages
				</div></li><li class="listitem"><div class="para">
					Easy Integration with Existing Clusters
				</div></li><li class="listitem"><div class="para">
					No Need to Re-Authenticate
				</div></li><li class="listitem"><div class="para">
					Integration of Cluster Status and Logs
				</div></li><li class="listitem"><div class="para">
					Fine-Grained Control over User Permissions
				</div></li></ul></div><div class="para">
			The primary components in <span class="application"><strong>Conga</strong></span> are <span class="application"><strong>luci</strong></span> and <span class="application"><strong>ricci</strong></span>, which are separately installable. <span class="application"><strong>luci</strong></span> is a server that runs on one computer and communicates with multiple clusters and computers via <span class="application"><strong>ricci</strong></span>. <span class="application"><strong>ricci</strong></span> is an agent that runs on each computer (either a cluster member or a standalone computer) managed by <span class="application"><strong>Conga</strong></span>.
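		</div><div class="para">
			As a minimal sketch of bringing those components up (refer to <a class="xref" href="#s1-start-luci-ricci-conga-CA">Section 3.2, « Starting <span class="application"><strong>luci</strong></span> and <span class="application"><strong>ricci</strong></span> »</a> for the full procedure), you would typically initialize and start <span class="application"><strong>luci</strong></span> on the management computer and start <span class="application"><strong>ricci</strong></span> on each managed computer:
		</div><pre class="screen">
# On the luci server
luci_admin init
service luci start

# On each cluster node or managed computer
service ricci start
</pre><div class="para">
			Once both services are running, <span class="application"><strong>luci</strong></span> is reached with a Web browser at the URL it reports when it starts.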
		</div><div class="para">
			<span class="application"><strong>luci</strong></span> is accessible through a Web browser and provides three major functions that are accessible through the following tabs:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					<span class="guimenu"><strong>homebase</strong></span> — Provides tools for adding and deleting computers, adding and deleting users, and configuring user privileges. Only a system administrator is allowed to access this tab.
				</div></li><li class="listitem"><div class="para">
					<span class="guimenu"><strong>cluster</strong></span> — Provides tools for creating and configuring clusters. Each instance of <span class="application"><strong>luci</strong></span> lists clusters that have been set up with that <span class="application"><strong>luci</strong></span>. A system administrator can administer all clusters listed on this tab. Other users can administer only clusters that the user has permission to manage (granted by an administrator).
				</div></li><li class="listitem"><div class="para">
					<span class="guimenu"><strong>storage</strong></span> — Provides tools for remote administration of storage. With the tools on this tab, you can manage storage on computers whether they belong to a cluster or not.
				</div></li></ul></div><div class="para">
			To administer a cluster or storage, an administrator adds (or <em class="firstterm">registers</em>) a cluster or a computer to a <span class="application"><strong>luci</strong></span> server. When a cluster or a computer is registered with <span class="application"><strong>luci</strong></span>, the FQDN hostname or IP address of each computer is stored in a <span class="application"><strong>luci</strong></span> database.
		</div><div class="para">
			You can populate the database of one <span class="application"><strong>luci</strong></span> instance from another <span class="application"><strong>luci</strong></span> instance. That capability provides a means of replicating a <span class="application"><strong>luci</strong></span> server instance and provides an efficient upgrade and testing path. When you install an instance of <span class="application"><strong>luci</strong></span>, its database is empty. However, you can import part or all of a <span class="application"><strong>luci</strong></span> database from an existing <span class="application"><strong>luci</strong></span> server when deploying a new <span class="application"><strong>luci</strong></span> server.
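		</div><div class="para">
			One way to do this, sketched below assuming the <code class="command">luci_admin</code> utility's backup and restore subcommands, is to export the database on the existing server and import it on the new one:
		</div><pre class="screen">
# On the existing luci server
luci_admin backup

# On the new luci server, after copying the backup file over
luci_admin restore
</pre><div class="para">
			Refer to the <code class="command">luci_admin</code> help output for the exact options supported by your version.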
		</div><div class="para">
			Each <span class="application"><strong>luci</strong></span> instance has one user at initial installation — admin. Only the admin user may add systems to a <span class="application"><strong>luci</strong></span> server. Also, the admin user can create additional user accounts and determine which users are allowed to access clusters and computers registered in the <span class="application"><strong>luci</strong></span> database. It is possible to import users as a batch operation in a new <span class="application"><strong>luci</strong></span> server, just as it is possible to import clusters and computers.
		</div><div class="para">
			When a computer is added to a <span class="application"><strong>luci</strong></span> server to be administered, authentication is done once. No authentication is necessary from then on (unless the certificate used is revoked by a CA). After that, you can remotely configure and manage clusters and storage through the <span class="application"><strong>luci</strong></span> user interface. <span class="application"><strong>luci</strong></span> and <span class="application"><strong>ricci</strong></span> communicate with each other via XML.
		</div><div class="para">
			The following figures show sample displays of the three major <span class="application"><strong>luci</strong></span> tabs: <span class="guimenu"><strong>homebase</strong></span>, <span class="guimenu"><strong>cluster</strong></span>, and <span class="guimenu"><strong>storage</strong></span>.
		</div><div class="para">
			For more information about <span class="application"><strong>Conga</strong></span>, refer to <a class="xref" href="#ch-config-conga-CA">Chapitre 3, <em>Configuring Red Hat Cluster With <span class="application"><strong>Conga</strong></span></em></a>, <a class="xref" href="#ch-mgmt-conga-CA">Chapitre 4, <em>Managing Red Hat Cluster With <span class="application"><strong>Conga</strong></span></em></a>, and the online help available with the <span class="application"><strong>luci</strong></span> server.
		</div><div class="figure" id="fig-ov-luci-homebase-CA"><div class="figure-contents"><div class="mediaobject"><img src="./images/luci-homebase-tab.png" width="444" alt="luci homebase Tab" /><div class="longdesc"><div class="para">
						luci homebase tab
					</div></div></div></div><h6>Figure 1.3. <span class="application">luci </span> <span class="guimenu">homebase</span> Tab</h6></div><br class="figure-break" /><div class="figure" id="fig-ov-luci-cluster-CA"><div class="figure-contents"><div class="mediaobject"><img src="./images/luci-cluster-tab.png" width="444" alt="luci cluster Tab" /><div class="longdesc"><div class="para">
						luci cluster tab
					</div></div></div></div><h6>Figure 1.4. <span class="application">luci </span> <span class="guimenu">cluster</span> Tab</h6></div><br class="figure-break" /><div class="figure" id="fig-ov-luci-storage-CA"><div class="figure-contents"><div class="mediaobject"><img src="./images/luci-storage-tab.png" width="444" alt="luci storage Tab" /><div class="longdesc"><div class="para">
						luci storage tab
					</div></div></div></div><h6>Figure 1.5. <span class="application">luci </span> <span class="guimenu">storage</span> Tab</h6></div><br class="figure-break" /></div><div class="section" id="s1-clumgmttools-overview-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-clumgmttools-overview-CA">1.3. <code class="command">system-config-cluster</code> Cluster Administration GUI</h2></div></div></div><div class="para">
			This section provides an overview of the cluster administration graphical user interface (GUI) available with Red Hat Cluster Suite — <code class="command">system-config-cluster</code>. It is for use with the cluster infrastructure and the high-availability service management components. <code class="command">system-config-cluster</code> consists of two major functions: the <span class="application"><strong>Cluster Configuration Tool</strong></span> and the <span class="application"><strong>Cluster Status Tool</strong></span>. The <span class="application"><strong>Cluster Configuration Tool</strong></span> provides the capability to create, edit, and propagate the cluster configuration file (<code class="filename">/etc/cluster/cluster.conf</code>). The <span class="application"><strong>Cluster Status Tool</strong></span> provides the capability to manage high-availability services. The following sections summarize those functions.
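		</div><div class="para">
			For example, you can start the tool from a shell prompt on a cluster node (it is also available from the desktop menu on installations that provide one):
		</div><pre class="screen">
system-config-cluster &amp;
</pre><div class="para">
			Both tools described below are presented as tabs within the same window.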
		</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
				While <code class="command">system-config-cluster</code> provides several convenient tools for configuring and managing a Red Hat Cluster, the newer, more comprehensive tool, <span class="application"><strong>Conga</strong></span>, provides greater convenience and flexibility.
			</div></div></div><div class="section" id="s2-cluconfig-tool-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-cluconfig-tool-CA">1.3.1. <span class="application"><strong>Cluster Configuration Tool</strong></span></h3></div></div></div><div class="para">
				You can access the <span class="application"><strong>Cluster Configuration Tool</strong></span> (<a class="xref" href="#fig-intro-cluconfig-ov-CA">Figure 1.6, « <span class="application">Cluster Configuration Tool</span> »</a>) through the <span class="guilabel"><strong>Cluster Configuration</strong></span> tab in the Cluster Administration GUI.
			</div><div class="figure" id="fig-intro-cluconfig-ov-CA"><div class="figure-contents"><div class="mediaobject"><img src="./images/clustertoolgui.png" width="444" alt="Cluster Configuration Tool" /><div class="longdesc"><div class="para">
							cluster tool
						</div></div></div></div><h6>Figure 1.6. <span class="application">Cluster Configuration Tool</span></h6></div><br class="figure-break" /><div class="para">
				The <span class="application"><strong>Cluster Configuration Tool</strong></span> represents cluster configuration components in the configuration file (<code class="filename">/etc/cluster/cluster.conf</code>) with a hierarchical graphical display in the left panel. A triangle icon to the left of a component name indicates that the component has one or more subordinate components assigned to it. Clicking the triangle icon expands and collapses the portion of the tree below a component. The components displayed in the GUI are summarized as follows:
			</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
						<span class="guilabel"><strong>Cluster Nodes</strong></span> — Displays cluster nodes. Nodes are represented by name as subordinate elements under <span class="guilabel"><strong>Cluster Nodes</strong></span>. Using configuration buttons at the bottom of the right frame (below <span class="guilabel"><strong>Properties</strong></span>), you can add nodes, delete nodes, edit node properties, and configure fencing methods for each node.
					</div></li><li class="listitem"><div class="para">
						<span class="guilabel"><strong>Fence Devices</strong></span> — Displays fence devices. Fence devices are represented as subordinate elements under <span class="guilabel"><strong>Fence Devices</strong></span>. Using configuration buttons at the bottom of the right frame (below <span class="guilabel"><strong>Properties</strong></span>), you can add fence devices, delete fence devices, and edit fence-device properties. Fence devices must be defined before you can configure fencing (with the <span class="guibutton"><strong>Manage Fencing For This Node</strong></span> button) for each node.
					</div></li><li class="listitem"><div class="para">
						<span class="guilabel"><strong>Managed Resources</strong></span> — Displays failover domains, resources, and services.
					</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
								<span class="guilabel"><strong>Failover Domains</strong></span> — For configuring one or more subsets of cluster nodes used to run a high-availability service in the event of a node failure. Failover domains are represented as subordinate elements under <span class="guilabel"><strong>Failover Domains</strong></span>. Using configuration buttons at the bottom of the right frame (below <span class="guilabel"><strong>Properties</strong></span>), you can create failover domains (when <span class="guilabel"><strong>Failover Domains</strong></span> is selected) or edit failover domain properties (when a failover domain is selected).
							</div></li><li class="listitem"><div class="para">
								<span class="guilabel"><strong>Resources</strong></span> — For configuring shared resources to be used by high-availability services. Shared resources consist of file systems, IP addresses, NFS mounts and exports, and user-created scripts that are available to any high-availability service in the cluster. Resources are represented as subordinate elements under <span class="guilabel"><strong>Resources</strong></span>. Using configuration buttons at the bottom of the right frame (below <span class="guilabel"><strong>Properties</strong></span>), you can create resources (when <span class="guilabel"><strong>Resources</strong></span> is selected) or edit resource properties (when a resource is selected).
							</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
									The <span class="application"><strong>Cluster Configuration Tool</strong></span> also provides the capability to configure private resources. A private resource is a resource that is configured for use with only one service. You can configure a private resource within a <span class="application"><strong>Service</strong></span> component in the GUI.
								</div></div></div></li><li class="listitem"><div class="para">
								<span class="guilabel"><strong>Services</strong></span> — For creating and configuring high-availability services. A service is configured by assigning resources (shared or private), assigning a failover domain, and defining a recovery policy for the service. Services are represented as subordinate elements under <span class="guilabel"><strong>Services</strong></span>. Using configuration buttons at the bottom of the right frame (below <span class="guilabel"><strong>Properties</strong></span>), you can create services (when <span class="guilabel"><strong>Services</strong></span> is selected) or edit service properties (when a service is selected).
							</div></li></ul></div></li></ul></div><a id="id829966" class="indexterm"></a></div><div class="section" id="s2-admin-overview-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-admin-overview-CA">1.3.2. <span class="application"><strong>Cluster Status Tool</strong></span></h3></div></div></div><div class="para">
				You can access the <span class="application"><strong>Cluster Status Tool</strong></span> (<a class="xref" href="#fig-intro-clustatus-CA">Figure 1.7, « <span class="application">Cluster Status Tool</span> »</a>) through the <span class="guimenu"><strong>Cluster Management</strong></span> tab in the Cluster Administration GUI.
			</div><div class="figure" id="fig-intro-clustatus-CA"><div class="figure-contents"><div class="mediaobject"><img src="./images/clustatus.png" width="444" alt="Cluster Status Tool" /><div class="longdesc"><div class="para">
							cluster status tool
						</div></div></div></div><h6>Figure 1.7. <span class="application">Cluster Status Tool</span></h6></div><br class="figure-break" /><div class="para">
				The nodes and services displayed in the <span class="application"><strong>Cluster Status Tool</strong></span> are determined by the cluster configuration file (<code class="filename">/etc/cluster/cluster.conf</code>). You can use the <span class="application"><strong>Cluster Status Tool</strong></span> to enable, disable, restart, or relocate a high-availability service.
			</div><a id="id830064" class="indexterm"></a><a id="id830076" class="indexterm"></a><a id="id830088" class="indexterm"></a></div></div><div class="section" id="s1-cmdlinetools-overview-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-cmdlinetools-overview-CA">1.4. Command Line Administration Tools</h2></div></div></div><a id="id830110" class="indexterm"></a><a id="id830122" class="indexterm"></a><div class="para">
			In addition to <span class="application"><strong>Conga</strong></span> and the <code class="command">system-config-cluster</code> Cluster Administration GUI, command line tools are available for administering the cluster infrastructure and the high-availability service management components. The command line tools are used by the Cluster Administration GUI and init scripts supplied by Red Hat. <a class="xref" href="#tb-commandline-tools-overview-CA">Table 1.1, « Command Line Tools »</a> summarizes the command line tools.
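		</div><div class="para">
			For instance, checking cluster status and relocating a service from the command line might look like the following sketch (the service and node names are hypothetical):
		</div><pre class="screen">
# Display membership information, quorum view, and service states:
clustat

# Relocate the service "content-webserver" to the node named "node2":
clusvcadm -r content-webserver -m node2
</pre><div class="para">
			Refer to the man pages listed in the table for complete option details.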
		</div><div class="table" id="tb-commandline-tools-overview-CA"><h6>Table 1.1. Command Line Tools</h6><div class="table-contents"><table summary="Command Line Tools" border="1"><colgroup><col width="20%" class="Command_Line_Tool" /><col width="20%" class="Used_With" /><col width="60%" class="Purpose" /></colgroup><thead><tr><th>
							Command Line Tool
						</th><th>
							Used With
						</th><th>
							Purpose
						</th></tr></thead><tbody><tr><td>
							<code class="command">ccs_tool</code> — Cluster Configuration System Tool
						</td><td>
							Cluster Infrastructure
						</td><td>
							<code class="command">ccs_tool</code> is a program for making online updates to the cluster configuration file. It provides the capability to create and modify cluster infrastructure components (for example, creating a cluster, adding and removing a node). For more information about this tool, refer to the ccs_tool(8) man page.
						</td></tr><tr><td>
							<code class="command">cman_tool</code> — Cluster Management Tool
						</td><td>
							Cluster Infrastructure
						</td><td>
							<code class="command">cman_tool</code> is a program that manages the CMAN cluster manager. It provides the capability to join a cluster, leave a cluster, kill a node, or change the expected quorum votes of a node in a cluster. For more information about this tool, refer to the cman_tool(8) man page.
						</td></tr><tr><td>
							<code class="command">fence_tool</code> — Fence Tool
						</td><td>
							Cluster Infrastructure
						</td><td>
							<code class="command">fence_tool</code> is a program used to join or leave the default fence domain. Specifically, it starts the fence daemon (<code class="command">fenced</code>) to join the domain and kills <code class="command">fenced</code> to leave the domain. For more information about this tool, refer to the fence_tool(8) man page.
						</td></tr><tr><td>
							<code class="command">clustat</code> — Cluster Status Utility
						</td><td>
							High-availability Service Management Components
						</td><td>
							The <code class="command">clustat</code> command displays the status of the cluster. It shows membership information, quorum view, and the state of all configured user services. For more information about this tool, refer to the clustat(8) man page.
						</td></tr><tr><td>
							<code class="command">clusvcadm</code> — Cluster User Service Administration Utility
						</td><td>
							High-availability Service Management Components
						</td><td>
							The <code class="command">clusvcadm</code> command allows you to enable, disable, relocate, and restart high-availability services in a cluster. For more information about this tool, refer to the clusvcadm(8) man page.
						</td></tr></tbody></table></div></div><br class="table-break" /></div></div><div xml:lang="fr-FR" class="chapter" id="ch-before-config-CA" lang="fr-FR"><div class="titlepage"><div><div><h2 class="title">Chapitre 2. Before Configuring a Red Hat Cluster</h2></div></div></div><div class="toc"><dl><dt><span class="section"><a href="#s1-clust-config-considerations-CA">2.1. General Configuration Considerations</a></span></dt><dt><span class="section"><a href="#s1-hw-compat-CA">2.2. Compatible Hardware</a></span></dt><dt><span class="section"><a href="#s1-iptables-CA">2.3. Enabling IP Ports</a></span></dt><dd><dl><dt><span class="section"><a href="#s2-iptables-clnodes-CA">2.3.1. Enabling IP Ports on Cluster Nodes</a></span></dt><dt><span class="section"><a href="#s2-iptables-conga-CA">2.3.2. Enabling IP Ports on Computers That Run <span class="application"><strong>luci</strong></span></a></span></dt></dl></dd><dt><span class="section"><a href="#s1-acpi-CA">2.4. Configuring ACPI For Use with Integrated Fence Devices</a></span></dt><dd><dl><dt><span class="section"><a href="#s2-acpi-disable-chkconfig-CA">2.4.1. Disabling ACPI Soft-Off with <code class="command">chkconfig</code> Management</a></span></dt><dt><span class="section"><a href="#s2-bios-setting-CA">2.4.2. Disabling ACPI Soft-Off with the BIOS</a></span></dt><dt><span class="section"><a href="#s2-acpi-disable-boot-CA">2.4.3. Disabling ACPI Completely in the <code class="filename">grub.conf</code> File</a></span></dt></dl></dd><dt><span class="section"><a href="#s1-clust-svc-ov-CA">2.5. Considerations for Configuring HA Services</a></span></dt><dt><span class="section"><a href="#s1-max-luns-CA">2.6. Configuring max_luns</a></span></dt><dt><span class="section"><a href="#s1-qdisk-considerations-CA">2.7. Considerations for Using Quorum Disk</a></span></dt><dt><span class="section"><a href="#s1-selinux-CA">2.8. Red Hat Cluster Suite and SELinux</a></span></dt><dt><span class="section"><a href="#s1-multicast-considerations-CA">2.9. Multicast Addresses</a></span></dt><dt><span class="section"><a href="#s1-iptables_firewall-CA">2.10. Configuring the iptables Firewall to Allow Cluster Components</a></span></dt><dt><span class="section"><a href="#s1-conga-considerations-CA">2.11. Considerations for Using <span class="application"><strong>Conga</strong></span></a></span></dt><dt><span class="section"><a href="#s1-vm-considerations-CA">2.12. Configuring Virtual Machines in a Clustered Environment</a></span></dt></dl></div><a id="id903002" class="indexterm"></a><a id="id781636" class="indexterm"></a><div class="para">
		This chapter describes tasks to perform and considerations to make before installing and configuring a Red Hat Cluster, and consists of the following sections.
	</div><div class="important"><div class="admonition_header"><h2>Important</h2></div><div class="admonition"><div class="para">
			Make sure that your deployment of Red Hat Cluster Suite meets your needs and can be supported. Consult with an authorized Red Hat representative to verify Cluster Suite and GFS configuration prior to deployment. In addition, allow time for a configuration burn-in period to test failure modes.
		</div></div></div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
				<a class="xref" href="#s1-clust-config-considerations-CA">Section 2.1, « General Configuration Considerations »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-hw-compat-CA">Section 2.2, « Compatible Hardware »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-iptables-CA">Section 2.3, « Enabling IP Ports »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-acpi-CA">Section 2.4, « Configuring ACPI For Use with Integrated Fence Devices »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-max-luns-CA">Section 2.6, « Configuring max_luns »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-qdisk-considerations-CA">Section 2.7, « Considerations for Using Quorum Disk »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-selinux-CA">Section 2.8, « Red Hat Cluster Suite and SELinux »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-multicast-considerations-CA">Section 2.9, « Multicast Addresses »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-iptables_firewall-CA">Section 2.10, « Configuring the iptables Firewall to Allow Cluster Components »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-conga-considerations-CA">Section 2.11, « Considerations for Using <span class="application"><strong>Conga</strong></span> »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-vm-considerations-CA">Section 2.12, « Configuring Virtual Machines in a Clustered Environment »</a>
			</div></li></ul></div><div class="section" id="s1-clust-config-considerations-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-clust-config-considerations-CA">2.1. General Configuration Considerations</h2></div></div></div><a id="id831699" class="indexterm"></a><a id="id831711" class="indexterm"></a><div class="para">
			You can configure a Red Hat Cluster in a variety of ways to suit your needs. Take into account the following general considerations when you plan, configure, and implement your Red Hat Cluster.
		</div><div class="variablelist"><dl><dt class="varlistentry"><span class="term"> Number of cluster nodes supported </span></dt><dd><div class="para">
						The maximum number of nodes supported in a Red Hat Cluster is 16.
					</div></dd><dt class="varlistentry"><span class="term"> GFS/GFS2 </span></dt><dd><div class="para">
						Although a GFS/GFS2 file system can be implemented in a standalone system or as part of a cluster configuration, for the RHEL 5.5 release and later, Red Hat does not support the use of GFS/GFS2 as a single-node file system. Red Hat does support a number of high-performance single-node file systems that are optimized for single-node use and thus generally have lower overhead than a cluster file system. Red Hat recommends using those file systems in preference to GFS/GFS2 in cases where only a single node needs to mount the file system. Red Hat will continue to support single-node GFS/GFS2 file systems for existing customers.
					</div><div class="para">
						When you configure a GFS/GFS2 file system as a cluster file system, you must ensure that all nodes in the cluster have access to the shared file system. Asymmetric cluster configurations, in which some nodes have access to the file system and others do not, are not supported. This does not require that all nodes actually mount the GFS/GFS2 file system itself.
					</div></dd><dt class="varlistentry"><span class="term"> No-single-point-of-failure hardware configuration </span></dt><dd><div class="para">
						Clusters can include a dual-controller RAID array, multiple bonded network channels, multiple paths between cluster members and storage, and redundant uninterruptible power supply (UPS) systems to ensure that no single failure results in application downtime or loss of data.
					</div><div class="para">
						Alternatively, a low-cost cluster can be set up to provide less availability than a no-single-point-of-failure cluster. For example, you can set up a cluster with a single-controller RAID array and only a single Ethernet channel.
					</div><div class="para">
						Certain low-cost alternatives, such as host RAID controllers, software RAID without cluster support, and multi-initiator parallel SCSI configurations, are not compatible with or appropriate for use as shared cluster storage.
					</div></dd><dt class="varlistentry"><span class="term"> Data integrity assurance </span></dt><dd><div class="para">
						To ensure data integrity, only one node can run a cluster service and access cluster-service data at a time. The use of power switches in the cluster hardware configuration enables a node to power-cycle another node before restarting that node's HA services during a failover process. This prevents two nodes from simultaneously accessing the same data and corrupting it. It is strongly recommended that <em class="firstterm">fence devices</em> (hardware or software solutions that remotely power off, shut down, and reboot cluster nodes) are used to guarantee data integrity under all failure conditions. Watchdog timers provide an alternative way to ensure correct operation of HA service failover.
					</div></dd><dt class="varlistentry"><span class="term"> Ethernet channel bonding </span></dt><dd><div class="para">
						Cluster quorum and node health is determined by communication of messages among cluster nodes via Ethernet. In addition, cluster nodes use Ethernet for a variety of other critical cluster functions (for example, fencing). With Ethernet channel bonding, multiple Ethernet interfaces are configured to behave as one, reducing the risk of a single-point-of-failure in the typical switched Ethernet connection among cluster nodes and other cluster hardware.
					</div></dd></dl></div></div><div class="section" id="s1-hw-compat-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-hw-compat-CA">2.2. Compatible Hardware</h2></div></div></div><a id="id780245" class="indexterm"></a><a id="id780257" class="indexterm"></a><div class="para">
			Before configuring Red Hat Cluster software, make sure that your cluster uses appropriate hardware (for example, supported fence devices, storage devices, and Fibre Channel switches). Refer to the hardware configuration guidelines at <a href="http://www.redhat.com/cluster_suite/hardware/">http://www.redhat.com/cluster_suite/hardware/</a> for the most current hardware compatibility information.
		</div></div><div class="section" id="s1-iptables-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-iptables-CA">2.3. Enabling IP Ports</h2></div></div></div><a id="id780289" class="indexterm"></a><a id="id904349" class="indexterm"></a><a id="id904361" class="indexterm"></a><a id="id904376" class="indexterm"></a><div class="para">
			Before deploying a Red Hat Cluster, you must enable certain IP ports on the cluster nodes and on computers that run <span class="application"><strong>luci</strong></span> (the <span class="application"><strong>Conga</strong></span> user interface server). The following sections identify the IP ports to be enabled:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					<a class="xref" href="#s2-iptables-clnodes-CA">Section 2.3.1, « Enabling IP Ports on Cluster Nodes »</a>
				</div></li><li class="listitem"><div class="para">
					<a class="xref" href="#s2-iptables-conga-CA">Section 2.3.2, « Enabling IP Ports on Computers That Run <span class="application"><strong>luci</strong></span> »</a>
				</div></li></ul></div><div class="section" id="s2-iptables-clnodes-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-iptables-clnodes-CA">2.3.1. Enabling IP Ports on Cluster Nodes</h3></div></div></div><div class="para">
				To allow Red Hat Cluster nodes to communicate with each other, you must enable the IP ports assigned to certain Red Hat Cluster components. <a class="xref" href="#tb-iptables-rhel5-CA">Table 2.1, « Enabled IP Ports on Red Hat Cluster Nodes »</a> lists the IP port numbers, their respective protocols, and the components to which the port numbers are assigned. At each cluster node, enable IP ports according to <a class="xref" href="#tb-iptables-rhel5-CA">Table 2.1, « Enabled IP Ports on Red Hat Cluster Nodes »</a>.
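			</div><div class="para">
				As an illustration only, the following is a hedged sketch of <code class="command">iptables</code> rules for the ports listed in the table below; it assumes a default INPUT chain and should be adapted to your own firewall policy (see also <a class="xref" href="#s1-iptables_firewall-CA">Section 2.10, « Configuring the iptables Firewall to Allow Cluster Components »</a>):
			</div><pre class="screen">
iptables -I INPUT -p udp --dport 5404:5405 -j ACCEPT              # cman
iptables -I INPUT -p tcp --dport 11111 -j ACCEPT                  # ricci
iptables -I INPUT -p tcp --dport 14567 -j ACCEPT                  # gnbd
iptables -I INPUT -p tcp --dport 16851 -j ACCEPT                  # modclusterd
iptables -I INPUT -p tcp --dport 21064 -j ACCEPT                  # dlm
iptables -I INPUT -p tcp -m multiport --dports 50006,50008,50009 -j ACCEPT   # ccsd
iptables -I INPUT -p udp --dport 50007 -j ACCEPT                  # ccsd
service iptables save
</pre><div class="para">
				Note that <code class="command">cman</code> also relies on multicast traffic between cluster nodes; refer to <a class="xref" href="#s1-multicast-considerations-CA">Section 2.9, « Multicast Addresses »</a> for related considerations.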
			</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
					IPv6 is not supported for Cluster Suite in Red Hat Enterprise Linux 5.
				</div></div></div><div class="table" id="tb-iptables-rhel5-CA"><h6>Table 2.1. Enabled IP Ports on Red Hat Cluster Nodes</h6><div class="table-contents"><table summary="Enabled IP Ports on Red Hat Cluster Nodes" border="1"><colgroup><col width="20%" class="Port_Number" /><col width="20%" class="Protocol" /><col width="60%" class="Component" /></colgroup><thead><tr><th>
								IP Port Number
							</th><th>
								Protocol
							</th><th>
								Component
							</th></tr></thead><tbody><tr><td>
								5404, 5405
							</td><td>
								UDP
							</td><td>
								<code class="command">cman</code> (Cluster Manager)
							</td></tr><tr><td>
								11111
							</td><td>
								TCP
							</td><td>
								<code class="command">ricci</code> (part of <span class="application"><strong>Conga</strong></span> remote agent)
							</td></tr><tr><td>
								14567
							</td><td>
								TCP
							</td><td>
								<code class="command">gnbd</code> (Global Network Block Device)
							</td></tr><tr><td>
								16851
							</td><td>
								TCP
							</td><td>
								<code class="command">modclusterd</code> (part of <span class="application"><strong>Conga</strong></span> remote agent)
							</td></tr><tr><td>
								21064
							</td><td>
								TCP
							</td><td>
								<code class="command">dlm</code> (Distributed Lock Manager)
							</td></tr><tr><td>
								50006, 50008, 50009
							</td><td>
								TCP
							</td><td>
								<code class="command">ccsd</code> (Cluster Configuration System daemon)
							</td></tr><tr><td>
								50007
							</td><td>
								UDP
							</td><td>
								<code class="command">ccsd</code> (Cluster Configuration System daemon)
							</td></tr></tbody></table></div></div><br class="table-break" /><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
					<a class="xref" href="#tb-iptables-rhel5-CA">Table 2.1, « Enabled IP Ports on Red Hat Cluster Nodes »</a> shows no IP ports to enable for <code class="command">rgmanager</code>. For Red Hat Enterprise Linux 5.1 and later, <code class="command">rgmanager</code> does not use TCP or UDP sockets.
				</div></div></div></div><div class="section" id="s2-iptables-conga-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-iptables-conga-CA">2.3.2. Enabling IP Ports on Computers That Run <span class="application"><strong>luci</strong></span></h3></div></div></div><div class="para">
				To allow client computers to communicate with a computer that runs <span class="application"><strong>luci</strong></span> (the <span class="application"><strong>Conga</strong></span> user interface server), and to allow a computer that runs <span class="application"><strong>luci</strong></span> to communicate with <span class="application"><strong>ricci</strong></span> in the cluster nodes, you must enable the IP ports assigned to <span class="application"><strong>luci</strong></span> and <span class="application"><strong>ricci</strong></span>. <a class="xref" href="#tb-iptables-conga-rhel5-CA">Table 2.2, « Enabled IP Ports on a Computer That Runs <span class="application">luci</span> »</a> lists the IP port numbers, their respective protocols, and the components to which the port numbers are assigned. At each computer that runs <span class="application"><strong>luci</strong></span>, enable IP ports according to <a class="xref" href="#tb-iptables-conga-rhel5-CA">Table 2.2, « Enabled IP Ports on a Computer That Runs <span class="application">luci</span> »</a>.
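			</div><div class="para">
				Again as a sketch only, assuming a default <code class="command">iptables</code> INPUT chain, the <span class="application"><strong>luci</strong></span> port could be opened on the management computer as follows:
			</div><pre class="screen">
iptables -I INPUT -p tcp --dport 8084 -j ACCEPT    # luci
service iptables save
</pre><div class="para">
				Port 11111 must also be open on each cluster node so that <span class="application"><strong>luci</strong></span> can reach <span class="application"><strong>ricci</strong></span>, as described in the previous section.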
			</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
					If a cluster node is running <span class="application"><strong>luci</strong></span>, port 11111 should already have been enabled.
				</div></div></div><div class="table" id="tb-iptables-conga-rhel5-CA"><h6>Table 2.2. Enabled IP Ports on a Computer That Runs <span class="application">luci</span></h6><div class="table-contents"><table summary="Enabled IP Ports on a Computer That Runs luci" border="1"><colgroup><col width="20%" class="Port_Number" /><col width="20%" class="Protocol" /><col width="60%" class="Component" /></colgroup><thead><tr><th>
								IP Port Number
							</th><th>
								Protocol
							</th><th>
								Component
							</th></tr></thead><tbody><tr><td>
								8084
							</td><td>
								TCP
							</td><td>
								<span class="application"><strong>luci</strong></span> (<span class="application"><strong>Conga</strong></span> user interface server)
							</td></tr><tr><td>
								11111
							</td><td>
								TCP
							</td><td>
								<code class="command">ricci</code> (<span class="application"><strong>Conga</strong></span> remote agent)
							</td></tr></tbody></table></div></div><br class="table-break" /><div class="para">
				If your server infrastructure incorporates more than one network and you want to access <span class="application"><strong>luci</strong></span> from the internal network only, you can configure the <span class="application"><strong>stunnel</strong></span> component to listen on one IP address only by editing the <code class="literal">LUCI_HTTPS_PORT</code> parameter in the <code class="filename">/etc/sysconfig/luci</code> file as follows:
			</div><pre class="screen">
LUCI_HTTPS_PORT=10.10.10.10:8084
</pre></div></div><div class="section" id="s1-acpi-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-acpi-CA">2.4. Configuring ACPI For Use with Integrated Fence Devices</h2></div></div></div><a id="id842314" class="indexterm"></a><a id="id842326" class="indexterm"></a><a id="id842338" class="indexterm"></a><div class="para">
			If your cluster uses integrated fence devices, you must configure ACPI (Advanced Configuration and Power Interface) to ensure immediate and complete fencing.
		</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
				For the most current information about integrated fence devices supported by Red Hat Cluster Suite, refer to <a href="http://www.redhat.com/cluster_suite/hardware/"> http://www.redhat.com/cluster_suite/hardware/</a>.
			</div></div></div><div class="para">
			If a cluster node is configured to be fenced by an integrated fence device, disable ACPI Soft-Off for that node. Disabling ACPI Soft-Off allows an integrated fence device to turn off a node immediately and completely rather than attempting a clean shutdown (for example, <code class="command">shutdown -h now</code>). Otherwise, if ACPI Soft-Off is enabled, an integrated fence device can take four or more seconds to turn off a node (refer to the note that follows). In addition, if ACPI Soft-Off is enabled and a node panics or freezes during shutdown, an integrated fence device may not be able to turn off the node. Under those circumstances, fencing is delayed or unsuccessful. Consequently, when a node is fenced with an integrated fence device and ACPI Soft-Off is enabled, a cluster recovers slowly or requires administrative intervention to recover.
		</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
				The amount of time required to fence a node depends on the integrated fence device used. Some integrated fence devices perform the equivalent of pressing and holding the power button; therefore, the fence device turns off the node in four to five seconds. Other integrated fence devices perform the equivalent of pressing the power button momentarily, relying on the operating system to turn off the node; therefore, the fence device turns off the node in a time span much longer than four to five seconds.
			</div></div></div><div class="para">
			The preferred way to disable ACPI Soft-Off is with <code class="command">chkconfig</code> management; after disabling it, verify that the node turns off immediately when fenced. However, if that method is not satisfactory for your cluster, you can disable ACPI Soft-Off with one of the following alternate methods:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					Changing the BIOS setting to "instant-off" or an equivalent setting that turns off the node without delay
				</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						Disabling ACPI Soft-Off with the BIOS may not be possible with some computers.
					</div></div></div></li><li class="listitem"><div class="para">
					Appending <strong class="userinput"><code>acpi=off</code></strong> to the kernel boot command line of the <code class="filename">/boot/grub/grub.conf</code> file
				</div><div class="important"><div class="admonition_header"><h2>Important</h2></div><div class="admonition"><div class="para">
						This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled. Use this method <span class="emphasis"><em>only</em></span> if the other methods are not effective for your cluster.
					</div></div></div></li></ul></div><div class="para">
			The following sections provide procedures for the preferred method and alternate methods of disabling ACPI Soft-Off:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					<a class="xref" href="#s2-acpi-disable-chkconfig-CA">Section 2.4.1, « Disabling ACPI Soft-Off with <code class="command">chkconfig</code> Management »</a> — Preferred method
				</div></li><li class="listitem"><div class="para">
					<a class="xref" href="#s2-bios-setting-CA">Section 2.4.2, « Disabling ACPI Soft-Off with the BIOS »</a> — First alternate method
				</div></li><li class="listitem"><div class="para">
					<a class="xref" href="#s2-acpi-disable-boot-CA">Section 2.4.3, « Disabling ACPI Completely in the <code class="filename">grub.conf</code> File »</a> — Second alternate method
				</div></li></ul></div><div class="section" id="s2-acpi-disable-chkconfig-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-acpi-disable-chkconfig-CA">2.4.1. Disabling ACPI Soft-Off with <code class="command">chkconfig</code> Management</h3></div></div></div><div class="para">
				You can use <code class="command">chkconfig</code> management to disable ACPI Soft-Off either by removing the ACPI daemon (<code class="command">acpid</code>) from <code class="command">chkconfig</code> management or by turning off <code class="command">acpid</code>.
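			</div><div class="para">
				After completing the procedure below, you can confirm the change; the following sketch assumes a hypothetical node name, and the exact <code class="command">chkconfig</code> output varies between releases:
			</div><pre class="screen">
# If acpid was turned off (rather than removed), verify that it is off in all runlevels:
chkconfig --list acpid

# From another cluster node, fence the node and confirm that it powers off
# immediately (the node name is hypothetical):
fence_node node1.example.com
</pre><div class="para">
				The fencing step corresponds to the verification described in the last step of the procedure.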
			</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
					This is the preferred method of disabling ACPI Soft-Off.
				</div></div></div><div class="para">
				Disable ACPI Soft-Off with <code class="command">chkconfig</code> management at each cluster node as follows:
			</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
						Run either of the following commands:
					</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
								<code class="command">chkconfig --del acpid</code> — This command removes <code class="command">acpid</code> from <code class="command">chkconfig</code> management.
							</div><div class="para">
								— OR —
							</div></li><li class="listitem"><div class="para">
								<code class="command">chkconfig --level 2345 acpid off</code> — This command turns off <code class="command">acpid</code>.
							</div></li></ul></div></li><li class="listitem"><div class="para">
						Reboot the node.
					</div></li><li class="listitem"><div class="para">
						When the cluster is configured and running, verify that the node turns off immediately when fenced.
					</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
							You can fence the node with the <code class="command">fence_node</code> command or <span class="application"><strong>Conga</strong></span>.
						</div></div></div></li></ol></div></div><div class="section" id="s2-bios-setting-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-bios-setting-CA">2.4.2. Disabling ACPI Soft-Off with the BIOS</h3></div></div></div><div class="para">
				The preferred method of disabling ACPI Soft-Off is with <code class="command">chkconfig</code> management (<a class="xref" href="#s2-acpi-disable-chkconfig-CA">Section 2.4.1, « Disabling ACPI Soft-Off with <code class="command">chkconfig</code> Management »</a>). However, if the preferred method is not effective for your cluster, follow the procedure in this section.
			</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
					Disabling ACPI Soft-Off with the BIOS may not be possible with some computers.
				</div></div></div><div class="para">
				You can disable ACPI Soft-Off by configuring the BIOS of each cluster node as follows:
			</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
						Reboot the node and start the <code class="command">BIOS CMOS Setup Utility</code> program.
					</div></li><li class="listitem"><div class="para">
						Navigate to the <span class="guimenu"><strong>Power</strong></span> menu (or equivalent power management menu).
					</div></li><li class="listitem"><div class="para">
						At the <span class="guimenu"><strong>Power</strong></span> menu, set the <span class="guimenuitem"><strong>Soft-Off by PWR-BTTN</strong></span> function (or equivalent) to <span class="guimenuitem"><strong>Instant-Off</strong></span> (or the equivalent setting that turns off the node via the power button without delay). <a class="xref" href="#ex-bios-acpi-off-CA">Example 2.1, « <code class="command">BIOS CMOS Setup Utility</code>: <span class="guimenuitem">Soft-Off by PWR-BTTN</span> set to <span class="guimenuitem">Instant-Off</span> »</a> shows a <span class="guimenu"><strong>Power</strong></span> menu with <span class="guimenuitem"><strong>ACPI Function</strong></span> set to <span class="guimenuitem"><strong>Enabled</strong></span> and <span class="guimenuitem"><strong>Soft-Off by PWR-BTTN</strong></span> set to <span class="guimenuitem"><strong>Instant-Off</strong></span>.
					</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
							The equivalents to <span class="guimenuitem"><strong>ACPI Function</strong></span>, <span class="guimenuitem"><strong>Soft-Off by PWR-BTTN</strong></span>, and <span class="guimenuitem"><strong>Instant-Off</strong></span> may vary among computers. However, the objective of this procedure is to configure the BIOS so that the computer is turned off via the power button without delay.
						</div></div></div></li><li class="listitem"><div class="para">
						Exit the <code class="command">BIOS CMOS Setup Utility</code> program, saving the BIOS configuration.
					</div></li><li class="listitem"><div class="para">
						When the cluster is configured and running, verify that the node turns off immediately when fenced.
					</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
							You can fence the node with the <code class="command">fence_node</code> command or <span class="application"><strong>Conga</strong></span>.
						</div></div></div></li></ol></div><div class="example" id="ex-bios-acpi-off-CA"><h6>Example 2.1. <code class="command">BIOS CMOS Setup Utility</code>: <span class="guimenuitem">Soft-Off by PWR-BTTN</span> set to <span class="guimenuitem">Instant-Off</span></h6><div class="example-contents"><pre class="screen">
+---------------------------------------------|-------------------+
|    ACPI Function             [Enabled]      |    Item Help      |
|    ACPI Suspend Type         [S1(POS)]      |-------------------|
|  x Run VGABIOS if S3 Resume   Auto          |   Menu Level   *  |
|    Suspend Mode              [Disabled]     |                   |
|    HDD Power Down            [Disabled]     |                   |
|    Soft-Off by PWR-BTTN      [Instant-Off   |                   |
|    CPU THRM-Throttling       [50.0%]        |                   |
|    Wake-Up by PCI card       [Enabled]      |                   |
|    Power On by Ring          [Enabled]      |                   |
|    Wake Up On LAN            [Enabled]      |                   |
|  x USB KB Wake-Up From S3     Disabled      |                   |
|    Resume by Alarm           [Disabled]     |                   |
|  x  Date(of Month) Alarm       0            |                   |
|  x  Time(hh:mm:ss) Alarm       0 :  0 :     |                   |
|    POWER ON Function         [BUTTON ONLY   |                   |
|  x KB Power ON Password       Enter         |                   |
|  x Hot Key Power ON           Ctrl-F1       |                   |
|                                             |                   |
|                                             |                   |
+---------------------------------------------|-------------------+
</pre><div class="para">
					This example shows <span class="guimenuitem"><strong>ACPI Function</strong></span> set to <span class="guimenuitem"><strong>Enabled</strong></span>, and <span class="guimenuitem"><strong>Soft-Off by PWR-BTTN</strong></span> set to <span class="guimenuitem"><strong>Instant-Off</strong></span>.
				</div></div></div><br class="example-break" /></div><div class="section" id="s2-acpi-disable-boot-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-acpi-disable-boot-CA">2.4.3. Disabling ACPI Completely in the <code class="filename">grub.conf</code> File</h3></div></div></div><div class="para">
				The preferred method of disabling ACPI Soft-Off is with <code class="command">chkconfig</code> management (<a class="xref" href="#s2-acpi-disable-chkconfig-CA">Section 2.4.1, « Disabling ACPI Soft-Off with <code class="command">chkconfig</code> Management »</a>). If the preferred method is not effective for your cluster, you can disable ACPI Soft-Off with the BIOS power management (<a class="xref" href="#s2-bios-setting-CA">Section 2.4.2, « Disabling ACPI Soft-Off with the BIOS »</a>). If neither of those methods is effective for your cluster, you can disable ACPI completely by appending <strong class="userinput"><code>acpi=off</code></strong> to the kernel boot command line in the <code class="filename">grub.conf</code> file.
			</div><div class="important"><div class="admonition_header"><h2>Important</h2></div><div class="admonition"><div class="para">
					This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled. Use this method <span class="emphasis"><em>only</em></span> if the other methods are not effective for your cluster.
				</div></div></div><div class="para">
				You can disable ACPI completely by editing the <code class="filename">grub.conf</code> file of each cluster node as follows:
			</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
						Open <code class="filename">/boot/grub/grub.conf</code> with a text editor.
					</div></li><li class="listitem"><div class="para">
						Append <strong class="userinput"><code>acpi=off</code></strong> to the kernel boot command line in <code class="filename">/boot/grub/grub.conf</code> (refer to <a class="xref" href="#ex-grub-acpi-off-CA">Example 2.2, « Kernel Boot Command Line with <code class="userinput">acpi=off</code> Appended to It »</a>).
					</div></li><li class="listitem"><div class="para">
						Reboot the node.
					</div></li><li class="listitem"><div class="para">
						When the cluster is configured and running, verify that the node turns off immediately when fenced.
					</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
							You can fence the node with the <code class="command">fence_node</code> command or <span class="application"><strong>Conga</strong></span>.
						</div></div></div></li></ol></div><div class="example" id="ex-grub-acpi-off-CA"><h6>Example 2.2. Kernel Boot Command Line with <code class="userinput">acpi=off</code> Appended to It</h6><div class="example-contents"><pre class="screen">
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
#          initrd /initrd-version.img
#boot=/dev/hda
default=0
timeout=5
serial --unit=0 --speed=115200
terminal --timeout=5 serial console
title Red Hat Enterprise Linux Server (2.6.18-36.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-36.el5 ro root=/dev/VolGroup00/LogVol00 console=ttyS0,115200n8 acpi=off
        initrd /initrd-2.6.18-36.el5.img
</pre><div class="para">
					In this example, <strong class="userinput"><code>acpi=off</code></strong> has been appended to the kernel boot command line — the line starting with "kernel /vmlinuz-2.6.18-36.el5".
				</div></div></div><br class="example-break" /></div></div><div class="section" id="s1-clust-svc-ov-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-clust-svc-ov-CA">2.5. Considerations for Configuring HA Services</h2></div></div></div><a id="id894759" class="indexterm"></a><a id="id894771" class="indexterm"></a><div class="para">
			You can create a cluster to suit your needs for high availability by configuring HA (high-availability) services. The key component for HA service management in a Red Hat cluster, <code class="command">rgmanager</code>, implements cold failover for off-the-shelf applications. In a Red Hat cluster, an application is configured with other cluster resources to form an HA service that can fail over from one cluster node to another with no apparent interruption to cluster clients. HA-service failover can occur if a cluster node fails or if a cluster system administrator moves the service from one cluster node to another (for example, for a planned outage of a cluster node).
		</div><div class="para">
			To create an HA service, you must configure it in the cluster configuration file. An HA service comprises cluster <em class="firstterm">resources</em>. Cluster resources are building blocks that you create and manage in the cluster configuration file — for example, an IP address, an application initialization script, or a Red Hat GFS shared partition.
		</div><div class="para">
			An HA service can run on only one cluster node at a time to maintain data integrity. You can specify failover priority in a failover domain. Specifying failover priority consists of assigning a priority level to each node in a failover domain. The priority level determines the failover order — determining which node an HA service should fail over to. If you do not specify failover priority, an HA service can fail over to any node in its failover domain. Also, you can specify whether an HA service is restricted to run only on nodes of its associated failover domain. (When associated with an unrestricted failover domain, an HA service can start on any cluster node in the event no member of the failover domain is available.)
		</div><div class="para">
			<a class="xref" href="#fig-ha-svc-example-webserver-CA">Figure 2.1, « Web Server Cluster Service Example »</a> shows an example of an HA service that is a web server named "content-webserver". It is running on cluster node B and is in a failover domain that consists of nodes A, B, and D. In addition, the failover domain is configured with a failover priority to fail over to node D before node A and to restrict failover to nodes only in that failover domain. The HA service comprises these cluster resources:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					IP address resource — IP address 10.10.10.201.
				</div></li><li class="listitem"><div class="para">
					An application resource named "httpd-content" — a web server application init script <code class="filename">/etc/init.d/httpd</code> (specifying <code class="command">httpd</code>).
				</div></li><li class="listitem"><div class="para">
					A file system resource — Red Hat GFS named "gfs-content-webserver".
				</div></li></ul></div><div class="figure" id="fig-ha-svc-example-webserver-CA"><div class="figure-contents"><div class="mediaobject"><img src="./images/ha-svc-example-webserver.png" width="444" alt="Web Server Cluster Service Example" /><div class="longdesc"><div class="para">
						Web Server Cluster Service Example
					</div></div></div></div><h6>Figure 2.1. Web Server Cluster Service Example</h6></div><br class="figure-break" /><div class="para">
			Clients access the HA service through the IP address 10.10.10.201, enabling interaction with the web server application, httpd-content. The httpd-content application uses the gfs-content-webserver file system. If node B were to fail, the content-webserver HA service would fail over to node D. If node D were not available or also failed, the service would fail over to node A. Failover would occur with minimal service interruption to the cluster clients. For example, in an HTTP service, certain state information may be lost (like session data). The HA service would be accessible from another cluster node via the same IP address as it was before failover.
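		</div><div class="para">
			As an illustration only, the following is a minimal sketch of how such a service might be expressed in the <code class="filename">/etc/cluster/cluster.conf</code> file. The node names, device path, and mount point are hypothetical, and the exact elements and attributes available depend on your release, so treat this as a sketch rather than a configuration to copy:
		</div><pre class="screen">
&lt;rm&gt;
  &lt;failoverdomains&gt;
    &lt;failoverdomain name="example-domain" ordered="1" restricted="1"&gt;
      &lt;failoverdomainnode name="node-A" priority="3"/&gt;
      &lt;failoverdomainnode name="node-B" priority="1"/&gt;
      &lt;failoverdomainnode name="node-D" priority="2"/&gt;
    &lt;/failoverdomain&gt;
  &lt;/failoverdomains&gt;
  &lt;service name="content-webserver" domain="example-domain" autostart="1" recovery="relocate"&gt;
    &lt;ip address="10.10.10.201" monitor_link="1"/&gt;
    &lt;clusterfs name="gfs-content-webserver" device="/dev/vg01/lv01" mountpoint="/var/www" fstype="gfs"/&gt;
    &lt;script name="httpd-content" file="/etc/init.d/httpd"/&gt;
  &lt;/service&gt;
&lt;/rm&gt;
</pre><div class="para">
			In this sketch a lower priority number indicates a more preferred node, so the service prefers node B, then node D, then node A, and the restricted domain keeps the service on members of that domain only.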
		</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
				For more information about HA services and failover domains, refer to <em class="citetitle">Red Hat Cluster Suite Overview</em>. For information about configuring failover domains, refer to <a class="xref" href="#s1-config-failover-domain-conga-CA">Section 3.7, « Configuring a Failover Domain »</a> (using <span class="application"><strong>Conga</strong></span>) or <a class="xref" href="#s1-config-failover-domain-CA">Section 5.6, « Configuring a Failover Domain »</a> (using <code class="command">system-config-cluster</code>).
			</div></div></div><div class="para"> An HA service is a group of cluster resources
configured into a coherent entity that provides specialized services
to clients. An HA service is represented as a resource tree in the
cluster configuration file,
<code class="filename">/etc/cluster/cluster.conf</code> (in each cluster
node). In the cluster configuration file, each resource tree is an XML
representation that specifies each resource, its attributes, and its relationships with other resources in the resource tree (parent,
child, and sibling relationships).</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
	Because an HA service consists of resources organized into a
	hierarchical tree, a service is sometimes referred to as a
	<em class="firstterm">resource tree</em> or <em class="firstterm">resource
	group</em>. Both phrases are synonymous with
	<span class="emphasis"><em>HA service</em></span>.
      </div></div></div><div class="para">
       At the root of each resource tree is a special type of resource
       — a <em class="firstterm">service resource</em>. Other types of resources comprise
       the rest of a service, determining its
       characteristics. Configuring an HA service consists of
       creating a service resource, creating subordinate cluster
       resources, and organizing them into a coherent entity that
       conforms to the hierarchical restrictions of the service.
    </div><div class="para">
			Red Hat Cluster supports the following HA services:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					Apache
				</div></li><li class="listitem"><div class="para">
					Application (Script)
				</div></li><li class="listitem"><div class="para">
					LVM (HA LVM)
				</div></li><li class="listitem"><div class="para">
					MySQL
				</div></li><li class="listitem"><div class="para">
					NFS
				</div></li><li class="listitem"><div class="para">
					Open LDAP
				</div></li><li class="listitem"><div class="para">
					Oracle
				</div></li><li class="listitem"><div class="para">
					PostgreSQL 8
				</div></li><li class="listitem"><div class="para">
					Samba
				</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						Red Hat Enterprise Linux 5 does not support running Clustered Samba in an active/active configuration, in which Samba serves the same shared file system from multiple nodes. Red Hat Enterprise Linux 5 does support running Samba in a cluster in active/passive mode, with failover from one node to the other nodes in a cluster. Note that if failover occurs, locking states are lost and active connections to Samba are severed so that the clients must reconnect.
					</div></div></div></li><li class="listitem"><div class="para">
					SAP
				</div></li><li class="listitem"><div class="para">
					Tomcat 5
				</div></li></ul></div><div class="para">
			There are two major considerations to take into account when configuring an HA service:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					The types of resources needed to create a service
				</div></li><li class="listitem"><div class="para">
					Parent, child, and sibling relationships among resources
				</div></li></ul></div><div class="para">
			The types of resources and the hierarchy of resources depend on the type of service you are configuring.
		</div><a id="id836567" class="indexterm"></a><a id="id836575" class="indexterm"></a><div class="para">
			The types of cluster resources are listed in <a class="xref" href="#ap-ha-resource-params-CA">Appendix C, <em>HA Resource Parameters</em></a>. Information about parent, child, and sibling relationships among resources is described in <a class="xref" href="#ap-ha-resource-behavior-CA">Appendix D, <em>HA Resource Behavior</em></a>.
		</div></div><div class="section" id="s1-max-luns-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-max-luns-CA">2.6. Configuring max_luns</h2></div></div></div><a id="id836627" class="indexterm"></a><a id="id836642" class="indexterm"></a><div class="para">
			It is <span class="emphasis"><em>not</em></span> necessary to configure <code class="command">max_luns</code> in Red Hat Enterprise Linux 5.
		</div><div class="para">
			In Red Hat Enterprise Linux releases prior to Red Hat Enterprise Linux 5, if RAID storage in a cluster presents multiple LUNs, it is necessary to enable access to those LUNs by configuring <code class="command">max_luns</code> (or <code class="command">max_scsi_luns</code> for 2.4 kernels) in the <code class="filename">/etc/modprobe.conf</code> file of each node. In Red Hat Enterprise Linux 5, cluster nodes detect multiple LUNs without intervention required; it is <span class="emphasis"><em>not</em></span> necessary to configure <code class="command">max_luns</code> to detect multiple LUNs.
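		</div><div class="para">
			For reference only, the kind of entry that was used on those earlier releases looked like the following sketch; the exact module parameter and value depend on the kernel in question:
		</div><pre class="screen">
options scsi_mod max_luns=255
</pre><div class="para">
			No such entry is required in <code class="filename">/etc/modprobe.conf</code> on Red Hat Enterprise Linux 5 nodes.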
		</div></div><div class="section" id="s1-qdisk-considerations-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-qdisk-considerations-CA">2.7. Considerations for Using Quorum Disk</h2></div></div></div><a id="id836736" class="indexterm"></a><a id="id836748" class="indexterm"></a><a id="id836760" class="indexterm"></a><a id="id836774" class="indexterm"></a><div class="para">
			Quorum Disk is a disk-based quorum daemon, <code class="command">qdiskd</code>, that provides supplemental heuristics to determine node fitness. With heuristics you can determine factors that are important to the operation of the node in the event of a network partition. For example, in a four-node cluster with a 3:1 split, ordinarily, the three nodes automatically "win" because of the three-to-one majority. Under those circumstances, the one node is fenced. With <code class="command">qdiskd</code> however, you can set up heuristics that allow the one node to win based on access to a critical resource (for example, a critical network path). If your cluster requires additional methods of determining node health, then you should configure <code class="command">qdiskd</code> to meet those needs.
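		</div><div class="para">
			As an illustration only, a quorum disk might be created and referenced as in the following sketch; the device name, label, and heuristic are hypothetical, and the attribute values shown are assumptions to be checked against the qdisk(5) man page:
		</div><pre class="screen">
# Label a small shared block device as a quorum disk (device name is hypothetical):
mkqdisk -c /dev/sdg1 -l cluster_qdisk

# Corresponding cluster.conf fragment (values are illustrative only):
&lt;quorumd interval="1" tko="10" votes="3" label="cluster_qdisk"&gt;
    &lt;heuristic program="ping -c1 -w1 10.10.10.254" score="1" interval="2"/&gt;
&lt;/quorumd&gt;
</pre><div class="para">
			In this sketch the node considers itself fit only while the ping heuristic succeeds; review the qdisk(5) man page before adopting any of these values.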
		</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
				Configuring <code class="command">qdiskd</code> is not required unless you have special requirements for node health. An example of a special requirement is an "all-but-one" configuration. In an all-but-one configuration, <code class="command">qdiskd</code> is configured to provide enough quorum votes to maintain quorum even though only one node is working.
			</div></div></div><div class="important"><div class="admonition_header"><h2>Important</h2></div><div class="admonition"><div class="para">
				Overall, heuristics and other <code class="command">qdiskd</code> parameters for your Red Hat Cluster depend on the site environment and special requirements needed. To understand the use of heuristics and other <code class="command">qdiskd</code> parameters, refer to the <span class="citerefentry"><span class="refentrytitle">qdisk</span>(5)</span> man page. If you require assistance understanding and using <code class="command">qdiskd</code> for your site, contact an authorized Red Hat support representative.
			</div></div></div><div class="para">
			If you need to use <code class="command">qdiskd</code>, you should take into account the following considerations:
		</div><div class="variablelist"><dl><dt class="varlistentry"><span class="term"> Cluster node votes </span></dt><dd><div class="para">
						Each cluster node should have the same number of votes.
					</div></dd><dt class="varlistentry"><span class="term"> CMAN membership timeout value </span></dt><dd><div class="para">
						The CMAN membership timeout value (the time a node needs to be unresponsive before CMAN considers that node to be dead, and not a member) should be at least two times that of the <code class="command">qdiskd</code> membership timeout value. This is because the quorum daemon must detect failed nodes on its own, and can take much longer to do so than CMAN. The default value for CMAN membership timeout is 10 seconds. Other site-specific conditions may affect the relationship between the membership timeout values of CMAN and <code class="command">qdiskd</code>. For assistance with adjusting the CMAN membership timeout value, contact an authorized Red Hat support representative.
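					</div><div class="para">
						As a purely illustrative sketch of that relationship, with the quorum-disk settings sketched above (an <code class="literal">interval</code> of 1 second and a <code class="literal">tko</code> of 10 cycles), the <code class="command">qdiskd</code> membership timeout is 1 × 10 = 10 seconds, so the CMAN membership timeout should be raised to at least 20 seconds. Assuming the CMAN timeout is adjusted through the totem token value (in milliseconds) in <code class="filename">/etc/cluster/cluster.conf</code>, the entry might look like this:
					</div><pre class="screen">
&lt;totem token="21000"/&gt;
</pre><div class="para">
						The value shown is an example only; confirm any change to the membership timeout values with an authorized Red Hat support representative, as noted above.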
					</div></dd><dt class="varlistentry"><span class="term"> Fencing </span></dt><dd><div class="para">
						To ensure reliable fencing when using <code class="command">qdiskd</code>, use power fencing. While other types of fencing (such as watchdog timers and software-based solutions to reboot a node internally) can be reliable for clusters not configured with <code class="command">qdiskd</code>, they are not reliable for a cluster configured with <code class="command">qdiskd</code>.
					</div></dd><dt class="varlistentry"><span class="term"> Maximum nodes </span></dt><dd><div class="para">
						A cluster configured with <code class="command">qdiskd</code> supports a maximum of 16 nodes. The reason for the limit is scalability; increasing the node count increases the amount of synchronous I/O contention on the shared quorum disk device.
					</div></dd><dt class="varlistentry"><span class="term"> Quorum disk device </span></dt><dd><div class="para">
						A quorum disk device should be a shared block device with concurrent read/write access by all nodes in a cluster. The minimum size of the block device is 10 Megabytes. Examples of shared block devices that can be used by <code class="command">qdiskd</code> are a multi-port SCSI RAID array, a Fibre Channel RAID SAN, or a RAID-configured iSCSI target. You can create a quorum disk device with <code class="command">mkqdisk</code>, the Cluster Quorum Disk Utility. For information about using the utility refer to the mkqdisk(8) man page.
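					</div><div class="para">
						For example, you could create and label a quorum disk as follows (the device name and label are hypothetical; use a shared block device that every node can access):
					</div><pre class="screen">
# <strong class="userinput"><code>mkqdisk -c /dev/sdb1 -l myqdisk</code></strong>
</pre><div class="para">
						You can then list the quorum disks visible to a node with <code class="command">mkqdisk -L</code>.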
					</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
							Using JBOD as a quorum disk is not recommended. A JBOD cannot provide dependable performance and therefore may not allow a node to write to it quickly enough. If a node is unable to write to a quorum disk device quickly enough, the node is falsely evicted from a cluster.
						</div></div></div></dd></dl></div></div><div class="section" id="s1-selinux-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-selinux-CA">2.8. Red Hat Cluster Suite and SELinux</h2></div></div></div><a id="id837059" class="indexterm"></a><a id="id837071" class="indexterm"></a><div class="para">
			Red Hat Cluster Suite supports SELinux states according to the Red Hat Enterprise Linux release level deployed in your cluster as follows: 
			<div class="itemizedlist"><ul><li class="listitem"><div class="para">
						Red Hat Enterprise Linux 5.4 and earlier — <code class="command">disabled</code> state only.
					</div></li><li class="listitem"><div class="para">
						Red Hat Enterprise Linux 5.5 and later — <code class="command">enforcing</code> or <code class="command">permissive</code> state with the SELinux policy type set to <code class="command">targeted</code> (<span class="emphasis"><em>or</em></span> with the <code class="command">state</code> set to <code class="command">disabled</code>).
					</div></li></ul></div>

		</div><div class="para">
			For more information about SELinux, refer to <em class="citetitle">Deployment Guide</em> for Red Hat Enterprise Linux 5.
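		</div><div class="para">
			To confirm the SELinux state and policy type on a node before configuring the cluster, you can use the standard SELinux utilities (these commands are part of the base operating system, not of Red Hat Cluster Suite):
		</div><pre class="screen">
# <strong class="userinput"><code>sestatus</code></strong>
# <strong class="userinput"><code>getenforce</code></strong>
</pre><div class="para">
			On Red Hat Enterprise Linux 5.5 and later the reported mode should be enforcing or permissive with the targeted policy, or disabled, as described above.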
		</div></div><div class="section" id="s1-multicast-considerations-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-multicast-considerations-CA">2.9. Multicast Addresses</h2></div></div></div><a id="id837204" class="indexterm"></a><a id="id837216" class="indexterm"></a><div class="para">
			Red Hat Cluster nodes communicate among each other using multicast addresses. Therefore, each network switch and associated networking equipment in a Red Hat Cluster must be configured to enable multicast addresses and support IGMP (Internet Group Management Protocol). Ensure that each network switch and associated networking equipment in a Red Hat Cluster are capable of supporting multicast addresses and IGMP; if they are, ensure that multicast addressing and IGMP are enabled. Without multicast and IGMP, not all nodes can participate in a cluster, causing the cluster to fail.
		</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
				Procedures for configuring network switches and associated networking equipment vary according to each product. Refer to the appropriate vendor documentation or other information about configuring network switches and associated networking equipment to enable multicast addresses and IGMP.
			</div></div></div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
				IPv6 is not supported for Cluster Suite in Red Hat Enterprise Linux 5.
			</div></div></div></div><div class="section" id="s1-iptables_firewall-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-iptables_firewall-CA">2.10. Configuring the iptables Firewall to Allow Cluster Components</h2></div></div></div><a id="id837275" class="indexterm"></a><a id="id837283" class="indexterm"></a><div class="para">
			You can use the following filtering to allow the traffic required by the various cluster components through the <code class="literal">iptables</code> firewall.
		</div><div class="para">
			For <code class="command">openais</code>, use the following filtering. Port 5405 is used to receive multicast traffic.
		</div><pre class="screen">
iptables -I INPUT -p udp -m state --state NEW -m multiport --dports 5404,5405 -j ACCEPT
</pre><div class="para">
			For <code class="command">ricci</code>:
		</div><pre class="screen">
iptables -I INPUT -p tcp -m state --state NEW -m multiport --dports 11111 -j ACCEPT
</pre><div class="para">
			For <code class="command">modcluster</code>:
		</div><pre class="screen">
iptables -I INPUT -p tcp -m state --state NEW -m multiport --dports 16851 -j ACCEPT
</pre><div class="para">
			For <code class="command">gnbd</code>:
		</div><pre class="screen">
iptables -I INPUT -p tcp -m state --state NEW -m multiport --dports 14567 -j ACCEPT
</pre><div class="para">
			For <code class="command">luci</code>:
		</div><pre class="screen">
iptables -I INPUT -p tcp -m state --state NEW -m multiport --dports 8084 -j ACCEPT
</pre><div class="para">
			For <code class="command">DLM</code>:
		</div><pre class="screen">
iptables -I INPUT -p tcp -m state --state NEW -m multiport --dports 21064 -j ACCEPT
</pre><div class="para">
			For <code class="command">ccsd</code>:
		</div><pre class="screen">
iptables -I INPUT -p udp -m state --state NEW -m multiport --dports 50007 -j ACCEPT
iptables -I INPUT -p tcp -m state --state NEW -m multiport --dports 50008 -j ACCEPT
</pre><div class="para">
			After executing these commands, run the following command.
		</div><pre class="screen">
service iptables save ; service iptables restart
</pre><div class="para">
			In Red Hat Enterprise Linux 5, <code class="command">rgmanager</code> does not access the network directly; <code class="command">rgmanager</code> communication happens by means of <code class="command">openais</code> network transport. Enabling <code class="command">openais</code> allows <code class="command">rgmanager</code> (or any <code class="command">openais</code> clients) to work automatically.
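		</div><div class="para">
			To verify that the rules are in place after saving and restarting <code class="literal">iptables</code> (an optional check), you can list the <code class="literal">INPUT</code> chain:
		</div><pre class="screen">
# <strong class="userinput"><code>iptables -nL INPUT</code></strong>
</pre><div class="para">
			The output should include ACCEPT rules for the ports added above (5404 and 5405 for <code class="command">openais</code>, 11111 for <code class="command">ricci</code>, 16851 for <code class="command">modcluster</code>, 14567 for <code class="command">gnbd</code>, 8084 for <code class="command">luci</code>, 21064 for DLM, and 50007 and 50008 for <code class="command">ccsd</code>).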
		</div></div><div class="section" id="s1-conga-considerations-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-conga-considerations-CA">2.11. Considerations for Using <span class="application"><strong>Conga</strong></span></h2></div></div></div><a id="id837430" class="indexterm"></a><a id="id837445" class="indexterm"></a><div class="para">
			When using <span class="application"><strong>Conga</strong></span> to configure and manage your Red Hat Cluster, make sure that each computer running <span class="application"><strong>luci</strong></span> (the <span class="application"><strong>Conga</strong></span> user interface server) is running on the same network that the cluster is using for cluster communication. Otherwise, <span class="application"><strong>luci</strong></span> cannot configure the nodes to communicate on the right network. If the computer running <span class="application"><strong>luci</strong></span> is on another network (for example, a public network rather than a private network that the cluster is communicating on), contact an authorized Red Hat support representative to make sure that the appropriate host name is configured for each cluster node.
		</div></div><div class="section" id="s1-vm-considerations-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-vm-considerations-CA">2.12. Configuring Virtual Machines in a Clustered Environment</h2></div></div></div><a id="id837502" class="indexterm"></a><a id="id837514" class="indexterm"></a><div class="para">
			When you configure your cluster with virtual machine resources, you should use the <code class="command">rgmanager</code> tools to start and stop the virtual machines. Using <code class="command">virsh</code> or <code class="command">libvirt</code> tools to start the machine can result in the virtual machine running in more than one place, which can cause data corruption in the virtual machine.
		</div><div class="para">
			To reduce the chances of administrators accidentally "double-starting" virtual machines by using both cluster and non-cluster tools in a clustered environment, you can configure your system as follows:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					Ensure that you are using the <code class="literal">rgmanager 2.0.52-1.el5_4.3</code> or later package release.
				</div></li><li class="listitem"><div class="para">
					Store the virtual machine configuration files in a non-default location.
				</div></li></ul></div><div class="para">
			Storing the virtual machine configuration files somewhere other than their default location makes it more difficult to accidentally start a virtual machine using <code class="command">xm</code> or <code class="command">virsh</code>, as the configuration file will be unknown out of the box to <code class="command">libvirt</code> or the <code class="command">xm</code> tool.
		</div><div class="para">
			The non-default location for virtual machine configuration files may be anywhere. The advantage of using an NFS share or a shared GFS or GFS2 file system is that the administrator does not need to keep the configuration files in sync across the cluster members. However, it is also permissible to use a local directory as long as the administrator keeps the contents synchronized across the cluster members by some other means.
		</div><div class="para">
			In the cluster configuration, virtual machines may reference this non-default location by using the <code class="literal">path</code> attribute of a virtual machine resource. Note that the <code class="literal">path</code> attribute is a directory or set of directories separated by the colon ':' character, not a path to a specific file.
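		</div><div class="para">
			For example (the resource name and directories are hypothetical), a virtual machine resource that references two configuration directories might be defined as follows:
		</div><pre class="screen">
&lt;vm name="guest1" path="/mnt/vm_configs:/mnt/more_vm_configs"/&gt;
</pre><div class="para">
			With this sketch, <code class="command">rgmanager</code> looks in the listed directories for the configuration file belonging to <code class="literal">guest1</code> rather than in the default location.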
		</div><div class="para">
			For more information on the attributes of a virtual machine resource, refer to <a class="xref" href="#tb-vm-resource-CA">Tableau C.21, « Virtual Machine »</a>.
		</div></div></div><div xml:lang="fr-FR" class="chapter" id="ch-config-conga-CA" lang="fr-FR"><div class="titlepage"><div><div><h2 class="title">Chapitre 3. Configuring Red Hat Cluster With <span class="application"><strong>Conga</strong></span></h2></div></div></div><div class="toc"><dl><dt><span class="section"><a href="#s1-config-tasks-conga-CA">3.1. Configuration Tasks</a></span></dt><dt><span class="section"><a href="#s1-start-luci-ricci-conga-CA">3.2. Starting <span class="application"><strong>luci</strong></span> and <span class="application"><strong>ricci</strong></span></a></span></dt><dt><span class="section"><a href="#s1-creating-cluster-conga-CA">3.3. Creating A Cluster</a></span></dt><dt><span class="section"><a href="#s1-general-prop-conga-CA">3.4. Global Cluster Properties</a></span></dt><dt><span class="section"><a href="#s1-config-fence-devices-conga-CA">3.5. Configuring Fence Devices</a></span></dt><dd><dl><dt><span class="section"><a href="#s2-create-fence-devices-conga-CA">3.5.1. Creating a Shared Fence Device</a></span></dt><dt><span class="section"><a href="#s2-modify-delete-fence-devices-conga-CA">3.5.2. Modifying or Deleting a Fence Device</a></span></dt></dl></dd><dt><span class="section"><a href="#s1-config-member-conga-CA">3.6. Configuring Cluster Members</a></span></dt><dd><dl><dt><span class="section"><a href="#s2-init-member-conga-CA">3.6.1. Initially Configuring Members</a></span></dt><dt><span class="section"><a href="#s2-add-member-running-conga-CA">3.6.2. Adding a Member to a Running Cluster</a></span></dt><dt><span class="section"><a href="#s2-delete-member-conga-CA">3.6.3. Deleting a Member from a Cluster</a></span></dt></dl></dd><dt><span class="section"><a href="#s1-config-failover-domain-conga-CA">3.7. Configuring a Failover Domain</a></span></dt><dd><dl><dt><span class="section"><a href="#s2-config-add-failoverdm-conga-CA">3.7.1. Adding a Failover Domain</a></span></dt><dt><span class="section"><a href="#s2-config-modify-failoverdm-conga-CA">3.7.2. Modifying a Failover Domain</a></span></dt></dl></dd><dt><span class="section"><a href="#s1-config-add-resource-conga-CA">3.8. Adding Cluster Resources</a></span></dt><dt><span class="section"><a href="#s1-add-service-conga-CA">3.9. Adding a Cluster Service to the Cluster</a></span></dt><dt><span class="section"><a href="#s1-config-storage-conga-CA">3.10. Configuring Cluster Storage</a></span></dt></dl></div><a id="id751620" class="indexterm"></a><a id="id802157" class="indexterm"></a><div class="para">
		This chapter describes how to configure Red Hat Cluster software using <span class="application"><strong>Conga</strong></span>, and consists of the following sections:
	</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
				<a class="xref" href="#s1-config-tasks-conga-CA">Section 3.1, « Configuration Tasks »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-start-luci-ricci-conga-CA">Section 3.2, « Starting <span class="application"><strong>luci</strong></span> and <span class="application"><strong>ricci</strong></span> »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-creating-cluster-conga-CA">Section 3.3, « Creating A Cluster »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-general-prop-conga-CA">Section 3.4, « Global Cluster Properties »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-config-fence-devices-conga-CA">Section 3.5, « Configuring Fence Devices »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-config-member-conga-CA">Section 3.6, « Configuring Cluster Members »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-config-failover-domain-conga-CA">Section 3.7, « Configuring a Failover Domain »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-config-add-resource-conga-CA">Section 3.8, « Adding Cluster Resources »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-add-service-conga-CA">Section 3.9, « Adding a Cluster Service to the Cluster »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-config-storage-conga-CA">Section 3.10, « Configuring Cluster Storage »</a>
			</div></li></ul></div><div class="section" id="s1-config-tasks-conga-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-config-tasks-conga-CA">3.1. Configuration Tasks</h2></div></div></div><div class="para">
			Configuring Red Hat Cluster software with <span class="application"><strong>Conga</strong></span> consists of the following steps:
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					Configuring and running the <span class="application"><strong>Conga</strong></span> configuration user interface — the <span class="application"><strong>luci</strong></span> server. Refer to <a class="xref" href="#s1-start-luci-ricci-conga-CA">Section 3.2, « Starting <span class="application"><strong>luci</strong></span> and <span class="application"><strong>ricci</strong></span> »</a>.
				</div></li><li class="listitem"><div class="para">
					Creating a cluster. Refer to <a class="xref" href="#s1-creating-cluster-conga-CA">Section 3.3, « Creating A Cluster »</a>.
				</div></li><li class="listitem"><div class="para">
					Configuring global cluster properties. Refer to <a class="xref" href="#s1-general-prop-conga-CA">Section 3.4, « Global Cluster Properties »</a>.
				</div></li><li class="listitem"><div class="para">
					Configuring fence devices. Refer to <a class="xref" href="#s1-config-fence-devices-conga-CA">Section 3.5, « Configuring Fence Devices »</a>.
				</div></li><li class="listitem"><div class="para">
					Configuring cluster members. Refer to <a class="xref" href="#s1-config-member-conga-CA">Section 3.6, « Configuring Cluster Members »</a>.
				</div></li><li class="listitem"><div class="para">
					Creating failover domains. Refer to <a class="xref" href="#s1-config-failover-domain-conga-CA">Section 3.7, « Configuring a Failover Domain »</a>.
				</div></li><li class="listitem"><div class="para">
					Creating resources. Refer to <a class="xref" href="#s1-config-add-resource-conga-CA">Section 3.8, « Adding Cluster Resources »</a>.
				</div></li><li class="listitem"><div class="para">
					Creating cluster services. Refer to <a class="xref" href="#s1-add-service-conga-CA">Section 3.9, « Adding a Cluster Service to the Cluster »</a>.
				</div></li><li class="listitem"><div class="para">
					Configuring storage. Refer to <a class="xref" href="#s1-config-storage-conga-CA">Section 3.10, « Configuring Cluster Storage »</a>.
				</div></li></ol></div></div><div class="section" id="s1-start-luci-ricci-conga-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-start-luci-ricci-conga-CA">3.2. Starting <span class="application"><strong>luci</strong></span> and <span class="application"><strong>ricci</strong></span></h2></div></div></div><div class="para">
			To administer Red Hat Clusters with <span class="application"><strong>Conga</strong></span>, install and run <span class="application"><strong>luci</strong></span> and <span class="application"><strong>ricci</strong></span> as follows:
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					At each node to be administered by <span class="application"><strong>Conga</strong></span>, install the <span class="application"><strong>ricci</strong></span> agent. For example:
				</div><pre class="screen">
# <strong class="userinput"><code>yum install ricci</code></strong></pre></li><li class="listitem"><div class="para">
					At each node to be administered by <span class="application"><strong>Conga</strong></span>, start <span class="application"><strong>ricci</strong></span>. For example:
				</div><pre class="screen">
# <strong class="userinput"><code>service ricci start</code></strong>
Starting ricci:                                            [  OK  ]
</pre></li><li class="listitem"><div class="para">
					Select a computer to host <span class="application"><strong>luci</strong></span> and install the <span class="application"><strong>luci</strong></span> software on that computer. For example:
				</div><pre class="screen">
# <strong class="userinput"><code>yum install luci</code></strong></pre><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						Typically, a computer in a server cage or a data center hosts <span class="application"><strong>luci</strong></span>; however, a cluster computer can host <span class="application"><strong>luci</strong></span>.
					</div></div></div></li><li class="listitem"><div class="para">
					At the computer running <span class="application"><strong>luci</strong></span>, initialize the <span class="application"><strong>luci</strong></span> server using the <code class="command">luci_admin init</code> command. For example:
				</div><pre class="screen">
# <strong class="userinput"><code>luci_admin init</code></strong>
Initializing the Luci server


Creating the 'admin' user

Enter password:  &lt;Type password and press ENTER.&gt;
Confirm password: &lt;Re-type password and press ENTER.&gt;

Please wait...
The admin password has been successfully set.
Generating SSL certificates...
Luci server has been successfully initialized


Restart the Luci server for changes to take effect
eg. service luci restart

</pre></li><li class="listitem"><div class="para">
					Start <span class="application"><strong>luci</strong></span> using <code class="command">service luci restart</code>. For example:
				</div><pre class="screen">
# <strong class="userinput"><code>service luci restart</code></strong>
Shutting down luci:                                        [  OK  ]
Starting luci: generating https SSL certificates...  done
                                                           [  OK  ]

Please, point your web browser to https://nano-01:8084 to access luci
</pre></li><li class="listitem"><div class="para">
					At a Web browser, place the URL of the <span class="application"><strong>luci</strong></span> server into the URL address box and click <span class="guiicon"><strong>Go </strong></span> (or the equivalent). The URL syntax for the <span class="application"><strong>luci</strong></span> server is <strong class="userinput"><code>https://<em class="replaceable"><code>luci_server_hostname</code></em>:8084</code></strong>. The first time you access <span class="application"><strong>luci</strong></span>, two SSL certificate dialog boxes are displayed. Upon acknowledging the dialog boxes, your Web browser displays the <span class="application"><strong>luci</strong></span> login page.
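				</div><div class="para">
					Optionally, if you want <span class="application"><strong>ricci</strong></span> and <span class="application"><strong>luci</strong></span> to start automatically at boot (this is not part of the procedure above), you can enable the services with <code class="command">chkconfig</code>:
				</div><pre class="screen">
# <strong class="userinput"><code>chkconfig ricci on</code></strong>
# <strong class="userinput"><code>chkconfig luci on</code></strong>
</pre><div class="para">
					Run the <code class="command">ricci</code> command on each cluster node and the <code class="command">luci</code> command on the computer hosting <span class="application"><strong>luci</strong></span>.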
				</div></li></ol></div></div><div class="section" id="s1-creating-cluster-conga-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-creating-cluster-conga-CA">3.3. Creating A Cluster</h2></div></div></div><div class="para">
			Creating a cluster with <span class="application"><strong>luci</strong></span> consists of selecting cluster nodes, entering their passwords, and submitting the request to create a cluster. If the node information and passwords are correct, <span class="application"><strong>Conga</strong></span> automatically installs software into the cluster nodes and starts the cluster. Create a cluster as follows:
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					As administrator of <span class="application"><strong>luci</strong></span>, select the <span class="guimenu"><strong>cluster</strong></span> tab.
				</div></li><li class="listitem"><div class="para">
					Click <span class="guimenu"><strong>Create a New Cluster</strong></span>.
				</div></li><li class="listitem"><div class="para">
					At the <span class="guimenu"><strong>Cluster Name</strong></span> text box, enter a cluster name. The cluster name cannot exceed 15 characters. Add the node name and password for each cluster node. Enter the node name for each node in the <span class="guimenu"><strong>Node Hostname</strong></span> column; enter the root password for each node in the <span class="guimenu"><strong>Root Password</strong></span> column. Check the <span class="guimenu"><strong>Enable Shared Storage Support</strong></span> checkbox if clustered storage is required.
				</div></li><li class="listitem"><div class="para">
					Click <span class="guibutton"><strong>Submit</strong></span>. Clicking <span class="guibutton"><strong>Submit</strong></span> causes the following actions:
				</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
							Cluster software packages to be downloaded onto each cluster node.
						</div></li><li class="listitem"><div class="para">
							Cluster software to be installed onto each cluster node.
						</div></li><li class="listitem"><div class="para">
							Cluster configuration file to be created and propagated to each node in the cluster.
						</div></li><li class="listitem"><div class="para">
							The cluster to be started.
						</div></li></ol></div><div class="para">
					A progress page shows the progress of those actions for each node in the cluster.
				</div><div class="para">
					When the process of creating a new cluster is complete, a page is displayed providing a configuration interface for the newly created cluster.
				</div></li></ol></div></div><div class="section" id="s1-general-prop-conga-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-general-prop-conga-CA">3.4. Global Cluster Properties</h2></div></div></div><div class="para">
			When a cluster is created, or if you select a cluster to configure, a cluster-specific page is displayed. The page provides an interface for configuring cluster-wide properties and detailed properties. You can configure cluster-wide properties with the tabbed interface below the cluster name. The interface provides the following tabs: <span class="guimenu"><strong>General</strong></span>, <span class="guimenu"><strong>Fence</strong></span>, <span class="guimenu"><strong>Multicast</strong></span>, and <span class="guimenu"><strong>Quorum Partition</strong></span>. To configure the parameters in those tabs, follow the steps in this section. If you do not need to configure parameters in a tab, skip the step for that tab.
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					<span class="guimenu"><strong>General</strong></span> tab — This tab displays cluster name and provides an interface for configuring the configuration version and advanced cluster properties. The parameters are summarized as follows:
				</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
							The <span class="guimenu"><strong>Cluster Name</strong></span> text box displays the cluster name; it does not accept a cluster name change. You cannot change the cluster name. The only way to change the name of a Red Hat cluster is to create a new cluster configuration with the new name.
						</div></li><li class="listitem"><div class="para">
							The <span class="guimenu"><strong>Configuration Version</strong></span> value is set to <strong class="userinput"><code>1</code></strong> by default and is automatically incremented each time you modify your cluster configuration. However, if you need to set it to another value, you can specify it at the <span class="guimenu"><strong>Configuration Version</strong></span> text box.
						</div></li><li class="listitem"><div class="para">
							You can enter advanced cluster properties by clicking <span class="guimenu"><strong>Show advanced cluster properties</strong></span>. Clicking <span class="guimenu"><strong>Show advanced cluster properties</strong></span> reveals a list of advanced properties. You can click any advanced property for online help about the property.
						</div></li></ul></div><div class="para">
					Enter the values required and click <span class="guibutton"><strong>Apply</strong></span> for changes to take effect.
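				</div><div class="para">
					The <span class="guimenu"><strong>Cluster Name</strong></span> and <span class="guimenu"><strong>Configuration Version</strong></span> values described above correspond to the <code class="literal">name</code> and <code class="literal">config_version</code> attributes of the <code class="literal">cluster</code> element in <code class="filename">/etc/cluster/cluster.conf</code>; for example (the cluster name here is hypothetical):
				</div><pre class="screen">
&lt;cluster name="mycluster" config_version="2"&gt;
</pre><div class="para">
					<span class="application"><strong>Conga</strong></span> maintains these attributes for you, so you do not need to edit the file directly when using <span class="application"><strong>luci</strong></span>.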
				</div></li><li class="listitem"><div class="para">
					<span class="guimenu"><strong>Fence</strong></span> tab — This tab provides an interface for configuring these <span class="guimenu"><strong>Fence Daemon Properties</strong></span> parameters: <span class="guimenu"><strong>Post-Fail Delay</strong></span> and <span class="guimenu"><strong>Post-Join Delay</strong></span>. The parameters are summarized as follows:
				</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
							The <span class="guimenu"><strong>Post-Fail Delay</strong></span> parameter is the number of seconds the fence daemon (<code class="command">fenced</code>) waits before fencing a node (a member of the fence domain) after the node has failed. The <span class="guimenu"><strong>Post-Fail Delay</strong></span> default value is <strong class="userinput"><code>0</code></strong>. Its value may be varied to suit cluster and network performance.
						</div></li><li class="listitem"><div class="para">
							The <span class="guimenu"><strong>Post-Join Delay</strong></span> parameter is the number of seconds the fence daemon (<code class="command">fenced</code>) waits before fencing a node after the node joins the fence domain. The <span class="guimenu"><strong>Post-Join Delay</strong></span> default value is <strong class="userinput"><code>3</code></strong>. A typical setting for <span class="guimenu"><strong>Post-Join Delay</strong></span> is between 20 and 30 seconds, but can vary according to cluster and network performance.
						</div></li></ul></div><div class="para">
					Enter the values required and click <span class="guibutton"><strong>Apply</strong></span> for changes to take effect.
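				</div><div class="para">
					In <code class="filename">/etc/cluster/cluster.conf</code> these settings appear as attributes of the <code class="literal">fence_daemon</code> element; for example (the values shown are illustrative):
				</div><pre class="screen">
&lt;fence_daemon post_join_delay="20" post_fail_delay="0"/&gt;
</pre><div class="para">
					As with the other tabs, <span class="application"><strong>Conga</strong></span> updates this element when you click <span class="guibutton"><strong>Apply</strong></span>.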
				</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						For more information about <span class="guimenu"><strong>Post-Join Delay</strong></span> and <span class="guimenu"><strong>Post-Fail Delay</strong></span>, refer to the <span class="citerefentry"><span class="refentrytitle">fenced</span>(8)</span> man page.
					</div></div></div></li><li class="listitem"><div class="para">
					<span class="guimenu"><strong>Multicast</strong></span> tab — This tab provides an interface for configuring these <span class="guimenu"><strong>Multicast Configuration</strong></span> parameters: <span class="guimenu"><strong>Let cluster choose the multicast address</strong></span> and <span class="guimenu"><strong>Specify the multicast address manually</strong></span>. The default setting is <span class="guimenu"><strong>Let cluster choose the multicast address</strong></span>. If you need to use a specific multicast address, click <span class="guimenu"><strong>Specify the multicast address manually</strong></span>, enter a multicast address into the text box, and click <span class="guibutton"><strong>Apply</strong></span> for changes to take effect.
				</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						IPv6 is not supported for Cluster Suite in Red Hat Enterprise Linux 5.
					</div></div></div><div class="para">
					If you do not specify a multicast address, the Red Hat Cluster software (specifically, <code class="command">cman</code>, the Cluster Manager) creates one. It forms the upper 16 bits of the multicast address with 239.192 and forms the lower 16 bits based on the cluster ID.
				</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						The cluster ID is a unique identifier that <code class="command">cman</code> generates for each cluster. To view the cluster ID, run the <code class="command">cman_tool status</code> command on a cluster node.
					</div></div></div><div class="para">
					If you do specify a multicast address, you should use the 239.192.x.x series that <code class="command">cman</code> uses. Otherwise, using a multicast address outside that range may cause unpredictable results. For example, using 224.0.0.x (which is "All hosts on the network") may not be routed correctly, or even routed at all by some hardware.
				</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						If you specify a multicast address, make sure that you check the configuration of routers that cluster packets pass through. Some routers may take a long time to learn addresses, seriously impacting cluster performance.
					</div></div></div></li><li class="listitem"><div class="para">
					<span class="guimenu"><strong>Quorum Partition</strong></span> tab — This tab provides an interface for configuring these <span class="guimenu"><strong>Quorum Partition Configuration</strong></span> parameters: <span class="guimenu"><strong>Do not use a Quorum Partition</strong></span>, <span class="guimenu"><strong>Use a Quorum Partition</strong></span>, <span class="guimenu"><strong>Interval</strong></span>, <span class="guimenu"><strong>Votes</strong></span>, <span class="guimenu"><strong>TKO</strong></span>, <span class="guimenu"><strong>Minimum Score</strong></span>, <span class="guimenu"><strong>Device</strong></span>, <span class="guimenu"><strong>Label</strong></span>, and <span class="guimenu"><strong>Heuristics</strong></span>. The <span class="guimenu"><strong>Do not use a Quorum Partition</strong></span> parameter is enabled by default. <a class="xref" href="#tb-qdisk-params-rhel5-conga-CA">Tableau 3.1, « Quorum-Disk Parameters »</a> describes the parameters. If you need to use a quorum disk, click <span class="guimenu"><strong>Use a Quorum Partition</strong></span>, enter quorum disk parameters, click <span class="guibutton"><strong>Apply</strong></span>, and restart the cluster for the changes to take effect.
				</div><div class="important"><div class="admonition_header"><h2>Important</h2></div><div class="admonition"><div class="para">
						Quorum-disk parameters and heuristics depend on the site environment and the special requirements needed. To understand the use of quorum-disk parameters and heuristics, refer to the <span class="citerefentry"><span class="refentrytitle">qdisk</span>(5)</span> man page. If you require assistance understanding and using quorum disk, contact an authorized Red Hat support representative.
					</div></div></div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						Clicking <span class="guimenu"><strong>Apply</strong></span> on the <span class="guimenu"><strong>Quorum Partition</strong></span> tab propagates changes to the cluster configuration file (<code class="filename">/etc/cluster/cluster.conf</code>) in each cluster node. However, for the quorum disk to operate, you must restart the cluster (refer to <a class="xref" href="#s1-admin-start-conga-CA">Section 4.1, « Starting, Stopping, and Deleting Clusters »</a>).
					</div></div></div></li></ol></div><div class="table" id="tb-qdisk-params-rhel5-conga-CA"><h6>Tableau 3.1. Quorum-Disk Parameters</h6><div class="table-contents"><table summary="Quorum-Disk Parameters" border="1"><colgroup><col width="25%" class="Parameter" /><col width="75%" class="Description" /></colgroup><thead><tr><th>
							Parameter
						</th><th>
							Description
						</th></tr></thead><tbody><tr><td>
							<span class="guimenu"><strong>Do not use a Quorum Partition</strong></span>
						</td><td>
							Disables quorum partition. Disables quorum-disk parameters in the <span class="guimenu"><strong>Quorum Partition</strong></span> tab.
						</td></tr><tr><td>
							<span class="guimenu"><strong>Use a Quorum Partition</strong></span>
						</td><td>
							Enables quorum partition. Enables quorum-disk parameters in the <span class="guimenu"><strong>Quorum Partition</strong></span> tab.
						</td></tr><tr><td>
							<span class="guimenu"><strong>Interval</strong></span>
						</td><td>
							The frequency of read/write cycles, in seconds.
						</td></tr><tr><td>
							<span class="guimenu"><strong>Votes</strong></span>
						</td><td>
							The number of votes the quorum daemon advertises to CMAN when it has a high enough score.
						</td></tr><tr><td>
							<span class="guimenu"><strong>TKO</strong></span>
						</td><td>
							The number of cycles a node must miss to be declared dead.
						</td></tr><tr><td>
							<span class="guimenu"><strong>Minimum Score</strong></span>
						</td><td>
							The minimum score for a node to be considered "alive". If omitted or set to 0, the default function, <code class="command">floor((<em class="replaceable"><code>n</code></em>+1)/2)</code>, is used, where <em class="replaceable"><code>n</code></em> is the sum of the heuristics scores. The <span class="guimenu"><strong>Minimum Score</strong></span> value must never exceed the sum of the heuristic scores; otherwise, the quorum disk cannot be available.
						</td></tr><tr><td>
							<span class="guimenu"><strong>Device</strong></span>
						</td><td>
							The storage device the quorum daemon uses. The device must be the same on all nodes.
						</td></tr><tr><td>
							<span class="guimenu"><strong>Label</strong></span>
						</td><td>
							Specifies the quorum disk label created by the <code class="command">mkqdisk</code> utility. If this field contains an entry, the label overrides the <span class="guimenu"><strong>Device</strong></span> field. If this field is used, the quorum daemon reads <code class="filename">/proc/partitions</code> and checks for qdisk signatures on every block device found, comparing the label against the specified label. This is useful in configurations where the quorum device name differs among nodes.
						</td></tr><tr><td>
							<span class="guimenu"><strong>Heuristics</strong></span>
						</td><td>
							<table border="0" summary="Simple list" class="simplelist"><tr><td><span class="guimenu"><strong>Path to Program</strong></span> — The program used to determine if this heuristic is alive. This can be anything that can be executed by <code class="command">/bin/sh -c</code>. A return value of <span class="returnvalue">0</span> indicates success; anything else indicates failure. This field is required.</td></tr><tr><td><span class="guimenu"><strong>Interval</strong></span> — The frequency (in seconds) at which the heuristic is polled. The default interval for every heuristic is 2 seconds.</td></tr><tr><td><span class="guimenu"><strong>Score</strong></span> — The weight of this heuristic. Be careful when determining scores for heuristics. The default score for each heuristic is 1. </td></tr></table>

						</td></tr><tr><td>
							<span class="guimenu"><strong>Apply</strong></span>
						</td><td>
							Propagates the changes to the cluster configuration file (<code class="filename">/etc/cluster/cluster.conf</code>) in each cluster node.
						</td></tr></tbody></table></div></div><br class="table-break" /></div><div class="section" id="s1-config-fence-devices-conga-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-config-fence-devices-conga-CA">3.5. Configuring Fence Devices</h2></div></div></div><div class="para">
			Configuring fence devices consists of creating, modifying, and deleting fence devices. Creating a fence device consists of selecting a fence device type and entering parameters for that fence device (for example, name, IP address, login, and password). Modifying a fence device consists of selecting an existing fence device and changing parameters for that fence device. Deleting a fence device consists of selecting an existing fence device and deleting it.
		</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
				If you are creating a new cluster, you can create fence devices when you configure cluster nodes. Refer to <a class="xref" href="#s1-config-member-conga-CA">Section 3.6, « Configuring Cluster Members »</a>.
			</div></div></div><div class="para">
			With <span class="application"><strong>Conga</strong></span> you can create shared and non-shared fence devices. For information on supported fence devices and there parameters, refer to <a class="xref" href="#ap-fence-device-param-CA">Annexe B, <em>Fence Device Parameters</em></a>.
		</div><div class="para">
			This section provides procedures for the following tasks:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					Creating <span class="emphasis"><em>shared</em></span> fence devices — Refer to <a class="xref" href="#s2-create-fence-devices-conga-CA">Section 3.5.1, « Creating a Shared Fence Device »</a>. The procedures apply <span class="emphasis"><em>only</em></span> to creating shared fence devices. You can create <span class="emphasis"><em>non-shared</em></span> (and shared) fence devices while configuring nodes (refer to <a class="xref" href="#s1-config-member-conga-CA">Section 3.6, « Configuring Cluster Members »</a>).
				</div></li><li class="listitem"><div class="para">
					Modifying or deleting fence devices — Refer to <a class="xref" href="#s2-modify-delete-fence-devices-conga-CA">Section 3.5.2, « Modifying or Deleting a Fence Device »</a>. The procedures apply to both shared and non-shared fence devices.
				</div></li></ul></div><div class="para">
			The starting point of each procedure is at the cluster-specific page that you navigate to from <span class="guilabel"><strong>Choose a cluster to administer</strong></span> displayed on the <span class="guimenu"><strong>cluster</strong></span> tab.
		</div><div class="section" id="s2-create-fence-devices-conga-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-create-fence-devices-conga-CA">3.5.1. Creating a Shared Fence Device</h3></div></div></div><div class="para">
				To create a shared fence device, follow these steps:
			</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
						At the detailed menu for the cluster (below the <span class="guimenu"><strong>clusters</strong></span> menu), click <span class="guimenu"><strong>Shared Fence Devices</strong></span>. Clicking <span class="guimenu"><strong>Shared Fence Devices</strong></span> causes the display of the fence devices for a cluster and causes the display of menu items for fence device configuration: <span class="guimenu"><strong>Add a Fence Device</strong></span> and <span class="guimenu"><strong>Configure a Fence Device</strong></span>.
					</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
							If this is an initial cluster configuration, no fence devices have been created, and therefore none are displayed.
						</div></div></div></li><li class="listitem"><div class="para">
						Click <span class="guimenu"><strong>Add a Fence Device</strong></span>. Clicking <span class="guimenu"><strong>Add a Fence Device</strong></span> causes the <span class="guilabel"><strong>Add a Sharable Fence Device</strong></span> page to be displayed (refer to <a class="xref" href="#fig-fence-device-config-dbox-conga-CA">Figure 3.1, « Fence Device Configuration »</a>).
					</div><div class="figure" id="fig-fence-device-config-dbox-conga-CA"><div class="figure-contents"><div class="mediaobject"><img src="./images/fence-device-config-dbox-conga.png" width="444" alt="Fence Device Configuration" /><div class="longdesc"><div class="para">
									fence configuration dialog box
								</div></div></div></div><h6>Figure 3.1. Fence Device Configuration</h6></div><br class="figure-break" /></li><li class="listitem"><div class="para">
						At the <span class="guilabel"><strong>Add a Sharable Fence Device</strong></span> page, click the drop-down box under <span class="guimenu"><strong>Fencing Type</strong></span> and select the type of fence device to configure.
					</div></li><li class="listitem"><div class="para">
						Specify the information in the <span class="guilabel"><strong>Fencing Type </strong></span> dialog box according to the type of fence device. Refer to <a class="xref" href="#ap-fence-device-param-CA">Annexe B, <em>Fence Device Parameters</em></a> for more information about fence device parameters.
					</div></li><li class="listitem"><div class="para">
						Click <span class="guibutton"><strong>Add this shared fence device</strong></span>.
					</div><div class="para">
						Clicking <span class="guibutton"><strong>Add this shared fence device</strong></span> causes a progress page to be displayed temporarily. After the fence device has been added, the detailed cluster properties menu is updated with the fence device under <span class="guimenu"><strong>Configure a Fence Device</strong></span>.
					</div></li></ol></div></div><div class="section" id="s2-modify-delete-fence-devices-conga-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-modify-delete-fence-devices-conga-CA">3.5.2. Modifying or Deleting a Fence Device</h3></div></div></div><div class="para">
				To modify or delete a fence device, follow these steps:
			</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
						At the detailed menu for the cluster (below the <span class="guimenu"><strong>clusters</strong></span> menu), click <span class="guimenu"><strong>Shared Fence Devices</strong></span>. Clicking <span class="guimenu"><strong>Shared Fence Devices</strong></span> causes the display of the fence devices for a cluster and causes the display of menu items for fence device configuration: <span class="guimenu"><strong>Add a Fence Device</strong></span> and <span class="guimenu"><strong>Configure a Fence Device</strong></span>.
					</div></li><li class="listitem"><div class="para">
						Click <span class="guimenu"><strong>Configure a Fence Device</strong></span>. Clicking <span class="guimenu"><strong>Configure a Fence Device</strong></span> causes the display of a list of fence devices under <span class="guimenu"><strong>Configure a Fence Device</strong></span>.
					</div></li><li class="listitem"><div class="para">
						Click a fence device in the list. Clicking a fence device in the list causes the display of a <span class="guilabel"><strong>Fence Device Form</strong></span> page for the fence device selected from the list.
					</div></li><li class="listitem"><div class="para">
						Either modify or delete the fence device as follows:
					</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
								To modify the fence device, enter changes to the parameters displayed. Refer to <a class="xref" href="#ap-fence-device-param-CA">Annexe B, <em>Fence Device Parameters</em></a> for more information about fence device parameters. Click <span class="guibutton"><strong>Update this fence device</strong></span> and wait for the configuration to be updated.
							</div></li><li class="listitem"><div class="para">
								To delete the fence device, click <span class="guibutton"><strong>Delete this fence device</strong></span> and wait for the configuration to be updated.
							</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
									You can also create shared fence devices on the node configuration page. However, you can only modify or delete a shared fence device via <span class="guimenu"><strong>Shared Fence Devices</strong></span> at the detailed menu for the cluster (below the <span class="guimenu"><strong>clusters</strong></span> menu).
								</div></div></div></li></ul></div></li></ol></div></div></div><div class="section" id="s1-config-member-conga-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-config-member-conga-CA">3.6. Configuring Cluster Members</h2></div></div></div><div class="para">
			Configuring cluster members consists of initially configuring nodes in a newly configured cluster, adding members, and deleting members. The following sections provide procedures for initial configuration of nodes, adding nodes, and deleting nodes:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					<a class="xref" href="#s2-init-member-conga-CA">Section 3.6.1, « Initially Configuring Members »</a>
				</div></li><li class="listitem"><div class="para">
					<a class="xref" href="#s2-add-member-running-conga-CA">Section 3.6.2, « Adding a Member to a Running Cluster »</a>
				</div></li><li class="listitem"><div class="para">
					<a class="xref" href="#s2-delete-member-conga-CA">Section 3.6.3, « Deleting a Member from a Cluster »</a>
				</div></li></ul></div><div class="section" id="s2-init-member-conga-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-init-member-conga-CA">3.6.1. Initially Configuring Members</h3></div></div></div><div class="para">
				Creating a cluster consists of selecting a set of nodes (or members) to be part of the cluster. Once you have completed the initial step of creating a cluster and creating fence devices, you need to configure cluster nodes. To initially configure cluster nodes after creating a new cluster, follow the steps in this section. The starting point of the procedure is at the cluster-specific page that you navigate to from <span class="guilabel"><strong>Choose a cluster to administer</strong></span> displayed on the <span class="guimenu"><strong>cluster</strong></span> tab.
			</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
						At the detailed menu for the cluster (below the <span class="guimenu"><strong>clusters</strong></span> menu), click <span class="guimenu"><strong>Nodes</strong></span>. Clicking <span class="guimenu"><strong>Nodes</strong></span> causes the display of an <span class="guimenu"><strong>Add a Node</strong></span> element and a <span class="guimenu"><strong>Configure</strong></span> element with a list of the nodes already configured in the cluster.
					</div></li><li class="listitem"><div class="para">
						Click a link for a node at either the list in the center of the page or in the list in the detailed menu under the <span class="guimenu"><strong>clusters</strong></span> menu. Clicking a link for a node causes a page to be displayed for that link showing how that node is configured.
					</div></li><li class="listitem"><div class="para">
						At the bottom of the page, under <span class="guimenu"><strong>Main Fencing Method</strong></span>, click <span class="guibutton"><strong>Add a fence device to this level</strong></span>.
					</div></li><li class="listitem"><div class="para">
						Select a fence device and provide parameters for the fence device (for example port number).
					</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
							You can choose from an existing fence device or create a new fence device.
						</div></div></div></li><li class="listitem"><div class="para">
						Click <span class="guibutton"><strong>Update main fence properties</strong></span> and wait for the change to take effect.
					</div></li></ol></div></div><div class="section" id="s2-add-member-running-conga-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-add-member-running-conga-CA">3.6.2. Adding a Member to a Running Cluster</h3></div></div></div><div class="para">
				To add a member to a running cluster, follow the steps in this section. The starting point of the procedure is at the cluster-specific page that you navigate to from <span class="guilabel"><strong>Choose a cluster to administer</strong></span> displayed on the <span class="guimenu"><strong>cluster</strong></span> tab.
			</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
						At the detailed menu for the cluster (below the <span class="guimenu"><strong>clusters</strong></span> menu), click <span class="guimenu"><strong>Nodes</strong></span>. Clicking <span class="guimenu"><strong>Nodes</strong></span> causes the display of an <span class="guimenu"><strong>Add a Node</strong></span> element and a <span class="guimenu"><strong>Configure</strong></span> element with a list of the nodes already configured in the cluster. (In addition, a list of the cluster nodes is displayed in the center of the page.)
					</div></li><li class="listitem"><div class="para">
						Click <span class="guimenu"><strong>Add a Node</strong></span>. Clicking <span class="guimenu"><strong>Add a Node</strong></span> causes the display of the <span class="guilabel"><strong>Add a node to <em class="replaceable"><code>cluster name</code></em></strong></span> page.
					</div></li><li class="listitem"><div class="para">
						At that page, enter the node name in the <span class="guimenu"><strong>Node Hostname</strong></span> text box; enter the root password in the <span class="guimenu"><strong>Root Password</strong></span> text box. Check the <span class="guimenu"><strong>Enable Shared Storage Support</strong></span> checkbox if clustered storage is required. If you want to add more nodes, click <span class="guibutton"><strong>Add another entry</strong></span> and enter the node name and password for each additional node.
					</div></li><li class="listitem"><div class="para">
						Click <span class="guibutton"><strong>Submit</strong></span>. Clicking <span class="guibutton"><strong>Submit</strong></span> causes the following actions:
					</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
								Cluster software packages to be downloaded onto the added node.
							</div></li><li class="listitem"><div class="para">
								Cluster software to be installed (or verification that the appropriate software packages are installed) onto the added node.
							</div></li><li class="listitem"><div class="para">
								Cluster configuration file to be updated and propagated to each node in the cluster — including the added node.
							</div></li><li class="listitem"><div class="para">
								The added node to be joined to the cluster.
							</div></li></ol></div><div class="para">
						A progress page shows the progress of those actions for each added node.
					</div></li><li class="listitem"><div class="para">
						When the process of adding a node is complete, a page is displayed providing a configuration interface for the cluster.
					</div></li><li class="listitem"><div class="para">
						At the detailed menu for the cluster (below the <span class="guimenu"><strong>clusters</strong></span> menu), click <span class="guimenu"><strong>Nodes</strong></span>. Clicking <span class="guimenu"><strong>Nodes</strong></span> causes the following displays:
					</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
								A list of cluster nodes in the center of the page
							</div></li><li class="listitem"><div class="para">
								The <span class="guimenu"><strong>Add a Node</strong></span> element and the <span class="guimenu"><strong>Configure</strong></span> element with a list of the nodes configured in the cluster at the detailed menu for the cluster (below the <span class="guimenu"><strong>clusters</strong></span> menu)
							</div></li></ul></div></li><li class="listitem"><div class="para">
						Click the link for an added node in either the list in the center of the page or the list in the detailed menu under the <span class="guimenu"><strong>clusters</strong></span> menu. Clicking the link for the added node causes a page to be displayed for that link showing how that node is configured.
					</div></li><li class="listitem"><div class="para">
						At the bottom of the page, under <span class="guimenu"><strong>Main Fencing Method</strong></span>, click <span class="guibutton"><strong>Add a fence device to this level</strong></span>.
					</div></li><li class="listitem"><div class="para">
						Select a fence device and provide parameters for the fence device (for example, the port number).
					</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
							You can choose an existing fence device or create a new one.
						</div></div></div></li><li class="listitem"><div class="para">
						Click <span class="guibutton"><strong>Update main fence properties</strong></span> and wait for the change to take effect.
					</div></li></ol></div></div><div class="section" id="s2-delete-member-conga-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-delete-member-conga-CA">3.6.3. Deleting a Member from a Cluster</h3></div></div></div><div class="para">
				To delete a member from an existing cluster that is currently in operation, follow the steps in this section. The starting point of the procedure is at the <span class="guilabel"><strong>Choose a cluster to administer</strong></span> page (displayed on the <span class="guimenu"><strong>cluster</strong></span> tab).
			</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
						Click the link of the node to be deleted. Clicking the link of the node to be deleted causes a page to be displayed for that link showing how that node is configured.
					</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
							To allow services running on a node to fail over when the node is deleted, skip the next step.
						</div></div></div></li><li class="listitem"><div class="para">
						Disable or relocate each service that is running on the node to be deleted:
					</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
							Repeat this step for each service that needs to be disabled or started on another node.
						</div></div></div><div class="orderedlist"><ol><li class="listitem"><div class="para">
								Under <span class="guimenu"><strong>Services on this Node</strong></span>, click the link for a service. Clicking that link causes a configuration page for that service to be displayed.
							</div></li><li class="listitem"><div class="para">
								On that page, at the <span class="guimenu"><strong>Choose a task</strong></span> drop-down box, choose either to disable the service or to start it on another node, and click <span class="guibutton"><strong>Go</strong></span>.
							</div></li><li class="listitem"><div class="para">
								Upon confirmation that the service has been disabled or started on another node, click the <span class="guimenu"><strong>cluster</strong></span> tab. Clicking the <span class="guimenu"><strong>cluster</strong></span> tab causes the <span class="guilabel"><strong>Choose a cluster to administer</strong></span> page to be displayed.
							</div></li><li class="listitem"><div class="para">
								At the <span class="guilabel"><strong>Choose a cluster to administer</strong></span> page, click the link of the node to be deleted. Clicking the link of the node to be deleted causes a page to be displayed for that link showing how that node is configured.
							</div></li></ol></div></li><li class="listitem"><div class="para">
						On that page, at the <span class="guimenu"><strong>Choose a task</strong></span> drop-down box, choose <span class="guimenuitem"><strong>Delete this node</strong></span> and click <span class="guibutton"><strong>Go</strong></span>. When the node is deleted, a page is displayed that lists the nodes in the cluster. Check the list to make sure that the node has been deleted.
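					</div><div class="para">
						Optionally, you can confirm from a remaining cluster node that the deleted node is no longer present in the cluster membership or in the cluster configuration file. The following commands are only an illustrative sketch:
					</div><pre class="screen">
# <strong class="userinput"><code>cman_tool nodes</code></strong>
# <strong class="userinput"><code>grep clusternode /etc/cluster/cluster.conf</code></strong></pre><div class="para">
						The deleted node should no longer appear in either listing.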
					</div></li></ol></div></div></div><div class="section" id="s1-config-failover-domain-conga-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-config-failover-domain-conga-CA">3.7. Configuring a Failover Domain</h2></div></div></div><div class="para">
			A failover domain is a named subset of cluster nodes that are eligible to run a cluster service in the event of a node failure. A failover domain can have the following characteristics:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					Unrestricted — Allows you to specify that a subset of members are preferred, but that a cluster service assigned to this domain can run on any available member.
				</div></li><li class="listitem"><div class="para">
					Restricted — Allows you to restrict the members that can run a particular cluster service. If none of the members in a restricted failover domain are available, the cluster service cannot be started (either manually or by the cluster software).
				</div></li><li class="listitem"><div class="para">
					Unordered — When a cluster service is assigned to an unordered failover domain, the member on which the cluster service runs is chosen from the available failover domain members with no priority ordering.
				</div></li><li class="listitem"><div class="para">
					Ordered — Allows you to specify a preference order among the members of a failover domain. The member at the top of the list is the most preferred, followed by the second member in the list, and so on.
				</div></li><li class="listitem"><div class="para">
					Failback — Allows you to specify whether a service in the failover domain should fail back to the node that it was originally running on before that node failed. Configuring this characteristic is useful in circumstances where a node repeatedly fails and is part of an ordered failover domain. In that circumstance, if a node is the preferred node in a failover domain, it is possible for a service to fail over and fail back repeatedly between the preferred node and another node, causing severe impact on performance.
				</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						The failback characteristic is applicable only if ordered failover is configured.
					</div></div></div></li></ul></div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
				Changing a failover domain configuration has no effect on currently running services.
			</div></div></div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
				Failover domains are <span class="emphasis"><em>not</em></span> required for operation.
			</div></div></div><div class="para">
			By default, failover domains are unrestricted and unordered.
		</div><div class="para">
			In a cluster with several members, using a restricted failover domain can minimize the work to set up the cluster to run a cluster service (such as <code class="filename">httpd</code>), which requires you to set up the configuration identically on all members that run the cluster service. Instead of setting up the entire cluster to run the cluster service, you must set up only the members in the restricted failover domain that you associate with the cluster service.
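		</div><div class="para">
			For reference, a restricted, ordered failover domain is recorded in <code class="filename">/etc/cluster/cluster.conf</code> in a form similar to the following fragment. The domain name, node names, and priorities shown here are placeholders only:
		</div><pre class="screen">
&lt;failoverdomains&gt;
    &lt;failoverdomain name="httpd-domain" ordered="1" restricted="1"&gt;
        &lt;failoverdomainnode name="node-01.example.com" priority="1"/&gt;
        &lt;failoverdomainnode name="node-02.example.com" priority="2"/&gt;
    &lt;/failoverdomain&gt;
&lt;/failoverdomains&gt;</pre><div class="para">
			You do not need to edit the file directly; <span class="application"><strong>Conga</strong></span> updates and propagates it when you submit changes as described in the following sections.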
		</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
				To configure a preferred member, you can create an unrestricted failover domain comprising only one cluster member. Doing that causes a cluster service to run on that cluster member primarily (the preferred member), but allows the cluster service to fail over to any of the other members.
			</div></div></div><div class="para">
			The following sections describe adding a failover domain and modifying a failover domain:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					<a class="xref" href="#s2-config-add-failoverdm-conga-CA">Section 3.7.1, « Adding a Failover Domain »</a>
				</div></li><li class="listitem"><div class="para">
					<a class="xref" href="#s2-config-modify-failoverdm-conga-CA">Section 3.7.2, « Modifying a Failover Domain »</a>
				</div></li></ul></div><div class="section" id="s2-config-add-failoverdm-conga-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-config-add-failoverdm-conga-CA">3.7.1. Adding a Failover Domain</h3></div></div></div><div class="para">
				To add a failover domain, follow the steps in this section. The starting point of the procedure is at the cluster-specific page that you navigate to from <span class="guilabel"><strong>Choose a cluster to administer</strong></span> displayed on the <span class="guimenu"><strong>cluster</strong></span> tab.
			</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
						At the detailed menu for the cluster (below the <span class="guimenu"><strong>clusters</strong></span> menu), click <span class="guimenu"><strong>Failover Domains</strong></span>. Clicking <span class="guimenu"><strong>Failover Domains</strong></span> causes the display of failover domains with related services and the display of menu items for failover domains: <span class="guimenu"><strong>Add a Failover Domain</strong></span> and <span class="guimenu"><strong>Configure a Failover Domain </strong></span>.
					</div></li><li class="listitem"><div class="para">
						Click <span class="guimenu"><strong>Add a Failover Domain</strong></span>. Clicking <span class="guimenu"><strong>Add a Failover Domain</strong></span> causes the display of the <span class="guilabel"><strong>Add a Failover Domain</strong></span> page.
					</div></li><li class="listitem"><div class="para">
						At the <span class="guilabel"><strong>Add a Failover Domain</strong></span> page, specify a failover domain name at the <span class="guimenu"><strong>Failover Domain Name</strong></span> text box.
					</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
							The name should be descriptive enough to distinguish its purpose relative to other names used in your cluster.
						</div></div></div></li><li class="listitem"><div class="para">
						To enable setting failover priority of the members in the failover domain, click the <span class="guimenu"><strong>Prioritized</strong></span> checkbox. With <span class="guimenu"><strong>Prioritized</strong></span> checked, you can set the priority value, <span class="guimenu"><strong>Priority</strong></span>, for each node selected as members of the failover domain.
					</div></li><li class="listitem"><div class="para">
						To restrict failover to members in this failover domain, click the checkbox next to <span class="guimenu"><strong>Restrict failover to this domain's members</strong></span>. With <span class="guimenu"><strong>Restrict failover to this domain's members</strong></span> checked, services assigned to this failover domain fail over only to nodes in this failover domain.
					</div></li><li class="listitem"><div class="para">
						To specify that a node does not fail back in this failover domain, click the checkbox next to <span class="guimenu"><strong>Do not fail back services in this domain</strong></span>. With <span class="guimenu"><strong>Do not fail back services in this domain</strong></span> checked, if a service fails over from a preferred node, the service does not fail back to the original node once it has recovered.
					</div></li><li class="listitem"><div class="para">
						Configure members for this failover domain. Under <span class="guimenu"><strong>Failover domain membership</strong></span>, click the <span class="guimenu"><strong>Member</strong></span> checkbox for each node that is to be a member of the failover domain. If <span class="guimenu"><strong>Prioritized</strong></span> is checked, set the priority in the <span class="guimenu"><strong>Priority</strong></span> text box for each member of the failover domain.
					</div></li><li class="listitem"><div class="para">
						Click <span class="guibutton"><strong>Submit</strong></span>. Clicking <span class="guibutton"><strong>Submit</strong></span> causes a progress page to be displayed followed by the display of the <span class="guilabel"><strong>Failover Domain Form</strong></span> page. That page displays the added failover domain and includes the failover domain in the cluster menu to the left under <span class="guimenu"><strong>Domain</strong></span>.
					</div></li><li class="listitem"><div class="para">
						To make additional changes to the failover domain, continue modifications at the <span class="guilabel"><strong>Failover Domain Form</strong></span> page and click <span class="guibutton"><strong>Submit</strong></span> when you are done.
					</div></li></ol></div></div><div class="section" id="s2-config-modify-failoverdm-conga-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-config-modify-failoverdm-conga-CA">3.7.2. Modifying a Failover Domain</h3></div></div></div><div class="para">
				To modify a failover domain, follow the steps in this section. The starting point of the procedure is at the cluster-specific page that you navigate to from <span class="guilabel"><strong>Choose a cluster to administer</strong></span> displayed on the <span class="guimenu"><strong>cluster</strong></span> tab.
			</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
						At the detailed menu for the cluster (below the <span class="guimenu"><strong>clusters</strong></span> menu), click <span class="guimenu"><strong>Failover Domains</strong></span>. Clicking <span class="guimenu"><strong>Failover Domains</strong></span> causes the display of failover domains with related services and the display of menu items for failover domains: <span class="guimenu"><strong>Add a Failover Domain</strong></span> and <span class="guimenu"><strong>Configure a Failover Domain </strong></span>.
					</div></li><li class="listitem"><div class="para">
						Click <span class="guimenu"><strong>Configure a Failover Domain</strong></span>. Clicking <span class="guimenu"><strong>Configure a Failover Domain</strong></span> causes the display of failover domains under <span class="guimenu"><strong>Configure a Failover Domain</strong></span> at the detailed menu for the cluster (below the <span class="guimenu"><strong>clusters</strong></span> menu).
					</div></li><li class="listitem"><div class="para">
						At the detailed menu for the cluster (below the <span class="guimenu"><strong>clusters</strong></span> menu), click the failover domain to modify. Clicking the failover domain causes the display of the <span class="guilabel"><strong>Failover Domain Form</strong></span> page. At the <span class="guilabel"><strong>Failover Domain Form</strong></span> page, you can modify the failover domain name, prioritize failover, restrict failover to this domain, and modify failover domain membership.
					</div></li><li class="listitem"><div class="para">
						Modifying the failover domain name — To change the failover domain name, modify the text at the <span class="guimenu"><strong>Failover Domain Name</strong></span> text box.
					</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
							The name should be descriptive enough to distinguish its purpose relative to other names used in your cluster.
						</div></div></div></li><li class="listitem"><div class="para">
						Failover priority — To enable or disable prioritized failover in this failover domain, click the <span class="guimenu"><strong>Prioritized</strong></span> checkbox. With <span class="guimenu"><strong>Prioritized</strong></span> checked, you can set the priority value, <span class="guimenu"><strong>Priority</strong></span>, for each node selected as members of the failover domain. With <span class="guimenu"><strong>Prioritized</strong></span> <span class="emphasis"><em>not</em></span> checked, setting priority levels is disabled for this failover domain.
					</div></li><li class="listitem"><div class="para">
						Restricted failover — To enable or disable restricted failover for members in this failover domain, click the checkbox next to <span class="guimenu"><strong>Restrict failover to this domain's members</strong></span>. With <span class="guimenu"><strong>Restrict failover to this domain's members</strong></span> checked, services assigned to this failover domain fail over only to nodes in this failover domain. With <span class="guimenu"><strong>Restrict failover to this domain's members</strong></span> <span class="emphasis"><em>not</em></span> checked, services assigned to this failover domain can fail over to nodes outside this failover domain.
					</div></li><li class="listitem"><div class="para">
						Failback — To enable or disable failback in a failover domain, click the checkbox next to <span class="guimenu"><strong>Do not fail back services in this domain</strong></span>. With <span class="guimenu"><strong>Do not fail back services in this domain</strong></span> checked, if a service fails over from a preferred node, the service does not fail back to the original node once it has recovered.
					</div></li><li class="listitem"><div class="para">
						Modifying failover domain membership — Under <span class="guimenu"><strong>Failover domain membership</strong></span>, click the <span class="guimenu"><strong>Member</strong></span> checkbox for each node that is to be a member of the failover domain. A checked box for a node means that the node is a member of the failover domain. If <span class="guimenu"><strong>Prioritized</strong></span> is checked, you can adjust the priority in the <span class="guimenu"><strong>Priority</strong></span> text box for each member of the failover domain.
					</div></li><li class="listitem"><div class="para">
						Click <span class="guibutton"><strong>Submit</strong></span>. Clicking <span class="guibutton"><strong>Submit</strong></span> causes a progress page to be displayed followed by the display of the <span class="guilabel"><strong>Failover Domain Form</strong></span> page. That page displays the modified failover domain and includes it in the cluster menu to the left under <span class="guimenu"><strong>Domain</strong></span>.
					</div></li><li class="listitem"><div class="para">
						To make additional changes to the failover domain, continue modifications at the <span class="guilabel"><strong>Failover Domain Form</strong></span> page and click <span class="guibutton"><strong>Submit</strong></span> when you are done.
					</div></li></ol></div></div></div><div class="section" id="s1-config-add-resource-conga-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-config-add-resource-conga-CA">3.8. Adding Cluster Resources</h2></div></div></div><div class="para">
			To add a cluster resource, follow the steps in this section. The starting point of the procedure is at the cluster-specific page that you navigate to from <span class="guilabel"><strong>Choose a cluster to administer</strong></span> displayed on the <span class="guimenu"><strong>cluster</strong></span> tab.
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					At the detailed menu for the cluster (below the <span class="guimenu"><strong>clusters</strong></span> menu), click <span class="guimenu"><strong>Resources</strong></span>. Clicking <span class="guimenu"><strong>Resources</strong></span> causes the display of resources in the center of the page and causes the display of menu items for resource configuration: <span class="guimenu"><strong>Add a Resource</strong></span> and <span class="guimenu"><strong>Configure a Resource</strong></span>.
				</div></li><li class="listitem"><div class="para">
					Click <span class="guimenu"><strong>Add a Resource</strong></span>. Clicking <span class="guimenu"><strong>Add a Resource</strong></span> causes the <span class="guilabel"><strong>Add a Resource</strong></span> page to be displayed.
				</div></li><li class="listitem"><div class="para">
					At the <span class="guilabel"><strong>Add a Resource</strong></span> page, click the drop-down box under <span class="guimenu"><strong>Select a Resource Type</strong></span> and select the type of resource to configure. <a class="xref" href="#ap-ha-resource-params-CA">Annexe C, <em>HA Resource Parameters</em></a> describes resource parameters.
				</div></li><li class="listitem"><div class="para">
					Click <span class="guibutton"><strong>Submit</strong></span>. Clicking <span class="guibutton"><strong>Submit</strong></span> causes a progress page to be displayed followed by the display of the <span class="guilabel"><strong>Resources for <em class="replaceable"><code>cluster name</code></em></strong></span> page. That page displays the added resource (and other resources).
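				</div><div class="para">
					For reference, global resources added this way are recorded in the resource manager (<code class="command">rm</code>) section of <code class="filename">/etc/cluster/cluster.conf</code>, in a form similar to the following fragment. The IP address, device, mount point, and resource names are illustrative values only:
				</div><pre class="screen">
&lt;rm&gt;
    &lt;resources&gt;
        &lt;ip address="10.11.4.240" monitor_link="1"/&gt;
        &lt;fs name="web-data" device="/dev/vg_cluster/lv_web" mountpoint="/var/www" fstype="ext3"/&gt;
    &lt;/resources&gt;
&lt;/rm&gt;</pre><div class="para">
					As with other configuration changes made through <span class="application"><strong>Conga</strong></span>, the file is updated and propagated for you; the fragment is shown only to illustrate the result.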
				</div></li></ol></div></div><div class="section" id="s1-add-service-conga-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-add-service-conga-CA">3.9. Adding a Cluster Service to the Cluster</h2></div></div></div><a id="id893943" class="indexterm"></a><a id="id893954" class="indexterm"></a><div class="para">
			To add a cluster service to the cluster, follow the steps in this section. The starting point of the procedure is at the cluster-specific page that you navigate to from <span class="guilabel"><strong>Choose a cluster to administer</strong></span> displayed on the <span class="guimenu"><strong>cluster</strong></span> tab.
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					At the detailed menu for the cluster (below the <span class="guimenu"><strong>clusters</strong></span> menu), click <span class="guimenu"><strong>Services</strong></span>. Clicking <span class="guimenu"><strong>Services</strong></span> causes the display of services in the center of the page and causes the display of menu items for services configuration: <span class="guimenu"><strong>Add a Service</strong></span> and <span class="guimenu"><strong>Configure a Service</strong></span>.
				</div></li><li class="listitem"><div class="para">
					Click <span class="guimenu"><strong>Add a Service</strong></span>. Clicking <span class="guimenu"><strong>Add a Service</strong></span> causes the <span class="guilabel"><strong>Add a Service</strong></span> page to be displayed.
				</div></li><li class="listitem"><div class="para">
					On the <span class="guilabel"><strong>Add a Service</strong></span> page, at the <span class="guimenu"><strong>Service name</strong></span> text box, type the name of the service. Below the <span class="guimenu"><strong>Service name</strong></span> text box is a checkbox labeled <span class="guimenu"><strong>Automatically start this service</strong></span>. The checkbox is checked by default. When the checkbox is checked, the service is started automatically when a cluster is started and running. If the checkbox is <span class="emphasis"><em>not</em></span> checked, the service must be started manually any time the cluster comes up from the stopped state.
				</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						Use a descriptive name that clearly distinguishes the service from other services in the cluster.
					</div></div></div></li><li class="listitem"><div class="para">
					Add a resource to the service; click <span class="guibutton"><strong>Add a resource to this service</strong></span>. Clicking <span class="guibutton"><strong>Add a resource to this service</strong></span> causes the display of two drop-down boxes: <span class="guimenu"><strong>Add a new local resource</strong></span> and <span class="guimenu"><strong>Use an existing global resource</strong></span>. Adding a new local resource adds a resource that is available <span class="emphasis"><em>only</em></span> to this service. The process of adding a local resource is the same as adding a global resource described in <a class="xref" href="#s1-config-add-resource-conga-CA">Section 3.8, « Adding Cluster Resources »</a>. Adding a global resource adds a resource that has been previously added as a global resource (refer to <a class="xref" href="#s1-config-add-resource-conga-CA">Section 3.8, « Adding Cluster Resources »</a>).
				</div></li><li class="listitem"><div class="para">
					At the drop-down box of either <span class="guimenu"><strong>Add a new local resource</strong></span> or <span class="guimenu"><strong>Use an existing global resource</strong></span>, select the resource to add and configure it according to the options presented. (The options are the same as described in <a class="xref" href="#s1-config-add-resource-conga-CA">Section 3.8, « Adding Cluster Resources »</a>.)
				</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						If you are adding a Samba-service resource, connect a Samba-service resource directly to the service, <span class="emphasis"><em>not</em></span> to a resource within a service.
					</div></div></div></li><li class="listitem"><div class="para">
					If you want to add resources to that resource, click <span class="guibutton"><strong>Add a child</strong></span>. Clicking <span class="guibutton"><strong>Add a child</strong></span> causes the display of additional options for local and global resources. You can continue adding child resources to the resource to suit your requirements. To view child resources, click the triangle icon to the left of <span class="guimenu"><strong>Show Children</strong></span>.
				</div></li><li class="listitem"><div class="para">
					When you have completed adding resources to the service, and have completed adding child resources to resources, click <span class="guibutton"><strong>Submit</strong></span>. Clicking <span class="guibutton"><strong>Submit</strong></span> causes a progress page to be displayed followed by a page displaying the added service (and other services).
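				</div><div class="para">
					For reference, a service defined this way appears in <code class="filename">/etc/cluster/cluster.conf</code> in a form similar to the following fragment; the service name, failover domain, and resource references are placeholders, and resources that were added as global resources are referenced with <code class="command">ref</code> attributes:
				</div><pre class="screen">
&lt;service name="example_apache" domain="httpd-domain" autostart="1"&gt;
    &lt;ip ref="10.11.4.240"/&gt;
    &lt;fs ref="web-data"/&gt;
&lt;/service&gt;</pre><div class="para">
					After the service is created, you can confirm that it is running with the <code class="command">clustat</code> command on any cluster node.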
				</div></li></ol></div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
				To verify the existence of the IP service resource used in a cluster service, you must use the <code class="command">/sbin/ip addr list</code> command on a cluster node. The following output shows the <code class="command">/sbin/ip addr list</code> command executed on a node running a cluster service:
			</div><pre class="screen">
1: lo: &lt;LOOPBACK,UP&gt; mtu 16436 qdisc noqueue 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: &lt;BROADCAST,MULTICAST,UP&gt; mtu 1356 qdisc pfifo_fast qlen 1000
    link/ether 00:05:5d:9a:d8:91 brd ff:ff:ff:ff:ff:ff
    inet 10.11.4.31/22 brd 10.11.7.255 scope global eth0
    inet6 fe80::205:5dff:fe9a:d891/64 scope link
    inet 10.11.4.240/22 scope global secondary eth0
       valid_lft forever preferred_lft forever
</pre></div></div></div><div class="section" id="s1-config-storage-conga-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-config-storage-conga-CA">3.10. Configuring Cluster Storage</h2></div></div></div><a id="id890520" class="indexterm"></a><a id="id890532" class="indexterm"></a><div class="para">
			To configure storage for a cluster, click the <span class="guimenu"><strong>storage</strong></span> tab. Clicking that tab causes the display of the <span class="guilabel"><strong>Welcome to Storage Configuration Interface</strong></span> page.
		</div><div class="para">
			The <span class="guimenu"><strong>storage</strong></span> tab allows you to monitor and configure storage on remote systems. It provides a means for configuring disk partitions, logical volumes (clustered and single system use), file system parameters, and mount points. The <span class="guimenu"><strong>storage</strong></span> tab provides an interface for setting up shared storage for clusters and offers GFS and other file systems as file system options. When you select the <span class="guimenu"><strong>storage</strong></span> tab, the <span class="guilabel"><strong>Welcome to Storage Configuration Interface</strong></span> page shows a list of systems available to you in a navigation table to the left. A small form allows you to choose a storage unit size to suit your preference. That choice is persisted and can be changed at any time by returning to this page. In addition, you can change the unit type on specific configuration forms throughout the storage user interface. This general choice allows you to avoid difficult decimal representations of storage size (for example, if you know that most of your storage is measured in gigabytes, terabytes, or other more familiar representations).
		</div><div class="para">
			Additionally, the <span class="guilabel"><strong>Welcome to Storage Configuration Interface</strong></span> page lists systems that you are authorized to access but are currently unable to administer because of a problem. Examples of problems:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					A computer is unreachable via the network.
				</div></li><li class="listitem"><div class="para">
					A computer has been re-imaged and the <span class="application"><strong>luci</strong></span> server admin must re-authenticate with the <span class="application"><strong>ricci</strong></span> agent on the computer.
				</div></li></ul></div><div class="para">
			A reason for the trouble is displayed if the storage user interface can determine it.
		</div><div class="para">
			Only those computers that the user is privileged to administer are shown in the main navigation table. If you have no permissions on any computers, a message is displayed.
		</div><div class="para">
			After you select a computer to administer, a general properties page is displayed for the computer. This page is divided into three sections:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					<span class="guimenu"><strong> Hard Drives </strong></span>
				</div></li><li class="listitem"><div class="para">
					<span class="guimenu"><strong> Partitions </strong></span>
				</div></li><li class="listitem"><div class="para">
					<span class="guimenu"><strong> Volume Groups </strong></span>
				</div></li></ul></div><div class="para">
			Each section is set up as an expandable tree, with links to property sheets for specific devices, partitions, and storage entities.
		</div><div class="para">
			Configure the storage for your cluster to suit your cluster requirements. If you are configuring Red Hat GFS, configure clustered logical volumes first, using CLVM. For more information about CLVM and GFS refer to Red Hat documentation for those products.
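		</div><div class="para">
			As a minimal sketch of that sequence, the following commands create a clustered volume group and a GFS file system on it. The device names, volume names, size, cluster name (<code class="command">mycluster</code>), and journal count are placeholders that you must adapt to your environment, and the commands assume that clustered locking is enabled and <code class="command">clvmd</code> is running on the cluster nodes:
		</div><pre class="screen">
# <strong class="userinput"><code>pvcreate /dev/sdb1</code></strong>
# <strong class="userinput"><code>vgcreate -c y vg_cluster /dev/sdb1</code></strong>
# <strong class="userinput"><code>lvcreate -L 50G -n lv_data vg_cluster</code></strong>
# <strong class="userinput"><code>gfs_mkfs -p lock_dlm -t mycluster:gfs_data -j 3 /dev/vg_cluster/lv_data</code></strong></pre><div class="para">
			The number of journals (<code class="command">-j</code>) should be at least the number of nodes that will mount the file system.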
		</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
				Shared storage for use in Red Hat Cluster Suite requires that you be running the cluster logical volume manager daemon (<code class="command">clvmd</code>) or the High Availability Logical Volume Management agents (HA-LVM). If you are not able to use either the <code class="command">clvmd</code> daemon or HA-LVM for operational reasons or because you do not have the correct entitlements, you must not use single-instance LVM on the shared disk as this may result in data corruption. If you have any concerns please contact your Red Hat service representative.
			</div></div></div></div></div><div xml:lang="fr-FR" class="chapter" id="ch-mgmt-conga-CA" lang="fr-FR"><div class="titlepage"><div><div><h2 class="title">Chapitre 4. Managing Red Hat Cluster With <span class="application"><strong>Conga</strong></span></h2></div></div></div><div class="toc"><dl><dt><span class="section"><a href="#s1-admin-start-conga-CA">4.1. Starting, Stopping, and Deleting Clusters</a></span></dt><dt><span class="section"><a href="#s1-admin-manage-nodes-conga-CA">4.2. Managing Cluster Nodes</a></span></dt><dt><span class="section"><a href="#s1-admin-manage-ha-services-conga-CA">4.3. Managing High-Availability Services</a></span></dt><dt><span class="section"><a href="#s1-admin-problems-conga-CA">4.4. Diagnosing and Correcting Problems in a Cluster</a></span></dt></dl></div><a id="id814640" class="indexterm"></a><a id="id817087" class="indexterm"></a><div class="para">
		This chapter describes various administrative tasks for managing a Red Hat Cluster and consists of the following sections:
	</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
				<a class="xref" href="#s1-admin-start-conga-CA">Section 4.1, « Starting, Stopping, and Deleting Clusters »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-admin-manage-nodes-conga-CA">Section 4.2, « Managing Cluster Nodes »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-admin-manage-ha-services-conga-CA">Section 4.3, « Managing High-Availability Services »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-admin-problems-conga-CA">Section 4.4, « Diagnosing and Correcting Problems in a Cluster »</a>
			</div></li></ul></div><div class="section" id="s1-admin-start-conga-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-admin-start-conga-CA">4.1. Starting, Stopping, and Deleting Clusters</h2></div></div></div><a id="id841509" class="indexterm"></a><a id="id830804" class="indexterm"></a><div class="para">
			You can perform the following cluster-management functions through the <span class="application"><strong>luci</strong></span> server component of <span class="application"><strong>Conga</strong></span>:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					Restart a cluster.
				</div></li><li class="listitem"><div class="para">
					Start a cluster.
				</div></li><li class="listitem"><div class="para">
					Stop a cluster.
				</div></li><li class="listitem"><div class="para">
					Delete a cluster.
				</div></li></ul></div><div class="para">
			To perform one of the functions in the preceding list, follow the steps in this section. The starting point of the procedure is at the <span class="guimenu"><strong>cluster</strong></span> tab (at the <span class="guilabel"><strong>Choose a cluster to administer</strong></span> page).
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					At the right of the <span class="guimenu"><strong>Cluster Name</strong></span> for each cluster listed on the <span class="guilabel"><strong>Choose a cluster to administer</strong></span> page is a drop-down box. By default, the drop-down box is set to <span class="guimenu"><strong>Restart this cluster</strong></span>. Clicking the drop-down box reveals all the selections available: <span class="guimenu"><strong>Restart this cluster</strong></span>, <span class="guimenu"><strong>Stop this cluster</strong></span>/<span class="guimenu"><strong>Start this cluster</strong></span>, and <span class="guimenu"><strong>Delete this cluster</strong></span>. The actions of each function are summarized as follows:
				</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
							<span class="guimenu"><strong>Restart this cluster</strong></span> — Selecting this action causes the cluster to be restarted. You can select this action for any state the cluster is in.
						</div></li><li class="listitem"><div class="para">
							<span class="guimenu"><strong>Stop this cluster</strong></span>/<span class="guimenu"><strong>Start this cluster</strong></span> — <span class="guimenu"><strong>Stop this cluster</strong></span> is available when a cluster is running. <span class="guimenu"><strong>Start this cluster</strong></span> is available when a cluster is stopped.
						</div><div class="para">
							Selecting <span class="guimenu"><strong>Stop this cluster</strong></span> shuts down cluster software in all cluster nodes.
						</div><div class="para">
							Selecting <span class="guimenu"><strong>Start this cluster</strong></span> starts cluster software.
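						</div><div class="para">
							On each node, starting or stopping the cluster software in this way corresponds roughly to running the standard init scripts. A manual equivalent looks similar to the following; include <code class="command">clvmd</code> and <code class="command">gfs</code> only if your configuration uses them:
						</div><pre class="screen">
# <strong class="userinput"><code>service cman start</code></strong>
# <strong class="userinput"><code>service clvmd start</code></strong>
# <strong class="userinput"><code>service gfs start</code></strong>
# <strong class="userinput"><code>service rgmanager start</code></strong></pre><div class="para">
							To stop the cluster software manually, stop the same services in the reverse order.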
						</div></li><li class="listitem"><div class="para">
							<span class="guimenu"><strong>Delete this cluster</strong></span> — Selecting this action halts a running cluster, disables cluster software from starting automatically, and removes the cluster configuration file from each node. You can select this action for any state the cluster is in. Deleting a cluster frees each node in the cluster for use in another cluster.
						</div></li></ul></div></li><li class="listitem"><div class="para">
					Select one of the functions and click <span class="guibutton"><strong>Go</strong></span>.
				</div></li><li class="listitem"><div class="para">
					Clicking <span class="guibutton"><strong>Go</strong></span> causes a progress page to be displayed. When the action is complete, a page is displayed showing one of the following, according to the action selected:
				</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
							For <span class="guimenu"><strong>Restart this cluster</strong></span> and <span class="guimenu"><strong>Stop this cluster</strong></span>/<span class="guimenu"><strong>Start this cluster</strong></span> — Displays a page with the list of nodes for the cluster.
						</div></li><li class="listitem"><div class="para">
							For <span class="guimenu"><strong>Delete this cluster</strong></span> — Displays the <span class="guilabel"><strong>Choose a cluster to administer</strong></span> page in the <span class="guimenu"><strong>cluster</strong></span> tab, showing a list of clusters.
						</div></li></ul></div></li></ol></div></div><div class="section" id="s1-admin-manage-nodes-conga-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-admin-manage-nodes-conga-CA">4.2. Managing Cluster Nodes</h2></div></div></div><a id="id827427" class="indexterm"></a><a id="id827439" class="indexterm"></a><div class="para">
			You can perform the following node-management functions through the <span class="application"><strong>luci</strong></span> server component of <span class="application"><strong>Conga</strong></span>:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					Make a node leave or join a cluster.
				</div></li><li class="listitem"><div class="para">
					Fence a node.
				</div></li><li class="listitem"><div class="para">
					Reboot a node.
				</div></li><li class="listitem"><div class="para">
					Delete a node.
				</div></li></ul></div><div class="para">
			To perform one of the functions in the preceding list, follow the steps in this section. The starting point of the procedure is at the cluster-specific page that you navigate to from <span class="guilabel"><strong>Choose a cluster to administer</strong></span> displayed on the <span class="guimenu"><strong>cluster</strong></span> tab.
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					At the detailed menu for the cluster (below the <span class="guimenu"><strong>clusters</strong></span> menu), click <span class="guimenu"><strong>Nodes</strong></span>. Clicking <span class="guimenu"><strong>Nodes</strong></span> causes the display of nodes in the center of the page and causes the display of an <span class="guimenu"><strong>Add a Node</strong></span> element and a <span class="guimenu"><strong>Configure</strong></span> element with a list of the nodes already configured in the cluster.
				</div></li><li class="listitem"><div class="para">
					At the right of each node listed on the page displayed from the preceding step, click the <span class="guimenu"><strong>Choose a task</strong></span> drop-down box. Clicking the <span class="guimenu"><strong>Choose a task</strong></span> drop-down box reveals the following selections: <span class="guimenu"><strong>Have node leave cluster</strong></span>/<span class="guimenu"><strong>Have node join cluster</strong></span>, <span class="guimenu"><strong>Fence this node</strong></span>, <span class="guimenu"><strong>Reboot this node</strong></span>, and <span class="guimenu"><strong>Delete</strong></span>. The actions of each function are summarized as follows:
				</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
							<span class="guimenu"><strong>Have node leave cluster</strong></span>/<span class="guimenu"><strong>Have node join cluster</strong></span> — <span class="guimenu"><strong>Have node leave cluster</strong></span> is available when a node has joined a cluster. <span class="guimenu"><strong>Have node join cluster</strong></span> is available when a node has left a cluster.
						</div><div class="para">
							Selecting <span class="guimenu"><strong>Have node leave cluster</strong></span> shuts down cluster software and makes the node leave the cluster. Making a node leave a cluster prevents the node from automatically joining the cluster when it is rebooted.
						</div><div class="para">
							Selecting <span class="guimenu"><strong>Have node join cluster</strong></span> starts cluster software and makes the node join the cluster. Making a node join a cluster allows the node to automatically join the cluster when it is rebooted.
						</div></li><li class="listitem"><div class="para">
							<span class="guimenu"><strong>Fence this node</strong></span> — Selecting this action causes the node to be fenced according to how the node is configured to be fenced.
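						</div><div class="para">
							The command-line counterpart of this action is the <code class="command">fence_node</code> utility, which fences the named node according to the fencing configuration in <code class="filename">cluster.conf</code>; the node name below is a placeholder:
						</div><pre class="screen">
# <strong class="userinput"><code>fence_node node-02.example.com</code></strong></pre><div class="para">
							Fencing forcibly removes a node from the cluster, so use this action only when the node is hung or otherwise unresponsive.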
						</div></li><li class="listitem"><div class="para">
							<span class="guimenu"><strong>Reboot this node</strong></span> — Selecting this action causes the node to be rebooted.
						</div></li><li class="listitem"><div class="para">
							<span class="guimenu"><strong>Delete</strong></span> — Selecting this action causes the node to be deleted from the cluster configuration. It also stops all cluster services on the node, and deletes the <code class="filename">cluster.conf</code> file from <code class="filename">/etc/cluster/</code>.
						</div></li></ul></div></li><li class="listitem"><div class="para">
					Select one of the functions and click <span class="guibutton"><strong>Go</strong></span>.
				</div></li><li class="listitem"><div class="para">
					Clicking <span class="guibutton"><strong>Go</strong></span> causes a progress page to be displayed. When the action is complete, a page is displayed showing the list of nodes for the cluster.
				</div></li></ol></div></div><div class="section" id="s1-admin-manage-ha-services-conga-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-admin-manage-ha-services-conga-CA">4.3. Managing High-Availability Services</h2></div></div></div><a id="id746474" class="indexterm"></a><div class="para">
			You can perform the following management functions for high-availability services through the <span class="application"><strong>luci</strong></span> server component of <span class="application"><strong>Conga</strong></span>:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					Configure a service.
				</div></li><li class="listitem"><div class="para">
					Stop or start a service.
				</div></li><li class="listitem"><div class="para">
					Restart a service.
				</div></li><li class="listitem"><div class="para">
					Delete a service.
				</div></li></ul></div><div class="para">
			To perform one of the functions in the preceding list, follow the steps in this section. The starting point of the procedure is at the cluster-specific page that you navigate to from <span class="guilabel"><strong>Choose a cluster to administer</strong></span> displayed on the <span class="guimenu"><strong>cluster</strong></span> tab.
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					At the detailed menu for the cluster (below the <span class="guimenu"><strong>clusters</strong></span> menu), click <span class="guimenu"><strong>Services</strong></span>. Clicking <span class="guimenu"><strong>Services</strong></span> causes the display of services for the cluster in the center of the page.
				</div></li><li class="listitem"><div class="para">
					At the right of each service listed on the page, click the <span class="guimenu"><strong>Choose a task</strong></span> drop-down box. Clicking the <span class="guimenu"><strong>Choose a task</strong></span> drop-down box reveals the following selections, depending on whether the service is running:
				</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
							If the service is running — <span class="guimenu"><strong>Configure this service</strong></span>, <span class="guimenu"><strong>Restart this service</strong></span>, and <span class="guimenu"><strong>Stop this service</strong></span>.
						</div></li><li class="listitem"><div class="para">
							If the service is not running — <span class="guimenu"><strong>Configure this service</strong></span>, <span class="guimenu"><strong>Start this service</strong></span>, and <span class="guimenu"><strong>Delete this service</strong></span>.
						</div></li></ul></div><div class="para">
					The actions of each function are summarized as follows:
				</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
							<span class="guimenu"><strong>Configure this service</strong></span> — <span class="guimenu"><strong>Configure this service</strong></span> is available whether or not the service is running. Selecting <span class="guimenu"><strong>Configure this service</strong></span> causes the service configuration page for the service to be displayed. On that page, you can change the configuration of the service. For example, you can add a resource to the service. (For more information about adding resources and services, refer to <a class="xref" href="#s1-config-add-resource-conga-CA">Section 3.8, « Adding Cluster Resources »</a> and <a class="xref" href="#s1-add-service-conga-CA">Section 3.9, « Adding a Cluster Service to the Cluster »</a>.) In addition, a drop-down box on the page provides other functions depending on whether the service is running.
						</div><div class="para">
							When a service is running, the drop-down box provides the following functions: restarting, disabling, and relocating the service.
						</div><div class="para">
							When a service is not running, the drop-down box on the configuration page provides the following functions: enabling and deleting the service.
						</div><div class="para">
							If you are making configuration changes, save the changes by clicking <span class="guibutton"><strong>Save</strong></span>. Clicking <span class="guibutton"><strong>Save</strong></span> causes a progress page to be displayed. When the change is complete, another page is displayed showing a list of services for the cluster.
						</div><div class="para">
							If you have selected one of the functions in the drop-down box on the configuration page, click <span class="guibutton"><strong>Go</strong></span>. Clicking <span class="guibutton"><strong>Go</strong></span> causes a progress page to be displayed. When the change is complete, another page is displayed showing a list of services for the cluster.
						</div></li><li class="listitem"><div class="para">
							<span class="guimenu"><strong>Restart this service</strong></span> and <span class="guimenu"><strong>Stop this service</strong></span> — These selections are available when the service is running. Select either function and click <span class="guibutton"><strong>Go</strong></span> to make the change take effect. Clicking <span class="guibutton"><strong>Go</strong></span> causes a progress page to be displayed. When the change is complete, another page is displayed showing a list of services for the cluster.
						</div></li><li class="listitem"><div class="para">
							<span class="guimenu"><strong>Start this service</strong></span> and <span class="guimenu"><strong>Delete this service</strong></span> — These selections are available when the service is not running. Select either function and click <span class="guibutton"><strong>Go</strong></span> to make the change take effect. Clicking <span class="guibutton"><strong>Go</strong></span> causes a progress page to be displayed. When the change is complete, another page is displayed showing a list of services for the cluster.
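						</div><div class="para">
							Each of the service operations described above also has a rough command-line counterpart in the <code class="command">clusvcadm</code> utility; for example (the service name and member name are placeholders):
						</div><pre class="screen">
# <strong class="userinput"><code>clusvcadm -e example_apache</code></strong>
# <strong class="userinput"><code>clusvcadm -d example_apache</code></strong>
# <strong class="userinput"><code>clusvcadm -r example_apache -m node-02.example.com</code></strong>
# <strong class="userinput"><code>clusvcadm -R example_apache</code></strong></pre><div class="para">
							These commands enable, disable, relocate, and restart a service, respectively.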
						</div></li></ul></div></li></ol></div></div><div class="section" id="s1-admin-problems-conga-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-admin-problems-conga-CA">4.4. Diagnosing and Correcting Problems in a Cluster</h2></div></div></div><a id="id805921" class="indexterm"></a><a id="id805933" class="indexterm"></a><a id="id805945" class="indexterm"></a><div class="para">
			For information about diagnosing and correcting problems in a cluster, contact an authorized Red Hat support representative.
		</div></div></div><div xml:lang="fr-FR" class="chapter" id="ch-config-scc-CA" lang="fr-FR"><div class="titlepage"><div><div><h2 class="title">Chapitre 5. Configuring Red Hat Cluster With <code class="command">system-config-cluster</code></h2></div></div></div><div class="toc"><dl><dt><span class="section"><a href="#s1-config-tasks-CA">5.1. Configuration Tasks</a></span></dt><dt><span class="section"><a href="#s1-start-clustertool-CA">5.2. Starting the <span class="application"><strong>Cluster Configuration Tool</strong></span></a></span></dt><dt><span class="section"><a href="#s1-naming-cluster-CA">5.3. Configuring Cluster Properties</a></span></dt><dt><span class="section"><a href="#s1-config-fence-devices-CA">5.4. Configuring Fence Devices</a></span></dt><dt><span class="section"><a href="#s1-add-delete-member-CA">5.5. Adding and Deleting Members</a></span></dt><dd><dl><dt><span class="section"><a href="#s2-add-member-new-CA">5.5.1. Adding a Member to a Cluster</a></span></dt><dt><span class="section"><a href="#s2-add-member-running-CA">5.5.2. Adding a Member to a Running Cluster</a></span></dt><dt><span class="section"><a href="#s2-delete-member-CA">5.5.3. Deleting a Member from a Cluster</a></span></dt></dl></dd><dt><span class="section"><a href="#s1-config-failover-domain-CA">5.6. Configuring a Failover Domain</a></span></dt><dd><dl><dt><span class="section"><a href="#s2-config-add-failoverdm-CA">5.6.1. Adding a Failover Domain</a></span></dt><dt><span class="section"><a href="#s2-config-remove-failoverdm-CA">5.6.2. Removing a Failover Domain</a></span></dt><dt><span class="section"><a href="#s2-config-remove-member-failoverdm-CA">5.6.3. Removing a Member from a Failover Domain</a></span></dt></dl></dd><dt><span class="section"><a href="#s1-config-service-dev-CA">5.7. Adding Cluster Resources</a></span></dt><dt><span class="section"><a href="#s1-add-service-CA">5.8. Adding a Cluster Service to the Cluster</a></span></dt><dd><dl><dt><span class="section"><a href="#s2-add-service-CA-relocate">5.8.1. Relocating a Service in a Cluster</a></span></dt></dl></dd><dt><span class="section"><a href="#s1-propagate-config-CA">5.9. Propagating The Configuration File: New Cluster</a></span></dt><dt><span class="section"><a href="#s1-starting-cluster-CA">5.10. Starting the Cluster Software</a></span></dt></dl></div><a id="id841307" class="indexterm"></a><a id="id789366" class="indexterm"></a><div class="para">
		This chapter describes how to configure Red Hat Cluster software using <code class="command">system-config-cluster</code>, and consists of the following sections:
	</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
				<a class="xref" href="#s1-config-tasks-CA">Section 5.1, « Configuration Tasks »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-start-clustertool-CA">Section 5.2, « Starting the <span class="application"><strong>Cluster Configuration Tool</strong></span> »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-naming-cluster-CA">Section 5.3, « Configuring Cluster Properties »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-config-fence-devices-CA">Section 5.4, « Configuring Fence Devices »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-add-delete-member-CA">Section 5.5, « Adding and Deleting Members »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-config-failover-domain-CA">Section 5.6, « Configuring a Failover Domain »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-config-service-dev-CA">Section 5.7, « Adding Cluster Resources »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-add-service-CA">Section 5.8, « Adding a Cluster Service to the Cluster »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-propagate-config-CA">Section 5.9, « Propagating The Configuration File: New Cluster »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-starting-cluster-CA">Section 5.10, « Starting the Cluster Software »</a>
			</div></li></ul></div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
			While <code class="command">system-config-cluster</code> provides several convenient tools for configuring and managing a Red Hat Cluster, the newer, more comprehensive tool, <span class="application"><strong>Conga</strong></span>, provides more convenience and flexibility than <code class="command">system-config-cluster</code>. You may want to consider using <span class="application"><strong>Conga</strong></span> instead (refer to <a class="xref" href="#ch-config-conga-CA">Chapitre 3, <em>Configuring Red Hat Cluster With <span class="application"><strong>Conga</strong></span></em></a> and <a class="xref" href="#ch-mgmt-conga-CA">Chapitre 4, <em>Managing Red Hat Cluster With <span class="application"><strong>Conga</strong></span></em></a>).
		</div></div></div><div class="section" id="s1-config-tasks-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-config-tasks-CA">5.1. Configuration Tasks</h2></div></div></div><div class="para">
			Configuring Red Hat Cluster software with <code class="command">system-config-cluster</code> consists of the following steps:
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					Starting the <span class="application"><strong>Cluster Configuration Tool</strong></span>, <code class="command">system-config-cluster</code>. Refer to <a class="xref" href="#s1-start-clustertool-CA">Section 5.2, « Starting the <span class="application"><strong>Cluster Configuration Tool</strong></span> »</a>.
				</div></li><li class="listitem"><div class="para">
					Configuring cluster properties. Refer to <a class="xref" href="#s1-naming-cluster-CA">Section 5.3, « Configuring Cluster Properties »</a>.
				</div></li><li class="listitem"><div class="para">
					Creating fence devices. Refer to <a class="xref" href="#s1-config-fence-devices-CA">Section 5.4, « Configuring Fence Devices »</a>.
				</div></li><li class="listitem"><div class="para">
					Creating cluster members. Refer to <a class="xref" href="#s1-add-delete-member-CA">Section 5.5, « Adding and Deleting Members »</a>.
				</div></li><li class="listitem"><div class="para">
					Creating failover domains. Refer to <a class="xref" href="#s1-config-failover-domain-CA">Section 5.6, « Configuring a Failover Domain »</a>.
				</div></li><li class="listitem"><div class="para">
					Creating resources. Refer to <a class="xref" href="#s1-config-service-dev-CA">Section 5.7, « Adding Cluster Resources »</a>.
				</div></li><li class="listitem"><div class="para">
					Creating cluster services.
				</div><div class="para">
					Refer to <a class="xref" href="#s1-add-service-CA">Section 5.8, « Adding a Cluster Service to the Cluster »</a>.
				</div></li><li class="listitem"><div class="para">
					Propagating the configuration file to the other nodes in the cluster.
				</div><div class="para">
					Refer to <a class="xref" href="#s1-propagate-config-CA">Section 5.9, « Propagating The Configuration File: New Cluster »</a>.
				</div></li><li class="listitem"><div class="para">
					Starting the cluster software. Refer to <a class="xref" href="#s1-starting-cluster-CA">Section 5.10, « Starting the Cluster Software »</a>.
				</div></li></ol></div></div><div class="section" id="s1-start-clustertool-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-start-clustertool-CA">5.2. Starting the <span class="application"><strong>Cluster Configuration Tool</strong></span></h2></div></div></div><div class="para">
			You can start the <span class="application"><strong>Cluster Configuration Tool</strong></span> by logging in to a cluster node as root with the <code class="command">ssh -Y</code> command and issuing the <code class="command">system-config-cluster</code> command. For example, to start the <span class="application"><strong>Cluster Configuration Tool</strong></span> on cluster node nano-01, do the following:
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					Log in to a cluster node and run <code class="command">system-config-cluster</code>. For example:
				</div><pre class="screen">
$ <strong class="userinput"><code> ssh -Y root@nano-01</code></strong>
  .
  .
  .
# <strong class="userinput"><code>system-config-cluster</code></strong></pre></li><li class="listitem"><div class="para">
					If this is the first time you have started the <span class="application"><strong>Cluster Configuration Tool</strong></span>, the program prompts you to either open an existing configuration or create a new one. Click <span class="guibutton"><strong>Create New Configuration</strong></span> to start a new configuration file (refer to <a class="xref" href="#fig-software-clustertool-new-CA">Figure 5.1, « Starting a New Configuration File »</a>).
				</div><div class="figure" id="fig-software-clustertool-new-CA"><div class="figure-contents"><div class="mediaobject"><img src="./images/cluconfig-new-5.0.png" alt="Starting a New Configuration File" /><div class="longdesc"><div class="para">
								Starting a New Configuration File.
							</div></div></div></div><h6>Figure 5.1. Starting a New Configuration File</h6></div><br class="figure-break" /><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						The <span class="guimenu"><strong>Cluster Management</strong></span> tab for the Red Hat Cluster Suite management GUI is available after you save the configuration file with the <span class="application"><strong>Cluster Configuration Tool</strong></span>, exit, and restart the Red Hat Cluster Suite management GUI (<code class="command">system-config-cluster</code>). (The <span class="guimenu"><strong>Cluster Management</strong></span> tab displays the status of the cluster service manager, cluster nodes, and resources, and shows statistics concerning cluster service operation. To manage the cluster system further, choose the <span class="guimenu"><strong>Cluster Configuration</strong></span> tab.)
					</div></div></div></li><li class="listitem"><div class="para">
					Clicking <span class="guibutton"><strong>Create New Configuration</strong></span> causes the <span class="guilabel"><strong>New Configuration</strong></span> dialog box to be displayed (refer to <a class="xref" href="#fig-software-clustertool-newconfig-CA">Figure 5.2, « Creating A New Configuration »</a>). The <span class="guilabel"><strong>New Configuration</strong></span> dialog box provides a text box for cluster name and the following checkboxes: <span class="guimenu"><strong>Custom Configure Multicast</strong></span> and <span class="guimenu"><strong>Use a Quorum Disk</strong></span>. In most circumstances you only need to configure the cluster name.
				</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						Choose the cluster name carefully. The only way to change the name of a Red Hat cluster is to create a new cluster configuration with the new name.
					</div></div></div><h3 id="id745884">Custom Configure Multicast</h3><div class="para">
					Red Hat Cluster software chooses a multicast address for cluster management communication among cluster nodes. If you need to use a specific multicast address, click the <span class="guimenu"><strong>Custom Configure Multicast</strong></span> checkbox and enter a multicast address in the <span class="guimenu"><strong>Address</strong></span> text boxes.
				</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						IPv6 is not supported for Cluster Suite in Red Hat Enterprise Linux 5.
					</div></div></div><div class="para">
					If you do not specify a multicast address, the Red Hat Cluster software (specifically, <code class="command">cman</code>, the Cluster Manager) creates one. It forms the upper 16 bits of the multicast address with 239.192 and forms the lower 16 bits based on the cluster ID.
				</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						The cluster ID is a unique identifier that <code class="command">cman</code> generates for each cluster. To view the cluster ID, run the <code class="command">cman_tool status</code> command on a cluster node.
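						For example, to show only the cluster ID line of the <code class="command">cman_tool status</code> output (the exact label of the field may vary between releases):
					</div><pre class="screen">
# <strong class="userinput"><code>cman_tool status | grep -i "cluster id"</code></strong></pre><div class="para">
						Run the command as root on a node that has already joined the cluster.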
					</div></div></div><div class="para">
					If you do specify a multicast address, you should use the 239.192.x.x series that <code class="command">cman</code> uses; a multicast address outside that range may cause unpredictable results. For example, an address in the 224.0.0.x range (which is "all hosts on the network") may not be routed correctly, or may not be routed at all, by some hardware.
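					If you do set a specific address, it is recorded in <code class="filename">/etc/cluster/cluster.conf</code>. A hedged sketch of what such an entry might look like (the address shown is illustrative only):
				</div><pre class="screen">
&lt;cman&gt;
     &lt;multicast addr="239.192.0.10"/&gt;
&lt;/cman&gt;</pre><div class="para">
					The <span class="application"><strong>Cluster Configuration Tool</strong></span> writes this entry for you when you use the <span class="guimenu"><strong>Custom Configure Multicast</strong></span> checkbox; you do not normally need to edit the file by hand.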
				</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						If you specify a multicast address, make sure that you check the configuration of routers that cluster packets pass through. Some routers may take a long time to learn addresses, seriously impacting cluster performance.
					</div></div></div><h3 id="id746004">Use a Quorum Disk</h3><div class="para">
					If you need to use a quorum disk, click the <span class="guimenu"><strong>Use a Quorum disk</strong></span> checkbox and enter quorum disk parameters. The following quorum-disk parameters are available in the dialog box if you enable <span class="guimenu"><strong>Use a Quorum disk</strong></span>: <span class="guimenu"><strong>Interval</strong></span>, <span class="guimenu"><strong>TKO</strong></span>, <span class="guimenu"><strong>Votes</strong></span>, <span class="guimenu"><strong>Minimum Score</strong></span>, <span class="guimenu"><strong>Device</strong></span>, <span class="guimenu"><strong>Label</strong></span>, and <span class="guimenu"><strong>Quorum Disk Heuristic</strong></span>. <a class="xref" href="#tb-qdisk-params-rhel5-CA">Tableau 5.1, « Quorum-Disk Parameters »</a> describes the parameters.
				</div><div class="important"><div class="admonition_header"><h2>Important</h2></div><div class="admonition"><div class="para">
						Quorum-disk parameters and heuristics depend on the site environment and special requirements needed. To understand the use of quorum-disk parameters and heuristics, refer to the <span class="citerefentry"><span class="refentrytitle">qdisk</span>(5)</span> man page. If you require assistance understanding and using quorum disk, contact an authorized Red Hat support representative.
					</div></div></div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						Configuring a quorum disk is likely to require changing quorum-disk parameters after the initial configuration. The <span class="application"><strong>Cluster Configuration Tool</strong></span> (<code class="command">system-config-cluster</code>) only displays quorum-disk parameters after the initial configuration; it does not allow you to modify them. If you need to configure a quorum disk, consider using <span class="application"><strong>Conga</strong></span> instead; <span class="application"><strong>Conga</strong></span> allows modification of quorum-disk parameters.
					</div><div class="para">
						<span class="emphasis"><em>Overall:</em></span>
					</div><div class="para">
						While <code class="command">system-config-cluster</code> provides several convenient tools for configuring and managing a Red Hat Cluster, the newer, more comprehensive tool, <span class="application"><strong>Conga</strong></span>, provides more convenience and flexibility than <code class="command">system-config-cluster</code>. You may want to consider using <span class="application"><strong>Conga</strong></span> instead (refer to <a class="xref" href="#ch-config-conga-CA">Chapitre 3, <em>Configuring Red Hat Cluster With <span class="application"><strong>Conga</strong></span></em></a> and <a class="xref" href="#ch-mgmt-conga-CA">Chapitre 4, <em>Managing Red Hat Cluster With <span class="application"><strong>Conga</strong></span></em></a>).
					</div></div></div><div class="figure" id="fig-software-clustertool-newconfig-CA"><div class="figure-contents"><div class="mediaobject"><img src="./images/cluconfig-newconfig-5.0.png" alt="Creating A New Configuration" /><div class="longdesc"><div class="para">
								New Configuration
							</div></div></div></div><h6>Figure 5.2. Creating A New Configuration</h6></div><br class="figure-break" /></li><li class="listitem"><div class="para">
					When you have completed entering the cluster name and other parameters in the <span class="guilabel"><strong>New Configuration</strong></span> dialog box, click <span class="guibutton"><strong>OK</strong></span>. Clicking <span class="guibutton"><strong>OK</strong></span> starts the <span class="application"><strong>Cluster Configuration Tool</strong></span>, displaying a graphical representation of the configuration (<a class="xref" href="#fig-software-clusterstart-CA">Figure 5.3, « The <span class="application">Cluster Configuration Tool</span> »</a>).
				</div><div class="figure" id="fig-software-clusterstart-CA"><div class="figure-contents"><div class="mediaobject"><img src="./images/cluconfig-start.png" width="444" alt="The Cluster Configuration Tool" /><div class="longdesc"><div class="para">
								The <span class="application"><strong>Cluster Configuration Tool</strong></span>.
							</div></div></div></div><h6>Figure 5.3. The <span class="application">Cluster Configuration Tool</span></h6></div><br class="figure-break" /></li></ol></div><div class="table" id="tb-qdisk-params-rhel5-CA"><h6>Tableau 5.1. Quorum-Disk Parameters</h6><div class="table-contents"><table summary="Quorum-Disk Parameters" border="1"><colgroup><col width="25%" class="Parameter" /><col width="75%" class="Description" /></colgroup><thead><tr><th>
							Parameter
						</th><th>
							Description
						</th></tr></thead><tbody><tr><td>
							<span class="guimenu"><strong>Use a Quorum Disk</strong></span>
						</td><td>
							Enables quorum disk. Enables quorum-disk parameters in the <span class="guilabel"><strong>New Configuration</strong></span> dialog box.
						</td></tr><tr><td>
							<span class="guimenu"><strong>Interval</strong></span>
						</td><td>
							The frequency of read/write cycles, in seconds.
						</td></tr><tr><td>
							<span class="guimenu"><strong>TKO</strong></span>
						</td><td>
							The number of cycles a node must miss in order to be declared dead.
						</td></tr><tr><td>
							<span class="guimenu"><strong>Votes</strong></span>
						</td><td>
							The number of votes the quorum daemon advertises to CMAN when it has a high enough score.
						</td></tr><tr><td>
							<span class="guimenu"><strong>Minimum Score</strong></span>
						</td><td>
							The minimum score for a node to be considered "alive". If omitted or set to 0, the default function, <code class="command">floor((<em class="replaceable"><code>n</code></em>+1)/2)</code>, is used, where <em class="replaceable"><code>n</code></em> is the sum of the heuristics scores. The <span class="guimenu"><strong>Minimum Score</strong></span> value must never exceed the sum of the heuristic scores; otherwise, the quorum disk cannot be available.
						</td></tr><tr><td>
							<span class="guimenu"><strong>Device</strong></span>
						</td><td>
							The storage device the quorum daemon uses. The device must be the same on all nodes.
						</td></tr><tr><td>
							<span class="guimenu"><strong>Label</strong></span>
						</td><td>
							Specifies the quorum disk label created by the <code class="command">mkqdisk</code> utility. If this field contains an entry, the label overrides the <span class="guimenu"><strong>Device</strong></span> field. If this field is used, the quorum daemon reads <code class="filename">/proc/partitions</code> and checks for qdisk signatures on every block device found, comparing the label against the specified label. This is useful in configurations where the quorum device name differs among nodes.
						</td></tr><tr><td>
							<span class="guimenu"><strong>Quorum Disk Heuristics</strong></span>
						</td><td>
							<table border="0" summary="Simple list" class="simplelist"><tr><td><span class="guimenu"><strong>Program</strong></span> — The program used to determine if this heuristic is alive. This can be anything that can be executed by <code class="command">/bin/sh -c</code>. A return value of <span class="returnvalue">0</span> indicates success; anything else indicates failure. This field is required.</td></tr><tr><td><span class="guimenu"><strong>Score</strong></span> — The weight of this heuristic. Be careful when determining scores for heuristics. The default score for each heuristic is 1. </td></tr><tr><td><span class="guimenu"><strong>Interval</strong></span> — The frequency (in seconds) at which the heuristic is polled. The default interval for every heuristic is 2 seconds.</td></tr></table>

						</td></tr></tbody></table></div></div><br class="table-break" /></div><div class="section" id="s1-naming-cluster-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-naming-cluster-CA">5.3. Configuring Cluster Properties</h2></div></div></div><div class="para">
			In addition to configuring cluster parameters in the preceding section (<a class="xref" href="#s1-start-clustertool-CA">Section 5.2, « Starting the <span class="application"><strong>Cluster Configuration Tool</strong></span> »</a>), you can configure the following cluster properties: <span class="guimenu"><strong>Cluster Alias</strong></span> (optional), a <span class="guimenu"><strong>Config Version</strong></span> (optional), and <span class="guimenu"><strong>Fence Daemon Properties</strong></span>. To configure cluster properties, follow these steps:
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					At the left frame, click <span class="guimenu"><strong>Cluster</strong></span>.
				</div></li><li class="listitem"><div class="para">
					At the bottom of the right frame (labeled <span class="guimenu"><strong>Properties</strong></span>), click the <span class="guibutton"><strong>Edit Cluster Properties</strong></span> button. Clicking that button causes a <span class="guilabel"><strong>Cluster Properties</strong></span> dialog box to be displayed. The <span class="guilabel"><strong>Cluster Properties</strong></span> dialog box presents text boxes for <span class="guimenu"><strong>Cluster Alias</strong></span>, <span class="guimenu"><strong>Config Version</strong></span>, and two <span class="guimenu"><strong>Fence Daemon Properties</strong></span> parameters: <span class="guimenu"><strong>Post-Join Delay</strong></span> and <span class="guimenu"><strong>Post-Fail Delay</strong></span>.
				</div></li><li class="listitem"><div class="para">
					(Optional) At the <span class="guimenu"><strong>Cluster Alias</strong></span> text box, specify a cluster alias for the cluster. The default cluster alias is set to the true cluster name provided when the cluster is set up (refer to <a class="xref" href="#s1-start-clustertool-CA">Section 5.2, « Starting the <span class="application"><strong>Cluster Configuration Tool</strong></span> »</a>). The cluster alias should be descriptive enough to distinguish it from other clusters and systems on your network (for example, <strong class="userinput"><code>nfs_cluster</code></strong> or <strong class="userinput"><code>httpd_cluster</code></strong>). The cluster alias cannot exceed 15 characters.
				</div></li><li class="listitem"><div class="para">
					(Optional) The <span class="guimenu"><strong>Config Version</strong></span> value is set to <strong class="userinput"><code>1</code></strong> by default and is automatically incremented each time you save your cluster configuration. However, if you need to set it to another value, you can specify it at the <span class="guimenu"><strong>Config Version</strong></span> text box.
				</div></li><li class="listitem"><div class="para">
					Specify the <span class="guimenu"><strong>Fence Daemon Properties</strong></span> parameters: <span class="guimenu"><strong>Post-Join Delay</strong></span> and <span class="guimenu"><strong>Post-Fail Delay</strong></span>.
				</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
							The <span class="guimenu"><strong>Post-Join Delay</strong></span> parameter is the number of seconds the fence daemon (<code class="command">fenced</code>) waits before fencing a node after the node joins the fence domain. The <span class="guimenu"><strong>Post-Join Delay</strong></span> default value is <strong class="userinput"><code>3</code></strong>. A typical setting for <span class="guimenu"><strong>Post-Join Delay</strong></span> is between 20 and 30 seconds, but can vary according to cluster and network performance.
						</div></li><li class="listitem"><div class="para">
							The <span class="guimenu"><strong>Post-Fail Delay</strong></span> parameter is the number of seconds the fence daemon (<code class="command">fenced</code>) waits before fencing a node (a member of the fence domain) after the node has failed.The <span class="guimenu"><strong>Post-Fail Delay</strong></span> default value is <strong class="userinput"><code>0</code></strong>. Its value may be varied to suit cluster and network performance.
						</div></li></ol></div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						For more information about <span class="guimenu"><strong>Post-Join Delay</strong></span> and <span class="guimenu"><strong>Post-Fail Delay</strong></span>, refer to the <span class="citerefentry"><span class="refentrytitle">fenced</span>(8)</span> man page.
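						These two values are stored as attributes of the <code class="command">fence_daemon</code> element in <code class="filename">/etc/cluster/cluster.conf</code>. A hedged sketch of the resulting entry (the values shown are the defaults discussed above):
					</div><pre class="screen">
&lt;fence_daemon post_join_delay="3" post_fail_delay="0"/&gt;</pre><div class="para">
						The <span class="application"><strong>Cluster Configuration Tool</strong></span> updates this entry when you save the cluster properties.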
					</div></div></div></li><li class="listitem"><div class="para">
					Save cluster configuration changes by selecting <span class="guimenu"><strong>File</strong></span> =&gt; <span class="guimenuitem"><strong>Save</strong></span>.
				</div></li></ol></div></div><div class="section" id="s1-config-fence-devices-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-config-fence-devices-CA">5.4. Configuring Fence Devices</h2></div></div></div><div class="para">
			Configuring fence devices for the cluster consists of selecting one or more fence devices and specifying fence-device-dependent parameters (for example, name, IP address, login, and password).
		</div><div class="para">
			To configure fence devices, follow these steps:
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					Click <span class="guimenu"><strong>Fence Devices</strong></span>. At the bottom of the right frame (labeled <span class="guimenu"><strong>Properties</strong></span>), click the <span class="guibutton"><strong>Add a Fence Device</strong></span> button. Clicking <span class="guibutton"><strong> Add a Fence Device</strong></span> causes the <span class="guilabel"><strong>Fence Device Configuration</strong></span> dialog box to be displayed (refer to <a class="xref" href="#fig-fence-device-config-dbox-CA">Figure 5.4, « Fence Device Configuration »</a>).
				</div><div class="figure" id="fig-fence-device-config-dbox-CA"><div class="figure-contents"><div class="mediaobject"><img src="./images/fence-device-config-dbox.png" alt="Fence Device Configuration" /><div class="longdesc"><div class="para">
								fence configuration dialog box
							</div></div></div></div><h6>Figure 5.4. Fence Device Configuration</h6></div><br class="figure-break" /></li><li class="listitem"><div class="para">
					At the <span class="guilabel"><strong>Fence Device Configuration</strong></span> dialog box, click the drop-down box under <span class="guimenu"><strong>Add a New Fence Device</strong></span> and select the type of fence device to configure.
				</div></li><li class="listitem"><div class="para">
					Specify the information in the <span class="guilabel"><strong>Fence Device Configuration</strong></span> dialog box according to the type of fence device. Refer to <a class="xref" href="#ap-fence-device-param-CA">Annexe B, <em>Fence Device Parameters</em></a> for more information about fence device parameters.
				</div></li><li class="listitem"><div class="para">
					Click <span class="guibutton"><strong>OK</strong></span>.
				</div></li><li class="listitem"><div class="para">
					Choose <span class="guimenu"><strong>File</strong></span> =&gt; <span class="guimenuitem"><strong>Save</strong></span> to save the changes to the cluster configuration.
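					Saving writes the fence devices to the <code class="command">fencedevices</code> section of <code class="filename">/etc/cluster/cluster.conf</code>. A hedged sketch of such an entry for an APC power switch (the device name, address, and credentials are illustrative only):
				</div><pre class="screen">
&lt;fencedevices&gt;
     &lt;fencedevice agent="fence_apc" name="apc1" ipaddr="10.15.86.50" login="apc" passwd="apc"/&gt;
&lt;/fencedevices&gt;</pre><div class="para">
					Refer to <a class="xref" href="#ap-fence-device-param-CA">Annexe B, <em>Fence Device Parameters</em></a> for the parameters that apply to each fence device type.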
				</div></li></ol></div></div><div class="section" id="s1-add-delete-member-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-add-delete-member-CA">5.5. Adding and Deleting Members</h2></div></div></div><div class="para">
			The procedure to add a member to a cluster varies depending on whether the cluster is a newly-configured cluster or a cluster that is already configured and running. To add a member to a new cluster, refer to <a class="xref" href="#s2-add-member-new-CA">Section 5.5.1, « Adding a Member to a Cluster »</a>. To add a member to an existing cluster, refer to <a class="xref" href="#s2-add-member-running-CA">Section 5.5.2, « Adding a Member to a Running Cluster »</a>. To delete a member from a cluster, refer to <a class="xref" href="#s2-delete-member-CA">Section 5.5.3, « Deleting a Member from a Cluster »</a>.
		</div><div class="section" id="s2-add-member-new-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-add-member-new-CA">5.5.1. Adding a Member to a Cluster</h3></div></div></div><div class="para">
				To add a member to a new cluster, follow these steps:
			</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
						Click <span class="guimenu"><strong>Cluster Node</strong></span>.
					</div></li><li class="listitem"><div class="para">
						At the bottom of the right frame (labeled <span class="guimenu"><strong>Properties</strong></span>), click the <span class="guibutton"><strong>Add a Cluster Node</strong></span> button. Clicking that button causes a <span class="guilabel"><strong>Node Properties</strong></span> dialog box to be displayed. The <span class="guilabel"><strong>Node Properties</strong></span> dialog box presents text boxes for <span class="guimenu"><strong>Cluster Node Name</strong></span> and <span class="guimenu"><strong>Quorum Votes</strong></span> (refer to <a class="xref" href="#fig-soft-newmember-dlm-CA">Figure 5.5, « Adding a Member to a New Cluster »</a>).
					</div><div class="figure" id="fig-soft-newmember-dlm-CA"><div class="figure-contents"><div class="mediaobject"><img src="./images/newmember-dlm.png" alt="Adding a Member to a New Cluster" /><div class="longdesc"><div class="para">
									new member dialog
								</div></div></div></div><h6>Figure 5.5. Adding a Member to a New Cluster</h6></div><br class="figure-break" /></li><li class="listitem"><div class="para">
						At the <span class="guimenu"><strong>Cluster Node Name</strong></span> text box, specify a node name. The entry can be a name or an IP address of the node on the cluster subnet.
					</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
							Each node must be on the same subnet as the node from which you are running the <span class="application"><strong>Cluster Configuration Tool</strong></span> and must be defined either in DNS or in the <code class="filename">/etc/hosts</code> file of each cluster node.
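							For example, a minimal <code class="filename">/etc/hosts</code> sketch for a two-node cluster (the addresses are illustrative; use the addresses of your own cluster subnet):
						</div><pre class="screen">
10.15.86.11     nano-01
10.15.86.12     nano-02</pre><div class="para">
							The same entries must resolve identically on every cluster node.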
						</div></div></div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
							The node on which you are running the <span class="application"><strong>Cluster Configuration Tool</strong></span> must be explicitly added as a cluster member; the node is not automatically added to the cluster configuration as a result of running the <span class="application"><strong>Cluster Configuration Tool</strong></span>.
						</div></div></div></li><li class="listitem"><div class="para">
						Optionally, at the <span class="guimenu"><strong>Quorum Votes</strong></span> text box, you can specify a value; however in most configurations you can leave it blank. Leaving the <span class="guimenu"><strong>Quorum Votes</strong></span> text box blank causes the quorum votes value for that node to be set to the default value of <strong class="userinput"><code>1</code></strong>.
					</div></li><li class="listitem"><div class="para">
						Click <span class="guibutton"><strong>OK</strong></span>.
					</div></li><li class="listitem"><div class="para">
						Configure fencing for the node:
					</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
								Click the node that you added in the previous step.
							</div></li><li class="listitem"><div class="para">
								At the bottom of the right frame (below <span class="guimenu"><strong>Properties</strong></span>), click <span class="guibutton"><strong>Manage Fencing For This Node</strong></span>. Clicking <span class="guibutton"><strong>Manage Fencing For This Node</strong></span> causes the <span class="guilabel"><strong>Fence Configuration</strong></span> dialog box to be displayed.
							</div></li><li class="listitem"><div class="para">
								At the <span class="guilabel"><strong>Fence Configuration</strong></span> dialog box, bottom of the right frame (below <span class="guimenu"><strong>Properties</strong></span>), click <span class="guibutton"><strong>Add a New Fence Level</strong></span>. Clicking <span class="guibutton"><strong>Add a New Fence Level</strong></span> causes a fence-level element (for example, <span class="guimenu"><strong>Fence-Level-1</strong></span>, <span class="guimenu"><strong>Fence-Level-2</strong></span>, and so on) to be displayed below the node in the left frame of the <span class="guilabel"><strong>Fence Configuration</strong></span> dialog box.
							</div></li><li class="listitem"><div class="para">
								Click the fence-level element.
							</div></li><li class="listitem"><div class="para">
								At the bottom of the right frame (below <span class="guimenu"><strong>Properties</strong></span>), click <span class="guibutton"><strong>Add a New Fence to this Level</strong></span>. Clicking <span class="guibutton"><strong>Add a New Fence to this Level</strong></span> causes the <span class="guilabel"><strong>Fence Properties</strong></span> dialog box to be displayed.
							</div></li><li class="listitem"><div class="para">
								At the <span class="guilabel"><strong>Fence Properties</strong></span> dialog box, click the <span class="guimenu"><strong>Fence Device Type</strong></span> drop-down box and select the fence device for this node. Also, provide additional information required (for example, <span class="guimenu"><strong>Port</strong></span> and <span class="guimenu"><strong>Switch</strong></span> for an APC Power Device).
							</div></li><li class="listitem"><div class="para">
								At the <span class="guilabel"><strong>Fence Properties</strong></span> dialog box, click <span class="guibutton"><strong>OK</strong></span>. Clicking <span class="guibutton"><strong>OK</strong></span> causes a fence device element to be displayed below the fence-level element.
							</div></li><li class="listitem"><div class="para">
								To create additional fence devices at this fence level, return to step 6d. Otherwise, proceed to the next step.
							</div></li><li class="listitem"><div class="para">
								To create additional fence levels, return to step 6c. Otherwise, proceed to the next step.
							</div></li><li class="listitem"><div class="para">
								If you have configured all the fence levels and fence devices for this node, click <span class="guibutton"><strong>Close</strong></span>.
							</div></li></ol></div></li><li class="listitem"><div class="para">
						Choose <span class="guimenu"><strong>File</strong></span> =&gt; <span class="guimenuitem"><strong>Save</strong></span> to save the changes to the cluster configuration.
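						After you save, the node and its fence levels appear in <code class="filename">/etc/cluster/cluster.conf</code>. A hedged sketch of what such a node entry might look like for a node fenced by an APC power switch (the device name, port, and switch values are illustrative only):
					</div><pre class="screen">
&lt;clusternode name="nano-01" nodeid="1" votes="1"&gt;
     &lt;fence&gt;
          &lt;method name="1"&gt;
               &lt;device name="apc1" port="1" switch="1"/&gt;
          &lt;/method&gt;
     &lt;/fence&gt;
&lt;/clusternode&gt;</pre><div class="para">
						Each fence level you add in the <span class="guilabel"><strong>Fence Configuration</strong></span> dialog box corresponds to a <code class="command">method</code> element, and each fence device at that level corresponds to a <code class="command">device</code> element.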
					</div></li></ol></div></div><div class="section" id="s2-add-member-running-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-add-member-running-CA">5.5.2. Adding a Member to a Running Cluster</h3></div></div></div><div class="para">
				The procedure for adding a member to a running cluster depends on whether the cluster contains only two nodes or more than two nodes. To add a member to a running cluster, follow the steps in one of the following sections according to the number of nodes in the cluster:
			</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
						For clusters with <span class="emphasis"><em>only</em></span> two nodes —
					</div><div class="para">
						<a class="xref" href="#s3-add-member-running-2node-CA">Section 5.5.2.1, « Adding a Member to a Running Cluster That Contains <span class="emphasis"><em>Only</em></span> Two Nodes »</a>
					</div></li><li class="listitem"><div class="para">
						For clusters with <span class="emphasis"><em>more than</em></span> two nodes —
					</div><div class="para">
						<a class="xref" href="#s3-add-member-running-more-than-2nodes-CA">Section 5.5.2.2, « Adding a Member to a Running Cluster That Contains <span class="emphasis"><em>More Than</em></span> Two Nodes »</a>
					</div></li></ul></div><div class="section" id="s3-add-member-running-2node-CA"><div class="titlepage"><div><div><h4 class="title" id="s3-add-member-running-2node-CA">5.5.2.1. Adding a Member to a Running Cluster That Contains <span class="emphasis"><em>Only</em></span> Two Nodes</h4></div></div></div><div class="para">
					To add a member to an existing cluster that is currently in operation, and contains <span class="emphasis"><em>only</em></span> two nodes, follow these steps:
				</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
							Add the node and configure fencing for it as in <a class="xref" href="#s2-add-member-new-CA">Section 5.5.1, « Adding a Member to a Cluster »</a>.
						</div></li><li class="listitem"><div class="para">
							Click <span class="guibutton"><strong>Send to Cluster</strong></span> to propagate the updated configuration to other running nodes in the cluster.
						</div></li><li class="listitem"><div class="para">
							Use the <code class="command">scp</code> command to send the updated <code class="filename">/etc/cluster/cluster.conf</code> file from one of the existing cluster nodes to the new node.
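							For example, assuming the new node is named <code class="command">nano-03</code> (substitute the host name of the node you are adding):
						</div><pre class="screen">
# <strong class="userinput"><code>scp /etc/cluster/cluster.conf root@nano-03:/etc/cluster/</code></strong></pre><div class="para">
							Run the command as root from one of the nodes that already has the updated configuration file.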
						</div></li><li class="listitem"><div class="para">
							At the Red Hat Cluster Suite management GUI <span class="application"><strong>Cluster Status Tool</strong></span> tab, disable each service listed under <span class="guimenu"><strong>Services</strong></span>.
						</div></li><li class="listitem"><div class="para">
							Stop the cluster software on the two running nodes by running the following commands at each node in this order:
						</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
									<code class="command">service rgmanager stop</code>
								</div></li><li class="listitem"><div class="para">
									<code class="command">service gfs stop</code>, if you are using Red Hat GFS
								</div></li><li class="listitem"><div class="para">
									<code class="command">service clvmd stop</code>, if CLVM has been used to create clustered volumes
								</div></li><li class="listitem"><div class="para">
									<code class="command">service cman stop</code>
								</div></li></ol></div></li><li class="listitem"><div class="para">
							Start cluster software on all cluster nodes (including the added one) by running the following commands in this order:
						</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
									<code class="command">service cman start</code>
								</div></li><li class="listitem"><div class="para">
									<code class="command">service clvmd start</code>, if CLVM has been used to create clustered volumes
								</div></li><li class="listitem"><div class="para">
									<code class="command">service gfs start</code>, if you are using Red Hat GFS
								</div></li><li class="listitem"><div class="para">
									<code class="command">service rgmanager start</code>
								</div></li></ol></div></li><li class="listitem"><div class="para">
							Start the Red Hat Cluster Suite management GUI. At the <span class="application"><strong>Cluster Configuration Tool</strong></span> tab, verify that the configuration is correct. At the <span class="application"><strong>Cluster Status Tool</strong></span> tab verify that the nodes and services are running as expected.
						</div></li></ol></div></div><div class="section" id="s3-add-member-running-more-than-2nodes-CA"><div class="titlepage"><div><div><h4 class="title" id="s3-add-member-running-more-than-2nodes-CA">5.5.2.2. Adding a Member to a Running Cluster That Contains <span class="emphasis"><em>More Than</em></span> Two Nodes</h4></div></div></div><div class="para">
					To add a member to an existing cluster that is currently in operation, and contains <span class="emphasis"><em>more than</em></span> two nodes, follow these steps:
				</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
							Add the node and configure fencing for it as in <a class="xref" href="#s2-add-member-new-CA">Section 5.5.1, « Adding a Member to a Cluster »</a>.
						</div></li><li class="listitem"><div class="para">
							Click <span class="guibutton"><strong>Send to Cluster</strong></span> to propagate the updated configuration to other running nodes in the cluster.
						</div></li><li class="listitem"><div class="para">
							Use the <code class="command">scp</code> command to send the updated <code class="filename">/etc/cluster/cluster.conf</code> file from one of the existing cluster nodes to the new node.
						</div></li><li class="listitem"><div class="para">
							Start cluster services on the new node by running the following commands in this order:
						</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
									<code class="command"> service cman start</code>
								</div></li><li class="listitem"><div class="para">
									<code class="command">service clvmd start</code>, if CLVM has been used to create clustered volumes
								</div></li><li class="listitem"><div class="para">
									<code class="command">service gfs start</code>, if you are using Red Hat GFS
								</div></li><li class="listitem"><div class="para">
									<code class="command">service rgmanager start</code>
								</div></li></ol></div></li><li class="listitem"><div class="para">
							Start the Red Hat Cluster Suite management GUI. At the <span class="application"><strong>Cluster Configuration Tool</strong></span> tab, verify that the configuration is correct. At the <span class="application"><strong>Cluster Status Tool</strong></span> tab verify that the nodes and services are running as expected.
						</div></li></ol></div></div></div><div class="section" id="s2-delete-member-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-delete-member-CA">5.5.3. Deleting a Member from a Cluster</h3></div></div></div><div class="para">
				To delete a member from an existing cluster that is currently in operation, follow these steps:
			</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
						At one of the running nodes (not to be removed), run the Red Hat Cluster Suite management GUI. At the <span class="application"><strong>Cluster Status Tool</strong></span> tab, under <span class="guimenu"><strong>Services</strong></span>, disable or relocate each service that is running on the node to be deleted.
					</div></li><li class="listitem"><div class="para">
						Stop the cluster software on the node to be deleted by running the following commands at that node in this order:
					</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
								<code class="command">service rgmanager stop</code>
							</div></li><li class="listitem"><div class="para">
								<code class="command">service gfs stop</code>, if you are using Red Hat GFS
							</div></li><li class="listitem"><div class="para">
								<code class="command">service clvmd stop</code>, if CLVM has been used to create clustered volumes
							</div></li><li class="listitem"><div class="para">
								<code class="command"> service cman stop</code>
							</div></li></ol></div></li><li class="listitem"><div class="para">
						At the <span class="application"><strong>Cluster Configuration Tool</strong></span> (on one of the running members), delete the member as follows:
					</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
								If necessary, click the triangle icon to expand the <span class="guimenu"><strong>Cluster Nodes</strong></span> property.
							</div></li><li class="listitem"><div class="para">
								Select the cluster node to be deleted. At the bottom of the right frame (labeled <span class="guimenu"><strong>Properties</strong></span>), click the <span class="guibutton"><strong>Delete Node</strong></span> button.
							</div></li><li class="listitem"><div class="para">
								Clicking the <span class="guibutton"><strong>Delete Node</strong></span> button causes a warning dialog box to be displayed requesting confirmation of the deletion (<a class="xref" href="#fig-soft-deletemember-CA">Figure 5.6, « Confirm Deleting a Member »</a>).
							</div><div class="figure" id="fig-soft-deletemember-CA"><div class="figure-contents"><div class="mediaobject"><img src="./images/deletemember.png" alt="Confirm Deleting a Member" /><div class="longdesc"><div class="para">
											delete member box
										</div></div></div></div><h6>Figure 5.6. Confirm Deleting a Member</h6></div><br class="figure-break" /></li><li class="listitem"><div class="para">
								At that dialog box, click <span class="guibutton"><strong>Yes</strong></span> to confirm deletion.
							</div></li><li class="listitem"><div class="para">
								Propagate the updated configuration by clicking the <span class="guibutton"><strong>Send to Cluster</strong></span> button. (Propagating the updated configuration automatically saves the configuration.)
							</div></li></ol></div></li><li class="listitem"><div class="para">
						Stop the cluster software on the remaining running nodes by running the following commands at each node in this order:
					</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
								<code class="command">service rgmanager stop</code>
							</div></li><li class="listitem"><div class="para">
								<code class="command">service gfs stop</code>, if you are using Red Hat GFS
							</div></li><li class="listitem"><div class="para">
								<code class="command">service clvmd stop</code>, if CLVM has been used to create clustered volumes
							</div></li><li class="listitem"><div class="para">
								<code class="command"> service cman stop</code>
							</div></li></ol></div></li><li class="listitem"><div class="para">
						Start cluster software on all remaining cluster nodes by running the following commands in this order:
					</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
								<code class="command"> service cman start</code>
							</div></li><li class="listitem"><div class="para">
								<code class="command">service clvmd start</code>, if CLVM has been used to create clustered volumes
							</div></li><li class="listitem"><div class="para">
								<code class="command">service gfs start</code>, if you are using Red Hat GFS
							</div></li><li class="listitem"><div class="para">
								<code class="command">service rgmanager start</code>
							</div></li></ol></div></li><li class="listitem"><div class="para">
						Start the Red Hat Cluster Suite management GUI. At the <span class="application"><strong>Cluster Configuration Tool</strong></span> tab, verify that the configuration is correct. At the <span class="application"><strong>Cluster Status Tool</strong></span> tab verify that the nodes and services are running as expected.
					</div></li></ol></div><div class="section" id="s3-delete-member-CA-cmd"><div class="titlepage"><div><div><h4 class="title" id="s3-delete-member-CA-cmd">5.5.3.1. Removing a Member from a Cluster at the Command-Line</h4></div></div></div><div class="para">
					If desired, you can also manually relocate services and remove cluster members from the command line by using the <code class="command">clusvcadm</code> command at a shell prompt.
				</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
							To prevent service downtime, relocate any services running on the member to be removed to another node in the cluster by running the following command:
						</div><pre class="screen">
clusvcadm -r cluster_service_name -m cluster_node_name
</pre><div class="para">
							Where <code class="option">cluster_service_name</code> is the name of the service to be relocated and <code class="option">cluster_member_name</code> is the name of the member to which the service will be relocated.
						</div></li><li class="listitem"><div class="para">
							Stop the cluster software on the node to be removed by running the following commands at that node in this order:
						</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
									<code class="command">service rgmanager stop</code>
								</div></li><li class="listitem"><div class="para">
									<code class="command">service gfs stop</code> and/or <code class="command">service gfs2 stop</code>, if you are using <code class="command">gfs</code>, <code class="command">gfs2</code> or both
								</div></li><li class="listitem"><div class="para">
									<code class="command">umount -a -t gfs</code> and/or <code class="command">umount -a -t gfs2</code>, if you are using either (or both) in conjunction with <code class="command">rgmanager</code>
								</div></li><li class="listitem"><div class="para">
									<code class="command">service clvmd stop</code>, if CLVM has been used to create clustered volumes
								</div></li><li class="listitem"><div class="para">
									<code class="command">service cman stop remove</code>
								</div></li></ol></div></li><li class="listitem"><div class="para">
							To ensure that the removed member does not rejoin the cluster after it reboots, run the following set of commands:
						</div><pre class="screen">
chkconfig cman off
chkconfig rgmanager off
chkconfig clvmd off
chkconfig gfs off
chkconfig gfs2 off
</pre></li></ol></div></div></div></div><div class="section" id="s1-config-failover-domain-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-config-failover-domain-CA">5.6. Configuring a Failover Domain</h2></div></div></div><div class="para">
			A failover domain is a named subset of cluster nodes that are eligible to run a cluster service in the event of a node failure. A failover domain can have the following characteristics:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					Unrestricted — Allows you to specify that a subset of members are preferred, but that a cluster service assigned to this domain can run on any available member.
				</div></li><li class="listitem"><div class="para">
					Restricted — Allows you to restrict the members that can run a particular cluster service. If none of the members in a restricted failover domain are available, the cluster service cannot be started (either manually or by the cluster software).
				</div></li><li class="listitem"><div class="para">
					Unordered — When a cluster service is assigned to an unordered failover domain, the member on which the cluster service runs is chosen from the available failover domain members with no priority ordering.
				</div></li><li class="listitem"><div class="para">
					Ordered — Allows you to specify a preference order among the members of a failover domain. The member at the top of the list is the most preferred, followed by the second member in the list, and so on.
				</div></li></ul></div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
				Changing a failover domain configuration has no effect on currently running services.
			</div></div></div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
				Failover domains are <span class="emphasis"><em>not</em></span> required for operation.
			</div></div></div><div class="para">
			By default, failover domains are unrestricted and unordered.
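			A failover domain is stored in the <code class="command">rm</code> section of <code class="filename">/etc/cluster/cluster.conf</code>. A hedged sketch of an ordered, restricted two-node domain (the domain and node names are illustrative only):
		</div><pre class="screen">
&lt;rm&gt;
     &lt;failoverdomains&gt;
          &lt;failoverdomain name="example_pri" ordered="1" restricted="1"&gt;
               &lt;failoverdomainnode name="nano-01" priority="1"/&gt;
               &lt;failoverdomainnode name="nano-02" priority="2"/&gt;
          &lt;/failoverdomain&gt;
     &lt;/failoverdomains&gt;
&lt;/rm&gt;</pre><div class="para">
			The procedures in the following sections create and modify these entries through the <span class="application"><strong>Cluster Configuration Tool</strong></span>.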
		</div><div class="para">
			In a cluster with several members, using a restricted failover domain can minimize the work required to set up the cluster to run a cluster service (such as <code class="filename">httpd</code>), which requires you to set up the configuration identically on all members that run the cluster service. Instead of setting up the entire cluster to run the cluster service, you must set up only the members in the restricted failover domain that you associate with the cluster service.
		</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
				To configure a preferred member, you can create an unrestricted failover domain comprising only one cluster member. Doing that causes a cluster service to run on that cluster member primarily (the preferred member), but allows the cluster service to fail over to any of the other members.
			</div></div></div><div class="para">
			The following sections describe adding a failover domain, removing a failover domain, and removing members from a failover domain:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					<a class="xref" href="#s2-config-add-failoverdm-CA">Section 5.6.1, « Adding a Failover Domain »</a>
				</div></li><li class="listitem"><div class="para">
					<a class="xref" href="#s2-config-remove-failoverdm-CA">Section 5.6.2, « Removing a Failover Domain »</a>
				</div></li><li class="listitem"><div class="para">
					<a class="xref" href="#s2-config-remove-member-failoverdm-CA">Section 5.6.3, « Removing a Member from a Failover Domain »</a>
				</div></li></ul></div><div class="section" id="s2-config-add-failoverdm-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-config-add-failoverdm-CA">5.6.1. Adding a Failover Domain</h3></div></div></div><div class="para">
				To add a failover domain, follow these steps:
			</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
						At the left frame of the <span class="application"><strong>Cluster Configuration Tool</strong></span>, click <span class="guimenu"><strong>Failover Domains</strong></span>.
					</div></li><li class="listitem"><div class="para">
						At the bottom of the right frame (labeled <span class="guimenu"><strong>Properties</strong></span>), click the <span class="guibutton"><strong>Create a Failover Domain</strong></span> button. Clicking the <span class="guibutton"><strong>Create a Failover Domain</strong></span> button causes the <span class="guilabel"><strong>Add Failover Domain</strong></span> dialog box to be displayed.
					</div></li><li class="listitem"><div class="para">
						At the <span class="guilabel"><strong>Add Failover Domain</strong></span> dialog box, specify a failover domain name at the <span class="guimenu"><strong>Name for new Failover Domain</strong></span> text box and click <span class="guibutton"><strong>OK</strong></span>. Clicking <span class="guibutton"><strong>OK</strong></span> causes the <span class="guilabel"><strong>Failover Domain Configuration</strong></span> dialog box to be displayed (<a class="xref" href="#fig-soft-failoverdn-CA">Figure 5.7, « <span class="guimenu">Failover Domain Configuration</span>: Configuring a Failover Domain »</a>).
					</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
							The name should be descriptive enough to distinguish its purpose relative to other names used in your cluster.
						</div></div></div><div class="figure" id="fig-soft-failoverdn-CA"><div class="figure-contents"><div class="mediaobject"><img src="./images/failoverdn.png" alt="Failover Domain Configuration: Configuring a Failover Domain" /><div class="longdesc"><div class="para">
									failover dialog box
								</div></div></div></div><h6>Figure 5.7. <span class="guimenu">Failover Domain Configuration</span>: Configuring a Failover Domain</h6></div><br class="figure-break" /></li><li class="listitem"><div class="para">
						Click the <span class="guimenu"><strong>Available Cluster Nodes</strong></span> drop-down box and select the members for this failover domain.
					</div></li><li class="listitem"><div class="para">
						To restrict failover to members in this failover domain, click (check) the <span class="guimenu"><strong>Restrict Failover To This Domains Members</strong></span> checkbox. (With <span class="guimenu"><strong>Restrict Failover To This Domains Members</strong></span> checked, services assigned to this failover domain fail over only to nodes in this failover domain.)
					</div></li><li class="listitem"><div class="para">
						To prioritize the order in which the members in the failover domain assume control of a failed cluster service, follow these steps:
					</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
								Click (check) the <span class="guimenu"><strong>Prioritized List</strong></span> checkbox (<a class="xref" href="#fig-soft-failoverdn-pri-CA">Figure 5.8, « <span class="guimenu">Failover Domain Configuration</span>: Adjusting Priority »</a>). Clicking <span class="guimenu"><strong>Prioritized List</strong></span> causes the <span class="guimenu"><strong>Priority</strong></span> column to be displayed next to the <span class="guimenu"><strong>Member Node</strong></span> column.
							</div><div class="figure" id="fig-soft-failoverdn-pri-CA"><div class="figure-contents"><div class="mediaobject"><img src="./images/failoverdn-pri.png" alt="Failover Domain Configuration: Adjusting Priority" /><div class="longdesc"><div class="para">
											failover dialog box priority
										</div></div></div></div><h6>Figure 5.8. <span class="guimenu">Failover Domain Configuration</span>: Adjusting Priority</h6></div><br class="figure-break" /></li><li class="listitem"><div class="para">
								For each node that requires a priority adjustment, click the node listed in the <span class="guimenu"><strong>Member Node/Priority</strong></span> columns and adjust priority by clicking one of the <span class="guimenu"><strong>Adjust Priority</strong></span> arrows. Priority is indicated by the position in the <span class="guimenu"><strong>Member Node</strong></span> column and the value in the <span class="guimenu"><strong>Priority</strong></span> column. The node priorities are listed highest to lowest, with the highest priority node at the top of the <span class="guimenu"><strong>Member Node</strong></span> column (having the lowest <span class="guimenu"><strong>Priority</strong></span> number).
							</div></li></ol></div></li><li class="listitem"><div class="para">
						Click <span class="guibutton"><strong>Close</strong></span> to create the domain.
					</div></li><li class="listitem"><div class="para">
						At the <span class="application"><strong>Cluster Configuration Tool</strong></span>, perform one of the following actions depending on whether the configuration is for a new cluster or for one that is operational and running:
					</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
								New cluster — If this is a new cluster, choose <span class="guimenu"><strong>File</strong></span> =&gt; <span class="guimenuitem"><strong>Save</strong></span> to save the changes to the cluster configuration.
							</div></li><li class="listitem"><div class="para">
								Running cluster — If this cluster is operational and running, and you want to propagate the change immediately, click the <span class="guibutton"><strong>Send to Cluster</strong></span> button. Clicking <span class="guibutton"><strong>Send to Cluster</strong></span> automatically saves the configuration change. If you do not want to propagate the change immediately, choose <span class="guimenu"><strong>File</strong></span> =&gt; <span class="guimenuitem"><strong>Save</strong></span> to save the changes to the cluster configuration.
							</div></li></ul></div></li></ol></div></div><div class="section" id="s2-config-remove-failoverdm-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-config-remove-failoverdm-CA">5.6.2. Removing a Failover Domain</h3></div></div></div><div class="para">
				To remove a failover domain, follow these steps:
			</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
						At the left frame of the <span class="application"><strong>Cluster Configuration Tool</strong></span>, click the failover domain that you want to delete (listed under <span class="guimenu"><strong>Failover Domains</strong></span>).
					</div></li><li class="listitem"><div class="para">
						At the bottom of the right frame (labeled <span class="guimenu"><strong>Properties</strong></span>), click the <span class="guibutton"><strong>Delete Failover Domain</strong></span> button. Clicking the <span class="guibutton"><strong>Delete Failover Domain</strong></span> button causes a warning dialog box to be displayed asking if you want to remove the failover domain. Confirm that the failover domain identified in the warning dialog box is the one you want to delete and click <span class="guibutton"><strong>Yes</strong></span>. Clicking <span class="guibutton"><strong>Yes</strong></span> causes the failover domain to be removed from the list of failover domains under <span class="guimenu"><strong>Failover Domains</strong></span> in the left frame of the <span class="application"><strong>Cluster Configuration Tool</strong></span>.
					</div></li><li class="listitem"><div class="para">
						At the <span class="application"><strong>Cluster Configuration Tool</strong></span>, perform one of the following actions depending on whether the configuration is for a new cluster or for one that is operational and running:
					</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
								New cluster — If this is a new cluster, choose <span class="guimenu"><strong>File</strong></span> =&gt; <span class="guimenuitem"><strong>Save</strong></span> to save the changes to the cluster configuration.
							</div></li><li class="listitem"><div class="para">
								Running cluster — If this cluster is operational and running, and you want to propagate the change immediately, click the <span class="guibutton"><strong>Send to Cluster</strong></span> button. Clicking <span class="guibutton"><strong>Send to Cluster</strong></span> automatically saves the configuration change. If you do not want to propagate the change immediately, choose <span class="guimenu"><strong>File</strong></span> =&gt; <span class="guimenuitem"><strong>Save</strong></span> to save the changes to the cluster configuration.
							</div></li></ul></div></li></ol></div></div><div class="section" id="s2-config-remove-member-failoverdm-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-config-remove-member-failoverdm-CA">5.6.3. Removing a Member from a Failover Domain</h3></div></div></div><div class="para">
				To remove a member from a failover domain, follow these steps:
			</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
						At the left frame of the <span class="application"><strong>Cluster Configuration Tool</strong></span>, click the failover domain that you want to change (listed under <span class="guimenu"><strong>Failover Domains</strong></span>).
					</div></li><li class="listitem"><div class="para">
						At the bottom of the right frame (labeled <span class="guimenu"><strong>Properties</strong></span>), click the <span class="guibutton"><strong>Edit Failover Domain Properties</strong></span> button. Clicking the <span class="guibutton"><strong>Edit Failover Domain Properties</strong></span> button causes the <span class="guilabel"><strong>Failover Domain Configuration</strong></span> dialog box to be displayed (<a class="xref" href="#fig-soft-failoverdn-CA">Figure 5.7, « <span class="guimenu">Failover Domain Configuration</span>: Configuring a Failover Domain »</a>).
					</div></li><li class="listitem"><div class="para">
						At the <span class="guilabel"><strong>Failover Domain Configuration</strong></span> dialog box, in the <span class="guimenu"><strong>Member Node</strong></span> column, click the node name that you want to delete from the failover domain and click the <span class="guibutton"><strong>Remove Member from Domain</strong></span> button. Clicking <span class="guibutton"><strong>Remove Member from Domain</strong></span> removes the node from the <span class="guimenu"><strong>Member Node</strong></span> column. Repeat this step for each node that is to be deleted from the failover domain. (Nodes must be deleted one at a time.)
					</div></li><li class="listitem"><div class="para">
						When finished, click <span class="guibutton"><strong>Close</strong></span>.
					</div></li><li class="listitem"><div class="para">
						At the <span class="application"><strong>Cluster Configuration Tool</strong></span>, perform one of the following actions depending on whether the configuration is for a new cluster or for one that is operational and running:
					</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
								New cluster — If this is a new cluster, choose <span class="guimenu"><strong>File</strong></span> =&gt; <span class="guimenuitem"><strong>Save</strong></span> to save the changes to the cluster configuration.
							</div></li><li class="listitem"><div class="para">
								Running cluster — If this cluster is operational and running, and you want to propagate the change immediately, click the <span class="guibutton"><strong>Send to Cluster</strong></span> button. Clicking <span class="guibutton"><strong>Send to Cluster</strong></span> automatically saves the configuration change. If you do not want to propagate the change immediately, choose <span class="guimenu"><strong>File</strong></span> =&gt; <span class="guimenuitem"><strong>Save</strong></span> to save the changes to the cluster configuration.
							</div></li></ul></div></li></ol></div></div></div><div class="section" id="s1-config-service-dev-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-config-service-dev-CA">5.7. Adding Cluster Resources</h2></div></div></div><div class="para">
			To specify a resource for a cluster service, follow these steps:
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					On the <span class="guimenu"><strong>Resources</strong></span> property of the <span class="application"><strong>Cluster Configuration Tool</strong></span>, click the <span class="guibutton"><strong>Create a Resource</strong></span> button. Clicking the <span class="guibutton"><strong>Create a Resource</strong></span> button causes the <span class="guilabel"><strong>Resource Configuration</strong></span> dialog box to be displayed.
				</div></li><li class="listitem"><div class="para">
					At the <span class="guilabel"><strong>Resource Configuration</strong></span> dialog box, under <span class="guimenu"><strong>Select a Resource Type</strong></span>, click the drop-down box. At the drop-down box, select a resource to configure. <a class="xref" href="#ap-ha-resource-params-CA">Annexe C, <em>HA Resource Parameters</em></a> describes resource parameters.
				</div></li><li class="listitem"><div class="para">
					When finished, click <span class="guibutton"><strong>OK</strong></span>.
				</div></li><li class="listitem"><div class="para">
					Choose <span class="guimenu"><strong>File</strong></span> =&gt; <span class="guimenuitem"><strong>Save</strong></span> to save the change to the <code class="filename">/etc/cluster/cluster.conf</code> configuration file.
				</div></li></ol></div></div><div class="section" id="s1-add-service-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-add-service-CA">5.8. Adding a Cluster Service to the Cluster</h2></div></div></div><a id="id854385" class="indexterm"></a><a id="id854396" class="indexterm"></a><div class="para">
			To add a cluster service to the cluster, follow these steps:
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					At the left frame, click <span class="guimenu"><strong>Services</strong></span>.
				</div></li><li class="listitem"><div class="para">
					At the bottom of the right frame (labeled <span class="guimenu"><strong>Properties</strong></span>), click the <span class="guibutton"><strong>Create a Service</strong></span> button. Clicking <span class="guibutton"><strong>Create a Service</strong></span> causes the <span class="guilabel"><strong>Add a Service</strong></span> dialog box to be displayed.
				</div></li><li class="listitem"><div class="para">
					At the <span class="guilabel"><strong>Add a Service</strong></span> dialog box, type the name of the service in the <span class="guimenu"><strong>Name</strong></span> text box and click <span class="guibutton"><strong>OK</strong></span>. Clicking <span class="guibutton"><strong>OK</strong></span> causes the <span class="guilabel"><strong>Service Management</strong></span> dialog box to be displayed (refer to <a class="xref" href="#fig-soft-addsvc-CA">Figure 5.9, « Adding a Cluster Service »</a>).
				</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						Use a descriptive name that clearly distinguishes the service from other services in the cluster.
					</div></div></div><div class="figure" id="fig-soft-addsvc-CA"><div class="figure-contents"><div class="mediaobject"><img src="./images/service-management-dbox.png" width="444" alt="Adding a Cluster Service" /><div class="longdesc"><div class="para">
								Add Cluster Service dialog box
							</div></div></div></div><h6>Figure 5.9. Adding a Cluster Service</h6></div><br class="figure-break" /><div class="para">
				</div></li><li class="listitem"><div class="para">
					If you want to restrict the members on which this cluster service is able to run, choose a failover domain from the <span class="guimenu"><strong>Failover Domain</strong></span> drop-down box. (Refer to <a class="xref" href="#s1-config-failover-domain-CA">Section 5.6, « Configuring a Failover Domain »</a> for instructions on how to configure a failover domain.)
				</div></li><li class="listitem"><div class="para">
					<span class="guimenu"><strong>Autostart This Service</strong></span> checkbox — This is checked by default. If <span class="guimenu"><strong>Autostart This Service</strong></span> is checked, the service is started automatically when a cluster is started and running. If <span class="guimenu"><strong>Autostart This Service</strong></span> is <span class="emphasis"><em>not</em></span> checked, the service must be started manually any time the cluster comes up from stopped state.
				</div></li><li class="listitem"><div class="para">
					<span class="guimenu"><strong>Run Exclusive</strong></span> checkbox — This sets a policy wherein the service only runs on nodes that have <span class="emphasis"><em>no other</em></span> services running on them. For example, for a very busy web server that is clustered for high availability, it would would be advisable to keep that service on a node alone with no other services competing for his resources — that is, <span class="guimenu"><strong>Run Exclusive</strong></span> checked. On the other hand, services that consume few resources (like NFS and Samba), can run together on the same node without little concern over contention for resources. For those types of services you can leave the <span class="guimenu"><strong>Run Exclusive</strong></span> unchecked.
				</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						Circumstances that require enabling <span class="guimenu"><strong>Run Exclusive</strong></span> are rare. Enabling <span class="guimenu"><strong>Run Exclusive</strong></span> can render a service offline if the node it is running on fails and no other nodes are empty.
					</div></div></div></li><li class="listitem"><div class="para">
					Select a recovery policy to specify how the resource manager should recover from a service failure. At the upper right of the <span class="guilabel"><strong>Service Management</strong></span> dialog box, there are three <span class="guimenu"><strong>Recovery Policy</strong></span> options available:
				</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
							<span class="guimenu"><strong>Restart</strong></span> — Restart the service in the node the service is currently located. The default setting is <span class="guimenu"><strong>Restart</strong></span>. If the service cannot be restarted in the current node, the service is relocated.
						</div></li><li class="listitem"><div class="para">
							<span class="guimenu"><strong>Relocate</strong></span> — Relocate the service before restarting. Do not restart the node where the service is currently located.
						</div></li><li class="listitem"><div class="para">
							<span class="guimenu"><strong>Disable</strong></span> — Do not restart the service at all.
						</div></li></ul></div></li><li class="listitem"><div class="para">
					Click the <span class="guibutton"><strong>Add a Shared Resource to this service</strong></span> button and choose a resource from the list of resources that you configured in <a class="xref" href="#s1-config-service-dev-CA">Section 5.7, « Adding Cluster Resources »</a>.
				</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						If you are adding a Samba-service resource, connect a Samba-service resource directly to the service, <span class="emphasis"><em>not</em></span> to a resource within a service. That is, at the <span class="guilabel"><strong>Service Management</strong></span> dialog box, use either <span class="guibutton"><strong>Create a new resource for this service</strong></span> or <span class="guibutton"><strong>Add a Shared Resource to this service</strong></span>; do <span class="emphasis"><em>not</em></span> use <span class="guibutton"><strong>Attach a new Private Resource to the Selection</strong></span> or <span class="guibutton"><strong>Attach a Shared Resource to the selection</strong></span>.
					</div></div></div></li><li class="listitem"><div class="para">
					If needed, you may also create a <em class="firstterm">private</em> resource that becomes a subordinate resource by clicking the <span class="guibutton"><strong>Attach a new Private Resource to the Selection</strong></span> button. The process is the same as creating a shared resource, described in <a class="xref" href="#s1-config-service-dev-CA">Section 5.7, « Adding Cluster Resources »</a>. The private resource appears as a child of the shared resource with which you associated it. Click the triangle icon next to the shared resource to display any associated private resources.
				</div></li><li class="listitem"><div class="para">
					When finished, click <span class="guibutton"><strong>OK</strong></span>.
				</div></li><li class="listitem"><div class="para">
					Choose <span class="guimenu"><strong>File</strong></span> =&gt; <span class="guimenuitem"><strong>Save</strong></span> to save the changes to the cluster configuration.
				</div></li></ol></div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
				To verify the existence of the IP service resource used in a cluster service, you must use the <code class="command">/sbin/ip addr list</code> command on a cluster node. The following output shows the <code class="command">/sbin/ip addr list</code> command executed on a node running a cluster service:
			</div><pre class="screen">
1: lo: &lt;LOOPBACK,UP&gt; mtu 16436 qdisc noqueue 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: &lt;BROADCAST,MULTICAST,UP&gt; mtu 1356 qdisc pfifo_fast qlen 1000
    link/ether 00:05:5d:9a:d8:91 brd ff:ff:ff:ff:ff:ff
    inet 10.11.4.31/22 brd 10.11.7.255 scope global eth0
    inet6 fe80::205:5dff:fe9a:d891/64 scope link
    inet 10.11.4.240/22 scope global secondary eth0
       valid_lft forever preferred_lft forever
</pre></div></div><div class="section" id="s2-add-service-CA-relocate"><div class="titlepage"><div><div><h3 class="title" id="s2-add-service-CA-relocate">5.8.1. Relocating a Service in a Cluster</h3></div></div></div><div class="para">
				Service relocation functionality allows you to perform maintenance on a cluster member while maintaining application and data availability.
			</div><div class="para">
				To relocate a service, drag the service icon from the <span class="guilabel"><strong>Services</strong></span> tab onto the member icon in the <span class="guilabel"><strong>Members</strong></span> tab. The cluster manager stops the service on the member on which it was running and restarts it on the new member.
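			</div><div class="para">
				A service can also be relocated from the command line with the <code class="command">clusvcadm</code> utility. The following is a sketch only; the service name <strong class="userinput"><code>webby</code></strong> and member name <strong class="userinput"><code>node2</code></strong> are illustrative:
			</div><pre class="screen">
# <strong class="userinput"><code>clusvcadm -r webby -m node2</code></strong></pre><div class="para">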
			</div></div></div><div class="section" id="s1-propagate-config-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-propagate-config-CA">5.9. Propagating The Configuration File: New Cluster</h2></div></div></div><a id="id854862" class="indexterm"></a><a id="id854874" class="indexterm"></a><div class="para">
			For newly defined clusters, you must propagate the configuration file to the cluster nodes as follows:
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					Log in to the node where you created the configuration file.
				</div></li><li class="listitem"><div class="para">
					Using the <code class="command">scp</code> command, copy the <code class="filename">/etc/cluster/cluster.conf</code> file to all nodes in the cluster.
				</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						Propagating the cluster configuration file this way is necessary the first time a cluster is created. Once a cluster is installed and running, the cluster configuration file is propagated using the Red Hat cluster management GUI <span class="guibutton"><strong>Send to Cluster</strong></span> button. For more information about propagating the cluster configuration using the GUI <span class="guibutton"><strong>Send to Cluster</strong></span> button, refer to <a class="xref" href="#s1-admin-modify-CA">Section 6.3, « Modifying the Cluster Configuration »</a>.
					</div></div></div></li></ol></div></div><div class="section" id="s1-starting-cluster-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-starting-cluster-CA">5.10. Starting the Cluster Software</h2></div></div></div><a id="id854957" class="indexterm"></a><a id="id854965" class="indexterm"></a><div class="para">
			After you have propagated the cluster configuration to the cluster nodes you can either reboot each node or start the cluster software on each cluster node by running the following commands at each node in this order:
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					<code class="command">service cman start</code>
				</div></li><li class="listitem"><div class="para">
					<code class="command">service clvmd start</code>, if CLVM has been used to create clustered volumes
				</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						Shared storage for use in Red Hat Cluster Suite requires that you be running the cluster logical volume manager daemon (<code class="literal">clvmd</code>) or the High Availability Logical Volume Management agents (HA-LVM). If you are not able to use either the <code class="literal">clvmd</code> daemon or HA-LVM for operational reasons or because you do not have the correct entitlements, you must not use single-instance LVM on the shared disk as this may result in data corruption. If you have any concerns please contact your Red Hat service representative.
					</div></div></div></li><li class="listitem"><div class="para">
					<code class="command">service gfs start</code>, if you are using Red Hat GFS
				</div></li><li class="listitem"><div class="para">
					<code class="command">service rgmanager start</code>
				</div></li><li class="listitem"><div class="para">
					Start the Red Hat Cluster Suite management GUI. At the <span class="application"><strong>Cluster Configuration Tool</strong></span> tab, verify that the configuration is correct. At the <span class="application"><strong>Cluster Status Tool</strong></span> tab verify that the nodes and services are running as expected.
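				</div><div class="para">
					For example, you can start the management GUI from a shell prompt as follows:
				</div><pre class="screen">
# <strong class="userinput"><code>system-config-cluster &amp;</code></strong></pre><div class="para">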
				</div></li></ol></div></div></div><div xml:lang="fr-FR" class="chapter" id="ch-mgmt-scc-CA" lang="fr-FR"><div class="titlepage"><div><div><h2 class="title">Chapitre 6. Managing Red Hat Cluster With <code class="command">system-config-cluster</code></h2></div></div></div><div class="toc"><dl><dt><span class="section"><a href="#s1-admin-start-CA">6.1. Starting and Stopping the Cluster Software</a></span></dt><dt><span class="section"><a href="#s1-admin-service-CA">6.2. Managing High-Availability Services</a></span></dt><dt><span class="section"><a href="#s1-admin-modify-CA">6.3. Modifying the Cluster Configuration</a></span></dt><dt><span class="section"><a href="#s1-admin-backup-CA">6.4. Backing Up and Restoring the Cluster Database</a></span></dt><dt><span class="section"><a href="#s1-admin-disable-resource-CA">6.5. Disabling Resources of a Clustered Service for Maintenance</a></span></dt><dt><span class="section"><a href="#s1-admin-disable-CA">6.6. Disabling the Cluster Software</a></span></dt><dt><span class="section"><a href="#s1-admin-problems-CA">6.7. Diagnosing and Correcting Problems in a Cluster</a></span></dt></dl></div><a id="id835361" class="indexterm"></a><a id="id829271" class="indexterm"></a><div class="para">
		This chapter describes various administrative tasks for managing a Red Hat Cluster and consists of the following sections:
	</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
				<a class="xref" href="#s1-admin-start-CA">Section 6.1, « Starting and Stopping the Cluster Software »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-admin-service-CA">Section 6.2, « Managing High-Availability Services »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-admin-backup-CA">Section 6.4, « Backing Up and Restoring the Cluster Database »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-admin-disable-CA">Section 6.6, « Disabling the Cluster Software »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-admin-problems-CA">Section 6.7, « Diagnosing and Correcting Problems in a Cluster »</a>
			</div></li></ul></div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
			While <code class="command">system-config-cluster</code> provides several convenient tools for configuring and managing a Red Hat Cluster, the newer, more comprehensive tool, <span class="application"><strong>Conga</strong></span>, provides more convenience and flexibility than <code class="command">system-config-cluster</code>. You may want to consider using <span class="application"><strong>Conga</strong></span> instead (refer to <a class="xref" href="#ch-config-conga-CA">Chapitre 3, <em>Configuring Red Hat Cluster With <span class="application"><strong>Conga</strong></span></em></a> and <a class="xref" href="#ch-mgmt-conga-CA">Chapitre 4, <em>Managing Red Hat Cluster With <span class="application"><strong>Conga</strong></span></em></a>).
		</div></div></div><div class="section" id="s1-admin-start-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-admin-start-CA">6.1. Starting and Stopping the Cluster Software</h2></div></div></div><a id="id841166" class="indexterm"></a><a id="id841178" class="indexterm"></a><a id="id841190" class="indexterm"></a><div class="para">
			To start the cluster software on a member, type the following commands in this order:
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					<code class="command"> service cman start</code>
				</div></li><li class="listitem"><div class="para">
					<code class="command">service clvmd start</code>, if CLVM has been used to create clustered volumes
				</div></li><li class="listitem"><div class="para">
					<code class="command">service gfs start</code>, if you are using Red Hat GFS
				</div></li><li class="listitem"><div class="para">
					<code class="command">service rgmanager start</code>
				</div></li></ol></div><div class="para">
			To stop the cluster software on a member, type the following commands in this order:
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					<code class="command">service rgmanager stop</code>
				</div></li><li class="listitem"><div class="para">
					<code class="command">service gfs stop</code>, if you are using Red Hat GFS
				</div></li><li class="listitem"><div class="para">
					<code class="command">service clvmd stop</code>, if CLVM has been used to create clustered volumes
				</div></li><li class="listitem"><div class="para">
					<code class="command">service cman stop</code>
				</div></li></ol></div><div class="para">
			Stopping the cluster services on a member causes its services to fail over to an active member.
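		</div><div class="para">
			You can verify from a remaining member that the services have relocated, for example with the <code class="command">clustat</code> command; the output varies according to your configuration:
		</div><pre class="screen">
# <strong class="userinput"><code>clustat</code></strong></pre><div class="para">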
		</div></div><div class="section" id="s1-admin-service-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-admin-service-CA">6.2. Managing High-Availability Services</h2></div></div></div><div class="para">
			You can manage cluster services with the <span class="application"><strong>Cluster Status Tool</strong></span> (<a class="xref" href="#fig-intro-clustatus-service-CA">Figure 6.1, « <span class="application">Cluster Status Tool</span> »</a>) through the <span class="guimenu"><strong>Cluster Management</strong></span> tab in the Cluster Administration GUI.
		</div><div class="figure" id="fig-intro-clustatus-service-CA"><div class="figure-contents"><div class="mediaobject"><img src="./images/clustatus.png" width="444" alt="Cluster Status Tool" /><div class="longdesc"><div class="para">
						cluster status tool
					</div></div></div></div><h6>Figure 6.1. <span class="application">Cluster Status Tool</span></h6></div><br class="figure-break" /><div class="para">
			You can use the <span class="application"><strong>Cluster Status Tool</strong></span> to enable, disable, restart, or relocate a high-availability service. The <span class="application"><strong>Cluster Status Tool</strong></span> displays the current cluster status in the <span class="guimenu"><strong>Services </strong></span> area and automatically updates the status every 10 seconds.
		</div><div class="para">
			To enable a service, you can select the service in the <span class="guimenu"><strong>Services</strong></span> area and click <span class="guiicon"><strong>Enable</strong></span>. To disable a service, you can select the service in the <span class="guimenu"><strong>Services</strong></span> area and click <span class="guiicon"><strong>Disable</strong></span>. To restart a service, you can select the service in the <span class="guimenu"><strong>Services</strong></span> area and click <span class="guiicon"><strong>Restart</strong></span>. To relocate a service from one node to another, you can drag the service to another node and drop the service onto that node. Relocating a service restarts the service on that node. (Relocating a service to its current node — that is, dragging a service to its current node and dropping the service onto that node — restarts the service.)
		</div><a id="id857464" class="indexterm"></a><a id="id857476" class="indexterm"></a><a id="id857488" class="indexterm"></a><div class="para">
			The following tables describe the members and services status information displayed by the <span class="application"><strong>Cluster Status Tool</strong></span>.
		</div><div class="table" id="tb-admin-memberstattool-CA"><h6>Tableau 6.1. Members Status</h6><div class="table-contents"><table summary="Members Status" border="1"><colgroup><col width="25%" class="MemberStatus" /><col width="75%" class="Description" /></colgroup><thead><tr><th>
							Members Status
						</th><th>
							Description
						</th></tr></thead><tbody><tr><td>
							<span class="guimenu"><strong>Member</strong></span>
						</td><td>
							<table border="0" summary="Simple list" class="simplelist"><tr><td>The node is part of the cluster.</td></tr><tr><td>Note: A node can be a member of a cluster; however, the node may be inactive and incapable of running services. For example, if <code class="command">rgmanager</code> is not running on the node, but all other cluster software components are running in the node, the node appears as a <span class="guimenu"><strong>Member</strong></span> in the <span class="application"><strong>Cluster Status Tool</strong></span>. </td></tr></table>
						</td></tr><tr><td>
							<span class="guimenu"><strong>Dead</strong></span>
						</td><td>
							The node is unable to participate as a cluster member. The most basic cluster software is not running on the node.
						</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-admin-servicestat-CA"><h6>Tableau 6.2. Services Status</h6><div class="table-contents"><table summary="Services Status" border="1"><colgroup><col width="25%" class="ServiceStatus" /><col width="75%" class="Description" /></colgroup><thead><tr><th>
							Services Status
						</th><th>
							Description
						</th></tr></thead><tbody><tr><td>
							<span class="guimenu"><strong>Started</strong></span>
						</td><td>
							The service resources are configured and available on the cluster system that owns the service.
						</td></tr><tr><td>
							<span class="guimenu"><strong>Pending</strong></span>
						</td><td>
							The service has failed on a member and is pending start on another member.
						</td></tr><tr><td>
							<span class="guimenu"><strong>Disabled</strong></span>
						</td><td>
							The service has been disabled, and does not have an assigned owner. A disabled service is never restarted automatically by the cluster.
						</td></tr><tr><td>
							<span class="guimenu"><strong>Stopped</strong></span>
						</td><td>
							The service is not running; it is waiting for a member capable of starting the service. A service remains in the stopped state if autostart is disabled.
						</td></tr><tr><td>
							<span class="guimenu"><strong>Failed</strong></span>
						</td><td>
							The service has failed to start on the cluster, and the cluster cannot successfully stop the service. A failed service is never restarted automatically by the cluster.
						</td></tr></tbody></table></div></div><br class="table-break" /></div><div class="section" id="s1-admin-modify-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-admin-modify-CA">6.3. Modifying the Cluster Configuration</h2></div></div></div><a id="id813345" class="indexterm"></a><a id="id813357" class="indexterm"></a><div class="para">
			To modify the cluster configuration (the cluster configuration file, <code class="filename">/etc/cluster/cluster.conf</code>), use the <span class="application"><strong>Cluster Configuration Tool</strong></span>. For more information about using the <span class="application"><strong>Cluster Configuration Tool</strong></span>, refer to <a class="xref" href="#ch-config-scc-CA">Chapitre 5, <em>Configuring Red Hat Cluster With <code class="command">system-config-cluster</code></em></a>.
		</div><div class="warning"><div class="admonition_header"><h2>Warning</h2></div><div class="admonition"><div class="para">
				Do not manually edit the contents of the <code class="filename">/etc/cluster/cluster.conf</code> file without guidance from an authorized Red Hat representative or unless you fully understand the consequences of editing the <code class="filename">/etc/cluster/cluster.conf</code> file manually.
			</div></div></div><div class="important"><div class="admonition_header"><h2>Important</h2></div><div class="admonition"><div class="para">
				Although the <span class="application"><strong>Cluster Configuration Tool</strong></span> provides a <span class="guimenu"><strong>Quorum Votes</strong></span> parameter in the <span class="guilabel"><strong>Properties</strong></span> dialog box of each cluster member, that parameter is intended <span class="emphasis"><em>only</em></span> for use during initial cluster configuration. Furthermore, it is recommended that you retain the default <span class="guimenu"><strong>Quorum Votes</strong></span> value of <strong class="userinput"><code>1</code></strong>. For more information about using the <span class="application"><strong>Cluster Configuration Tool</strong></span>, refer to <a class="xref" href="#ch-config-scc-CA">Chapitre 5, <em>Configuring Red Hat Cluster With <code class="command">system-config-cluster</code></em></a>.
			</div></div></div><div class="para">
			To edit the cluster configuration file, click the <span class="guimenu"><strong>Cluster Configuration</strong></span> tab in the cluster configuration GUI. Clicking the <span class="guimenu"><strong>Cluster Configuration</strong></span> tab displays a graphical representation of the cluster configuration. Change the configuration file according to the following steps:
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					Make changes to cluster elements (for example, create a service).
				</div></li><li class="listitem"><div class="para">
					Propagate the updated configuration file throughout the cluster by clicking <span class="guibutton"><strong>Send to Cluster</strong></span>.
				</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						The <span class="application"><strong>Cluster Configuration Tool</strong></span> does not display the <span class="guibutton"><strong>Send to Cluster</strong></span> button if the cluster is new and has not been started yet, or if the node from which you are running the <span class="application"><strong>Cluster Configuration Tool</strong></span> is not a member of the cluster. If the <span class="guibutton"><strong>Send to Cluster</strong></span> button is not displayed, you can still use the <span class="application"><strong>Cluster Configuration Tool</strong></span>; however, you cannot propagate the configuration. You can still <span class="emphasis"><em>save</em></span> the configuration file. For information about using the <span class="application"><strong>Cluster Configuration Tool</strong></span> for a new cluster configuration, refer to <a class="xref" href="#ch-config-scc-CA">Chapitre 5, <em>Configuring Red Hat Cluster With <code class="command">system-config-cluster</code></em></a>.
					</div></div></div></li><li class="listitem"><div class="para">
					Clicking <span class="guibutton"><strong>Send to Cluster</strong></span> causes a <span class="guilabel"><strong>Warning</strong></span> dialog box to be displayed. Click <span class="guibutton"><strong>Yes</strong></span> to save and propagate the configuration.
				</div></li><li class="listitem"><div class="para">
					Clicking <span class="guibutton"><strong>Yes</strong></span> causes an <span class="guilabel"><strong>Information</strong></span> dialog box to be displayed, confirming that the current configuration has been propagated to the cluster. Click <span class="guibutton"><strong>OK</strong></span>.
				</div></li><li class="listitem"><div class="para">
					Click the <span class="guimenu"><strong>Cluster Management</strong></span> tab and verify that the changes have been propagated to the cluster members.
				</div></li></ol></div></div><div class="section" id="s1-admin-backup-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-admin-backup-CA">6.4. Backing Up and Restoring the Cluster Database</h2></div></div></div><a id="id838655" class="indexterm"></a><a id="id838667" class="indexterm"></a><a id="id838679" class="indexterm"></a><a id="id838691" class="indexterm"></a><div class="para">
			The <span class="application"><strong>Cluster Configuration Tool</strong></span> automatically retains backup copies of the three most recently used configuration files (besides the currently used configuration file). Retaining the backup copies is useful if the cluster does not function correctly because of misconfiguration and you need to return to a previous working configuration.
		</div><div class="para">
			Each time you save a configuration file, the <span class="application"><strong>Cluster Configuration Tool</strong></span> saves backup copies of the three most recently used configuration files as <code class="filename">/etc/cluster/cluster.conf.bak.1</code>, <code class="filename">/etc/cluster/cluster.conf.bak.2</code>, and <code class="filename">/etc/cluster/cluster.conf.bak.3</code>. The backup file <code class="filename">/etc/cluster/cluster.conf.bak.1</code> is the newest backup, <code class="filename">/etc/cluster/cluster.conf.bak.2</code> is the second newest backup, and <code class="filename">/etc/cluster/cluster.conf.bak.3</code> is the third newest backup.
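		</div><div class="para">
			For example, you can list the current configuration file and its backup copies on a node as follows:
		</div><pre class="screen">
# <strong class="userinput"><code>ls -l /etc/cluster/cluster.conf*</code></strong></pre><div class="para">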
		</div><div class="para">
			If a cluster member becomes inoperable because of misconfiguration, restore the configuration file according to the following steps:
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					At the <span class="application"><strong>Cluster Configuration Tool</strong></span> tab of the Red Hat Cluster Suite management GUI, click <span class="guimenu"><strong>File</strong></span> =&gt; <span class="guimenuitem"><strong>Open</strong></span>.
				</div></li><li class="listitem"><div class="para">
					Clicking <span class="guimenu"><strong>File =&gt; Open</strong></span> causes the <span class="guilabel"><strong>system-config-cluster</strong></span> dialog box to be displayed.
				</div></li><li class="listitem"><div class="para">
					At the <span class="guilabel"><strong>system-config-cluster</strong></span> dialog box, select a backup file (for example, <code class="filename">/etc/cluster/cluster.conf.bak.1</code>). Verify the file selection in the <span class="guimenu"><strong>Selection</strong></span> box and click <span class="guibutton"><strong>OK</strong></span>.
				</div></li><li class="listitem"><div class="para">
					Increment the configuration version beyond the current working version number as follows:
				</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
							Click <span class="guimenu"><strong>Cluster</strong></span> =&gt; <span class="guimenu"><strong>Edit Cluster Properties</strong></span>.
						</div></li><li class="listitem"><div class="para">
							At the <span class="guilabel"><strong>Cluster Properties</strong></span> dialog box, change the <span class="guimenu"><strong>Config Version</strong></span> value and click <span class="guibutton"><strong>OK</strong></span>.
						</div></li></ol></div></li><li class="listitem"><div class="para">
					Click <span class="guimenu"><strong>File</strong></span> =&gt; <span class="guimenuitem"><strong>Save As</strong></span>.
				</div></li><li class="listitem"><div class="para">
					Clicking <span class="guimenu"><strong>File</strong></span> =&gt; <span class="guimenuitem"><strong>Save As</strong></span> causes the <span class="guilabel"><strong>system-config-cluster</strong></span> dialog box to be displayed.
				</div></li><li class="listitem"><div class="para">
					At the <span class="guilabel"><strong>system-config-cluster</strong></span> dialog box, select <code class="filename">/etc/cluster/cluster.conf</code> and click <span class="guibutton"><strong>OK</strong></span>. (Verify the file selection in the <span class="guimenu"><strong>Selection</strong></span> box.)
				</div></li><li class="listitem"><div class="para">
					Clicking <span class="guibutton"><strong>OK</strong></span> causes an <span class="guilabel"><strong>Information</strong></span> dialog box to be displayed. At that dialog box, click <span class="guibutton"><strong>OK</strong></span>.
				</div></li><li class="listitem"><div class="para">
					Propagate the updated configuration file throughout the cluster by clicking <span class="guibutton"><strong>Send to Cluster</strong></span>.
				</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
						The <span class="application"><strong>Cluster Configuration Tool</strong></span> does not display the <span class="guibutton"><strong>Send to Cluster</strong></span> button if the cluster is new and has not been started yet, or if the node from which you are running the <span class="application"><strong>Cluster Configuration Tool</strong></span> is not a member of the cluster. If the <span class="guibutton"><strong>Send to Cluster</strong></span> button is not displayed, you can still use the <span class="application"><strong>Cluster Configuration Tool</strong></span>; however, you cannot propagate the configuration. You can still <span class="emphasis"><em>save</em></span> the configuration file. For information about using the <span class="application"><strong>Cluster Configuration Tool</strong></span> for a new cluster configuration, refer to <a class="xref" href="#ch-config-scc-CA">Chapitre 5, <em>Configuring Red Hat Cluster With <code class="command">system-config-cluster</code></em></a>.
					</div></div></div></li><li class="listitem"><div class="para">
					Clicking <span class="guibutton"><strong>Send to Cluster</strong></span> causes a <span class="guilabel"><strong>Warning</strong></span> dialog box to be displayed. Click <span class="guibutton"><strong>Yes</strong></span> to propagate the configuration.
				</div></li><li class="listitem"><div class="para">
					Click the <span class="guimenu"><strong>Cluster Management</strong></span> tab and verify that the changes have been propagated to the cluster members.
				</div></li></ol></div></div><div class="section" id="s1-admin-disable-resource-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-admin-disable-resource-CA">6.5. Disabling Resources of a Clustered Service for Maintenance</h2></div></div></div><div class="para">
			At times, it may be necessary to stop a resource that is part of a clustered service. You can configure services in the <code class="filename">cluster.conf</code> file with hierarchical resources (similar to a dependency tree) so that you can disable a resource in a service without disabling other resources within that service.
		</div><div class="para">
			So, for example, if you have a database that uses an ext3-formatted filesystem, you can disable the database while preserving the filesystem resource for use in the service.
		</div><div class="para">
			In the following example snippet of a <code class="filename">cluster.conf</code> file, a service uses a MySQL database resource and an ext3-formatted filesystem resource.
		</div><pre class="screen">

&lt;resources&gt;
     &lt;mysql config_file="/etc/my.cnf" name="mysql-resource" shutdown_wait="0"/&gt;
     &lt;fs device="/dev/sdb1" force_fsck="0" force_unmount="1" fsid="9349" fstype="ext3" mountpoint="/opt/db" name="SharedDisk" self_fence="0"/&gt;
&lt;/resources&gt;

&lt;service name="ha-mysql"&gt;
     &lt;fs ref="SharedDisk"&gt;
          &lt;mysql ref="mysql-resource"/&gt;
     &lt;/fs&gt;
&lt;/service&gt;

</pre><div class="para">
			In order to stop the MySQL database and perform maintenance without interfering with the cluster software (mainly rgmanager), you must first freeze the clustered service:
		</div><pre class="screen">
clusvcadm -Z ha-mysql
</pre><div class="para">
			You can then stop the MySQL service with the <code class="command">rg_test</code> command:
		</div><pre class="screen">
rg_test test /etc/cluster/cluster.conf stop mysql mysql-resource
</pre><div class="para">
			When the MySQL database has been shut down, the maintenance can be done. After finishing the maintenance, start the MySQL database with <code class="command">rg_test</code> again:
		</div><pre class="screen">
rg_test test /etc/cluster/cluster.conf start mysql mysql-resource
</pre><div class="para">
			The cluster service is still frozen and will not be monitored by rgmanager. To enable monitoring again, unfreeze the clustered service:
		</div><pre class="screen">
clusvcadm -U ha-mysql
</pre><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
				The <code class="command">rg_test</code> utility will stop all instances of a resource on a given node, potentially causing undesired results if multiple services on a single node are sharing the same resource. Do not perform these steps on resources that have multiple instances within the <code class="filename">cluster.conf</code> file. In such cases, it is usually necessary to disable the service for maintenance.
			</div></div></div></div><div class="section" id="s1-admin-disable-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-admin-disable-CA">6.6. Disabling the Cluster Software</h2></div></div></div><a id="id890917" class="indexterm"></a><a id="id890929" class="indexterm"></a><a id="id890940" class="indexterm"></a><div class="para">
			It may become necessary to temporarily disable the cluster software on a cluster member. For example, if a cluster member experiences a hardware failure, you may want to reboot that member, but prevent it from rejoining the cluster to perform maintenance on the system.
		</div><div class="para">
			Use the <code class="command">/sbin/chkconfig</code> command to stop the member from joining the cluster at boot-up as follows:
		</div><pre class="screen">
# <strong class="userinput"><code>chkconfig --level 2345 rgmanager off</code></strong>
# <strong class="userinput"><code>chkconfig --level 2345 gfs off</code></strong>
# <strong class="userinput"><code>chkconfig --level 2345 clvmd off</code></strong>
# <strong class="userinput"><code>chkconfig --level 2345 cman off</code></strong></pre><div class="para">
			Once the problems with the disabled cluster member have been resolved, use the following commands to allow the member to rejoin the cluster:
		</div><pre class="screen">
# <strong class="userinput"><code>chkconfig --level 2345 rgmanager on</code></strong>
# <strong class="userinput"><code>chkconfig --level 2345 gfs on</code></strong>
# <strong class="userinput"><code>chkconfig --level 2345 clvmd on</code></strong>
# <strong class="userinput"><code>chkconfig --level 2345 cman on</code></strong></pre><div class="para">
			You can then reboot the member for the changes to take effect or run the following commands in the order shown to restart cluster software:
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					<code class="command">service cman start</code>
				</div></li><li class="listitem"><div class="para">
					<code class="command">service clvmd start</code>, if CLVM has been used to create clustered volumes
				</div></li><li class="listitem"><div class="para">
					<code class="command">service gfs start</code>, if you are using Red Hat GFS
				</div></li><li class="listitem"><div class="para">
					<code class="command">service rgmanager start</code>
				</div></li></ol></div></div><div class="section" id="s1-admin-problems-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-admin-problems-CA">6.7. Diagnosing and Correcting Problems in a Cluster</h2></div></div></div><a id="id891086" class="indexterm"></a><a id="id891098" class="indexterm"></a><a id="id891109" class="indexterm"></a><div class="para">
			For information about diagnosing and correcting problems in a cluster, contact an authorized Red Hat support representative.
		</div></div></div><div xml:lang="fr-FR" class="appendix" id="ap-httpd-service-CA" lang="fr-FR"><div class="titlepage"><div><div><h1 class="title">Example of Setting Up Apache HTTP Server</h1></div></div></div><a id="id812187" class="indexterm"></a><a id="id779314" class="indexterm"></a><a id="id814254" class="indexterm"></a><div class="para">
		This appendix provides an example of setting up a highly available Apache HTTP Server on a Red Hat Cluster. The example describes how to set up a service to fail over an Apache HTTP Server. Variables in the example apply to this example only; they are provided to assist setting up a service that suits your requirements.
	</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
			This example uses the <span class="application"><strong>Cluster Configuration Tool</strong></span> (<code class="command">system-config-cluster</code>). You can use comparable <span class="application"><strong>Conga</strong></span> functions to make an Apache HTTP Server highly available on a Red Hat Cluster.
		</div></div></div><div class="section" id="s1-apache-setup-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-apache-setup-CA">A.1. Apache HTTP Server Setup Overview</h2></div></div></div><div class="para">
			First, configure Apache HTTP Server on all nodes in the cluster. If using a failover domain, assign the service to all cluster nodes configured to run the Apache HTTP Server. Refer to <a class="xref" href="#s1-config-failover-domain-CA">Section 5.6, « Configuring a Failover Domain »</a> for instructions. The cluster software ensures that only one cluster system runs the Apache HTTP Server at one time. The example configuration consists of installing the <code class="filename">httpd</code> RPM package on all cluster nodes (or on nodes in the failover domain, if used) and configuring a GFS shared resource for the Web content.
		</div><div class="para">
			When installing the Apache HTTP Server on the cluster systems, run the following command to ensure that the cluster nodes do not automatically start the service when the system boots:
		</div><pre class="screen">
# <strong class="userinput"><code>chkconfig --del httpd</code></strong></pre><div class="para">
			Rather than having the system init scripts spawn the <code class="command">httpd</code> daemon, the cluster infrastructure initializes the service on the active cluster node. This ensures that the corresponding IP address and file system mounts are active on only one cluster node at a time.
		</div><div class="para">
			When adding an <code class="filename">httpd</code> service, a <em class="firstterm">floating</em> IP address must be assigned to the service so that the IP address will transfer from one cluster node to another in the event of failover or service relocation. The cluster infrastructure binds this IP address to the network interface on the cluster system that is currently running the Apache HTTP Server. This IP address ensures that the cluster node running <code class="filename">httpd</code> is transparent to the clients accessing the service.
		</div><div class="para">
			The file systems that contain the Web content cannot be automatically mounted on the shared storage resource when the cluster nodes boot. Instead, the cluster software must mount and unmount the file system as the <code class="filename">httpd</code> service is started and stopped. This prevents the cluster systems from accessing the same data simultaneously, which may result in data corruption. Therefore, do not include the file systems in the <code class="filename">/etc/fstab</code> file.
		</div></div><div class="section" id="s1-apache-sharedfs-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-apache-sharedfs-CA">A.2. Configuring Shared Storage</h2></div></div></div><div class="para">
			To set up the shared file system resource, perform the following tasks as root on one cluster system:
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					On one cluster node, use the interactive <code class="command">parted</code> utility to create a partition to use for the document root directory. Note that it is possible to create multiple document root directories on different disk partitions.
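				</div><div class="para">
					For example, an interactive <code class="command">parted</code> session to create a partition might look like the following; the device name and partition size are illustrative only:
				</div><pre class="screen">
# <strong class="userinput"><code>parted /dev/sde</code></strong>
(parted) <strong class="userinput"><code>mkpart primary ext3 0 1024</code></strong>
(parted) <strong class="userinput"><code>print</code></strong>
(parted) <strong class="userinput"><code>quit</code></strong></pre><div class="para">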
				</div></li><li class="listitem"><div class="para">
					Use the <code class="command">mkfs</code> command to create an ext3 file system on the partition you created in the previous step. Specify the drive letter and the partition number. For example:
				</div><pre class="screen">
# <strong class="userinput"><code>mkfs -t ext3 /dev/sde3</code></strong></pre></li><li class="listitem"><div class="para">
					Mount the file system that contains the document root directory. For example:
				</div><pre class="screen">
# <strong class="userinput"><code>mount /dev/sde3 /var/www/html</code></strong></pre><div class="para">
					Do not add this mount information to the <code class="filename">/etc/fstab</code> file because only the cluster software can mount and unmount file systems used in a service.
				</div></li><li class="listitem"><div class="para">
					Copy all the required files to the document root directory.
				</div></li><li class="listitem"><div class="para">
					If you have CGI files or other files that must be in different directories or in separate partitions, repeat these steps, as needed.
				</div></li></ol></div></div><div class="section" id="s1-apache-inshttpd-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-apache-inshttpd-CA">A.3. Installing and Configuring the Apache HTTP Server</h2></div></div></div><div class="para">
			The Apache HTTP Server must be installed and configured on all nodes in the assigned failover domain, if used, or in the cluster. The basic server configuration must be the same on all nodes on which it runs for the service to fail over correctly. The following example shows a basic Apache HTTP Server installation that includes no third-party modules or performance tuning.
		</div><div class="para">
			On each node in the cluster (or each node in the failover domain, if used), install the <code class="filename">httpd</code> RPM package. For example:
		</div><div class="para">
			<code class="command"> rpm -Uvh httpd-<em class="replaceable"><code>&lt;version&gt;</code></em>.<em class="replaceable"><code>&lt;arch&gt;</code></em>.rpm</code>
		</div><a id="id817797" class="indexterm"></a><a id="id817811" class="indexterm"></a><a id="id741276" class="indexterm"></a><div class="para">
			To configure the Apache HTTP Server as a cluster service, perform the following tasks:
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					Edit the <code class="filename">/etc/httpd/conf/httpd.conf</code> configuration file and customize the file according to your configuration. For example:
				</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
							Specify the directory that contains the HTML files. Also specify this mount point when adding the service to the cluster configuration. It is only required to change this field if the mount point for the web site's content differs from the default setting of <code class="filename">/var/www/html/</code>. For example:
						</div><pre class="screen">
DocumentRoot "/mnt/httpdservice/html"
</pre></li><li class="listitem"><div class="para">
							Specify a unique IP address to which the service will listen for requests. For example:
						</div><pre class="screen">
Listen 192.168.1.100:80
</pre><div class="para">
							This IP address then must be configured as a cluster resource for the service using the <span class="application"><strong>Cluster Configuration Tool</strong></span>.
						</div></li><li class="listitem"><div class="para">
							If the script directory resides in a non-standard location, specify the directory that contains the CGI programs. For example:
						</div><pre class="screen">
ScriptAlias /cgi-bin/ "/mnt/httpdservice/cgi-bin/"
</pre></li><li class="listitem"><div class="para">
							Specify the path that was used in the previous step, and set the default access permissions for that directory. For example:
						</div><pre class="screen">
&lt;Directory "/mnt/httpdservice/cgi-bin"&gt;
AllowOverride None
Options None 
Order allow,deny 
Allow from all 
&lt;/Directory&gt;
</pre><div class="para">
							Additional changes may need to be made to tune the Apache HTTP Server or add module functionality. For information on setting up other options, refer to the <em class="citetitle">Red Hat Enterprise Linux System Administration Guide</em> and the <em class="citetitle">Red Hat Enterprise Linux Reference Guide</em>.
						</div></li></ul></div></li><li class="listitem"><div class="para">
					The standard Apache HTTP Server start script, <code class="filename">/etc/rc.d/init.d/httpd</code>, is also used within the cluster framework to start and stop the Apache HTTP Server on the active cluster node. Accordingly, when configuring the service, specify this script by adding it as a <span class="guimenu"><strong>Script</strong></span> resource in the <span class="application"><strong>Cluster Configuration Tool</strong></span>.
				</div></li><li class="listitem"><div class="para">
					Copy the configuration file over to the other nodes of the cluster (or nodes of the failover domain, if configured).
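				</div><div class="para">
					For example, using the <code class="command">scp</code> command (the node name <strong class="userinput"><code>node2</code></strong> is illustrative):
				</div><pre class="screen">
# <strong class="userinput"><code>scp /etc/httpd/conf/httpd.conf node2:/etc/httpd/conf/</code></strong></pre><div class="para">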
				</div></li></ol></div><div class="para">
			Before the service is added to the cluster configuration, ensure that the Apache HTTP Server directories are not mounted. Then, on one node, invoke the <span class="application"><strong>Cluster Configuration Tool</strong></span> to add the service, as follows. This example assumes a failover domain named <code class="filename">httpd-domain</code> was created for this service.
		</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
					Add the init script for the Apache HTTP Server service.
				</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
							Select the <span class="guimenu"><strong>Resources</strong></span> tab and click <span class="guibutton"><strong>Create a Resource</strong></span>. The <span class="guilabel"><strong>Resources Configuration</strong></span> properties dialog box is displayed.
						</div></li><li class="listitem"><div class="para">
							Select <span class="guimenu"><strong>Script</strong></span> form the drop down menu.
						</div></li><li class="listitem"><div class="para">
							Enter a <span class="guimenu"><strong>Name</strong></span> to be associated with the Apache HTTP Server service.
						</div></li><li class="listitem"><div class="para">
							Specify the path to the Apache HTTP Server init script (for example, <strong class="userinput"><code>/etc/rc.d/init.d/httpd</code></strong>) in the <span class="guimenu"><strong>File (with path)</strong></span> field.
						</div></li><li class="listitem"><div class="para">
							Click <span class="guibutton"><strong>OK</strong></span>.
						</div></li></ul></div></li><li class="listitem"><div class="para">
					Add a device for the Apache HTTP Server content files and/or custom scripts.
				</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
							Click <span class="guibutton"><strong>Create a Resource</strong></span>.
						</div></li><li class="listitem"><div class="para">
							In the <span class="guilabel"><strong>Resource Configuration</strong></span> dialog, select <span class="guimenu"><strong>File System</strong></span> from the drop-down menu.
						</div></li><li class="listitem"><div class="para">
							Enter the <span class="guimenu"><strong>Name</strong></span> for the resource (for example, <strong class="userinput"><code>httpd-content</code></strong>).
						</div></li><li class="listitem"><div class="para">
							Choose <span class="guimenu"><strong>ext3</strong></span> from the <span class="guimenu"><strong>File System Type</strong></span> drop-down menu.
						</div></li><li class="listitem"><div class="para">
							Enter the mount point in the <span class="guimenu"><strong>Mount Point</strong></span> field (for example, <strong class="userinput"><code>/var/www/html/</code></strong>).
						</div></li><li class="listitem"><div class="para">
							Enter the device special file name in the <span class="guimenu"><strong>Device</strong></span> field (for example, <strong class="userinput"><code>/dev/sda3</code></strong>).
						</div></li></ul></div></li><li class="listitem"><div class="para">
					Add an IP address for the Apache HTTP Server service.
				</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
							Click <span class="guibutton"><strong>Create a Resource</strong></span>.
						</div></li><li class="listitem"><div class="para">
							Choose <span class="guimenu"><strong>IP Address</strong></span> from the drop-down menu.
						</div></li><li class="listitem"><div class="para">
							Enter the <span class="guimenu"><strong>IP Address</strong></span> to be associated with the Apache HTTP Server service.
						</div></li><li class="listitem"><div class="para">
							Make sure that the <span class="guimenu"><strong>Monitor Link</strong></span> checkbox is left checked.
						</div></li><li class="listitem"><div class="para">
							Click <span class="guibutton"><strong>OK</strong></span>.
						</div></li></ul></div></li><li class="listitem"><div class="para">
					Click the <span class="guimenu"><strong>Services</strong></span> property.
				</div></li><li class="listitem"><div class="para">
					Create the Apache HTTP Server service.
				</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
							Click <span class="guibutton"><strong>Create a Service</strong></span>. Type a <span class="guimenu"><strong>Name</strong></span> for the service in the <span class="guilabel"><strong>Add a Service</strong></span> dialog.
						</div></li><li class="listitem"><div class="para">
							In the <span class="guilabel"><strong>Service Management</strong></span> dialog, select a <span class="guimenu"><strong>Failover Domain</strong></span> from the drop-down menu or leave it as <span class="guimenu"><strong>None</strong></span>.
						</div></li><li class="listitem"><div class="para">
							Click the <span class="guibutton"><strong>Add a Shared Resource to this service</strong></span> button. From the available list, choose each resource that you created in the previous steps. Repeat this step until all resources have been added.
						</div></li><li class="listitem"><div class="para">
							Click <span class="guibutton"><strong>OK</strong></span>.
						</div></li></ul></div></li><li class="listitem"><div class="para">
					Choose <span class="guimenu"><strong>File</strong></span> =&gt; <span class="guimenuitem"><strong>Save</strong></span> to save your changes.
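				</div><div class="para">
					After the configuration is saved, the service definition written to <code class="filename">/etc/cluster/cluster.conf</code> should resemble the following sketch. The node names, IP address, and device shown here are hypothetical examples; the actual values reflect your own configuration.
				</div><pre class="screen">
&lt;rm&gt;
  &lt;failoverdomains&gt;
    &lt;failoverdomain name="httpd-domain" ordered="0" restricted="0"&gt;
      &lt;failoverdomainnode name="node1.example.com" priority="1"/&gt;
      &lt;failoverdomainnode name="node2.example.com" priority="1"/&gt;
    &lt;/failoverdomain&gt;
  &lt;/failoverdomains&gt;
  &lt;resources&gt;
    &lt;script file="/etc/rc.d/init.d/httpd" name="httpd-script"/&gt;
    &lt;fs device="/dev/sda3" fstype="ext3" mountpoint="/var/www/html/" name="httpd-content"/&gt;
    &lt;ip address="10.10.10.100" monitor_link="1"/&gt;
  &lt;/resources&gt;
  &lt;service autostart="1" domain="httpd-domain" name="httpd-service"&gt;
    &lt;fs ref="httpd-content"/&gt;
    &lt;ip ref="10.10.10.100"/&gt;
    &lt;script ref="httpd-script"/&gt;
  &lt;/service&gt;
&lt;/rm&gt;
</pre><div class="para">
					Verify that the updated configuration file has propagated to all cluster nodes before starting the service.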
				</div></li></ol></div></div></div><div xml:lang="fr-FR" class="appendix" id="ap-fence-device-param-CA" lang="fr-FR"><div class="titlepage"><div><div><h1 class="title">Fence Device Parameters</h1></div></div></div><a id="id832126" class="indexterm"></a><div class="para">
		This appendix provides tables with parameter descriptions of fence devices.
	</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
			The <span class="guimenu"><strong>Name</strong></span> parameter for a fence device specifies an arbitrary name for the device that will be used by Red Hat Cluster Suite. This is not the same as the DNS name for the device.
		</div></div></div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
			Certain fence devices have an optional <span class="guimenu"><strong>Password Script</strong></span> parameter. The <span class="guimenu"><strong>Password Script</strong></span> parameter allows specifying that a fence-device password is supplied from a script rather than from the <span class="guimenu"><strong>Password</strong></span> parameter. Using the <span class="guimenu"><strong>Password Script</strong></span> parameter supersedes the <span class="guimenu"><strong>Password</strong></span> parameter, allowing passwords to not be visible in the cluster configuration file (<code class="filename">/etc/cluster/cluster.conf</code>).
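			</div><div class="para">
				For example, a password script is typically a small executable that prints the password on standard output; the fence agent runs the script and reads the password from its output. A minimal sketch, using a hypothetical path, might look like the following:
			</div><pre class="screen">
#!/bin/sh
# /root/bin/fence-passwd.sh -- hypothetical example; keep it readable by root only
echo "fence-device-password"
</pre><div class="para">
				Specify the path to such a script in the <span class="guimenu"><strong>Password Script</strong></span> field so that the password itself never appears in <code class="filename">/etc/cluster/cluster.conf</code>.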
		</div></div></div><a id="id803754" class="indexterm"></a><a id="id785279" class="indexterm"></a><a id="id783386" class="indexterm"></a><div class="table" id="tb-software-fence-apc-CA"><h6>Tableau B.1. APC Power Switch</h6><div class="table-contents"><table summary="APC Power Switch" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name for the APC device connected to the cluster.
					</td></tr><tr><td>
						IP Address
					</td><td>
						The IP address assigned to the device.
					</td></tr><tr><td>
						Login
					</td><td>
						The login name used to access the device.
					</td></tr><tr><td>
						Password
					</td><td>
						The password used to authenticate the connection to the device.
					</td></tr><tr><td>
						Password Script (optional)
					</td><td>
						The script that supplies a password for access to the fence device. Using this supersedes the <span class="guimenu"><strong>Password</strong></span> parameter.
					</td></tr><tr><td>
						Port
					</td><td>
						The port.
					</td></tr><tr><td>
						Switch (optional)
					</td><td>
						The switch number for the APC switch that connects to the node when you have multiple daisy-chained switches.
					</td></tr><tr><td>
						Use SSH
					</td><td>
						(Red Hat Enterprise Linux 5.4 and later) Indicates that the system will use SSH to access the device.
					</td></tr><tr><td>
						<code class="command">fence_apc</code>
					</td><td>
						The fence agent for APC.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-apc-snmp-CA"><h6>Tableau B.2. APC Power Switch over SNMP (Red Hat Enterprise Linux 5.2 and later)</h6><div class="table-contents"><table summary="APC Power Switch over SNMP (Red Hat Enterprise Linux 5.2 and later)" border="1"><colgroup><col width="23%" class="Field" /><col width="77%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name for the APC device connected to the cluster, into which the fence daemon logs via the SNMP protocol.
					</td></tr><tr><td>
						IP Address
					</td><td>
						The IP address or hostname assigned to the device.
					</td></tr><tr><td>
						UDP/TCP port
					</td><td>
						The UDP/TCP port to use for connection with the device; the default value is 161.
					</td></tr><tr><td>
						Login
					</td><td>
						The login name used to access the device.
					</td></tr><tr><td>
						Password
					</td><td>
						The password used to authenticate the connection to the device.
					</td></tr><tr><td>
						Password Script (optional)
					</td><td>
						The script that supplies a password for access to the fence device. Using this supersedes the <span class="guimenu"><strong>Password</strong></span> parameter.
					</td></tr><tr><td>
						Port
					</td><td>
						The port.
					</td></tr><tr><td>
						Switch (optional)
					</td><td>
						The switch number for the APC switch that connects to the node when you have multiple daisy-chained switches.
					</td></tr><tr><td>
						SNMP version
					</td><td>
						The SNMP version to use (1, 2c, 3); the default value is 1.
					</td></tr><tr><td>
						SNMP community
					</td><td>
						The SNMP community string; the default value is <code class="literal">private</code>.
					</td></tr><tr><td>
						SNMP security level
					</td><td>
						The SNMP security level (noAuthNoPriv, authNoPriv, authPriv).
					</td></tr><tr><td>
						SNMP authentication protocol
					</td><td>
						The SNMP authentication protocol (MD5, SHA).
					</td></tr><tr><td>
						SNMP privacy protocol
					</td><td>
						The SNMP privacy protocol (DES, AES).
					</td></tr><tr><td>
						SNMP privacy protocol password
					</td><td>
						The SNMP privacy protocol password.
					</td></tr><tr><td>
						SNMP privacy protocol script
					</td><td>
						The script that supplies a password for SNMP privacy protocol. Using this supersedes the <span class="guimenu"><strong>SNMP privacy protocol password</strong></span> parameter.
					</td></tr><tr><td>
						Power wait
					</td><td>
						Number of seconds to wait after issuing a power off or power on command.
					</td></tr><tr><td>
						<code class="command">fence_apc_snmp</code>
					</td><td>
						The fence agent for APC that logs into the device via the SNMP protocol.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-brocade-CA"><h6>Tableau B.3. Brocade Fabric Switch</h6><div class="table-contents"><table summary="Brocade Fabric Switch" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name for the Brocade device connected to the cluster.
					</td></tr><tr><td>
						IP Address
					</td><td>
						The IP address assigned to the device.
					</td></tr><tr><td>
						Login
					</td><td>
						The login name used to access the device.
					</td></tr><tr><td>
						Password
					</td><td>
						The password used to authenticate the connection to the device.
					</td></tr><tr><td>
						Password Script (optional)
					</td><td>
						The script that supplies a password for access to the fence device. Using this supersedes the <span class="guimenu"><strong>Password</strong></span> parameter.
					</td></tr><tr><td>
						Port
					</td><td>
						The switch outlet number.
					</td></tr><tr><td>
						<code class="command">fence_brocade</code>
					</td><td>
						The fence agent for Brocade FC switches.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-bullpap-CA"><h6>Tableau B.4. Bull PAP (Platform Administration Processor)</h6><div class="table-contents"><table summary="Bull PAP (Platform Administration Processor)" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name for the Bull PAP system connected to the cluster.
					</td></tr><tr><td>
						IP Address
					</td><td>
						The IP address assigned to the PAP console.
					</td></tr><tr><td>
						Login
					</td><td>
						The login name used to access the PAP console.
					</td></tr><tr><td>
						Password
					</td><td>
						The password used to authenticate the connection to the PAP console.
					</td></tr><tr><td>
						Password Script (optional)
					</td><td>
						The script that supplies a password for access to the fence device. Using this supersedes the <span class="guimenu"><strong>Password</strong></span> parameter.
					</td></tr><tr><td>
						Domain
					</td><td>
						Domain of the Bull PAP system to power cycle.
					</td></tr><tr><td>
						<code class="command">fence_bullpap</code>
					</td><td>
						The fence agent for Bull’s NovaScale machines controlled by PAP management consoles.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-ciscomds-CA"><h6>Tableau B.5. Cisco MDS (Red Hat Enterprise Linux 5.4 and later)</h6><div class="table-contents"><table summary="Cisco MDS (Red Hat Enterprise Linux 5.4 and later)" border="1"><colgroup><col width="23%" class="Field" /><col width="77%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name for the Cisco MDS 9000 series device with SNMP enabled.
					</td></tr><tr><td>
						IP Address
					</td><td>
						The IP address or hostname assigned to the device.
					</td></tr><tr><td>
						Login
					</td><td>
						The login name used to access the device.
					</td></tr><tr><td>
						Password
					</td><td>
						The password used to authenticate the connection to the device.
					</td></tr><tr><td>
						Password Script (optional)
					</td><td>
						The script that supplies a password for access to the fence device. Using this supersedes the <span class="guimenu"><strong>Password</strong></span> parameter.
					</td></tr><tr><td>
						Port
					</td><td>
						The port.
					</td></tr><tr><td>
						SNMP version
					</td><td>
						The SNMP version to use (1, 2c, 3).
					</td></tr><tr><td>
						SNMP community
					</td><td>
						The SNMP community string.
					</td></tr><tr><td>
						SNMP authentication protocol
					</td><td>
						The SNMP authentication protocol (MD5, SHA).
					</td></tr><tr><td>
						SNMP security level
					</td><td>
						The SNMP security level (noAuthNoPriv, authNoPriv, authPriv).
					</td></tr><tr><td>
						SNMP privacy protocol
					</td><td>
						The SNMP privacy protocol (DES, AES).
					</td></tr><tr><td>
						SNMP privacy protocol password
					</td><td>
						The SNMP privacy protocol password.
					</td></tr><tr><td>
						SNMP privacy protocol script
					</td><td>
						The script that supplies a password for SNMP privacy protocol. Using this supersedes the <span class="guimenu"><strong>SNMP privacy protocol password</strong></span> parameter.
					</td></tr><tr><td>
						Power wait
					</td><td>
						Number of seconds to wait after issuing a power off or power on command.
					</td></tr><tr><td>
						<code class="command">fence_cisco_mds</code>
					</td><td>
						The fence agent for Cisco MDS.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-ciscoucs-CA"><h6>Tableau B.6. Cisco UCS (Red Hat Enterprise Linux 5.6 and later)</h6><div class="table-contents"><table summary="Cisco UCS (Red Hat Enterprise Linux 5.6 and later)" border="1"><colgroup><col width="23%" class="Field" /><col width="77%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name for the Cisco UCS device.
					</td></tr><tr><td>
						IP Address
					</td><td>
						The IP address or hostname assigned to the device.
					</td></tr><tr><td>
						Login
					</td><td>
						The login name used to access the device.
					</td></tr><tr><td>
						Password
					</td><td>
						The password used to authenticate the connection to the device.
					</td></tr><tr><td>
						Password Script (optional)
					</td><td>
						The script that supplies a password for access to the fence device. Using this supersedes the <span class="guimenu"><strong>Password</strong></span> parameter.
					</td></tr><tr><td>
						SSL
					</td><td>
						The SSL connection.
					</td></tr><tr><td>
						IP port (optional)
					</td><td>
						The TCP port to use to connect to the device.
					</td></tr><tr><td>
						Port
					</td><td>
						Name of virtual machine.
					</td></tr><tr><td>
						Power wait
					</td><td>
						Number of seconds to wait after issuing a power off or power on command.
					</td></tr><tr><td>
						Power timeout
					</td><td>
						Number of seconds to test for a status change after issuing a power off or power on command.
					</td></tr><tr><td>
						Shell timeout
					</td><td>
						Number of seconds to wait for a command prompt after issuing a command.
					</td></tr><tr><td>
						Retry on
					</td><td>
						Number of attempts to retry power on.
					</td></tr><tr><td>
						<code class="command">fence_cisco_ucs</code>
					</td><td>
						The fence agent for Cisco UCS.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-dracmc-CA"><h6>Tableau B.7. Dell DRAC</h6><div class="table-contents"><table summary="Dell DRAC" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						The name assigned to the DRAC.
					</td></tr><tr><td>
						IP Address
					</td><td>
						The IP address assigned to the DRAC.
					</td></tr><tr><td>
						Login
					</td><td>
						The login name used to access the DRAC.
					</td></tr><tr><td>
						Password
					</td><td>
						The password used to authenticate the connection to the DRAC.
					</td></tr><tr><td>
						Module name
					</td><td>
						(optional) The module name for the DRAC when you have multiple DRAC modules.
					</td></tr><tr><td>
						Password Script (optional)
					</td><td>
						The script that supplies a password for access to the fence device. Using this supersedes the <span class="guimenu"><strong>Password</strong></span> parameter.
					</td></tr><tr><td>
						Use SSH (DRAC5 only)
					</td><td>
						(Red Hat Enterprise Linux 5.4 and later) Indicates that the system will use SSH to access the device.
					</td></tr><tr><td>
						<code class="command">fence_drac</code>
					</td><td>
						The fence agent for Dell Remote Access Card (DRAC).
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-egen-CA"><h6>Tableau B.8. Egenera SAN Controller</h6><div class="table-contents"><table summary="Egenera SAN Controller" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name for the BladeFrame device connected to the cluster.
					</td></tr><tr><td>
						CServer
					</td><td>
						The hostname (and optionally the username in the form of <strong class="userinput"><code>username@hostname</code></strong>) assigned to the device. Refer to the <span class="citerefentry"><span class="refentrytitle">fence_egenera</span>(8)</span> man page for more information.
					</td></tr><tr><td>
						ESH Path (optional)
					</td><td>
						The path to the esh command on the cserver (the default is /opt/panmgr/bin/esh).
					</td></tr><tr><td>
						lpan
					</td><td>
						The logical process area network (LPAN) of the device.
					</td></tr><tr><td>
						pserver
					</td><td>
						The processing blade (pserver) name of the device.
					</td></tr><tr><td>
						<code class="command">fence_egenera</code>
					</td><td>
						The fence agent for the Egenera BladeFrame.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-RSB-CA"><h6>Tableau B.9. Fujitsu Siemens Remoteview Service Board (RSB)</h6><div class="table-contents"><table summary="Fujitsu Siemens Remoteview Service Board (RSB)" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name for the RSB to use as a fence device.
					</td></tr><tr><td>
						Hostname
					</td><td>
						The hostname assigned to the device.
					</td></tr><tr><td>
						Login
					</td><td>
						The login name used to access the device.
					</td></tr><tr><td>
						Password
					</td><td>
						The password used to authenticate the connection to the device.
					</td></tr><tr><td>
						Password Script (optional)
					</td><td>
						The script that supplies a password for access to the fence device. Using this supersedes the <span class="guimenu"><strong>Password</strong></span> parameter.
					</td></tr><tr><td>
						<code class="command">fence_rsb</code>
					</td><td>
						The fence agent for Fujitsu-Siemens RSB.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-gnbd-CA"><h6>Tableau B.10. GNBD (Global Network Block Device)</h6><div class="table-contents"><table summary="GNBD (Global Network Block Device)" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name for the GNBD device used to fence the cluster. Note that the GFS server must be accessed via GNBD for cluster node fencing support.
					</td></tr><tr><td>
						Server
					</td><td>
						The hostname of the server to fence the client from, in either IP address or hostname form. For multiple hostnames, separate each hostname with a space.
					</td></tr><tr><td>
						IP address
					</td><td>
						The cluster name of the node to be fenced. Refer to the <code class="command">fence_gnbd</code>(8) man page for more information.
					</td></tr><tr><td>
						<code class="command">fence_gnbd</code>
					</td><td>
						The fence agent for GNBD-based GFS clusters.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-hpilo-CA"><h6>Tableau B.11. HP iLO (Integrated Lights Out)</h6><div class="table-contents"><table summary="HP iLO (Integrated Lights Out)" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name for the server with HP iLO support.
					</td></tr><tr><td>
						Hostname
					</td><td>
						The hostname assigned to the device.
					</td></tr><tr><td>
						Login
					</td><td>
						The login name used to access the device.
					</td></tr><tr><td>
						Password
					</td><td>
						The password used to authenticate the connection to the device.
					</td></tr><tr><td>
						Password Script (optional)
					</td><td>
						The script that supplies a password for access to the fence device. Using this supersedes the <span class="guimenu"><strong>Password</strong></span> parameter.
					</td></tr><tr><td>
						Use SSH
					</td><td>
						(Red Hat Enterprise Linux 5.4 and later) Indicates that the system will use SSH to access the device.
					</td></tr><tr><td>
						<code class="command">fence_ilo</code>
					</td><td>
						The fence agent for HP servers with the Integrated Light Out (iLO) PCI card.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-hpilo-mp-CA"><h6>Tableau B.12. HP iLO (Integrated Lights Out) MP (Red Hat Enterprise Linux 5.5 and later)</h6><div class="table-contents"><table summary="HP iLO (Integrated Lights Out) MP (Red Hat Enterprise Linux 5.5 and later)" border="1"><colgroup><col width="23%" class="Field" /><col width="77%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name for the server with HP iLO support.
					</td></tr><tr><td>
						Hostname
					</td><td>
						The hostname assigned to the device.
					</td></tr><tr><td>
						IP port (optional)
					</td><td>
						TCP port to use for connection with the device.
					</td></tr><tr><td>
						Login
					</td><td>
						The login name used to access the device.
					</td></tr><tr><td>
						Password
					</td><td>
						The password used to authenticate the connection to the device.
					</td></tr><tr><td>
						Password Script (optional)
					</td><td>
						The script that supplies a password for access to the fence device. Using this supersedes the <span class="guimenu"><strong>Password</strong></span> parameter.
					</td></tr><tr><td>
						SSH
					</td><td>
						(Red Hat Enterprise Linux 5.4 and later) Indicates that the system will use SSH to access the device.
					</td></tr><tr><td>
						Path to the SSH identity file
					</td><td>
						The identity file for SSH.
					</td></tr><tr><td>
						Force command prompt
					</td><td>
						The command prompt to use. The default value is ’MP&gt;’, ’hpiLO-&gt;’.
					</td></tr><tr><td>
						Power wait
					</td><td>
						Number of seconds to wait after issuing a power off or power on command.
					</td></tr><tr><td>
						<code class="command">fence_ilo_mp</code>
					</td><td>
						The fence agent for HP iLO MP devices.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-bladectr-CA"><h6>Tableau B.13. IBM Blade Center</h6><div class="table-contents"><table summary="IBM Blade Center" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name for the IBM BladeCenter device connected to the cluster.
					</td></tr><tr><td>
						IP Address
					</td><td>
						The IP address assigned to the device.
					</td></tr><tr><td>
						Login
					</td><td>
						The login name used to access the device.
					</td></tr><tr><td>
						Password
					</td><td>
						The password used to authenticate the connection to the device.
					</td></tr><tr><td>
						Password Script (optional)
					</td><td>
						The script that supplies a password for access to the fence device. Using this supersedes the <span class="guimenu"><strong>Password</strong></span> parameter.
					</td></tr><tr><td>
						Blade
					</td><td>
						The blade of the device.
					</td></tr><tr><td>
						Use SSH
					</td><td>
						(Red Hat Enterprise Linux 5.4 and later) Indicates that the system will use SSH to access the device.
					</td></tr><tr><td>
						<code class="command">fence_bladecenter</code>
					</td><td>
						The fence agent for IBM BladeCenter.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-rsaII-CA"><h6>Tableau B.14. IBM Remote Supervisor Adapter II (RSA II)</h6><div class="table-contents"><table summary="IBM Remote Supervisor Adapter II (RSA II)" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name for the RSA device connected to the cluster.
					</td></tr><tr><td>
						Hostname
					</td><td>
						The hostname assigned to the device.
					</td></tr><tr><td>
						Login
					</td><td>
						The login name used to access the device.
					</td></tr><tr><td>
						Password
					</td><td>
						The password used to authenticate the connection to the device.
					</td></tr><tr><td>
						Password Script (optional)
					</td><td>
						The script that supplies a password for access to the fence device. Using this supersedes the <span class="guimenu"><strong>Password</strong></span> parameter.
					</td></tr><tr><td>
						<code class="command">fence_rsa</code>
					</td><td>
						The fence agent for the IBM RSA II management interface.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-ifmib-CA"><h6>Tableau B.15. IF MIB (Red Hat Enterprise Linux 5.6 and later)</h6><div class="table-contents"><table summary="IF MIB (Red Hat Enterprise Linux 5.6 and later)" border="1"><colgroup><col width="23%" class="Field" /><col width="77%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name for the IF MIB device connected to the cluster.
					</td></tr><tr><td>
						IP Address
					</td><td>
						The IP address or hostname assigned to the device.
					</td></tr><tr><td>
						UDP/TCP port (optional)
					</td><td>
						The UDP/TCP port to use for connection with the device; the default value is 161.
					</td></tr><tr><td>
						Login
					</td><td>
						The login name used to access the device.
					</td></tr><tr><td>
						Password
					</td><td>
						The password used to authenticate the connection to the device.
					</td></tr><tr><td>
						Password Script (optional)
					</td><td>
						The script that supplies a password for access to the fence device. Using this supersedes the <span class="guimenu"><strong>Password</strong></span> parameter.
					</td></tr><tr><td>
						SNMP version
					</td><td>
						The SNMP version to use (1, 2c, 3); the default value is 1.
					</td></tr><tr><td>
						SNMP community
					</td><td>
						The SNMP community string.
					</td></tr><tr><td>
						SNMP security level
					</td><td>
						The SNMP security level (noAuthNoPriv, authNoPriv, authPriv).
					</td></tr><tr><td>
						SNMP authentication protocol
					</td><td>
						The SNMP authentication protocol (MD5, SHA).
					</td></tr><tr><td>
						SNMP privacy protocol
					</td><td>
						The SNMP privacy protocol (DES, AES).
					</td></tr><tr><td>
						SNMP privacy protocol password
					</td><td>
						The SNMP privacy protocol password.
					</td></tr><tr><td>
						SNMP privacy protocol script
					</td><td>
						The script that supplies a password for SNMP privacy protocol. Using this supersedes the <span class="guimenu"><strong>SNMP privacy protocol password</strong></span> parameter.
					</td></tr><tr><td>
						Power timeout
					</td><td>
						Number of seconds to test for a status change after issuing a power off or power on command.
					</td></tr><tr><td>
						Shell timeout
					</td><td>
						Number of seconds to wait for a command prompt after issuing a command.
					</td></tr><tr><td>
						Login timeout
					</td><td>
						Number of seconds to wait for a command prompt after login.
					</td></tr><tr><td>
						Power wait
					</td><td>
						Number of seconds to wait after issuing a power off or power on command.
					</td></tr><tr><td>
						Retry on
					</td><td>
						Number of attempts to retry power on.
					</td></tr><tr><td>
						Port
					</td><td>
						Physical plug number or name of virtual machine.
					</td></tr><tr><td>
						<code class="command">fence_ifmib</code>
					</td><td>
						The fence agent for IF-MIB devices.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-ipmi-CA"><h6>Tableau B.16. IPMI (Intelligent Platform Management Interface) LAN</h6><div class="table-contents"><table summary="IPMI (Intelligent Platform Management Interface) LAN" border="1"><colgroup><col width="29%" class="Field" /><col width="71%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name for the IPMI LAN device connected to the cluster.
					</td></tr><tr><td>
						IP Address
					</td><td>
						The IP address assigned to the IPMI port.
					</td></tr><tr><td>
						Login
					</td><td>
						The login name of a user capable of issuing power on/off commands to the given IPMI port.
					</td></tr><tr><td>
						Password
					</td><td>
						The password used to authenticate the connection to the IPMI port.
					</td></tr><tr><td>
						Privilege level
					</td><td>
						The privilege level on the IPMI device.
					</td></tr><tr><td>
						Password Script (optional)
					</td><td>
						The script that supplies a password for access to the fence device. Using this supersedes the <span class="guimenu"><strong>Password</strong></span> parameter.
					</td></tr><tr><td>
						Authentication Type
					</td><td>
						<code class="option">none</code>, <code class="option">password</code>, <code class="option">md2</code>, or <code class="option">md5</code>.
					</td></tr><tr><td>
						Use Lanplus
					</td><td>
						<code class="option">True</code> or <code class="option">1</code>. If blank, then value is <code class="option">False</code>.
					</td></tr><tr><td>
						<code class="command">fence_ipmilan</code>
					</td><td>
						The fence agent for machines controlled by IPMI.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-manual-CA"><h6>Tableau B.17. Manual Fencing</h6><div class="table-contents"><table summary="Manual Fencing" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name to assign to the Manual fencing agent. Refer to the <code class="command">fence_manual</code>(8) man page for more information.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="warning"><div class="admonition_header"><h2>Warning</h2></div><div class="admonition"><div class="para">
			Manual fencing is <span class="emphasis"><em>not</em></span> supported for production environments.
		</div></div></div><div class="table" id="tb-software-fence-mcdata-CA"><h6>Tableau B.18. McData SAN Switch</h6><div class="table-contents"><table summary="McData SAN Switch" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name for the McData device connected to the cluster.
					</td></tr><tr><td>
						IP Address
					</td><td>
						The IP address assigned to the device.
					</td></tr><tr><td>
						Login
					</td><td>
						The login name used to access the device.
					</td></tr><tr><td>
						Password
					</td><td>
						The password used to authenticate the connection to the device.
					</td></tr><tr><td>
						Password Script (optional)
					</td><td>
						The script that supplies a password for access to the fence device. Using this supersedes the <span class="guimenu"><strong>Password</strong></span> parameter.
					</td></tr><tr><td>
						Port
					</td><td>
						The switch outlet number.
					</td></tr><tr><td>
						<code class="command">fence_mcdata</code>
					</td><td>
						The fence agent for McData FC switches.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-sanbox-CA"><h6>Tableau B.19. QLogic SANBox2 Switch</h6><div class="table-contents"><table summary="QLogic SANBox2 Switch" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name for the SANBox2 device connected to the cluster.
					</td></tr><tr><td>
						IP Address
					</td><td>
						The IP address assigned to the device.
					</td></tr><tr><td>
						Login
					</td><td>
						The login name used to access the device.
					</td></tr><tr><td>
						Password
					</td><td>
						The password used to authenticate the connection to the device.
					</td></tr><tr><td>
						Password Script (optional)
					</td><td>
						The script that supplies a password for access to the fence device. Using this supersedes the <span class="guimenu"><strong>Password</strong></span> parameter.
					</td></tr><tr><td>
						Port
					</td><td>
						The switch outlet number.
					</td></tr><tr><td>
						<code class="command">fence_sanbox2</code>
					</td><td>
						The fence agent for QLogic SANBox2 FC switches.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-rhevm-CA"><h6>Tableau B.20. RHEV-M REST API (RHEL 5.8 and later against RHEV 3.0 and later)</h6><div class="table-contents"><table summary="RHEV-M REST API (RHEL 5.8 and later against RHEV 3.0 and later)" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						Name of the RHEV-M REST API fencing device.
					</td></tr><tr><td>
						Hostname
					</td><td>
						The IP address or hostname assigned to the device.
					</td></tr><tr><td>
						IP Port
					</td><td>
						The TCP port to use for connection with the device.
					</td></tr><tr><td>
						Login
					</td><td>
						The login name used to access the device.
					</td></tr><tr><td>
						Password
					</td><td>
						The password used to authenticate the connection to the device.
					</td></tr><tr><td>
						Password Script (optional)
					</td><td>
						The script that supplies a password for access to the fence device. Using this supersedes the <span class="guimenu"><strong>Password</strong></span> parameter.
					</td></tr><tr><td>
						Separator
					</td><td>
						Separator for the CSV output created by the list operation. The default value is a comma (,).
					</td></tr><tr><td>
						Use SSL connections
					</td><td>
						Use SSL connections to communicate with the device.
					</td></tr><tr><td>
						Power wait
					</td><td>
						Number of seconds to wait after issuing a power off or power on command.
					</td></tr><tr><td>
						Port
					</td><td>
						Physical plug number or name of virtual machine.
					</td></tr><tr><td>
						Power timeout
					</td><td>
						Number of seconds to test for a status change after issuing a power off or power on command.
					</td></tr><tr><td>
						Shell timeout
					</td><td>
						Number of seconds to wait for a command prompt after issuing a command.
					</td></tr><tr><td>
						<code class="command">fence_rhevm</code>
					</td><td>
						The fence agent for RHEV-M REST API.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-wti-rps10-CA"><h6>Tableau B.21. RPS-10 Power Switch (two-node clusters only)</h6><div class="table-contents"><table summary="RPS-10 Power Switch (two-node clusters only)" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name for the WTI RPS-10 power switch connected to the cluster.
					</td></tr><tr><td>
						Device Name
					</td><td>
						The device name of the device the switch is connected to on the controlling host (for example, <code class="filename">/dev/ttyS2</code>).
					</td></tr><tr><td>
						Port
					</td><td>
						The switch outlet number.
					</td></tr><tr><td>
						<code class="command">fence_wti</code>
					</td><td>
						The fence agent for the WTI Network Power Switch.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-scsi-CA"><h6>Tableau B.22. SCSI Fencing</h6><div class="table-contents"><table summary="SCSI Fencing" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name for the SCSI fence device.
					</td></tr><tr><td>
						Node name
					</td><td>
						Name of the node to be fenced. Refer to the <code class="command">fence_scsi</code>(8) man page for more information.
					</td></tr><tr><td>
						<code class="command">fence_scsi</code>
					</td><td>
						The fence agent for SCSI persistent reservations.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
			Use of SCSI persistent reservations as a fence method is supported with the following limitations:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					As of Red Hat Enterprise Linux 5.5 and fully-updated releases of Red Hat Enterprise Linux 5.4, SCSI fencing can be used in a 2-node cluster; previous releases did not support this feature.
				</div></li><li class="listitem"><div class="para">
					When using SCSI fencing, all nodes in the cluster must register with the same devices so that each node can remove another node's registration key from all the devices it is registered with.
				</div></li><li class="listitem"><div class="para">
					Devices used for the cluster volumes should be a complete LUN, not partitions. SCSI persistent reservations work on an entire LUN, meaning that access is controlled to each LUN, not individual partitions.
				</div></li><li class="listitem"><div class="para">
					As of Red Hat Enterprise Linux 5.5 and fully-updated releases of Red Hat Enterprise Linux 5.4, SCSI fencing can be used in conjunction with qdisk; previous releases did not support this feature. You cannot use <code class="literal">fence_scsi</code> on the LUN where <code class="literal">qdiskd</code> resides; it must be a raw LUN or raw partition of a LUN.
				</div></li></ul></div></div></div><div class="table" id="tb-software-fence-virtual-CA"><h6>Tableau B.23. Virtual Machine Fencing</h6><div class="table-contents"><table summary="Virtual Machine Fencing" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						Name of the virtual machine fencing device.
					</td></tr><tr><td>
						Domain
					</td><td>
						Unique domain name of the guest to be fenced.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-vmware-soap-CA"><h6>Tableau B.24. VMware (SOAP Interface) (Red Hat Enterprise Linux 5.7 and later)</h6><div class="table-contents"><table summary="VMware (SOAP Interface) (Red Hat Enterprise Linux 5.7 and later)" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						Name of the virtual machine fencing device.
					</td></tr><tr><td>
						Hostname
					</td><td>
						The IP address or hostname assigned to the device.
					</td></tr><tr><td>
						IP Port
					</td><td>
						The TCP port to use for connection with the device.
					</td></tr><tr><td>
						Login
					</td><td>
						The login name used to access the device.
					</td></tr><tr><td>
						Password
					</td><td>
						The password used to authenticate the connection to the device.
					</td></tr><tr><td>
						Password Script (optional)
					</td><td>
						The script that supplies a password for access to the fence device. Using this supersedes the <span class="guimenu"><strong>Password</strong></span> parameter.
					</td></tr><tr><td>
						Use SSL connections
					</td><td>
						Use SSL connections to communicate with the device.
					</td></tr><tr><td>
						Power wait
					</td><td>
						Number of seconds to wait after issuing a power off or power on command.
					</td></tr><tr><td>
						Virtual machine name
					</td><td>
						Name of virtual machine in inventory path format (e.g., /datacenter/vm/Discovered_virtual_machine/myMachine).
					</td></tr><tr><td>
						Virtual machine UUID
					</td><td>
						The UUID of the virtual machine to fence.
					</td></tr><tr><td>
						<code class="command">fence_vmware_soap</code>
					</td><td>
						The fence agent for VMware over the SOAP API.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-vixel-CA"><h6>Tableau B.25. Vixel SAN Switch</h6><div class="table-contents"><table summary="Vixel SAN Switch" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name for the Vixel switch connected to the cluster.
					</td></tr><tr><td>
						IP Address
					</td><td>
						The IP address assigned to the device.
					</td></tr><tr><td>
						Password
					</td><td>
						The password used to authenticate the connection to the device.
					</td></tr><tr><td>
						Password Script (optional)
					</td><td>
						The script that supplies a password for access to the fence device. Using this supersedes the <span class="guimenu"><strong>Password</strong></span> parameter.
					</td></tr><tr><td>
						Port
					</td><td>
						The switch outlet number.
					</td></tr><tr><td>
						<code class="command">fence_vixel</code>
					</td><td>
						The fence agent for Vixel switches.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-software-fence-wti-CA"><h6>Tableau B.26. WTI Power Switch</h6><div class="table-contents"><table summary="WTI Power Switch" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A name for the WTI power switch connected to the cluster.
					</td></tr><tr><td>
						IP Address
					</td><td>
						The IP address assigned to the device.
					</td></tr><tr><td>
						Password
					</td><td>
						The password used to authenticate the connection to the device.
					</td></tr><tr><td>
						Password Script (optional)
					</td><td>
						The script that supplies a password for access to the fence device. Using this supersedes the <span class="guimenu"><strong>Password</strong></span> parameter.
					</td></tr><tr><td>
						Port
					</td><td>
						The switch outlet number.
					</td></tr><tr><td>
						Use SSH
					</td><td>
						(Red Hat Enterprise Linux 5.4 and later) Indicates that the system will use SSH to access the device.
					</td></tr><tr><td>
						<code class="command">fence_wti</code>
					</td><td>
						The fence agent for the WTI network power switch.
					</td></tr></tbody></table></div></div><br class="table-break" /></div><div xml:lang="fr-FR" class="appendix" id="ap-ha-resource-params-CA" lang="fr-FR"><div class="titlepage"><div><div><h1 class="title">HA Resource Parameters</h1></div></div></div><a id="id857316" class="indexterm"></a><div class="para">
		This appendix provides descriptions of HA resource parameters. You can configure the parameters with <span class="application"><strong>Luci</strong></span>, <code class="command">system-config-cluster</code>, or by editing <code class="filename">/etc/cluster/cluster.conf</code>. <a class="xref" href="#tb-resource-agent-summary-CA">Tableau C.1, « HA Resource Summary »</a> lists the resources, their corresponding resource agents, and references to other tables containing parameter descriptions. To understand resource agents in more detail, you can view them in <code class="filename">/usr/share/cluster</code> on any cluster node.
	</div><div class="para">
		For a comprehensive list and description of <code class="filename">cluster.conf</code> elements and attributes, refer to the cluster schema at <code class="filename">/usr/share/system-config-cluster/misc/cluster.ng</code>, and the annotated schema at <code class="filename">/usr/share/doc/system-config-cluster-X.Y.ZZ/cluster_conf.html</code> (for example <code class="filename">/usr/share/doc/system-config-cluster-1.0.57/cluster_conf.html</code>).
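	</div><div class="para">
		As an illustration of how the fields in the following tables map onto <code class="filename">cluster.conf</code>, a File System resource edited directly into the configuration file might resemble the following sketch (the device and mount point are hypothetical examples):
	</div><pre class="screen">
&lt;resources&gt;
    &lt;fs name="httpd-content" device="/dev/sda3" mountpoint="/var/www/html"
        fstype="ext3" force_unmount="1" self_fence="0"/&gt;
&lt;/resources&gt;
</pre><div class="para">
		Each attribute corresponds to a field in the table for that resource type; fields left unset take the defaults noted in the tables.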
	</div><div class="table" id="tb-resource-agent-summary-CA"><h6>Tableau C.1. HA Resource Summary</h6><div class="table-contents"><table summary="HA Resource Summary" border="1"><colgroup><col width="17%" class="Resource" /><col width="50%" class="Resource Agent" /><col width="33%" class="Reference to Parameter Description" /></colgroup><thead><tr><th>
						Resource
					</th><th>
						Resource Agent
					</th><th>
						Reference to Parameter Description
					</th></tr></thead><tbody><tr><td>
						Apache
					</td><td>
						apache.sh
					</td><td>
						<a class="xref" href="#tb-apache-server-resource-CA">Tableau C.2, « Apache Server »</a>
					</td></tr><tr><td>
						File System
					</td><td>
						fs.sh
					</td><td>
						<a class="xref" href="#tb-fs-resource-CA">Tableau C.3, « File System »</a>
					</td></tr><tr><td>
						GFS File System
					</td><td>
						clusterfs.sh
					</td><td>
						<a class="xref" href="#tb-gfs-resource-CA">Tableau C.4, « GFS »</a>
					</td></tr><tr><td>
						IP Address
					</td><td>
						ip.sh
					</td><td>
						<a class="xref" href="#tb-ipaddress-resource-CA">Tableau C.5, « IP Address »</a>
					</td></tr><tr><td>
						LVM
					</td><td>
						lvm.sh
					</td><td>
						<a class="xref" href="#tb-lvm-resource-CA">Tableau C.6, « LVM »</a>
					</td></tr><tr><td>
						MySQL
					</td><td>
						mysql.sh
					</td><td>
						<a class="xref" href="#tb-mysql-resource-CA">Tableau C.7, « MySQL »</a>
					</td></tr><tr><td>
						NFS Client
					</td><td>
						nfsclient.sh
					</td><td>
						<a class="xref" href="#tb-nfsclient-resource-CA">Tableau C.8, « NFS Client »</a>
					</td></tr><tr><td>
						NFS Export
					</td><td>
						nfsexport.sh
					</td><td>
						<a class="xref" href="#tb-nfsexport-resource-CA">Tableau C.9, « NFS Export »</a>
					</td></tr><tr><td>
						NFS Mount
					</td><td>
						netfs.sh
					</td><td>
						<a class="xref" href="#tb-nfsmount-resource-CA">Tableau C.10, « NFS Mount »</a>
					</td></tr><tr><td>
						Open LDAP
					</td><td>
						openldap.sh
					</td><td>
						<a class="xref" href="#tb-openldap-resource-CA">Tableau C.11, « Open LDAP »</a>
					</td></tr><tr><td>
						Oracle 10g
					</td><td>
						oracledb.sh
					</td><td>
						<a class="xref" href="#tb-oracledb-resource-CA">Tableau C.12, « Oracle 10g »</a>
					</td></tr><tr><td>
						PostgreSQL 8
					</td><td>
						postgres-8.sh
					</td><td>
						<a class="xref" href="#tb-postgres-8-resource-CA">Tableau C.13, « PostgreSQL 8 »</a>
					</td></tr><tr><td>
						SAP Database
					</td><td>
						SAPDatabase
					</td><td>
						<a class="xref" href="#tb-sapdatabase-resource-CA">Tableau C.14, « SAP Database »</a>
					</td></tr><tr><td>
						SAP Instance
					</td><td>
						SAPInstance
					</td><td>
						<a class="xref" href="#tb-sapinstance-resource-CA">Tableau C.15, « SAP Instance »</a>
					</td></tr><tr><td>
						Samba
					</td><td>
						smb.sh
					</td><td>
						<a class="xref" href="#tb-sambaservice-resource-CA">Tableau C.16, « Samba Service »</a>
					</td></tr><tr><td>
						Script
					</td><td>
						script.sh
					</td><td>
						<a class="xref" href="#tb-script-resource-CA">Tableau C.17, « Script »</a>
					</td></tr><tr><td>
						Service
					</td><td>
						service.sh
					</td><td>
						<a class="xref" href="#tb-service-resource-CA">Tableau C.18, « Service »</a>
					</td></tr><tr><td>
						Sybase ASE
					</td><td>
						ASEHAagent.sh
					</td><td>
						<a class="xref" href="#tb-sybaseasa-resource-CA">Tableau C.19, « Sybase ASE Failover Instance »</a>
					</td></tr><tr><td>
						Tomcat 5
					</td><td>
						tomcat-5.sh
					</td><td>
						<a class="xref" href="#tb-tomcat-5-resource-CA">Tableau C.20, « Tomcat 5 »</a>
					</td></tr><tr><td>
						Virtual Machine
					</td><td>
						vm.sh
					</td><td>
						<a class="xref" href="#tb-vm-resource-CA">Tableau C.21, « Virtual Machine »</a> <div class="para">
							NOTE: <span class="application"><strong>Luci</strong></span> displays this as a virtual service if the host cluster can support virtual machines.
						</div>
						 
					</td></tr></tbody></table></div></div><br class="table-break" /><a id="id891516" class="indexterm"></a><div class="table" id="tb-apache-server-resource-CA"><h6>Tableau C.2. Apache Server</h6><div class="table-contents"><table summary="Apache Server" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						The name of the Apache Service.
					</td></tr><tr><td>
						Server Root
					</td><td>
						The default value is <code class="filename">/etc/httpd</code>.
					</td></tr><tr><td>
						Config File
					</td><td>
						Specifies the Apache configuration file. The default value is <code class="filename">/etc/httpd/conf</code>.
					</td></tr><tr><td>
						httpd Options
					</td><td>
						Other command line options for <code class="command">httpd</code>.
					</td></tr><tr><td>
						Shutdown Wait (seconds)
					</td><td>
						Specifies the number of seconds to wait for correct end of service shutdown.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-fs-resource-CA"><h6>Tableau C.3. File System</h6><div class="table-contents"><table summary="File System" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						Specifies a name for the file system resource.
					</td></tr><tr><td>
						File System Type
					</td><td>
						If not specified, <code class="command">mount</code> tries to determine the file system type.
					</td></tr><tr><td>
						Mount Point
					</td><td>
						Path in file system hierarchy to mount this file system.
					</td></tr><tr><td>
						Device
					</td><td>
						Specifies the device associated with the file system resource. This can be a block device, file system label, or UUID of a file system.
					</td></tr><tr><td>
						Options
					</td><td>
						Mount options; that is, options used when the file system is mounted. These may be file-system specific. Refer to the <em class="citetitle"><code class="command">mount</code>(8)</em> man page for supported mount options.
					</td></tr><tr><td>
						File System ID
					</td><td>
						<div class="note"><div class="admonition_header"><h2> Note </h2></div><div class="admonition"><div class="para">
								<em class="parameter"><code>File System ID</code></em> is used only by NFS services.
							</div></div></div>
						 <div class="para">
							When creating a new file system resource, you can leave this field blank. Leaving the field blank causes a file system ID to be assigned automatically after you commit the parameter during configuration. If you need to assign a file system ID explicitly, specify it in this field.
						</div>

					</td></tr><tr><td>
						Force unmount
					</td><td>
						If enabled, forces the file system to unmount. The default setting is <em class="parameter"><code>disabled</code></em>. <em class="parameter"><code>Force Unmount</code></em> kills all processes using the mount point to free up the mount when it tries to unmount.
					</td></tr><tr><td>
						Reboot host node if unmount fails
					</td><td>
						If enabled, reboots the node if unmounting this file system fails. The default setting is <em class="parameter"><code>disabled</code></em>.
					</td></tr><tr><td>
						Check file system before mounting
					</td><td>
						If enabled, causes <code class="command">fsck</code> to be run on the file system before mounting it. The default setting is <em class="parameter"><code>disabled</code></em>.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-gfs-resource-CA"><h6>Tableau C.4. GFS</h6><div class="table-contents"><table summary="GFS" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						The name of the file system resource.
					</td></tr><tr><td>
						Mount Point
					</td><td>
						The path to which the file system resource is mounted.
					</td></tr><tr><td>
						Device
					</td><td>
						The device file associated with the file system resource.
					</td></tr><tr><td>
						Options
					</td><td>
						Mount options.
					</td></tr><tr><td>
						File System ID
					</td><td>
						<div class="note"><div class="admonition_header"><h2> Note </h2></div><div class="admonition"><div class="para">
								<em class="parameter"><code>File System ID</code></em> is used only by NFS services.
							</div></div></div>
						 <div class="para">
							When creating a new GFS resource, you can leave this field blank. Leaving the field blank causes a file system ID to be assigned automatically after you commit the parameter during configuration. If you need to assign a file system ID explicitly, specify it in this field.
						</div>

					</td></tr><tr><td>
						Force Unmount
					</td><td>
						If enabled, forces the file system to unmount. The default setting is <em class="parameter"><code>disabled</code></em>. <em class="parameter"><code>Force Unmount</code></em> kills all processes using the mount point to free up the mount when it tries to unmount. With GFS resources, the mount point is <span class="emphasis"><em>not</em></span> unmounted at service tear-down unless <em class="parameter"><code>Force Unmount</code></em> is <span class="emphasis"><em>enabled</em></span>.
					</td></tr><tr><td>
						Reboot Host Node if Unmount Fails (self fence)
					</td><td>
						If enabled and unmounting the file system fails, the node will immediately reboot. Generally, this is used in conjunction with force-unmount support, but it is not required.
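						<div class="para">
							A hypothetical GFS resource entry, using the <code class="command">clusterfs</code> resource type listed in <a class="xref" href="#tb-resource-start-stop-CA">Tableau D.1, « Child Resource Type Start and Stop Order »</a>, might look like the following sketch; the attribute names are assumptions to verify against <code class="filename">clusterfs.sh</code>.
						</div>
						 <pre class="screen">
&lt;!-- Hypothetical example; verify attribute names against clusterfs.sh. --&gt;
&lt;clusterfs name="example_gfs" mountpoint="/mnt/gfs" device="/dev/vg_cluster/lv_gfs"
    options="" fsid="23456" force_unmount="1"/&gt;
</pre>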
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-ipaddress-resource-CA"><h6>Tableau C.5. IP Address</h6><div class="table-contents"><table summary="IP Address" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						IP Address
					</td><td>
						The IP address for the resource. This is a virtual IP address. IPv4 and IPv6 addresses are supported, as is NIC link monitoring for each IP address.
					</td></tr><tr><td>
						Monitor Link
					</td><td>
						Enabling this causes the status check to fail if the link on the NIC to which this IP address is bound is not present.
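						<div class="para">
							For example, an IP address resource with link monitoring enabled might appear in <code class="filename">/etc/cluster/cluster.conf</code> as follows. This is an illustrative sketch; the <code class="command">monitor_link</code> attribute name is an assumption to verify against <code class="filename">ip.sh</code>.
						</div>
						 <pre class="screen">
&lt;ip address="10.1.1.2" monitor_link="1"/&gt;
</pre>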
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-lvm-resource-CA"><h6>Tableau C.6. LVM</h6><div class="table-contents"><table summary="LVM" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						A unique name for this LVM resource.
					</td></tr><tr><td>
						Volume Group Name
					</td><td>
						A descriptive name of the volume group being managed.
					</td></tr><tr><td>
						Logical Volume Name (optional)
					</td><td>
						Name of the logical volume being managed. This parameter is optional if there is only one logical volume in the volume group being managed.
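						<div class="para">
							A minimal, hypothetical LVM resource entry might look like the following; the attribute names are assumptions to check against the <code class="filename">lvm.sh</code> agent in <code class="filename">/usr/share/cluster</code>.
						</div>
						 <pre class="screen">
&lt;lvm name="example_lvm" vg_name="vg_cluster" lv_name="lv_data"/&gt;
</pre>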
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-mysql-resource-CA"><h6>Tableau C.7. MySQL</h6><div class="table-contents"><table summary="MySQL" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						Specifies a name of the MySQL server resource.
					</td></tr><tr><td>
						Config File
					</td><td>
						Specifies the configuration file. The default value is <code class="filename">/etc/my.cnf</code>.
					</td></tr><tr><td>
						Listen Address
					</td><td>
						Specifies an IP address for the MySQL server. If an IP address is not provided, the first IP address from the service is taken.
					</td></tr><tr><td>
						mysqld Options
					</td><td>
						Other command line options for <code class="command">mysqld</code>.
					</td></tr><tr><td>
						Shutdown Wait (seconds)
					</td><td>
						Specifies the number of seconds to wait for correct end of service shutdown.
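						<div class="para">
							A hypothetical MySQL resource entry reflecting these fields might look like the following sketch; the attribute names are assumptions to verify against <code class="filename">mysql.sh</code>.
						</div>
						 <pre class="screen">
&lt;!-- Hypothetical example; verify attribute names against mysql.sh. --&gt;
&lt;mysql name="example_mysql" config_file="/etc/my.cnf" listen_address="10.1.1.3"
    shutdown_wait="30"/&gt;
</pre>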
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-nfsclient-resource-CA"><h6>Tableau C.8. NFS Client</h6><div class="table-contents"><table summary="NFS Client" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						This is a symbolic name of a client used to reference it in the resource tree. This is <span class="emphasis"><em>not</em></span> the same thing as the <em class="parameter"><code>Target</code></em> option.
					</td></tr><tr><td>
						Target
					</td><td>
						This is the client to which the file system is exported. It can be specified using a hostname, a wildcard (IP address or hostname based), or a netgroup defining a host or hosts to export to.
					</td></tr><tr><td>
						Option
					</td><td>
						Defines a list of options for this client — for example, additional client access rights. For more information, refer to the <em class="citetitle"><code class="command">exports</code> (5)</em> man page, <em class="citetitle">General Options</em>.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-nfsexport-resource-CA"><h6>Tableau C.9. NFS Export</h6><div class="table-contents"><table summary="NFS Export" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						<div class="para">
							Descriptive name of the resource. The NFS Export resource ensures that NFS daemons are running. It is fully reusable; typically, only one NFS Export resource is needed.
						</div>
						 <div class="note"><div class="admonition_header"><h2> Tip </h2></div><div class="admonition"><div class="para">
								Name the NFS Export resource so it is clearly distinguished from other NFS resources.
							</div></div></div>

					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-nfsmount-resource-CA"><h6>Tableau C.10. NFS Mount</h6><div class="table-contents"><table summary="NFS Mount" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						<div class="para">
							Symbolic name for the NFS mount.
						</div>
						 <div class="note"><div class="admonition_header"><h2> Note </h2></div><div class="admonition"><div class="para">
								This resource is required only when a cluster service is configured to be an NFS client.
							</div></div></div>

					</td></tr><tr><td>
						Mount Point
					</td><td>
						Path to which the file system resource is mounted.
					</td></tr><tr><td>
						Host
					</td><td>
						NFS server IP address or hostname.
					</td></tr><tr><td>
						Export Path
					</td><td>
						NFS Export directory name.
					</td></tr><tr><td>
						NFS version
					</td><td>
						<div class="para">
							NFS protocol:
						</div>
						 <div class="itemizedlist"><ul><li class="listitem"><div class="para">
									<em class="parameter"><code>NFS3</code></em> — Specifies using NFSv3 protocol. The default setting is <em class="parameter"><code>NFS3</code></em>.
								</div></li><li class="listitem"><div class="para">
									<em class="parameter"><code>NFS4</code></em> — Specifies using NFSv4 protocol.
								</div></li></ul></div>

					</td></tr><tr><td>
						Options
					</td><td>
						Mount options. Specifies a list of mount options. If none are specified, the NFS file system is mounted <code class="option">-o sync</code>. For more information, refer to the <em class="citetitle"><code class="command">nfs</code>(5)</em> man page.
					</td></tr><tr><td>
						Force Unmount
					</td><td>
						If <em class="parameter"><code>Force Unmount</code></em> is enabled, the cluster kills all processes using this file system when the service is stopped. Killing all processes using the file system frees up the file system. Otherwise, the unmount will fail, and the service will be restarted.
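						<div class="para">
							A hypothetical NFS mount entry, using the <code class="command">netfs</code> resource type, might look like the following sketch; the attribute names are assumptions to verify against <code class="filename">netfs.sh</code> in <code class="filename">/usr/share/cluster</code>.
						</div>
						 <pre class="screen">
&lt;!-- Hypothetical example; verify attribute names against netfs.sh. --&gt;
&lt;netfs name="example_nfs_mount" mountpoint="/mnt/nfsdata" host="nfs1.example.com"
    export="/exports/data" options="" force_unmount="1"/&gt;
</pre>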
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-openldap-resource-CA"><h6>Tableau C.11. Open LDAP</h6><div class="table-contents"><table summary="Open LDAP" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						Specifies a service name for logging and other purposes.
					</td></tr><tr><td>
						Config File
					</td><td>
						Specifies an absolute path to a configuration file. The default value is <code class="filename">/etc/openldap/slapd.conf</code>.
					</td></tr><tr><td>
						URL List
					</td><td>
						The default value is <code class="filename">ldap:///</code>.
					</td></tr><tr><td>
						<code class="command">slapd</code> Options
					</td><td>
						Other command line options for <code class="command">slapd</code>.
					</td></tr><tr><td>
						Shutdown Wait (seconds)
					</td><td>
						Specifies the number of seconds to wait for correct end of service shutdown.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-oracledb-resource-CA"><h6>Tableau C.12. Oracle 10g</h6><div class="table-contents"><table summary="Oracle 10g" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Instance name (SID) of Oracle instance
					</td><td>
						Instance name.
					</td></tr><tr><td>
						Oracle user name
					</td><td>
						This is the user name of the Oracle user that the Oracle AS instance runs as.
					</td></tr><tr><td>
						Oracle application home directory
					</td><td>
						This is the Oracle (application, not user) home directory. It is configured when you install Oracle.
					</td></tr><tr><td>
						Virtual hostname (optional)
					</td><td>
						Virtual Hostname matching the installation hostname of Oracle 10g. Note that during the start/stop of an oracledb resource, your hostname is changed temporarily to this hostname. Therefore, you should configure an oracledb resource as part of an exclusive service only.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-postgres-8-resource-CA"><h6>Tableau C.13. PostgreSQL 8</h6><div class="table-contents"><table summary="PostgreSQL 8" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						Specifies a service name for logging and other purposes.
					</td></tr><tr><td>
						Config File
					</td><td>
						Defines the absolute path to the configuration file. The default value is <code class="filename">/var/lib/pgsql/data/postgresql.conf</code>.
					</td></tr><tr><td>
						Postmaster User
					</td><td>
						The user who runs the database server; the server cannot be run by root. The default value is <span class="emphasis"><em>postgres</em></span>.
					</td></tr><tr><td>
						Postmaster Options
					</td><td>
						Other command line options for postmaster.
					</td></tr><tr><td>
						Shutdown Wait (seconds)
					</td><td>
						Specifies the number of seconds to wait for correct end of service shutdown.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-sapdatabase-resource-CA"><h6>Tableau C.14. SAP Database</h6><div class="table-contents"><table summary="SAP Database" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						SAP Database Name
					</td><td>
						Specifies a unique SAP system identifier. For example, P01.
					</td></tr><tr><td>
						SAP executable directory
					</td><td>
						Specifies the fully qualified path to <code class="command">sapstartsrv</code> and <code class="command">sapcontrol</code>.
					</td></tr><tr><td>
						Database type
					</td><td>
						Specifies one of the following database types: Oracle, DB6, or ADA.
					</td></tr><tr><td>
						Oracle TNS listener name
					</td><td>
						Specifies the Oracle TNS listener name.
					</td></tr><tr><td>
						ABAP stack is not installed, only Java stack is installed
					</td><td>
						If you do not have an ABAP stack installed in the SAP database, enable this parameter.
					</td></tr><tr><td>
						J2EE instance bootstrap directory
					</td><td>
						The fully qualified path to the J2EE instance bootstrap directory. For example, <code class="filename">/usr/sap/P01/J00/j2ee/cluster/bootstrap</code>.
					</td></tr><tr><td>
						J2EE security store path
					</td><td>
						The fully qualified path to the J2EE security store directory. For example, <code class="filename">/usr/sap/P01/SYS/global/security/lib/tools</code>.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-sapinstance-resource-CA"><h6>Tableau C.15. SAP Instance</h6><div class="table-contents"><table summary="SAP Instance" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						SAP Instance Name
					</td><td>
						The fully qualified SAP instance name. For example, P01_DVEBMGS00_sapp01ci.
					</td></tr><tr><td>
						SAP executable directory
					</td><td>
						The fully qualified path to <code class="command">sapstartsrv</code> and <code class="command">sapcontrol</code>.
					</td></tr><tr><td>
						Directory containing the SAP START profile
					</td><td>
						The fully qualified path to the SAP START profile.
					</td></tr><tr><td>
						Name of the SAP START profile
					</td><td>
						Specifies the name of the SAP START profile.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
			Regarding <a class="xref" href="#tb-sambaservice-resource-CA">Tableau C.16, « Samba Service »</a>, when creating or editing a cluster service, connect a Samba-service resource directly to the service, <span class="emphasis"><em>not</em></span> to a resource within a service.
		</div></div></div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
			Red Hat Enterprise Linux 5 does not support running Clustered Samba in an active/active configuration, in which Samba serves the same shared file system from multiple nodes. Red Hat Enterprise Linux 5 does support running Samba in a cluster in active/passive mode, with failover from one node to the other nodes in a cluster. Note that if failover occurs, locking states are lost and active connections to Samba are severed so that the clients must reconnect.
		</div></div></div><div class="table" id="tb-sambaservice-resource-CA"><h6>Tableau C.16. Samba Service</h6><div class="table-contents"><table summary="Samba Service" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						Specifies the name of the Samba server.
					</td></tr><tr><td>
						Workgroup
					</td><td>
						Specifies a Windows workgroup name or Windows NT domain of the Samba service.
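						<div class="para">
							As noted above, a Samba-service resource is connected directly to the service. A hypothetical entry using the <code class="command">smb</code> resource type might look like the following; the attribute names are assumptions to verify against <code class="filename">smb.sh</code>.
						</div>
						 <pre class="screen">
&lt;smb name="example_samba" workgroup="WORKGROUP"/&gt;
</pre>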
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-script-resource-CA"><h6>Tableau C.17. Script</h6><div class="table-contents"><table summary="Script" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						Specifies a name for the custom user script. The script resource allows a standard LSB-compliant init script to be used to start a clustered service.
					</td></tr><tr><td>
						File (with path)
					</td><td>
						Enter the path where this custom script is located (for example, <code class="filename">/etc/init.d/<em class="replaceable"><code>userscript</code></em></code>).
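						<div class="para">
							A hypothetical script resource referencing an LSB-compliant init script might look like the following sketch; the <code class="command">file</code> attribute name is an assumption to verify against <code class="filename">script.sh</code>.
						</div>
						 <pre class="screen">
&lt;script name="example_script" file="/etc/init.d/userscript"/&gt;
</pre>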
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-service-resource-CA"><h6>Tableau C.18. Service</h6><div class="table-contents"><table summary="Service" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Service name
					</td><td>
						Name of service. This defines a collection of resources, known as a resource group or cluster service.
					</td></tr><tr><td>
						Automatically start this service
					</td><td>
						If enabled, this service (or resource group) is started automatically after the cluster forms a quorum. If this parameter is <span class="emphasis"><em>disabled</em></span>, this service is <span class="emphasis"><em>not</em></span> started automatically after the cluster forms a quorum; the service is put into the <em class="parameter"><code>disabled</code></em> state.
					</td></tr><tr><td>
						Run exclusive
					</td><td>
						If enabled, this service (resource group) can only be relocated to run on another node exclusively; that is, to run on a node that has no other services running on it. If no nodes are available for a service to run exclusively, the service is not restarted after a failure. Additionally, other services do not automatically relocate to a node running this service as <em class="parameter"><code>Run exclusive</code></em>. You can override this option by manual start or relocate operations.
					</td></tr><tr><td>
						Failover Domain
					</td><td>
						Defines a list of cluster members to try in the event that a service fails.
					</td></tr><tr><td>
						Recovery policy
					</td><td>
						<div class="para">
							<em class="parameter"><code>Recovery policy</code></em> provides the following options:
						</div>
						 <div class="itemizedlist"><ul><li class="listitem"><div class="para">
									<em class="parameter"><code>Disable</code></em> — Disables the resource group if any component fails.
								</div></li><li class="listitem"><div class="para">
									<em class="parameter"><code>Relocate</code></em> — Tries to restart the service on another node; that is, it does not try to restart it on the current node.
								</div></li><li class="listitem"><div class="para">
									<em class="parameter"><code>Restart</code></em> — Tries to restart failed parts of this service locally (in the current node) before trying to relocate the service to another node (this is the default policy).
								</div></li><li class="listitem"><div class="para">
									<em class="parameter"><code>Restart-Disable</code></em> — (Red Hat Enterprise Linux release 5.6 and later) The service will be restarted in place if it fails. However, if restarting the service fails the service will be disabled instead of being moved to another host in the cluster.
								</div></li></ul></div>

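						<div class="para">
							Taken together, these fields might produce a service entry along the following lines. This is an illustrative sketch only; the attribute names (<code class="command">autostart</code>, <code class="command">exclusive</code>, <code class="command">domain</code>, <code class="command">recovery</code>) are assumptions to verify against your cluster configuration schema.
						</div>
						 <pre class="screen">
&lt;!-- Illustrative sketch; verify attribute names before use. --&gt;
&lt;service name="example_service" autostart="1" exclusive="0"
    domain="example_failover_domain" recovery="restart"&gt;
    &lt;ip address="10.1.1.2" monitor_link="1"/&gt;
&lt;/service&gt;
</pre>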
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-sybaseasa-resource-CA"><h6>Tableau C.19. Sybase ASE Failover Instance</h6><div class="table-contents"><table summary="Sybase ASE Failover Instance" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Instance Name
					</td><td>
						Specifies the instance name of the Sybase ASE resource.
					</td></tr><tr><td>
						ASE server name
					</td><td>
						The ASE server name that is configured for the HA service.
					</td></tr><tr><td>
						Sybase home directory
					</td><td>
						The home directory of Sybase products.
					</td></tr><tr><td>
						Login file
					</td><td>
						The full path of the login file that contains the login-password pair.
					</td></tr><tr><td>
						Interfaces file
					</td><td>
						The full path of the interfaces file that is used to start/access the ASE server.
					</td></tr><tr><td>
						SYBASE_ASE directory name
					</td><td>
						The directory name under sybase_home where ASE products are installed.
					</td></tr><tr><td>
						SYBASE_OCS directory name
					</td><td>
						The directory name under sybase_home where OCS products are installed. For example, ASE-15_0.
					</td></tr><tr><td>
						Sybase user
					</td><td>
						The user who can run ASE server.
					</td></tr><tr><td>
						Deep probe timeout
					</td><td>
						The maximum number of seconds to wait for a response from the ASE server before determining that the server is unresponsive while running a deep probe.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="table" id="tb-tomcat-5-resource-CA"><h6>Tableau C.20. Tomcat 5</h6><div class="table-contents"><table summary="Tomcat 5" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Name
					</td><td>
						Specifies a service name for logging and other purposes.
					</td></tr><tr><td>
						Config File
					</td><td>
						Specifies the absolute path to the configuration file. The default value is <code class="filename">/etc/tomcat5/tomcat5.conf</code>.
					</td></tr><tr><td>
						Tomcat User
					</td><td>
						User who runs the Tomcat server. The default value is <span class="emphasis"><em>tomcat</em></span>.
					</td></tr><tr><td>
						Catalina Options
					</td><td>
						Other command line options for Catalina.
					</td></tr><tr><td>
						Catalina Base
					</td><td>
						Catalina base directory (differs for each service). The default value is <code class="filename">/usr/share/tomcat5</code>.
					</td></tr><tr><td>
						Shutdown Wait (seconds)
					</td><td>
						Specifies the number of seconds to wait for correct end of service shutdown. The default value is 30.
					</td></tr></tbody></table></div></div><br class="table-break" /><div class="important"><div class="admonition_header"><h2> Important </h2></div><div class="admonition"><div class="para">
			Regarding <a class="xref" href="#tb-vm-resource-CA">Tableau C.21, « Virtual Machine »</a>, when you configure your cluster with virtual machine resources, you should use the <code class="command">rgmanager</code> tools to start and stop the virtual machines. Using <code class="command">virsh</code> or <code class="command">libvirt</code> tools to start the machine can result in the virtual machine running in more than one place, which can cause data corruption in the virtual machine. For information on configuring your system to reduce the chances of administrators accidentally "double-starting" virtual machines by using both cluster and non-cluster tools, refer to <a class="xref" href="#s1-vm-considerations-CA">Section 2.12, « Configuring Virtual Machines in a Clustered Environment »</a>.
		</div></div></div><div class="table" id="tb-vm-resource-CA"><h6>Tableau C.21. Virtual Machine</h6><div class="table-contents"><table summary="Virtual Machine" border="1"><colgroup><col width="17%" class="Field" /><col width="83%" class="Description" /></colgroup><thead><tr><th>
						Field
					</th><th>
						Description
					</th></tr></thead><tbody><tr><td>
						Virtual machine name
					</td><td>
						Specifies the name of the virtual machine.
					</td></tr><tr><td>
						Path to VM configuration files
					</td><td>
						<div class="para">
							A colon-delimited path specification that <code class="command">xm create</code> searches for the virtual machine configuration file. For example: <code class="filename">/etc/xen:/guests/config_files:/var/xen/configs</code>
						</div>
						 <div class="important"><div class="admonition_header"><h2> Important </h2></div><div class="admonition"><div class="para">
								The path should <span class="emphasis"><em>never</em></span> directly point to a virtual machine configuration file.
							</div></div></div>

					</td></tr><tr><td>
						Automatically start this virtual machine
					</td><td>
						If enabled, this virtual machine is started automatically after the cluster forms a quorum. If this parameter is <span class="emphasis"><em>disabled</em></span>, this virtual machine is <span class="emphasis"><em>not</em></span> started automatically after the cluster forms a quorum; the virtual machine is put into the <em class="parameter"><code>disabled</code></em> state.
					</td></tr><tr><td>
						Run exclusive
					</td><td>
						If enabled, this virtual machine can only be relocated to run on another node exclusively; that is, to run on a node that has no other virtual machines running on it. If no nodes are available for a virtual machine to run exclusively, the virtual machine is not restarted after a failure. Additionally, other virtual machines do not automatically relocate to a node running this virtual machine as <em class="parameter"><code>Run exclusive</code></em>. You can override this option by manual start or relocate operations.
					</td></tr><tr><td>
						Failover Domain
					</td><td>
						Defines a list of cluster members to try in the event that a virtual machine fails.
					</td></tr><tr><td>
						Recovery policy
					</td><td>
						<div class="para">
							<em class="parameter"><code>Recovery policy</code></em> provides the following options:
						</div>
						 <div class="itemizedlist"><ul><li class="listitem"><div class="para">
									<em class="parameter"><code>Disable</code></em> — Disables the virtual machine if it fails.
								</div></li><li class="listitem"><div class="para">
									<em class="parameter"><code>Relocate</code></em> — Tries to restart the virtual machine on another node; that is, it does not try to restart it on the current node.
								</div></li><li class="listitem"><div class="para">
									<em class="parameter"><code>Restart</code></em> — Tries to restart the virtual machine locally (in the current node) before trying to relocate the virtual machine to another node (this is the default policy).
								</div></li><li class="listitem"><div class="para">
									<em class="parameter"><code>Restart-Disable</code></em> — (Red Hat Enterprise Linux Release 5.6 and later) The service will be restarted in place if it fails. However, if restarting the service fails the service will be disabled instead of being moved to another host in the cluster.
								</div></li></ul></div>

					</td></tr><tr><td>
						Migration type
					</td><td>
						Specifies a migration type of <em class="parameter"><code>live</code></em> or <em class="parameter"><code>pause</code></em>. The default setting is <em class="parameter"><code>live</code></em>.
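						<div class="para">
							A hypothetical virtual machine resource reflecting these fields might look like the following sketch; the attribute names are assumptions to verify against the <code class="filename">vm.sh</code> agent in <code class="filename">/usr/share/cluster</code>.
						</div>
						 <pre class="screen">
&lt;!-- Hypothetical example; verify attribute names against vm.sh. --&gt;
&lt;vm name="guest1" path="/etc/xen:/guests/config_files" autostart="1"
    exclusive="0" recovery="restart" migrate="live"/&gt;
</pre>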
					</td></tr></tbody></table></div></div><br class="table-break" /></div><div xml:lang="fr-FR" class="appendix" id="ap-ha-resource-behavior-CA" lang="fr-FR"><div class="titlepage"><div><div><h1 class="title">HA Resource Behavior</h1></div></div></div><a id="id788508" class="indexterm"></a><div class="para">
		This appendix describes common behavior of HA resources. It is meant to provide ancillary information that may be helpful in configuring HA services. You can configure the parameters with <span class="application"><strong>Luci</strong></span>, <code class="command">system-config-cluster</code>, or by editing <code class="filename">/etc/cluster/cluster.conf</code>. For descriptions of HA resource parameters, refer to <a class="xref" href="#ap-ha-resource-params-CA">Annexe C, <em>HA Resource Parameters</em></a>. To understand resource agents in more detail, you can view them in <code class="filename">/usr/share/cluster</code> of any cluster node.
	</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
			To fully comprehend the information in this appendix, you may require detailed understanding of resource agents and the cluster configuration file, <code class="filename">/etc/cluster/cluster.conf</code>.
		</div></div></div><div class="para"> An HA service is a group of cluster resources
configured into a coherent entity that provides specialized services
to clients. An HA service is represented as a resource tree in the
cluster configuration file,
<code class="filename">/etc/cluster/cluster.conf</code> (in each cluster
node). In the cluster configuration file, each resource tree is an XML
representation that specifies each resource, its attributes, and its
relationship to other resources in the resource tree (parent,
child, and sibling relationships).</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
	Because an HA service consists of resources organized into a
	hierarchical tree, a service is sometimes referred to as a
	<em class="firstterm">resource tree</em> or <em class="firstterm">resource
	group</em>. Both phrases are synonymous with
	<span class="emphasis"><em>HA service</em></span>.
      </div></div></div><div class="para">
       At the root of each resource tree is a special type of resource
       — a <em class="firstterm">service resource</em>. Other types of resources comprise
       the rest of a service, determining its
       characteristics. Configuring an HA service consists of
       creating a service resource, creating subordinate cluster
       resources, and organizing them into a coherent entity that
       conforms to hierarchical restrictions of the service.
    </div><div class="para">
		This appendix consists of the following sections:
	</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
				<a class="xref" href="#s1-clust-rsc-desc-CA">Section D.1, « Parent, Child, and Sibling Relationships Among Resources »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-clust-rsc-sibling-starting-order-CA">Section D.2, « Sibling Start Ordering and Resource Child Ordering »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-clust-rsc-inherit-resc-reuse-CA">Section D.3, « Inheritance, the &lt;resources&gt; Block, and Reusing Resources »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-clust-rsc-failure-rec-CA">Section D.4, « Failure Recovery and Independent Subtrees »</a>
			</div></li><li class="listitem"><div class="para">
				<a class="xref" href="#s1-clust-rsc-testing-config-CA">Section D.5, « Debugging and Testing Services and Resource Ordering »</a>
			</div></li></ul></div><div class="note"><div class="admonition_header"><h2> Note </h2></div><div class="admonition"><div class="para">
			The sections that follow present examples from the cluster configuration file, <code class="filename">/etc/cluster/cluster.conf</code>, for illustration purposes only.
		</div></div></div><div class="section" id="s1-clust-rsc-desc-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-clust-rsc-desc-CA">D.1. Parent, Child, and Sibling Relationships Among Resources</h2></div></div></div><a id="id781074" class="indexterm"></a><a id="id781082" class="indexterm"></a><div class="para">
			A cluster service is an integrated entity that runs under the control of <code class="command">rgmanager</code>. All resources in a service run on the same node. From the perspective of <code class="command">rgmanager</code>, a cluster service is one entity that can be started, stopped, or relocated. Within a cluster service, however, the hierarchy of the resources determines the order in which each resource is started and stopped. The hierarchical levels consist of parent, child, and sibling.
		</div><div class="para">
			<a class="xref" href="#ex-resource-hierarchy-CA">Exemple D.1, « Resource Hierarchy of Service foo »</a> shows a sample resource tree of the service <span class="emphasis"><em>foo</em></span>. In the example, the relationships among the resources are as follows:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					<code class="command">fs:myfs</code> (&lt;fs name="myfs" ...&gt;) and <code class="command">ip:10.1.1.2</code> (&lt;ip address="10.1.1.2 .../&gt;) are siblings.
				</div></li><li class="listitem"><div class="para">
					<code class="command">fs:myfs</code> (&lt;fs name="myfs" ...&gt;) is the parent of <code class="command">script:script_child</code> (&lt;script name="script_child"/&gt;).
				</div></li><li class="listitem"><div class="para">
					<code class="command">script:script_child</code> (&lt;script name="script_child"/&gt;) is the child of <code class="command">fs:myfs</code> (&lt;fs name="myfs" ...&gt;).
				</div></li></ul></div><div class="example" id="ex-resource-hierarchy-CA"><h6>Exemple D.1. Resource Hierarchy of Service foo</h6><div class="example-contents"><pre class="screen">
&lt;service name="foo" ...&gt;
    &lt;fs name="myfs" ...&gt;
        &lt;script name="script_child"/&gt;
    &lt;/fs&gt;
    &lt;ip address="10.1.1.2" .../&gt;
&lt;/service&gt;
</pre></div></div><br class="example-break" /><div class="para">
			The following rules apply to parent/child relationships in a resource tree:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					Parents are started before children.
				</div></li><li class="listitem"><div class="para">
					Children must all stop cleanly before a parent may be stopped.
				</div></li><li class="listitem"><div class="para">
					For a resource to be considered in good health, all its children must be in good health.
				</div></li></ul></div></div><div class="section" id="s1-clust-rsc-sibling-starting-order-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-clust-rsc-sibling-starting-order-CA">D.2. Sibling Start Ordering and Resource Child Ordering</h2></div></div></div><div class="para">
			The Service resource determines the start order and the stop order of a child resource according to whether it designates a child-type attribute for a child resource as follows:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					Designates child-type attribute (<em class="firstterm">typed</em> child resource) — If the Service resource designates a child-type attribute for a child resource, the child resource is <span class="emphasis"><em>typed</em></span>. The child-type attribute explicitly determines the start and the stop order of the child resource.
				</div></li><li class="listitem"><div class="para">
					<span class="emphasis"><em>Does not designate</em></span> child-type attribute (<em class="firstterm">non-typed</em> child resource) — If the Service resource <span class="emphasis"><em>does not designate</em></span> a child-type attribute for a child resource, the child resource is <span class="emphasis"><em>non-typed</em></span>. The Service resource does not explicitly control the starting order and stopping order of a non-typed child resource. However, a non-typed child resource is started and stopped according to its order in <code class="filename">/etc/cluster/cluster.conf</code>. In addition, non-typed child resources are started after all typed child resources have started and are stopped before any typed child resources have stopped.
				</div></li></ul></div><div class="note"><div class="admonition_header"><h2> Note </h2></div><div class="admonition"><div class="para">
				The only resource to implement defined <span class="emphasis"><em>child resource type</em></span> ordering is the Service resource.
			</div></div></div><div class="para">
			For more information about typed child resource start and stop ordering, refer to <a class="xref" href="#s2-clust-rsc-typed-resources-CA">Section D.2.1, « Typed Child Resource Start and Stop Ordering »</a>. For more information about non-typed child resource start and stop ordering, refer to <a class="xref" href="#s2-clust-rsc-non-typed-resources-CA">Section D.2.2, « Non-typed Child Resource Start and Stop Ordering »</a>.
		</div><div class="section" id="s2-clust-rsc-typed-resources-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-clust-rsc-typed-resources-CA">D.2.1. Typed Child Resource Start and Stop Ordering</h3></div></div></div><div class="para">
				For a typed child resource, the type attribute for the child resource defines the start order and the stop order of each resource type with a number ranging from 1 to 100; one value for start, and one value for stop. The lower the number, the earlier a resource type starts or stops. For example, <a class="xref" href="#tb-resource-start-stop-CA">Tableau D.1, « Child Resource Type Start and Stop Order »</a> shows the start and stop values for each resource type; <a class="xref" href="#ex-resource-start-stop-CA">Exemple D.2, « Resource Start and Stop Values: Excerpt from Service Resource Agent, <code class="command">service.sh</code> »</a> shows the start and stop values as they appear in the Service resource agent, <code class="command">service.sh</code>. For the Service resource, all LVM children are started first, followed by all File System children, followed by all Script children, and so forth.
			</div><div class="table" id="tb-resource-start-stop-CA"><h6>Tableau D.1. Child Resource Type Start and Stop Order</h6><div class="table-contents"><table summary="Child Resource Type Start and Stop Order" border="1"><colgroup><col width="25%" class="Resource_Type" /><col width="25%" class="Child_Type" /><col width="25%" class="Start_Order" /><col width="25%" class="Stop_Order" /></colgroup><thead><tr><th>
								Resource
							</th><th>
								Child Type
							</th><th>
								Start-order Value
							</th><th>
								Stop-order Value
							</th></tr></thead><tbody><tr><td>
								LVM
							</td><td>
								lvm
							</td><td>
								1
							</td><td>
								9
							</td></tr><tr><td>
								File System
							</td><td>
								fs
							</td><td>
								2
							</td><td>
								8
							</td></tr><tr><td>
								GFS File System
							</td><td>
								clusterfs
							</td><td>
								3
							</td><td>
								7
							</td></tr><tr><td>
								NFS Mount
							</td><td>
								netfs
							</td><td>
								4
							</td><td>
								6
							</td></tr><tr><td>
								NFS Export
							</td><td>
								nfsexport
							</td><td>
								5
							</td><td>
								5
							</td></tr><tr><td>
								NFS Client
							</td><td>
								nfsclient
							</td><td>
								6
							</td><td>
								4
							</td></tr><tr><td>
								IP Address
							</td><td>
								ip
							</td><td>
								7
							</td><td>
								2
							</td></tr><tr><td>
								Samba
							</td><td>
								smb
							</td><td>
								8
							</td><td>
								3
							</td></tr><tr><td>
								Script
							</td><td>
								script
							</td><td>
								9
							</td><td>
								1
							</td></tr></tbody></table></div></div><br class="table-break" /><div class="example" id="ex-resource-start-stop-CA"><h6>Exemple D.2. Resource Start and Stop Values: Excerpt from Service Resource Agent, <code class="command">service.sh</code></h6><div class="example-contents"><pre class="screen">
&lt;special tag="rgmanager"&gt;
    &lt;attributes root="1" maxinstances="1"/&gt;
    &lt;child type="lvm" start="1" stop="9"/&gt;
    &lt;child type="fs" start="2" stop="8"/&gt;
    &lt;child type="clusterfs" start="3" stop="7"/&gt;
    &lt;child type="netfs" start="4" stop="6"/&gt;
    &lt;child type="nfsexport" start="5" stop="5"/&gt;
    &lt;child type="nfsclient" start="6" stop="4"/&gt;
    &lt;child type="ip" start="7" stop="2"/&gt;
    &lt;child type="smb" start="8" stop="3"/&gt;
    &lt;child type="script" start="9" stop="1"/&gt;
&lt;/special&gt;
</pre></div></div><br class="example-break" /><div class="para">
				Ordering within a resource type is preserved as it exists in the cluster configuration file, <code class="filename">/etc/cluster/cluster.conf</code>. For example, consider the starting order and stopping order of the typed child resources in <a class="xref" href="#ex-ordering-within-resource-type-CA">Exemple D.3, « Ordering Within a Resource Type »</a>.
			</div><div class="example" id="ex-ordering-within-resource-type-CA"><h6>Exemple D.3. Ordering Within a Resource Type</h6><div class="example-contents"><pre class="screen">
&lt;service name="foo"&gt;
  &lt;script name="1" .../&gt;
  &lt;lvm name="1" .../&gt;
  &lt;ip address="10.1.1.1" .../&gt;
  &lt;fs name="1" .../&gt;
  &lt;lvm name="2" .../&gt;
&lt;/service&gt;
</pre></div></div><br class="example-break" /><h4 id="id857827">Typed Child Resource Starting Order</h4><div class="para">
				In <a class="xref" href="#ex-ordering-within-resource-type-CA">Exemple D.3, « Ordering Within a Resource Type »</a>, the resources are started in the following order:
			</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
						<code class="command">lvm:1</code> — This is an LVM resource. All LVM resources are started first. <code class="command">lvm:1</code> (<code class="command">&lt;lvm name="1" .../&gt;</code>) is the first LVM resource started among LVM resources because it is the first LVM resource listed in the Service <span class="emphasis"><em>foo</em></span> portion of <code class="filename">/etc/cluster/cluster.conf</code>.
					</div></li><li class="listitem"><div class="para">
						<code class="command">lvm:2</code> — This is an LVM resource. All LVM resources are started first. <code class="command">lvm:2</code> (<code class="command">&lt;lvm name="2" .../&gt;</code>) is started after <code class="command">lvm:1</code> because it is listed after <code class="command">lvm:1</code> in the Service <span class="emphasis"><em>foo</em></span> portion of <code class="filename">/etc/cluster/cluster.conf</code>.
					</div></li><li class="listitem"><div class="para">
						<code class="command">fs:1</code> — This is a File System resource. If there were other File System resources in Service <span class="emphasis"><em>foo</em></span>, they would start in the order listed in the Service <span class="emphasis"><em>foo</em></span> portion of <code class="filename">/etc/cluster/cluster.conf</code>.
					</div></li><li class="listitem"><div class="para">
						<code class="command">ip:10.1.1.1</code> — This is an IP Address resource. If there were other IP Address resources in Service <span class="emphasis"><em>foo</em></span>, they would start in the order listed in the Service <span class="emphasis"><em>foo</em></span> portion of <code class="filename">/etc/cluster/cluster.conf</code>.
					</div></li><li class="listitem"><div class="para">
						<code class="command">script:1</code> — This is a Script resource. If there were other Script resources in Service <span class="emphasis"><em>foo</em></span>, they would start in the order listed in the Service <span class="emphasis"><em>foo</em></span> portion of <code class="filename">/etc/cluster/cluster.conf</code>.
					</div></li></ol></div><h4 id="id857977">Typed Child Resource Stopping Order</h4><div class="para">
				In <a class="xref" href="#ex-ordering-within-resource-type-CA">Exemple D.3, « Ordering Within a Resource Type »</a>, the resources are stopped in the following order:
			</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
						<code class="command">script:1</code> — This is a Script resource. If there were other Script resources in Service <span class="emphasis"><em>foo</em></span>, they would stop in the reverse order listed in the Service <span class="emphasis"><em>foo</em></span> portion of <code class="filename">/etc/cluster/cluster.conf</code>.
					</div></li><li class="listitem"><div class="para">
						<code class="command">ip:10.1.1.1</code> — This is an IP Address resource. If there were other IP Address resources in Service <span class="emphasis"><em>foo</em></span>, they would stop in the reverse order listed in the Service <span class="emphasis"><em>foo</em></span> portion of <code class="filename">/etc/cluster/cluster.conf</code>.
					</div></li><li class="listitem"><div class="para">
						<code class="command">fs:1</code> — This is a File System resource. If there were other File System resources in Service <span class="emphasis"><em>foo</em></span>, they would stop in the reverse order listed in the Service <span class="emphasis"><em>foo</em></span> portion of <code class="filename">/etc/cluster/cluster.conf</code>.
					</div></li><li class="listitem"><div class="para">
						<code class="command">lvm:2</code> — This is an LVM resource. All LVM resources are stopped last. <code class="command">lvm:2</code> (<code class="command">&lt;lvm name="2" .../&gt;</code>) is stopped before <code class="command">lvm:1</code>; resources within a group of a resource type are stopped in the reverse order listed in the Service <span class="emphasis"><em>foo</em></span> portion of <code class="filename">/etc/cluster/cluster.conf</code>.
					</div></li><li class="listitem"><div class="para">
						<code class="command">lvm:1</code> — This is an LVM resource. All LVM resources are stopped last. <code class="command">lvm:1</code> (<code class="command">&lt;lvm name="1" .../&gt;</code>) is stopped after <code class="command">lvm:2</code>; resources within a group of a resource type are stopped in the reverse order listed in the Service <span class="emphasis"><em>foo</em></span> portion of <code class="filename">/etc/cluster/cluster.conf</code>.
					</div></li></ol></div></div><div class="section" id="s2-clust-rsc-non-typed-resources-CA"><div class="titlepage"><div><div><h3 class="title" id="s2-clust-rsc-non-typed-resources-CA">D.2.2. Non-typed Child Resource Start and Stop Ordering</h3></div></div></div><div class="para">
				Additional considerations are required for non-typed child resources. For a non-typed child resource, starting order and stopping order are not explicitly specified by the Service resource. Instead, starting order and stopping order are determined according to the order of the child resource in <code class="filename">/etc/cluster/cluster.conf</code>. Additionally, non-typed child resources are started after all typed child resources and stopped before any typed child resources.
			</div><div class="para">
				For example, consider the starting order and stopping order of the non-typed child resources in <a class="xref" href="#ex-ordering-non-typed-resource-CA">Exemple D.4, « Non-typed and Typed Child Resource in a Service »</a>.
			</div><div class="example" id="ex-ordering-non-typed-resource-CA"><h6>Exemple D.4. Non-typed and Typed Child Resource in a Service</h6><div class="example-contents"><pre class="screen">
&lt;service name="foo"&gt;
  &lt;script name="1" .../&gt;
  &lt;nontypedresource name="foo"/&gt;
  &lt;lvm name="1" .../&gt;
  &lt;nontypedresourcetwo name="bar"/&gt;
  &lt;ip address="10.1.1.1" .../&gt;
  &lt;fs name="1" .../&gt;
  &lt;lvm name="2" .../&gt;
&lt;/service&gt;
</pre></div></div><br class="example-break" /><h4 id="id896224">Non-typed Child Resource Starting Order</h4><div class="para">
				In <a class="xref" href="#ex-ordering-non-typed-resource-CA">Exemple D.4, « Non-typed and Typed Child Resource in a Service »</a>, the child resources are started in the following order:
			</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
						<code class="command">lvm:1</code> — This is an LVM resource. All LVM resources are started first. <code class="command">lvm:1</code> (<code class="command">&lt;lvm name="1" .../&gt;</code>) is the first LVM resource started among LVM resources because it is the first LVM resource listed in the Service <span class="emphasis"><em>foo</em></span> portion of <code class="filename">/etc/cluster/cluster.conf</code>.
					</div></li><li class="listitem"><div class="para">
						<code class="command">lvm:2</code> — This is an LVM resource. All LVM resources are started first. <code class="command">lvm:2</code> (<code class="command">&lt;lvm name="2" .../&gt;</code>) is started after <code class="command">lvm:1</code> because it is listed after <code class="command">lvm:1</code> in the Service <span class="emphasis"><em>foo</em></span> portion of <code class="filename">/etc/cluster/cluster.conf</code>.
					</div></li><li class="listitem"><div class="para">
						<code class="command">fs:1</code> — This is a File System resource. If there were other File System resources in Service <span class="emphasis"><em>foo</em></span>, they would start in the order listed in the Service <span class="emphasis"><em>foo</em></span> portion of <code class="filename">/etc/cluster/cluster.conf</code>.
					</div></li><li class="listitem"><div class="para">
						<code class="command">ip:10.1.1.1</code> — This is an IP Address resource. If there were other IP Address resources in Service <span class="emphasis"><em>foo</em></span>, they would start in the order listed in the Service <span class="emphasis"><em>foo</em></span> portion of <code class="filename">/etc/cluster/cluster.conf</code>.
					</div></li><li class="listitem"><div class="para">
						<code class="command">script:1</code> — This is a Script resource. If there were other Script resources in Service <span class="emphasis"><em>foo</em></span>, they would start in the order listed in the Service <span class="emphasis"><em>foo</em></span> portion of <code class="filename">/etc/cluster/cluster.conf</code>.
					</div></li><li class="listitem"><div class="para">
						<code class="command">nontypedresource:foo</code> — This is a non-typed resource. Because it is a non-typed resource, it is started after the typed resources start. In addition, its order in the Service resource is before the other non-typed resource, <code class="command">nontypedresourcetwo:bar</code>; therefore, it is started before <code class="command">nontypedresourcetwo:bar</code>. (Non-typed resources are started in the order that they appear in the Service resource.)
					</div></li><li class="listitem"><div class="para">
						<code class="command">nontypedresourcetwo:bar</code> — This is a non-typed resource. Because it is a non-typed resource, it is started after the typed resources start. In addition, its order in the Service resource is after the other non-typed resource, <code class="command">nontypedresource:foo</code>; therefore, it is started after <code class="command">nontypedresource:foo</code>. (Non-typed resources are started in the order that they appear in the Service resource.)
					</div></li></ol></div><h4 id="id896436">Non-typed Child Resource Stopping Order</h4><div class="para">
				In <a class="xref" href="#ex-ordering-non-typed-resource-CA">Exemple D.4, « Non-typed and Typed Child Resource in a Service »</a>, the child resources are stopped in the following order:
			</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
						<code class="command">nontypedresourcetwo:bar</code> — This is a non-typed resource. Because it is a non-typed resource, it is stopped before the typed resources are stopped. In addition, its order in the Service resource is after the other non-typed resource, <code class="command">nontypedresource:foo</code>; therefore, it is stopped before <code class="command">nontypedresource:foo</code>. (Non-typed resources are stopped in the reverse order that they appear in the Service resource.)
					</div></li><li class="listitem"><div class="para">
						<code class="command">nontypedresource:foo</code> — This is a non-typed resource. Because it is a non-typed resource, it is stopped before the typed resources are stopped. In addition, its order in the Service resource is before the other non-typed resource, <code class="command">nontypedresourcetwo:bar</code>; therefore, it is stopped after <code class="command">nontypedresourcetwo:bar</code>. (Non-typed resources are stopped in the reverse order that they appear in the Service resource.)
					</div></li><li class="listitem"><div class="para">
						<code class="command">script:1</code> — This is a Script resource. If there were other Script resources in Service <span class="emphasis"><em>foo</em></span>, they would stop in the reverse order listed in the Service <span class="emphasis"><em>foo</em></span> portion of <code class="filename">/etc/cluster/cluster.conf</code>.
					</div></li><li class="listitem"><div class="para">
						<code class="command">ip:10.1.1.1</code> — This is an IP Address resource. If there were other IP Address resources in Service <span class="emphasis"><em>foo</em></span>, they would stop in the reverse order listed in the Service <span class="emphasis"><em>foo</em></span> portion of <code class="filename">/etc/cluster/cluster.conf</code>.
					</div></li><li class="listitem"><div class="para">
						<code class="command">fs:1</code> — This is a File System resource. If there were other File System resources in Service <span class="emphasis"><em>foo</em></span>, they would stop in the reverse order listed in the Service <span class="emphasis"><em>foo</em></span> portion of <code class="filename">/etc/cluster/cluster.conf</code>.
					</div></li><li class="listitem"><div class="para">
						<code class="command">lvm:2</code> — This is an LVM resource. All LVM resources are stopped last. <code class="command">lvm:2</code> (<code class="command">&lt;lvm name="2" .../&gt;</code>) is stopped before <code class="command">lvm:1</code>; resources within a group of a resource type are stopped in the reverse order listed in the Service <span class="emphasis"><em>foo</em></span> portion of <code class="filename">/etc/cluster/cluster.conf</code>.
					</div></li><li class="listitem"><div class="para">
						<code class="command">lvm:1</code> — This is an LVM resource. All LVM resources are stopped last. <code class="command">lvm:1</code> (<code class="command">&lt;lvm name="1" .../&gt;</code>) is stopped after <code class="command">lvm:2</code>; resources within a group of a resource type are stopped in the reverse order listed in the Service <span class="emphasis"><em>foo</em></span> portion of <code class="filename">/etc/cluster/cluster.conf</code>.
					</div></li></ol></div></div></div><div class="section" id="s1-clust-rsc-inherit-resc-reuse-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-clust-rsc-inherit-resc-reuse-CA">D.3. Inheritance, the &lt;resources&gt; Block, and Reusing Resources</h2></div></div></div><div class="para">
			Some resources benefit by inheriting values from a parent resource; that is commonly the case in an NFS service. <a class="xref" href="#ex-nfs-reuse-inheritance-CA">Exemple D.5, « NFS Service Set Up for Resource Reuse and Inheritance »</a> shows a typical NFS service configuration, set up for resource reuse and inheritance.
		</div><div class="example" id="ex-nfs-reuse-inheritance-CA"><h6>Exemple D.5. NFS Service Set Up for Resource Reuse and Inheritance</h6><div class="example-contents"><pre class="screen">
&lt;resources&gt;
  &lt;nfsclient name="bob" target="bob.example.com" options="rw,no_root_squash"/&gt;
  &lt;nfsclient name="jim" target="jim.example.com" options="rw,no_root_squash"/&gt;
  &lt;nfsexport name="exports"/&gt;
&lt;/resources&gt;
&lt;service name="foo"&gt;
  &lt;fs name="1" mountpoint="/mnt/foo" device="/dev/sdb1" fsid="12344"&gt;
    &lt;nfsexport ref="exports"&gt;  &lt;!-- nfsexport's path and fsid
        attributes are inherited from the mountpoint and fsid
	attribute of the parent fs resource --&gt;
    &lt;nfsclient ref="bob"/&gt; &lt;!-- nfsclient's path is inherited
        from the mountpoint and the fsid is added to the options
	string during export --&gt;
    &lt;nfsclient ref="jim"/ &gt;
  &lt;/nfsexport&gt;
&lt;/fs&gt;
&lt;fs name="2" mountpoint="/mnt/bar" device="/dev/sdb2" fsid="12345"&gt;
  &lt;nfsexport ref="exports"&gt;
    &lt;nfsclient ref="bob"/&gt; &lt;!-- Because all of the critical
       data for this resource is either defined in the resources block
       or inherited, we can reference it again! --&gt;
    &lt;nfsclient ref="jim"/&gt;
  &lt;/nfsexport&gt;
&lt;/fs&gt;
&lt;ip address="10.2.13.20"/&gt;
&lt;/service&gt;
</pre></div></div><br class="example-break" /><div class="para">
			If the service were flat (that is, with no parent/child relationships), it would need to be configured as follows:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					The service would need four nfsclient resources: one for each combination of file system and target machine (two file systems and two target machines, for a total of four).
				</div></li><li class="listitem"><div class="para">
					The service would need to specify the export path and file system ID for each nfsclient, which introduces chances for errors in the configuration. A rough sketch of such a flat configuration follows this list.
				</div></li></ul></div><div class="para">
			In <a class="xref" href="#ex-nfs-reuse-inheritance-CA">Exemple D.5, « NFS Service Set Up for Resource Reuse and Inheritance »</a> however, the NFS client resources <span class="emphasis"><em>nfsclient:bob</em></span> and <span class="emphasis"><em>nfsclient:jim</em></span> are defined once; likewise, the NFS export resource <span class="emphasis"><em>nfsexport:exports</em></span> is defined once. All the attributes needed by the resources are inherited from parent resources. Because the inherited attributes are dynamic (and do not conflict with one another), it is possible to reuse those resources — which is why they are defined in the resources block. It may not be practical to configure some resources in multiple places. For example, configuring a file system resource in multiple places can result in mounting one file system on two nodes, therefore causing problems.
		</div></div><div class="section" id="s1-clust-rsc-failure-rec-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-clust-rsc-failure-rec-CA">D.4. Failure Recovery and Independent Subtrees</h2></div></div></div><div class="para">
			In most enterprise environments, the normal course of action for failure recovery of a service is to restart the entire service if any component in the service fails. For example, in <a class="xref" href="#ex-failure-recovery-normal-CA">Exemple D.6, « Service <span class="emphasis"><em>foo</em></span> Normal Failure Recovery »</a>, if any of the scripts defined in this service fail, the normal course of action is to restart (or relocate or disable, according to the service recovery policy) the service. However, in some circumstances certain parts of a service may be considered non-critical; it may be necessary to restart only part of the service in place before attempting normal recovery. To accomplish that, you can use the <em class="parameter"><code> __independent_subtree</code></em> attribute. For example, in <a class="xref" href="#ex-failure-recovery-ind-subtree-CA">Exemple D.7, « Service <span class="emphasis"><em>foo</em></span> Failure Recovery with <em class="parameter"><code>__independent_subtree</code></em> Attribute »</a>, the <em class="parameter"><code> __independent_subtree</code></em> attribute is used to accomplish the following actions:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					If script:script_one fails, restart script:script_two and script:script_one.
				</div></li><li class="listitem"><div class="para">
					If script:script_two fails, restart just script:script_two.
				</div></li><li class="listitem"><div class="para">
					If script:script_three fails, restart script:script_one, script:script_two, and script:script_three.
				</div></li><li class="listitem"><div class="para">
					If script:script_four fails, restart the whole service.
				</div></li></ul></div><div class="example" id="ex-failure-recovery-normal-CA"><h6>Exemple D.6. Service <span class="emphasis"><em>foo</em></span> Normal Failure Recovery</h6><div class="example-contents"><pre class="screen">
&lt;service name="foo"&gt;
      &lt;script name="script_one" ...&gt;
          &lt;script name="script_two" .../&gt;
      &lt;/script&gt;
      &lt;script name="script_three" .../&gt;
&lt;/service&gt;
</pre></div></div><br class="example-break" /><div class="example" id="ex-failure-recovery-ind-subtree-CA"><h6>Exemple D.7. Service <span class="emphasis"><em>foo</em></span> Failure Recovery with <em class="parameter"><code>__independent_subtree</code></em> Attribute</h6><div class="example-contents"><pre class="screen">
&lt;service name="foo"&gt;
      &lt;script name="script_one" __independent_subtree="1" ...&gt;
          &lt;script name="script_two" __independent_subtree="1" .../&gt;
          &lt;script name="script_three" .../&gt;
      &lt;/script&gt;
      &lt;script name="script_four" .../&gt;
&lt;/service&gt;
</pre></div></div><br class="example-break" /><div class="para">
			In some circumstances, if a component of a service fails, you may want to disable only that component without disabling the entire service, to avoid affecting other services that use other components of that service. As of the Red Hat Enterprise Linux 5.6 release, you can accomplish that by using the <em class="parameter"><code>__independent_subtree="2"</code></em> attribute, which designates the independent subtree as non-critical.
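		</div><div class="para">
			The following is a minimal sketch of a non-critical component; it reuses the script resources of the preceding examples purely for illustration. Here, <code class="command">script_two</code> is designated non-critical, so that its failure can disable just that component rather than the entire service:
		</div><pre class="screen">
&lt;service name="foo"&gt;
      &lt;script name="script_one" ...&gt;
          &lt;!-- sketch: script_two is marked non-critical --&gt;
          &lt;script name="script_two" __independent_subtree="2" .../&gt;
      &lt;/script&gt;
      &lt;script name="script_three" .../&gt;
&lt;/service&gt;
</pre><div class="para">
			The other script resources in this sketch remain critical; a failure of <code class="command">script_one</code> or <code class="command">script_three</code> still triggers the normal recovery policy for the whole service.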
		</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
				You may only use the non-critical flag on singly-referenced resources. The non-critical flag works with all resources at all levels of the resource tree, but should not be used at the top level when defining services or virtual machines.
			</div></div></div><div class="para">
			As of the Red Hat Enterprise Linux 5.6 release, you can set a maximum restart count and a restart expiration time on a per-node basis in the resource tree for independent subtrees. To set these thresholds, you can use the following attributes; a brief sketch follows this list:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					<code class="literal">__max_restarts</code> configures the maximum number of tolerated restarts prior to giving up.
				</div></li><li class="listitem"><div class="para">
					<code class="literal"> __restart_expire_time</code> configures the amount of time, in seconds, after which a restart is no longer attempted.
				</div></li></ul></div></div><div class="section" id="s1-clust-rsc-testing-config-CA"><div class="titlepage"><div><div><h2 class="title" id="s1-clust-rsc-testing-config-CA">D.5. Debugging and Testing Services and Resource Ordering</h2></div></div></div><div class="para">
			You can debug and test services and resource ordering with the <code class="command">rg_test</code> utility. <code class="command">rg_test</code> is a command-line utility that is run from a shell or a terminal (it is not available in <span class="application"><strong>Conga</strong></span> or <code class="command">system-config-cluster</code>). <a class="xref" href="#tb-rgtest-command-summary-CA">Tableau D.2, « <code class="command">rg_test</code> Utility Summary »</a> summarizes the actions and syntax for the <code class="command">rg_test</code> utility.
		</div><div class="table" id="tb-rgtest-command-summary-CA"><h6>Tableau D.2. <code class="command">rg_test</code> Utility Summary</h6><div class="table-contents"><table summary="rg_test Utility Summary" border="1"><colgroup><col width="14%" class="Action" /><col width="86%" class="Syntax" /></colgroup><thead><tr><th>
							Action
						</th><th>
							Syntax
						</th></tr></thead><tbody><tr><td>
							Display the resource rules that <code class="command">rg_test</code> understands.
						</td><td>
							<code class="command">rg_test rules</code>
						</td></tr><tr><td>
							Test a configuration (and /usr/share/cluster) for errors or redundant resource agents.
						</td><td>
							<code class="command">rg_test test /etc/cluster/cluster.conf</code>
						</td></tr><tr><td>
							Display the start and stop ordering of a service.
						</td><td>
							<div class="para">
								Display start order:
							</div>
							 <div class="para">
								<code class="command">rg_test noop /etc/cluster/cluster.conf start service <em class="parameter"><code>servicename</code></em></code>
							</div>
							 <div class="para">
								Display stop order:
							</div>
							 <div class="para">
								<code class="command">rg_test noop /etc/cluster/cluster.conf stop service <em class="parameter"><code>servicename</code></em></code>
							</div>

						</td></tr><tr><td>
							Explicitly start or stop a service.
						</td><td>
							<div class="important"><div class="admonition_header"><h2>Important</h2></div><div class="admonition"><div class="para">
									Only do this on one node, and always disable the service in rgmanager first.
								</div></div></div>
							 <div class="para">
								Start a service:
							</div>
							 <div class="para">
								<code class="command">rg_test test /etc/cluster/cluster.conf start service <em class="parameter"><code>servicename</code></em></code>
							</div>
							 <div class="para">
								Stop a service:
							</div>
							 <div class="para">
								<code class="command"> rg_test test /etc/cluster/cluster.conf stop service <em class="parameter"><code>servicename</code></em></code>
							</div>

						</td></tr><tr><td>
							Calculate and display the resource tree delta between two cluster.conf files.
						</td><td>
							<div class="para">
								<code class="command">rg_test delta <em class="parameter"><code> cluster.conf file 1</code></em> <em class="parameter"><code> cluster.conf file 2</code></em></code>
							</div>
							 <div class="para">
								For example:
							</div>
							 <div class="para">
								<code class="command">rg_test delta /etc/cluster/cluster.conf.bak /etc/cluster/cluster.conf</code>
							</div>

						</td></tr></tbody></table></div></div><br class="table-break" /></div></div><div xml:lang="fr-FR" class="appendix" id="ap-status-check-CA" lang="fr-FR"><div class="titlepage"><div><div><h1 class="title">Cluster Service Resource Check and Failover Timeout</h1></div></div></div><a id="id810661" class="indexterm"></a><a id="id902893" class="indexterm"></a><a id="id740591" class="indexterm"></a><a id="id860355" class="indexterm"></a><div class="para">
		This appendix describes how <code class="command">rgmanager</code> monitors the status of cluster resources, and how to modify the status check interval. The appendix also describes the <code class="literal">__enforce_timeouts</code> service parameter, which indicates that a timeout for an operation should cause a service to fail.
	</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
			To fully comprehend the information in this appendix, you may require detailed understanding of resource agents and the cluster configuration file, <code class="filename">/etc/cluster/cluster.conf</code>. For a comprehensive list and description of <code class="filename">cluster.conf</code> elements and attributes, refer to the cluster schema at <code class="filename">/usr/share/system-config-cluster/misc/cluster.ng</code>, and the annotated schema at <code class="filename">/usr/share/doc/system-config-cluster-X.Y.ZZ/cluster_conf.html</code> (for example <code class="filename">/usr/share/doc/system-config-cluster-1.0.57/cluster_conf.html</code>).
		</div></div></div><div class="section" id="resource-status-check-CA"><div class="titlepage"><div><div><h2 class="title" id="resource-status-check-CA">E.1. Modifying the Resource Status Check Interval</h2></div></div></div><div class="para">
			<code class="command">rgmanager</code> checks the status of individual resources, not whole services. (This is a change from <code class="command">clumanager</code> on Red Hat Enterprise Linux 3, which periodically checked the status of the whole service.) Every 10 seconds, rgmanager scans the resource tree, looking for resources that have passed their "status check" interval.
		</div><div class="para">
			Each resource agent specifies the amount of time between periodic status checks. Each resource utilizes these timeout values unless explicitly overridden in the <code class="filename">cluster.conf</code> file using the special <code class="command">&lt;action&gt;</code> tag:
		</div><div class="para">
			<code class="command">&lt;action name="status" depth="*" interval="10" /&gt;</code>
		</div><div class="para">
			This tag is a special child of the resource itself in the <code class="filename">cluster.conf</code> file. For example, if you had a file system resource for which you wanted to override the status check interval, you could specify the file system resource in the <code class="filename">cluster.conf</code> file as follows:
		</div><pre class="screen">

  &lt;fs name="test" device="/dev/sdb3"&gt;
    &lt;action name="status" depth="*" interval="10" /&gt;
    &lt;nfsexport...&gt;
    &lt;/nfsexport&gt;
  &lt;/fs&gt;

</pre><div class="para">
			Some agents provide multiple "depths" of checking. For example, a normal file system status check (depth 0) checks whether the file system is mounted in the correct place. A more intensive check is depth 10, which checks whether you can read a file from the file system. A status check of depth 20 checks whether you can write to the file system. In the example given here, the <code class="literal">depth</code> is set to <code class="literal">*</code>, which indicates that these values should be used for all depths. The result is that the <code class="literal">test</code> file system is checked at the highest-defined depth provided by the resource-agent (in this case, 20) every 10 seconds.
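		</div><div class="para">
			For example, if the <code class="command">&lt;action&gt;</code> tag also accepts a specific numeric depth in place of <code class="literal">*</code> (an assumption; only <code class="literal">*</code> is shown in this document), you could relax just the most intensive check. The interval value below is arbitrary:
		</div><pre class="screen">

  &lt;fs name="test" device="/dev/sdb3"&gt;
    &lt;!-- assumption: a numeric depth limits the override to that depth --&gt;
    &lt;action name="status" depth="20" interval="120" /&gt;
  &lt;/fs&gt;

</pre><div class="para">
			Under that assumption, the depth 20 (write) check of the <code class="literal">test</code> file system would run every 120 seconds.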
		</div></div><div class="section" id="resource-timeout-CA"><div class="titlepage"><div><div><h2 class="title" id="resource-timeout-CA">E.2. Enforcing Resource Timeouts</h2></div></div></div><div class="para">
			There is no timeout for starting, stopping, or failing over resources. Some resources take an indeterminately long amount of time to start or stop. Unfortunately, a failure to stop (including a timeout) renders the service inoperable (failed state). You can, if desired, turn on timeout enforcement on each resource in a service individually by adding <code class="literal">__enforce_timeouts="1"</code> to the reference in the <code class="filename">cluster.conf</code> file.
		</div><div class="para">
			The following example shows a cluster service that has been configured with the <code class="literal">__enforce_timeouts</code> attribute set for the <code class="literal">netfs</code> resource. With this attribute set, if it takes more than 30 seconds to unmount the NFS file system during a recovery process, the operation will time out, causing the service to enter the failed state.
		</div><pre class="screen">

&lt;rm&gt;
  &lt;failoverdomains/&gt;
  &lt;resources&gt;
    &lt;netfs export="/nfstest" force_unmount="1" fstype="nfs" host="10.65.48.65" 
           mountpoint="/data/nfstest" name="nfstest_data" options="rw,sync,soft"/&gt;
  &lt;/resources&gt;
  &lt;service autostart="1" exclusive="0" name="nfs_client_test" recovery="relocate"&gt;
    &lt;netfs ref="nfstest_data" __enforce_timeouts="1"/&gt;
  &lt;/service&gt;
&lt;/rm&gt;

</pre></div><div class="section" id="concensus-timeout-CA"><div class="titlepage"><div><div><h2 class="title" id="concensus-timeout-CA">E.3. Changing Consensus Timeout</h2></div></div></div><div class="para">
			The consensus timeout specifies the time (in milliseconds) to wait for consensus to be achieved before starting a new round of membership configuration.
		</div><div class="para">
			When consensus is calculated automatically, the following rules will be used:
		</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
					If configuring a cluster of two or fewer nodes, consensus will be <code class="command">(token * 0.2)</code>, with a maximum of 2000 milliseconds and a minimum of 200 milliseconds.
				</div></li><li class="listitem"><div class="para">
					If configuring a cluster of three or more nodes, consensus will be <code class="command">(token + 2000 milliseconds)</code>.
				</div></li></ul></div><div class="para">
			If you let <code class="command">cman</code> configure your consensus timeout in this fashion, realize that moving from 2 to 3 (or more) nodes will require a cluster restart, since the consensus timeout will need to change to the larger value based on the token timeout.
		</div><div class="para">
			When configuring a 2-member cluster with the ultimate intention of adding more nodes at a later time, you must adjust the consensus timeout so that you do not have to restart the cluster to add the new nodes. To do this, you can edit the <code class="filename">cluster.conf</code> as follows:
		</div><pre class="screen">

&lt;totem token="X" consensus="X + 2000" /&gt;

</pre><div class="para">
			Note that the configuration parser does not calculate <code class="command">X + 2000</code> automatically. An integer value must be used rather than an equation.
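		</div><div class="para">
			For example, with a token timeout of 10000 milliseconds (an illustrative value), the consensus value would be written out explicitly as 12000:
		</div><pre class="screen">

&lt;totem token="10000" consensus="12000" /&gt;

</pre><div class="para">
			This is the value that <code class="command">token + 2000</code> yields for the intended configuration of three or more nodes.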
		</div><div class="para">
			The advantage of the optimized consensus timeout for two-node clusters is that overall failover time is reduced for the two-node case, since consensus is not a function of the token timeout.
		</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
				For two-node auto-detection in <code class="command">cman</code>, it is the number of physical nodes that matters, not the presence of the <code class="option">two_node=1</code> directive in <code class="filename">cluster.conf</code>.
			</div></div></div></div></div><div xml:lang="fr-FR" class="appendix" id="ap-upgrade-rhel4-to-rhel5-CA" lang="fr-FR"><div class="titlepage"><div><div><h1 class="title">Upgrading A Red Hat Cluster from RHEL 4 to RHEL 5</h1></div></div></div><a id="id829622" class="indexterm"></a><div class="para">
		This appendix provides a procedure for upgrading a Red Hat cluster from RHEL 4 to RHEL 5. The procedure also includes the changes required for Red Hat GFS and CLVM. For more information about Red Hat GFS, refer to <em class="citetitle">Global File System: Configuration and Administration</em>. For more information about LVM for clusters, refer to <em class="citetitle">LVM Administrator's Guide: Configuration and Administration</em>.
	</div><div class="para">
		Upgrading a Red Hat Cluster from RHEL 4 to RHEL 5 consists of stopping the cluster, converting the configuration from a GULM cluster to a CMAN cluster (only for clusters configured with the GULM cluster manager/lock manager), adding node IDs, and updating RHEL and cluster software. To upgrade a Red Hat Cluster from RHEL 4 to RHEL 5, follow these steps:
	</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
				Stop client access to cluster high-availability services.
			</div></li><li class="listitem"><div class="para">
				At each cluster node, stop the cluster software as follows:
			</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
						Stop all high-availability services.
					</div></li><li class="listitem"><div class="para">
						Run <code class="command">service rgmanager stop</code>.
					</div></li><li class="listitem"><div class="para">
						Run <code class="command">service gfs stop</code>, if you are using Red Hat GFS.
					</div></li><li class="listitem"><div class="para">
						Run <code class="command">service clvmd stop</code>, if CLVM has been used to create clustered volumes.
					</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
							If <code class="command">clvmd</code> is already stopped, an error message is displayed:
						</div><pre class="screen">
# <strong class="userinput"><code>service clvmd stop</code></strong>
Stopping clvm:                                             [FAILED]
</pre><div class="para">
							The error message is the expected result when running <code class="command">service clvmd stop</code> after <code class="command">clvmd</code> has stopped.
						</div></div></div></li><li class="listitem"><div class="para">
						Depending on the type of cluster manager (either CMAN or GULM), run the following command or commands:
					</div><div class="itemizedlist"><ul><li class="listitem"><div class="para">
								CMAN — Run <code class="command">service fenced stop; service cman stop</code>.
							</div></li><li class="listitem"><div class="para">
								GULM — Run <code class="command">service lock_gulmd stop</code>.
							</div></li></ul></div></li><li class="listitem"><div class="para">
						Run <code class="command">service ccsd stop</code>.
					</div></li></ol></div></li><li class="listitem"><div class="para">
				Disable cluster software from starting during reboot. At each node, run <code class="command">/sbin/chkconfig</code> as follows:
			</div><pre class="screen">
# <strong class="userinput"><code>chkconfig --level 2345 rgmanager off</code></strong>
# <strong class="userinput"><code>chkconfig --level 2345 gfs off</code></strong>
# <strong class="userinput"><code>chkconfig --level 2345 clvmd off</code></strong>
# <strong class="userinput"><code>chkconfig --level 2345 fenced off</code></strong>
# <strong class="userinput"><code>chkconfig --level 2345 cman off</code></strong>
# <strong class="userinput"><code>chkconfig --level 2345 ccsd off</code></strong></pre></li><li class="listitem"><div class="para">
				Edit the cluster configuration file as follows:
			</div><div class="orderedlist"><ol><li class="listitem"><div class="para">
						At a cluster node, open <code class="filename">/etc/cluster/cluster.conf</code> with a text editor.
					</div></li><li class="listitem"><div class="para">
						If your cluster is configured with GULM as the cluster manager, remove the GULM XML elements — <code class="command">&lt;gulm&gt;</code> and <code class="command"> &lt;/gulm&gt;</code> — and their content from <code class="filename">/etc/cluster/cluster.conf</code>. GULM is not supported in Red Hat Cluster Suite for RHEL 5. <a class="xref" href="#ex-gulm-xml-elements-CA">Exemple F.1, « GULM XML Elements and Content »</a> shows an example of GULM XML elements and content.
					</div></li><li class="listitem"><div class="para">
						At the <code class="command">&lt;clusternode&gt;</code> element for each node in the configuration file, insert <code class="command">nodeid="<em class="replaceable"><code>number</code></em>"</code> after <code class="command">name="<em class="replaceable"><code>name</code></em>"</code>. Use a <em class="replaceable"><code>number</code></em> value unique to that node. Inserting it there follows the format convention of the <code class="command">&lt;clusternode&gt;</code> element in a RHEL 5 cluster configuration file.
					</div><div class="note"><div class="admonition_header"><h2>Note</h2></div><div class="admonition"><div class="para">
							The <code class="command">nodeid</code> parameter is required in Red Hat Cluster Suite for RHEL 5. The parameter is optional in Red Hat Cluster Suite for RHEL 4. If your configuration file already contains <code class="command">nodeid</code> parameters, skip this step.
						</div></div></div></li><li class="listitem"><div class="para">
						When you have completed editing <code class="filename">/etc/cluster/cluster.conf</code>, save the file and copy it to the other nodes in the cluster (for example, using the <code class="command">scp</code> command).
					</div></li></ol></div></li><li class="listitem"><div class="para">
				If your cluster is a GULM cluster and uses Red Hat GFS, change the superblock of each GFS file system to use the DLM locking protocol. Use the <code class="command">gfs_tool</code> command with the <code class="option">sb</code> and <code class="option">proto</code> options, specifying <code class="option">lock_dlm</code> for the DLM locking protocol:
			</div><div class="para">
				<code class="command">gfs_tool sb <em class="replaceable"><code>device</code></em> proto lock_dlm</code>
			</div><div class="para">
				For example:
			</div><pre class="screen">
# <strong class="userinput"><code>gfs_tool sb /dev/my_vg/gfs1 proto lock_dlm</code></strong>
You shouldn't change any of these values if the filesystem is mounted.

Are you sure? [y/n] <strong class="userinput"><code>y</code></strong>

current lock protocol name = "lock_gulm"
new lock protocol name = "lock_dlm"
Done
</pre></li><li class="listitem"><div class="para">
				Update the software in the cluster nodes to RHEL 5 and Red Hat Cluster Suite for RHEL 5. You can acquire and update software through Red Hat Network channels for RHEL 5 and Red Hat Cluster Suite for RHEL 5.
			</div></li><li class="listitem"><div class="para">
				Run <code class="command">lvmconf --enable-cluster</code>.
			</div></li><li class="listitem"><div class="para">
				Enable cluster software to start upon reboot. At each node, run <code class="command">/sbin/chkconfig</code> as follows:
			</div><pre class="screen">
# <strong class="userinput"><code>chkconfig --level 2345 rgmanager on</code></strong>
# <strong class="userinput"><code>chkconfig --level 2345 gfs on</code></strong>
# <strong class="userinput"><code>chkconfig --level 2345 clvmd on</code></strong>
# <strong class="userinput"><code>chkconfig --level 2345 cman on</code></strong>

</pre></li><li class="listitem"><div class="para">
				Reboot the nodes. The RHEL 5 cluster software should start while the nodes reboot. Upon verification that the Red Hat cluster is running, the upgrade is complete.
			</div></li></ol></div><div class="example" id="ex-gulm-xml-elements-CA"><h6>Exemple F.1. GULM XML Elements and Content</h6><div class="example-contents"><pre class="screen">
&lt;gulm&gt;
  &lt;lockserver name="gulmserver1"/&gt;
  &lt;lockserver name="gulmserver2"/&gt;
  &lt;lockserver name="gulmserver3"/&gt;
&lt;/gulm&gt;


</pre></div></div><br class="example-break" /></div><div xml:lang="fr-FR" class="appendix" id="appe-Publican-Revision_History" lang="fr-FR"><div class="titlepage"><div><div><h1 class="title">Revision History</h1></div></div></div><div class="para">
		<div class="revhistory"><table border="0" width="100%" summary="Revision history"><tr><th align="left" valign="top" colspan="3"><strong>Historique des versions</strong></th></tr><tr><td align="left">Version 7.0-3</td><td align="left">Wed Jan 25 2012</td><td align="left"><span class="author"><span class="firstname">Steven</span> <span class="surname">Levine</span></span></td></tr><tr><td align="left" colspan="3">
					<table border="0" summary="Simple list" class="simplelist"><tr><td>Release of Red Hat Enterprise Linux 5.8</td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #712376</td></tr><tr><td> Adds information on disabling cluster software. </td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #712387</td></tr><tr><td> Adds information on stopping single resources of a cluster. </td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #712593</td></tr><tr><td> Adds appendix on consensus timeout. </td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #626495</td></tr><tr><td> Adds note on single site cluster support. </td></tr></table>
				</td></tr><tr><td align="left">Version 7.0-2</td><td align="left">Thu Dec 15 2011</td><td align="left"><span class="author"><span class="firstname">Steven</span> <span class="surname">Levine</span></span></td></tr><tr><td align="left" colspan="3">
					<table border="0" summary="Simple list" class="simplelist"><tr><td>Beta release of Red Hat Enterprise Linux 5.8</td></tr></table>
				</td></tr><tr><td align="left">Version 7.0-1</td><td align="left">Thu Nov 10 2011</td><td align="left"><span class="author"><span class="firstname">Steven</span> <span class="surname">Levine</span></span></td></tr><tr><td align="left" colspan="3">
					<table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #571557</td></tr><tr><td> Adds note on managing virtual machines in a cluster. </td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #742310</td></tr><tr><td> Documents new privilege level parameter for IPMI fence device. </td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #747456</td></tr><tr><td> Corrects small typographical errors throughout document. </td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #748935</td></tr><tr><td> Clarifies description of iptables firewall filters. </td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #718084</td></tr><tr><td> Provides link to Support Essentials article. </td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #749858</td></tr><tr><td> Documents support for RHEV-M REST API fence agent. </td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #569585</td></tr><tr><td> Clarifies support statement for running Samba in a cluster. </td></tr></table>

				</td></tr><tr><td align="left">Version 6.0-1</td><td align="left">Thu Jul 21 2011</td><td align="left"><span class="author"><span class="firstname">Steven</span> <span class="surname">Levine</span></span></td></tr><tr><td align="left" colspan="3">
					<table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #713256</td></tr><tr><td> Documents new fence_vmware_soap agent. </td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #446137</td></tr><tr><td> Documents procedure to configure a system to listen to luci from the internal network only. </td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #515858</td></tr><tr><td> Provides information about cluster service status check and failover timeout. </td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #560558</td></tr><tr><td> Provides rules to allow multicast traffic for cluster comunication </td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #705131</td></tr><tr><td> Updates tables of fence agent parameters to reflect Red Hat Enterprise Linux 5.7 support. </td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #705134</td></tr><tr><td> Documents non-critical resources and restart-disable configuration parameter. </td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #480292</td></tr><tr><td> Adds pointer to cluster.conf schema in documentation of resource parameters. </td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #515860</td></tr><tr><td> Updates example domains. </td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #595711</td></tr><tr><td> Fixes minor typographical errors. </td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #654176</td></tr><tr><td> Fixes minor typographical errors. </td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #675809</td></tr><tr><td> Fixes incorrect table title reference. </td></tr></table>

				</td></tr><tr><td align="left">Version 5.0-1</td><td align="left">Thu Dec 23 2010</td><td align="left"><span class="author"><span class="firstname">Steven</span> <span class="surname">Levine</span></span></td></tr><tr><td align="left" colspan="3">
					<table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #664055</td></tr><tr><td>Adds newly-supported fence agents to Fence Device Parameters appendix.</td></tr></table>

				</td></tr><tr><td align="left">Version 4.0-1</td><td align="left">Mon Mar 15 2010</td><td align="left"><span class="author"><span class="firstname">Paul</span> <span class="surname">Kennedy</span></span></td></tr><tr><td align="left" colspan="3">
					<table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #511150</td></tr><tr><td>Clarifies support for SELinux.</td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #527473</td></tr><tr><td>Adds information about cluster node-count limit.</td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #568179</td></tr><tr><td>Adds information about support of and GFS/GFS2 deployment.</td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #568483</td></tr><tr><td>Adds general support statement.</td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #526540</td></tr><tr><td>Clarifies meaning of Name parameter for fencing devices.</td></tr></table>

				</td></tr><tr><td align="left">Version 3.0-1</td><td align="left">Tue Aug 18 2009</td><td align="left"><span class="author"><span class="firstname">Paul</span> <span class="surname">Kennedy</span></span></td></tr><tr><td align="left" colspan="3">
					<table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #516128</td></tr><tr><td>Adds notes about not supporting IPV6.</td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #482936</td></tr><tr><td>Corrects Section 5.7 title and intro text.</td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #488751</td></tr><tr><td>Corrects iptables rules. Removed examples.</td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #502053</td></tr><tr><td>Corrects iptables rules for rgmanager.</td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #511150</td></tr><tr><td>Adds content stating that SELinux must be disabled for Red Hat Cluster Suite.</td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #513072</td></tr><tr><td>Adds information about limitations on using SCSI reservations as a fencing method.</td></tr></table>

				</td></tr><tr><td align="left">Version 2.0-1</td><td align="left">Tue Jan 20 2009</td><td align="left"><span class="author"><span class="firstname">Paul</span> <span class="surname">Kennedy</span></span></td></tr><tr><td align="left" colspan="3">
					<table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #458882</td></tr><tr><td>Explains Firewall settings for multicast address.</td></tr></table>
					 <table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #450777</td></tr><tr><td>Includes content about configuring failover domains to not fail back a service (an added feature).</td></tr></table>

				</td></tr><tr><td align="left">Version 1.0-1</td><td align="left">Wed May 21 2008</td><td align="left"><span class="author"><span class="firstname">Michael Hideo</span> <span class="surname">Smith</span></span></td></tr><tr><td align="left" colspan="3">
					<table border="0" summary="Simple list" class="simplelist"><tr><td>Resolves: #232215</td></tr><tr><td>Changing from XML to HTML Single with floating Table of Contents and viewable by browser</td></tr></table>

				</td></tr></table></div>

	</div></div><div class="index" id="id625066"><div class="titlepage"><div><div><h2 class="title">Index</h2></div></div></div><div class="index"><div class="indexdiv"><h3>A</h3><dl><dt>ACPI</dt><dd><dl><dt>configuring, <a class="indexterm" href="#s1-acpi-CA">Configuring ACPI For Use with Integrated Fence Devices</a></dt></dl></dd><dt>Apache HTTP Server</dt><dd><dl><dt>httpd.conf , <a class="indexterm" href="#s1-apache-inshttpd-CA">Installing and Configuring the Apache HTTP Server</a></dt><dt>setting up service, <a class="indexterm" href="#ap-httpd-service-CA">Example of Setting Up Apache HTTP Server</a></dt></dl></dd></dl></div><div class="indexdiv"><h3>B</h3><dl><dt>behavior, HA resources, <a class="indexterm" href="#ap-ha-resource-behavior-CA">HA Resource Behavior</a></dt></dl></div><div class="indexdiv"><h3>C</h3><dl><dt>cluster</dt><dd><dl><dt>administration, <a class="indexterm" href="#ch-before-config-CA">Before Configuring a Red Hat Cluster</a>, <a class="indexterm" href="#ch-mgmt-conga-CA">Managing Red Hat Cluster With Conga</a>, <a class="indexterm" href="#ch-mgmt-scc-CA">Managing Red Hat Cluster With system-config-cluster</a></dt><dt>diagnosing and correcting problems, <a class="indexterm" href="#s1-admin-problems-conga-CA">Diagnosing and Correcting Problems in a Cluster</a>, <a class="indexterm" href="#s1-admin-problems-CA">Diagnosing and Correcting Problems in a Cluster</a></dt><dt>disabling the cluster software, <a class="indexterm" href="#s1-admin-disable-CA">Disabling the Cluster Software</a></dt><dt>displaying status, <a class="indexterm" href="#s2-admin-overview-CA">Cluster Status Tool</a>, <a class="indexterm" href="#s1-admin-service-CA">Managing High-Availability Services</a></dt><dt>managing node, <a class="indexterm" href="#s1-admin-manage-nodes-conga-CA">Managing Cluster Nodes</a></dt><dt>starting, <a class="indexterm" href="#s1-starting-cluster-CA">Starting the Cluster Software</a></dt><dt>starting, stopping, restarting, and deleting, <a class="indexterm" href="#s1-admin-start-conga-CA">Starting, Stopping, and Deleting Clusters</a></dt></dl></dd><dt>cluster administration, <a class="indexterm" href="#ch-before-config-CA">Before Configuring a Red Hat Cluster</a>, <a class="indexterm" href="#ch-mgmt-conga-CA">Managing Red Hat Cluster With Conga</a>, <a class="indexterm" href="#ch-mgmt-scc-CA">Managing Red Hat Cluster With system-config-cluster</a></dt><dd><dl><dt>backing up the cluster database, <a class="indexterm" href="#s1-admin-backup-CA">Backing Up and Restoring the Cluster Database</a></dt><dt>compatible hardware, <a class="indexterm" href="#s1-hw-compat-CA">Compatible Hardware</a></dt><dt>configuring ACPI, <a class="indexterm" href="#s1-acpi-CA">Configuring ACPI For Use with Integrated Fence Devices</a></dt><dt>configuring iptables, <a class="indexterm" href="#s1-iptables-CA">Enabling IP Ports</a></dt><dt>configuring max_luns, <a class="indexterm" href="#s1-max-luns-CA">Configuring max_luns</a></dt><dt>Conga considerations, <a class="indexterm" href="#s1-conga-considerations-CA">Considerations for Using Conga</a></dt><dt>considerations for using qdisk, <a class="indexterm" href="#s1-qdisk-considerations-CA">Considerations for Using Quorum Disk</a></dt><dt>considerations for using quorum disk, <a class="indexterm" href="#s1-qdisk-considerations-CA">Considerations for Using Quorum Disk</a></dt><dt>diagnosing and correcting problems in a cluster, <a class="indexterm" href="#s1-admin-problems-conga-CA">Diagnosing and Correcting Problems in a Cluster</a>, <a 
class="indexterm" href="#s1-admin-problems-CA">Diagnosing and Correcting Problems in a Cluster</a></dt><dt>disabling the cluster software, <a class="indexterm" href="#s1-admin-disable-CA">Disabling the Cluster Software</a></dt><dt>displaying cluster and service status, <a class="indexterm" href="#s2-admin-overview-CA">Cluster Status Tool</a>, <a class="indexterm" href="#s1-admin-service-CA">Managing High-Availability Services</a></dt><dt>enabling IP ports, <a class="indexterm" href="#s1-iptables-CA">Enabling IP Ports</a></dt><dt>general considerations, <a class="indexterm" href="#s1-clust-config-considerations-CA">General Configuration Considerations</a></dt><dt>managing cluster node, <a class="indexterm" href="#s1-admin-manage-nodes-conga-CA">Managing Cluster Nodes</a></dt><dt>managing high-availability services, <a class="indexterm" href="#s1-admin-manage-ha-services-conga-CA">Managing High-Availability Services</a></dt><dt>modifying the cluster configuration, <a class="indexterm" href="#s1-admin-modify-CA">Modifying the Cluster Configuration</a></dt><dt>network switches and multicast addresses, <a class="indexterm" href="#s1-multicast-considerations-CA">Multicast Addresses</a></dt><dt>restoring the cluster database, <a class="indexterm" href="#s1-admin-backup-CA">Backing Up and Restoring the Cluster Database</a></dt><dt>SELinux, <a class="indexterm" href="#s1-selinux-CA">Red Hat Cluster Suite and SELinux</a></dt><dt>starting and stopping the cluster software, <a class="indexterm" href="#s1-admin-start-CA">Starting and Stopping the Cluster Software</a></dt><dt>starting, stopping, restarting, and deleting a cluster, <a class="indexterm" href="#s1-admin-start-conga-CA">Starting, Stopping, and Deleting Clusters</a></dt><dt>virtual machines, <a class="indexterm" href="#s1-vm-considerations-CA">Configuring Virtual Machines in a Clustered Environment</a></dt></dl></dd><dt>cluster configuration, <a class="indexterm" href="#ch-config-conga-CA">Configuring Red Hat Cluster With Conga</a></dt><dd><dl><dt>modifying, <a class="indexterm" href="#s1-admin-modify-CA">Modifying the Cluster Configuration</a></dt></dl></dd><dt>Cluster Configuration Tool</dt><dd><dl><dt>accessing, <a class="indexterm" href="#s2-cluconfig-tool-CA">Cluster Configuration Tool</a></dt></dl></dd><dt>cluster database</dt><dd><dl><dt>backing up, <a class="indexterm" href="#s1-admin-backup-CA">Backing Up and Restoring the Cluster Database</a></dt><dt>restoring, <a class="indexterm" href="#s1-admin-backup-CA">Backing Up and Restoring the Cluster Database</a></dt></dl></dd><dt>cluster resource relationships, <a class="indexterm" href="#s1-clust-rsc-desc-CA">Parent, Child, and Sibling Relationships Among Resources</a></dt><dt>cluster resource status check, <a class="indexterm" href="#ap-status-check-CA">Cluster Service Resource Check and Failover Timeout</a></dt><dt>cluster resource types, <a class="indexterm" href="#s1-clust-svc-ov-CA">Considerations for Configuring HA Services</a></dt><dt>cluster service</dt><dd><dl><dt>displaying status, <a class="indexterm" href="#s2-admin-overview-CA">Cluster Status Tool</a>, <a class="indexterm" href="#s1-admin-service-CA">Managing High-Availability Services</a></dt></dl></dd><dt>cluster service managers</dt><dd><dl><dt>configuration, <a class="indexterm" href="#s1-add-service-conga-CA">Adding a Cluster Service to the Cluster</a>, <a class="indexterm" href="#s1-add-service-CA">Adding a Cluster Service to the Cluster</a>, <a class="indexterm" href="#s1-propagate-config-CA">Propagating The 
Configuration File: New Cluster</a></dt></dl></dd><dt>cluster services, <a class="indexterm" href="#s1-add-service-conga-CA">Adding a Cluster Service to the Cluster</a>, <a class="indexterm" href="#s1-add-service-CA">Adding a Cluster Service to the Cluster</a></dt><dd><dl><dt>(voir aussi adding to the cluster configuration)</dt><dt>Apache HTTP Server, setting up, <a class="indexterm" href="#ap-httpd-service-CA">Example of Setting Up Apache HTTP Server</a></dt><dd><dl><dt>httpd.conf , <a class="indexterm" href="#s1-apache-inshttpd-CA">Installing and Configuring the Apache HTTP Server</a></dt></dl></dd></dl></dd><dt>cluster software</dt><dd><dl><dt>configuration, <a class="indexterm" href="#ch-config-conga-CA">Configuring Red Hat Cluster With Conga</a></dt><dt>disabling, <a class="indexterm" href="#s1-admin-disable-CA">Disabling the Cluster Software</a></dt><dt>installation and configuration, <a class="indexterm" href="#ch-config-scc-CA">Configuring Red Hat Cluster With system-config-cluster</a></dt><dt>starting and stopping, <a class="indexterm" href="#s1-admin-start-CA">Starting and Stopping the Cluster Software</a></dt></dl></dd><dt>cluster software installation and configuration, <a class="indexterm" href="#ch-config-scc-CA">Configuring Red Hat Cluster With system-config-cluster</a></dt><dt>cluster storage</dt><dd><dl><dt>configuration, <a class="indexterm" href="#s1-config-storage-conga-CA">Configuring Cluster Storage</a></dt></dl></dd><dt>command line tools table, <a class="indexterm" href="#s1-cmdlinetools-overview-CA">Command Line Administration Tools</a></dt><dt>configuration</dt><dd><dl><dt>HA service, <a class="indexterm" href="#s1-clust-svc-ov-CA">Considerations for Configuring HA Services</a></dt></dl></dd><dt>configuration file</dt><dd><dl><dt>propagation of, <a class="indexterm" href="#s1-propagate-config-CA">Propagating The Configuration File: New Cluster</a></dt></dl></dd><dt>configuring cluster storage , <a class="indexterm" href="#s1-config-storage-conga-CA">Configuring Cluster Storage</a></dt><dt>Conga</dt><dd><dl><dt>accessing, <a class="indexterm" href="#s2-config-cluster-CA">Configuring Red Hat Cluster Software</a></dt><dt>considerations for cluster administration, <a class="indexterm" href="#s1-conga-considerations-CA">Considerations for Using Conga</a></dt><dt>overview, <a class="indexterm" href="#s1-conga-overview-CA">Conga</a></dt></dl></dd><dt>Conga overview, <a class="indexterm" href="#s1-conga-overview-CA">Conga</a></dt></dl></div><div class="indexdiv"><h3>F</h3><dl><dt>failover timeout, <a class="indexterm" href="#ap-status-check-CA">Cluster Service Resource Check and Failover Timeout</a></dt><dt>feedback, <a class="indexterm" href="#s1-intro-feedback-CA">Feedback</a></dt></dl></div><div class="indexdiv"><h3>G</h3><dl><dt>general</dt><dd><dl><dt>considerations for cluster administration, <a class="indexterm" href="#s1-clust-config-considerations-CA">General Configuration Considerations</a></dt></dl></dd></dl></div><div class="indexdiv"><h3>H</h3><dl><dt>HA service configuration</dt><dd><dl><dt>overview, <a class="indexterm" href="#s1-clust-svc-ov-CA">Considerations for Configuring HA Services</a></dt></dl></dd><dt>hardware</dt><dd><dl><dt>compatible, <a class="indexterm" href="#s1-hw-compat-CA">Compatible Hardware</a></dt></dl></dd><dt>HTTP services</dt><dd><dl><dt>Apache HTTP Server</dt><dd><dl><dt>httpd.conf, <a class="indexterm" href="#s1-apache-inshttpd-CA">Installing and Configuring the Apache HTTP Server</a></dt><dt>setting up, <a class="indexterm" 
href="#ap-httpd-service-CA">Example of Setting Up Apache HTTP Server</a></dt></dl></dd></dl></dd></dl></div><div class="indexdiv"><h3>I</h3><dl><dt>integrated fence devices</dt><dd><dl><dt>configuring ACPI, <a class="indexterm" href="#s1-acpi-CA">Configuring ACPI For Use with Integrated Fence Devices</a></dt></dl></dd><dt>introduction, <a class="indexterm" href="#ch-intro-CA">Introduction</a></dt><dd><dl><dt>other Red Hat Enterprise Linux documents, <a class="indexterm" href="#ch-intro-CA">Introduction</a></dt></dl></dd><dt>IP ports</dt><dd><dl><dt>enabling, <a class="indexterm" href="#s1-iptables-CA">Enabling IP Ports</a></dt></dl></dd><dt>iptables</dt><dd><dl><dt>configuring, <a class="indexterm" href="#s1-iptables-CA">Enabling IP Ports</a></dt></dl></dd><dt>iptables firewall, <a class="indexterm" href="#s1-iptables_firewall-CA">Configuring the iptables Firewall to Allow Cluster Components</a></dt></dl></div><div class="indexdiv"><h3>M</h3><dl><dt>max_luns</dt><dd><dl><dt>configuring, <a class="indexterm" href="#s1-max-luns-CA">Configuring max_luns</a></dt></dl></dd><dt>multicast addresses</dt><dd><dl><dt>considerations for using with network switches and multicast addresses, <a class="indexterm" href="#s1-multicast-considerations-CA">Multicast Addresses</a></dt></dl></dd><dt>multicast traffic, enabling, <a class="indexterm" href="#s1-iptables_firewall-CA">Configuring the iptables Firewall to Allow Cluster Components</a></dt></dl></div><div class="indexdiv"><h3>P</h3><dl><dt>parameters, fence device, <a class="indexterm" href="#ap-fence-device-param-CA">Fence Device Parameters</a></dt><dt>parameters, HA resources, <a class="indexterm" href="#ap-ha-resource-params-CA">HA Resource Parameters</a></dt><dt>power controller connection, configuring, <a class="indexterm" href="#ap-fence-device-param-CA">Fence Device Parameters</a></dt><dt>power switch, <a class="indexterm" href="#ap-fence-device-param-CA">Fence Device Parameters</a></dt><dd><dl><dt>(voir aussi power controller)</dt></dl></dd></dl></div><div class="indexdiv"><h3>Q</h3><dl><dt>qdisk</dt><dd><dl><dt>considerations for using, <a class="indexterm" href="#s1-qdisk-considerations-CA">Considerations for Using Quorum Disk</a></dt></dl></dd><dt>quorum disk</dt><dd><dl><dt>considerations for using, <a class="indexterm" href="#s1-qdisk-considerations-CA">Considerations for Using Quorum Disk</a></dt></dl></dd></dl></div><div class="indexdiv"><h3>R</h3><dl><dt>relationships</dt><dd><dl><dt>cluster resource, <a class="indexterm" href="#s1-clust-rsc-desc-CA">Parent, Child, and Sibling Relationships Among Resources</a></dt></dl></dd></dl></div><div class="indexdiv"><h3>S</h3><dl><dt>SELinux</dt><dd><dl><dt>configuring, <a class="indexterm" href="#s1-selinux-CA">Red Hat Cluster Suite and SELinux</a></dt></dl></dd><dt>starting the cluster software, <a class="indexterm" href="#s1-starting-cluster-CA">Starting the Cluster Software</a></dt><dt>status check, cluster resource, <a class="indexterm" href="#ap-status-check-CA">Cluster Service Resource Check and Failover Timeout</a></dt><dt>System V init , <a class="indexterm" href="#s1-admin-start-CA">Starting and Stopping the Cluster Software</a></dt></dl></div><div class="indexdiv"><h3>T</h3><dl><dt>table</dt><dd><dl><dt>command line tools, <a class="indexterm" href="#s1-cmdlinetools-overview-CA">Command Line Administration Tools</a></dt></dl></dd><dt>tables</dt><dd><dl><dt>HA resources, parameters, <a class="indexterm" href="#ap-ha-resource-params-CA">HA Resource Parameters</a></dt><dt>power controller 
connection, configuring, <a class="indexterm" href="#ap-fence-device-param-CA">Fence Device Parameters</a></dt></dl></dd><dt>timeout failover, <a class="indexterm" href="#ap-status-check-CA">Cluster Service Resource Check and Failover Timeout</a></dt><dt>troubleshooting</dt><dd><dl><dt>diagnosing and correcting problems in a cluster, <a class="indexterm" href="#s1-admin-problems-conga-CA">Diagnosing and Correcting Problems in a Cluster</a>, <a class="indexterm" href="#s1-admin-problems-CA">Diagnosing and Correcting Problems in a Cluster</a></dt></dl></dd><dt>types</dt><dd><dl><dt>cluster resource, <a class="indexterm" href="#s1-clust-svc-ov-CA">Considerations for Configuring HA Services</a></dt></dl></dd></dl></div><div class="indexdiv"><h3>U</h3><dl><dt>upgrading, RHEL 4 to RHEL 5, <a class="indexterm" href="#ap-upgrade-rhel4-to-rhel5-CA">Upgrading A Red Hat Cluster from RHEL 4 to RHEL 5</a></dt></dl></div><div class="indexdiv"><h3>V</h3><dl><dt>virtual machines, in a cluster, <a class="indexterm" href="#s1-vm-considerations-CA">Configuring Virtual Machines in a Clustered Environment</a></dt></dl></div></div></div></div></body></html>