C H A P T E R  2

Installing and Configuring the nhinstall Tool

The nhinstall tool enables you to install and configure the software and services on your cluster nodes. You install and configure the nhinstall tool on the installation server.

You can use the nhinstall tool to install a cluster that consists of master-eligible nodes, diskless nodes, and dataless nodes.

For information about setting up the installation environment and configuring the nhinstall tool, see the sections that follow.


Overview of Installing With the nhinstall Tool

The nhinstall tool enables you to install and configure the Foundation Services on the cluster. This tool must be installed on an installation server. The installation server must be connected to your cluster. For details on how to connect nodes of the cluster and the installation server, see the Netra High Availability Suite Foundation Services 2.1 7/05 Hardware Guide.

The nhinstall tool runs on the installation server. This tool installs the Solaris Operating System and the Foundation Services on the cluster nodes. For a description of the types of nodes in a cluster, see the Netra High Availability Suite Foundation Services 2.1 7/05 Overview.

The following table lists the tasks for installing the software with the nhinstall tool. Perform the tasks in the order shown.


TABLE 2-1 Tasks for Installing the Software by Using the nhinstall Tool

1. Choose the software. See Choosing Your Software.
2. Prepare the installation environment. See Preparing the Installation Environment.
3. Install the nhinstall tool on the installation server. See Installing the nhinstall Tool.
4. Configure the nhinstall tool. See Configuring the nhinstall Tool.
5. Install the software by using the nhinstall tool. See Chapter 3.
6. Verify that the cluster is configured correctly. See Verifying the Installation.



Preparing the Installation Environment

Before installing the nhinstall tool on the installation server, you must create a Solaris distribution on the installation server. You must also prepare the installation server to install the Solaris Operating System and the Foundation Services on the cluster nodes.


To Create a Solaris Distribution on the Installation Server

To install the Solaris Operating System on the cluster, create a Solaris distribution on the installation server. The Solaris distribution is used to install the Solaris Operating System on the cluster nodes. If you are installing more than one Solaris distribution on the cluster, perform the steps in the procedure for each Solaris distribution.

1. Make sure that you have at least 1.5 Gbytes of free disk space on the installation server.

2. Log in as superuser on the installation server.

3. Create a directory for the Solaris distribution:

# mkdir Solaris-distribution-dir

where Solaris-distribution-dir is the directory where the distribution is to be stored on the installation server.

4. Change to the directory where the setup_install_server command is located:

# cd Solaris-dir/Solaris_x/Tools

where Solaris-dir is the directory that contains the Solaris distribution media, for example, a mounted Solaris DVD, and Solaris_x corresponds to the version of the Solaris Operating System on that media.

5. Run the setup_install_server command:

# ./setup_install_server Solaris-distribution-dir

For more information about the setup_install_server command, see the setup_install_server(1M) man page.
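
Putting the preceding steps together, a minimal worked example might look like the following. The media mount point, Solaris version directory, and target directory are illustrative only; substitute the paths for your own environment.

# mkdir -p /export/install/solaris
# cd /cdrom/cdrom0/Solaris_9/Tools
# ./setup_install_server /export/install/solaris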


To Prepare the Installation Server

Before you begin the installation process, make sure that the installation server is configured correctly.

1. Configure the installation server as described in the Netra High Availability Suite Foundation Services 2.1 7/05 Hardware Guide.

2. If you are planning to install remotely from another system, open a shell window to connect to the installation server.

3. Confirm that the Solaris software packages that contain Perl 5 are installed on the installation server.

Use the pkginfo command to check for the SUNWpl5u, SUNWpl5p, and SUNWpl5m Perl packages.
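
For example, the following command reports whether each of these Perl packages is installed, using the package names listed above:

# pkginfo SUNWpl5u SUNWpl5p SUNWpl5m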

4. Delete any existing entries for your cluster nodes from the network configuration files on the installation server, such as the /etc/hosts, /etc/ethers, and /etc/bootparams files.

5. Disable the installation server as a router by creating an /etc/notrouter file:

# touch /etc/notrouter

If a system running the Solaris Operating System has two network interfaces, the system is configured as a router by default. However, for security reasons, a Foundation Services cluster network must not be routed.

6. Modify the /etc/nsswitch.conf file on the installation server so that files is positioned before nis in the hosts, ethers, and bootparams entries:

hosts: files nis
ethers: files nis
bootparams: files nis

7. From the installation server, open a terminal window to connect to the console of each cluster node.

You can also connect to the consoles from the system that you use to connect to the installation server.


Installing the nhinstall Tool

Install the package containing the nhinstall tool on the installation server, as described in the following procedure.


To Install the nhinstall Tool

1. Log in to the installation server as superuser.

2. Install the nhinstall package, SUNWnhins:

# pkgadd -d /software-distribution-dir/NetraHAS2.1/Packages/ SUNWnhins

where software-distribution-dir is the directory that contains the Foundation Services packages.

3. To access the man pages on the installation server, install the man page package, SUNWnhman:

# pkgadd -d /software-distribution-dir/NetraHAS2.1/Packages/ SUNWnhman

where software-distribution-dir is the directory that contains the Foundation Services packages.


Configuring the nhinstall Tool

After you have installed the package containing the nhinstall tool, configure the nhinstall tool to install the Foundation Services on your cluster. To configure the nhinstall tool, modify its configuration files: env_installation.conf, cluster_definition.conf, and, if you need to install additional software packages, addon.conf.

The following sections describe the main configuration options of the nhinstall tool in detail.

Configuring the Disk Partitions on Master-Eligible Nodes

Use the SLICE or SHARED_SLICE parameters to specify the disk partitions on the master-eligible nodes.

If you plan to use Netra High Availability Suite for replicating NFS-served data over IP, use the SLICE parameter for all partitions.

If you plan to locate NFS-served data on shared disks, use the SHARED_SLICE parameter for the partition storing this data, and use SLICE for the local partitions (the root file system, for example).

TABLE 2-2 through TABLE 2-4 list the space requirements for sample disk partitions of master-eligible nodes in a cluster with diskless nodes, either with IP-replicated data or with a shared disk.


TABLE 2-2 Example Disk Partitions of Master-Eligible Nodes With NFS-Served Data Replicated Over IP

Partition 0 (/): The root file system, boot partition, and volume management software. This partition must be mounted with the logging option. Example size: 2 Gbytes minimum.

Partition 1 (swap): Swap space. Example size: 1 Gbyte, which is the minimum size when physical memory is less than 1 Gbyte.

Partition 3 (/export): Exported file system reserved for diskless nodes. The /export file system must be mounted with the logging option. This partition is further sliced if diskless nodes are added to the cluster. Example size: 1 Gbyte plus 100 Mbytes per diskless node.

Partition 4 (/SUNWcgha/local): Reserved for NFS status files, services, and configuration files. The /SUNWcgha/local file system must be mounted with the logging option. Example size: 2 Gbytes.

Partition 5 (reserved for Reliable NFS internal use): Bitmap partition reserved for nhcrfsd. This partition is associated with the /export file system. Example size: 1 Mbyte.

Partition 6 (reserved for Reliable NFS internal use): Bitmap partition reserved for nhcrfsd. This partition is associated with the /SUNWcgha/local file system. Example size: 1 Mbyte.

Partition 7 (replica or /test1): If you have configured volume management, this partition must be named replica; otherwise it is mounted, for example as /test1, with the logging option. See Configuring Volume Management. Example size: the remaining space.



TABLE 2-3 Local Disk Partitions of Master-Eligible Nodes With NFS-Served Data on Shared Disks

Partition 0 (/): The root file system, boot partition, and volume management software. This partition must be mounted with the logging option. Example size: 2 Gbytes minimum.

Partition 1 (swap): Swap space. Example size: 1 Gbyte, which is the minimum size when physical memory is less than 1 Gbyte.

Partition 7 (replica): Partition used to store the SVM metadevice state database. Example size: 8 Mbytes.



TABLE 2-4 Shared Disk Partitions of Master-Eligible Nodes With NFS-Served Data on Shared Disks

Partition 0 (/export): Exported file system reserved for diskless nodes. The /export file system must be mounted with the logging option. This partition is further sliced if diskless nodes are added to the cluster. Example size: 1 Gbyte plus 100 Mbytes per diskless node.

Partition 1 (/SUNWcgha/local): Reserved for NFS status files, services, and configuration files. The /SUNWcgha/local file system must be mounted with the logging option. Example size: 2 Gbytes.

Partition 7 (replica): Partition used to store the SVM metadevice state database. Example size: 8 Mbytes.




Note - Partition 2 is reserved for overlapping the entire disk. It is automatically created and must not be defined.
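
As an illustration only, SLICE entries for the mounted partitions in TABLE 2-2 might resemble the following sketch (bitmap and replica slices omitted). The field order follows the SLICE example shown in To Configure Advanced Volume Management later in this chapter, but the slice names, sizes in Mbytes, and mount options shown here are assumptions; check the cluster_definition.conf(4) man page for the exact syntax.

SLICE=0 2048 /               -  logging
SLICE=1 1024 swap            -  -
SLICE=3 1124 /export         -  logging
SLICE=4 2048 /SUNWcgha/local -  logging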



Configuring Disk Partitions on Dataless Nodes

Configure the SLICE parameter in the cluster_definition.conf file to specify the disk partitions on the dataless nodes.

TABLE 2-5 lists the space requirements for example disk partitions of dataless nodes.


TABLE 2-5 Example Disk Partitions of Dataless Nodes

Partition 0 (/): The root file system, boot partition, and volume management software. This partition must be mounted with the logging option. Example size: 2 Gbytes minimum.

Partition 1 (swap): Swap space. Example size: 1 Gbyte, which is the minimum size when physical memory is less than 1 Gbyte.




Note - Partition 2 is reserved for overlapping the entire disk. It is automatically created and must not be defined.



Mirroring Shared Disks

Configure the MIRROR parameter to mirror a shared disk to another shared disk.
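
As a purely hypothetical illustration, a MIRROR entry might pair two shared disks as follows. The disk names and the field layout are assumptions; see the cluster_definition.conf(4) man page for the exact syntax.

MIRROR=c2t0d0 c3t0d0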

Configuring the Disk Fencing

To prevent simultaneous access to the shared data in a split-brain situation, SCSI disk reservation is used. The SCSI reservation version is configured by the SHARED_DISK_FENCING parameter, which can be set to SCSI2 or SCSI3.
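
For example, to select SCSI-3 reservations, set:

SHARED_DISK_FENCING=SCSI3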

Configuring the Scoreboard Bitmaps

You can configure the nhinstall tool to store the scoreboard bitmaps of IP-replicated partitions either in memory or on the disk.

If the BITMAP_IN_MEMORY parameter is set to YES in the cluster_definition.conf file, the bitmaps are configured to be stored in memory. When the master node is shut down gracefully, the scoreboard bitmap is saved on the disk.

If the BITMAP_IN_MEMORY parameter is set to NO, the bitmaps are configured to be written on the disk at each update.
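
For example, to keep the scoreboard bitmaps in memory, add the following line to the cluster_definition.conf file:

BITMAP_IN_MEMORY=YES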

Configuring the NFS Option noac

You can configure the nhinstall tool to use the NFS option noac for the directories that are mounted remotely. The noac option suppresses data and attribute caching.
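
As a generic illustration of the noac option (the server name here is hypothetical, and nhinstall generates the actual entries for you), an NFS mount that suppresses caching has an /etc/vfstab entry of the following form:

master-cgtp:/SUNWcgha/local - /SUNWcgha/local nfs - yes noac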

Configuring a Direct Link Between the Master-Eligible Nodes

You can configure the nhinstall tool to set up a direct link between the master-eligible nodes by using the serial port on each master-eligible node. Make sure that you have connected the serial ports with a cable before configuring the direct link. This connection prevents a split brain situation, where there are two master nodes in the cluster because the network between the master node and the vice-master node fails. For an illustration of the connection between the master-eligible nodes, see the Netra High Availability Suite Foundation Services 2.1 7/05 Hardware Guide.

The DIRECT_LINK parameter in the cluster_definition.conf file enables you to define the serial device on each master-eligible node, the speed of the serial line, and the heartbeat (in seconds) checking the link between the two nodes. For example:


DIRECT_LINK=/dev/ttya /dev/ttya 115200 20

Configuring Automatic Reboot for the Master-Eligible Nodes

You can configure the nhinstall tool to reboot the master-eligible nodes automatically during the installation.

Configuring the Carrier Grade Transport Protocol

You can configure the nhinstall tool to install and configure the Carrier Grade Transport Protocol (CGTP).

Configuring the Environment for Diskless Nodes

If you define diskless nodes with the NODE or DISKLESS parameters in the cluster_definition.conf file, the nhinstall tool installs the Solaris services for the diskless nodes. The tool also configures the boot options for each diskless node on the master-eligible nodes.

If you do not define any diskless nodes in the cluster_definition.conf file, the nhinstall tool gives you the choice of installing the Solaris services for diskless nodes anyway. Type y if you plan to add diskless nodes to the cluster at a later date. Otherwise, the nhinstall tool does not install the Solaris services for the diskless nodes on the master-eligible nodes. In this case, you cannot use nhinstall to add diskless nodes to the cluster at a later date without reinstalling the software. Therefore, try to include possible future nodes in your cluster configuration.
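
As a hypothetical illustration of a diskless node definition in the cluster_definition.conf file (the node ID and Ethernet address are invented, and the exact field layout is documented in the cluster_definition.conf(4) man page):

DISKLESS=30 08:00:20:aa:bb:cc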



Note - You can manually add diskless nodes to a running cluster as described in Chapter 8.



Configuring the Boot Policy for Diskless Nodes

You can configure the nhinstall tool to have the diskless nodes in the cluster boot dynamically, statically, or by using the node's client ID. The DISKLESS_BOOT_POLICY parameter in the cluster_definition.conf configuration file enables you to choose a boot policy for the diskless nodes in your cluster. All diskless nodes in a cluster are configured with the same boot policy.

The following table summarizes the boot policies supported by the nhinstall tool.


TABLE 2-6 Boot Policies for Diskless Nodes

DHCP dynamic boot policy: An IP address is dynamically assigned from a pool of IP addresses when the diskless node is booted. If you set the DISKLESS_BOOT_POLICY parameter to DHCP_DYNAMIC, nhinstall configures the diskless nodes with a dynamic boot policy. This option is configured by default if you do not define the DISKLESS_BOOT_POLICY parameter. This option is not recommended for a production cluster.

DHCP static boot policy: The IP address is based on the Ethernet address of the diskless node. The Ethernet address is specified in the cluster_definition.conf file. If you set the DISKLESS_BOOT_POLICY parameter to DHCP_STATIC, nhinstall configures the diskless nodes with a static boot policy.

DHCP client ID boot policy: The IP address is generated from the diskless node's client ID in a CompactPCI server. If you set the DISKLESS_BOOT_POLICY parameter to DHCP_CLIENT_ID, nhinstall configures the diskless nodes to use the client ID to generate the IP address.


For further information about the boot policies for diskless nodes, see the Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
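
For example, to use the static boot policy, add the following line to the cluster_definition.conf file:

DISKLESS_BOOT_POLICY=DHCP_STATIC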

Configuring DHCP Configuration Files Locally on Master-Eligible Nodes

By default, nhinstall configures diskless nodes so that the DHCP configuration files are stored in the highly available directory /SUNWcgha/remote/var/dhcp on the master-eligible nodes. You can configure the cluster to put the DHCP configuration files in a local directory, /var/dhcp, on the master-eligible nodes by adding the following line to the cluster_definition.conf file:

REPLICATED_DHCP_FILES=NO

When you install with nhinstall and with this feature enabled, nhinstall copies the DHCP configuration files from the master to the vice-master node.



Note - Do not use this feature if the DHCP configuration is dynamic, that is, if information is stored in the DHCP configuration files at run time.



If you enable this feature, each time you update the DHCP configuration files on the master after initial cluster installation, you must copy these files to the vice-master node. For more information, see the cluster_definition.conf(4) and nhadm(1M) man pages.

Configuring the Watchdog Timer

You can configure the nhinstall tool to install the Foundation Services Watchdog Timer on each node in the cluster.

Set the USE_WDT parameter to YES in the cluster_definition.conf file only if you are using Netra servers that have hardware watchdogs at the Lights-Off Management (LOM) level. You might need to install additional software packages. For further information, see the addon.conf.template file. When this parameter is set to YES, the Foundation Services Watchdog Timer is installed and configured.

Set the USE_WDT parameter to NO if you are using Netra servers with hardware watchdogs at the OpenBoottrademark PROM (OBP) level. These hardware watchdogs are monitored by the server's software. For a list of the types of watchdogs of different Netra servers, see the Netra High Availability Suite Foundation Services 2.1 7/05 Hardware Guide.
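
For example, on Netra servers with hardware watchdogs at the LOM level, set the following line in the cluster_definition.conf file:

USE_WDT=YES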

Configuring the Default Router to the Public Network

By default, nhinstall configures the installation server to be the default router to the public network. To choose another machine as the router to the public network, specify the IP address of the default router of your choice in the cluster_definition.conf file as follows:

DEFAULT_ROUTER_IP=IP address
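
For example, with a hypothetical router address of 192.168.12.1:

DEFAULT_ROUTER_IP=192.168.12.1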

For more information, see the cluster_definition.conf(4) man page.

Configuring the Cluster IP Addresses

You can configure IPv4 addresses of any class for the nodes of your cluster by using the nhinstall tool. The CLUSTER_NETWORK parameter enables you to specify the netmask and the subnets for the NIC0, NIC1, and cgtp0 interfaces of your nodes. For example, to define node addresses with a Class B netmask (255.255.0.0), the CLUSTER_NETWORK parameter is defined as follows:


CLUSTER_NETWORK=255.255.0.0 192.168.0.0 192.169.0.0 192.170.0.0

Configuring the Floating External Address of the Master Node

You can configure the nhinstall tool to set a floating external address. A floating external address is an external IP address that is assigned to the master role rather than to a specific node. This IP address enables you to connect to the current master node from systems outside the cluster network.

As an option, IPMP (IP Multipathing) can be used to support a floating external address on dual redundant links.

If you specify an IP address and a network interface for the external address parameter in the cluster_definition.conf file, the floating external address is configured. The External Address Manager daemon, nheamd, which monitors floating addresses and IPMP groups on the master-eligible nodes, is also installed. This daemon ensures that the external IP address is always assigned to the current master node. For more information, see the nheamd(1M) man page.

If you do not configure the external address parameter in the cluster_definition.conf configuration file, the floating external address is not created. Therefore, the master node cannot be accessed by systems outside the cluster network.

Configuring External IP Addresses for Cluster Nodes

You can configure the nhinstall tool to set external IP addresses on network interfaces to a public network. Then, the nodes can be accessed from systems outside the cluster network.


To Configure External IP Addresses for Cluster Nodes

1. Set the PUBLIC_NETWORK parameter in the cluster_definition.conf file, specifying the subnet and netmask for the public network. A sketch is shown after this procedure.

This parameter also configures the network interface of the installation server. Therefore, the SERVER_IP parameter must be an IP address on the same subnetwork as the one defined for PUBLIC_NETWORK. The SERVER_IP parameter is defined in the env_installation.conf file. For more information, see the env_installation.conf(4) man page.

2. Specify the external IP address, external node name, and the external network interface for each NODE definition. For example:

MEN=10 08:00:20:f9:c5:54 - - - - FSNode1 192.168.12.5 hme1
MEN=20 08:00:20:f9:a8:12 - - - - FSNode2 192.168.12.6 hme1
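
The following sketch shows what Step 1 might look like. The subnet, netmask, and server address are hypothetical, and the exact field order of the PUBLIC_NETWORK parameter should be checked against the cluster_definition.conf(4) man page.

In the cluster_definition.conf file:

PUBLIC_NETWORK=192.168.12.0 255.255.255.0

In the env_installation.conf file:

SERVER_IP=192.168.12.100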

Sharing Physical Interfaces Between CGTP and IPMP Using VLAN

Physical links can be shared between CGTP and IPMP only when CGTP is used over a VLAN. Before using this configuration, refer to the detailed information about Solaris VLANs and IPMP in the Solaris System Administration Guide: IP Services. Not all network interfaces support VLANs, so check that your interfaces support this use. Solaris shows each VLAN interface as a separate physical interface, even though there is only one physical interface. Because VLANs are configured by using special names for the interfaces, you must define the topology, and the interface names for that topology, before you launch the installation.

For example, consider the three-node cluster shown in FIGURE 2-1. Each master-eligible node (MEN) has three ce NICs. On both MENs, ce0 is connected to switch 1, ce1 to switch 2, and ce2 to switch 3. The external router, to which clients connect, is connected to switches 2 and 3, which restricts external access to ce1 and ce2. CGTP can use any two NICs; in this case, ce0 and ce1 are chosen, making ce1 a shared interface.


FIGURE 2-1 Cluster Sharing CGTP and IPMP

Diagram shows a basic Foundation Services cluster


The VLAN is created with VID 123 over the interface ce1 by plumbing an interface called ce123001. In this example, ce0 and ce123001 are used for CGTP, and ce1 and ce2 are used for IPMP. Create the tagged VLAN on switch 2 (for information about how to create a VLAN, refer to your switch documentation), create a cluster_definition.conf file that uses these interface names, and launch the installation as in any other configuration.
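
As an illustration of the Solaris VLAN interface naming assumed in this example, the VLAN instance number is the VID multiplied by 1000 plus the physical instance number, so VLAN 123 on ce1 is plumbed as ce123001:

# ifconfig ce123001 plumb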

Configuring Volume Management

The volume management feature enables you to manage disk partitions on the cluster nodes as logical volumes, for example, to mirror them.

The volume management software that is installed depends on the version of the Solaris Operating System that you plan to install. For information on supported software versions, see the Netra High Availability Suite Foundation Services 2.1 7/05 Release Notes.

For a Netra 20 server with a Fibre Channel-Arbitrated Loop (FC-AL) disk as a master-eligible node, you must install the Volume Management feature of the Solaris Operating System. For more information, see the Netra High Availability Suite Foundation Services 2.1 7/05 Hardware Guide.

To install the Volume Management software on the nodes of your cluster, perform one of the following procedures:


To Configure Basic Volume Management for Netra 20 Servers With FC-AL Disks

You can use the nhinstall tool to install and configure volume management for Netra 20 servers with FC-AL disks. Configure the nhinstall tool to support logical disk partitions for FC-AL disks by installing the volume management feature as follows:

1. In the env_installation.conf file, set SOLARIS_INSTALL to ALL.

2. Configure the cluster_definition.conf file:

    a. Set LOGICAL_SLICE_SUPPORT to YES.

    b. Set the SLICE definition for the last partition to replica.

For a detailed example, see the cluster_definition.conf(4) man page.

3. Run the nhinstall tool to install the Solaris Operating System and Foundation Services on the master-eligible nodes.

For more information, see To Launch the nhinstall Tool.

The nhinstall tool installs and configures the appropriate volume management software depending on the version of the Solaris Operating System you chose to install.
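
Summarizing the settings named in this procedure, a minimal sketch of the relevant lines is shown below. See the cluster_definition.conf(4) man page for a complete SLICE example with a replica partition.

In the env_installation.conf file:

SOLARIS_INSTALL=ALL

In the cluster_definition.conf file:

LOGICAL_SLICE_SUPPORT=YES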


To Configure Advanced Volume Management

To configure advanced volume management, install the Solaris Operating System and configure the Volume Management feature to suit your needs. Then configure nhinstall to install only the Foundation Services.

1. Install the Solaris Operating System with volume management on the master-eligible nodes.

For more information, see the documentation for your volume management software.

This documentation is available at http://docs.sun.com.



Note - Install the same packages of the same version of the Solaris Operating System on both master-eligible nodes. Create identical disk partitions on the disks of both master-eligible nodes.



2. Configure a physical Ethernet card interface that corresponds to the first network interface, NIC0.

3. Configure the /etc/netmasks file.

See the netmasks(4) man page.

4. Configure the sizes of the disk partitions.

For more information, see TABLE 2-2.

5. In the env_installation.conf file, set SOLARIS_INSTALL to DISKLESS_DATALESS_ONLY.

The Solaris Operating System is configured on the dataless nodes and the Solaris services are configured for the diskless environment.

6. In the cluster_definition.conf file, do the following:

    a. Set the LOGICAL_SLICE_SUPPORT parameter to NO.

    b. For the SLICE parameter, specify the metadevice names of the disk partitions.

    For example:


    SLICE=d1 2048 /               -        logging
    

    For details on the SLICE parameter, see the cluster_definition.conf(4) man page.

7. Run the nhinstall tool to install the Foundation Services on the master-eligible nodes.

For more information, see To Launch the nhinstall Tool.

Specifying the Version of the Operating System to be Installed on the Cluster

Some hardware types require specific or modified versions of the Solaris Operating System that nhinstall is unable to detect automatically. In these cases, you must explicitly force nhinstall to recognize the version of the operating system you want to install on the cluster. To determine if your cluster hardware requires such action, see the Netra High Availability Suite Foundation Services 2.1 7/05 Hardware Guide.

Selecting the Solaris Package Set to be Installed

To install a Solaris package set on the cluster nodes other than the default package set, specify the Solaris package set to be installed. For a list of the contents of the default package set, see the /opt/SUNWcgha/config.standard/nodeprof.conf.template file. For information about installing a Solaris package set on cluster nodes, see the nodeprof.conf(4) man page. For information about installing a customized Solaris package set on the diskless nodes, see the diskless_nodeprof.conf(4) man page. For information about installing a customized Solaris package set on the dataless nodes, see the dataless_nodeprof.conf(4) man page.

Installing a Different Version of the Operating System on Diskless and Dataless Nodes

To install a version of the Solaris operating system on diskless nodes that is different from the one you are installing on master-eligible nodes, specify the location of the two Solaris distributions in the env_installation.conf file. For example:

SOLARIS_DIR=/export/su28u7fcs
DISKLESS_SOLARIS_DIR=/export/su29HW8a

To install a version of the Solaris operating system on dataless nodes that is different from the one you are installing on master-eligible nodes, specify the location of the two Solaris distributions in the env_installation.conf file. For example:

SOLARIS_DIR=/export/su28u7fcs
DATALESS_SOLARIS_DIR=/export/su29HW8a

By default, the values provided to the DISKLESS_SOLARIS_DIR and DATALESS_SOLARIS_DIR parameters are set to be the same as that provided to the SOLARIS_DIR parameter. For more information, see the env_installation.conf(4) man page.

Configuring a Data Management Policy

There are three data management policies available with the Foundation Services. By default, the nhinstall tool sets the data management policy to be Integrity for data replication over IP, and Availability when using shared disks. To choose another policy, change the value of the following variable in the cluster_definition.conf file.

DATA_MGT_POLICY=INTEGRITY | AVAILABILITY | ADAPTABILITY
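
For example, to select the adaptability policy:

DATA_MGT_POLICY=ADAPTABILITY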

For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite Foundation Services 2.1 7/05 Overview.

Configuring a Masterless Cluster

By default, diskless and dataless nodes reboot if there is no master in the cluster. If you do not want the diskless and dataless nodes to reboot in this situation, add the following line to the cluster_definition.conf file:

MASTER_LOSS_DETECTION=YES

For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite Foundation Services 2.1 7/05 Overview.

Configuring Reduced Duration of Disk Synchronization

By default, nhinstall enables reduced disk synchronization. This feature reduces the time taken for a full synchronization between the master and the vice-master disks by synchronizing only the blocks that contain replicated data.



Note - Only use this feature with UFS file systems.



To disable this feature and have all blocks replicated, add the following line to the cluster_definition.conf file:

SLICE_SYNC_TYPE=RAW

For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite Foundation Services 2.1 7/05 Overview.

Configuring Sanity Check of Replicated Slices

To activate the sanity check of replicated slices, add the following line to the cluster_definition.conf file:

CHECK_REPLICATED_SLICES=YES

By default, the nhinstall tool does not activate this feature. For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite Foundation Services 2.1 7/05 Overview.

Configuring Delayed Synchronization

By default, disk synchronization starts automatically when the cluster software is installed. If you want to delay the start of disk synchronization, add the following line to the cluster_definition.conf file:

SYNC_FLAG=NO

You can trigger disk synchronization at a time of your choice using the nhenablesync tool. For more information, see the cluster_definition.conf(4) and nhenablesync(1M) man pages and Netra High Availability Suite Foundation Services 2.1 7/05 Overview.

Configuring Serialized Slice Synchronization

By default, nhinstall configures the cluster so that slices are synchronized in parallel. Synchronizing slices one slice at a time reduces the network and disk overhead but increases the time it takes for the vice-master to synchronize with the master. During this time, the vice-master is not eligible to take on the role of master. To enable serialized slice synchronization, add the following line to the cluster_definition.conf file:

SERIALIZE_SYNC=YES

For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite Foundation Services 2.1 7/05 Overview.

Installing the Node Management Agent (NMA)

By default, the Node Management Agent is installed. Set the INSTALL_NMA parameter to NO if you do not want to install this agent.

Installing the Node State Manager (NSM)

By default, the Node State Manager is not installed. Set the INSTALL_NSM parameter to YES to install the NSM.

Installing the SAF Cluster Membership API (SAF CLM)

By default, the SAF CLM API is not installed. Set the INSTALL_SAFCLM parameter to YES to install the SAF CLM API.
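
For example, to skip the Node Management Agent but install the Node State Manager and the SAF CLM API, set the following parameters (presumably in the cluster_definition.conf file; see the cluster_definition.conf(4) man page):

INSTALL_NMA=NO
INSTALL_NSM=YES
INSTALL_SAFCLM=YES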