C H A P T E R  5

Installing the Software on the Master-Eligible Nodes

After you have set up the installation environment, you are ready to manually install the Solaris Operating System and the Foundation Services on the master-eligible nodes of the cluster. The master-eligible nodes take on the roles of master node and vice-master node in the cluster. For more information about the types of nodes, see the Netra High Availability Suite Foundation Services 2.1 7/05 Overview.

To manually install and configure the Foundation Services on the master-eligible nodes of your cluster, see the following sections:



Note - Do not use the nhcmmstat or scmadm tools to monitor the cluster during the installation procedure. Use these tools only after the installation and configuration procedures have been completed on all nodes.




Defining Disk Partitions on the Master-Eligible Nodes

The master-eligible nodes store current data for all nodes in the cluster, whether the cluster has diskless nodes or dataless nodes. One master-eligible node becomes the master node, and the other becomes the vice-master node. The vice-master node takes over the role of master if the master node fails or is taken offline for maintenance. Therefore, the disks of both nodes must have exactly the same partitions. Create the disk partitions of the master-eligible nodes according to the needs of your cluster. For example, the disks of the master-eligible nodes must be configured differently if diskless nodes are part of the cluster.

The following table lists the space requirements for example disk partitions of master-eligible nodes in a cluster with diskless nodes.


TABLE 5-1 Example Disk Partitions of Master-Eligible Nodes for IP Replication

Disk Partition  File System Name  Description                                          Example Size
--------------  ----------------  ---------------------------------------------------  ---------------------
0               /                 The root file system, boot partition, and volume     2 Gbytes minimum
                                  management software. This partition must be
                                  mounted with the logging option.
1               /swap             Minimum size when physical memory is less than       1 Gbyte
                                  1 Gbyte.
2               overlap           Entire disk.                                         Size of entire disk
3               /export           Exported file system reserved for diskless nodes.    1 Gbyte + 100 Mbytes
                                  This partition must be mounted with the logging      per diskless node
                                  option. This partition is further partitioned if
                                  diskless nodes are added to the cluster.
4               /SUNWcgha/local   Reserved for NFS status files, services, and         2 Gbytes
                                  configuration files. This partition must be
                                  mounted with the logging option.
5               Reserved for      Bitmap partition reserved for the nhcrfsd daemon.    See TABLE 5-3
                Reliable NFS      This partition is associated with the /export
                internal use      file system.
6               Reserved for      Bitmap partition reserved for the nhcrfsd daemon.    See TABLE 5-3
                Reliable NFS      This partition is associated with the
                internal use      /SUNWcgha/local file system.
7               /mypartition      For any additional applications.                     The remaining space



TABLE 5-2 Example Disk Partitions of Master-Eligible Nodes for Shared Disk

Disk Partition  File System Name  Description                                       Example Size
--------------  ----------------  ------------------------------------------------  -------------------
0               /                 Data partition for diskless Solaris images        2 Gbytes minimum
1               /swap             Data partition for middleware data and binaries   1 Gbyte
2               overlap           Entire disk.                                      Size of entire disk
7               -                 SVM replica                                       20 Mbytes


For replication, create a bitmap partition for each partition containing an exported, replicated file system on the master-eligible nodes. The bitmap partition must be at least the following size:

1 Kbyte + 4 Kbytes per Gbyte of data in the associated data partition
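For example, a 2-Gbyte /export partition requires a bitmap of at least 1 Kbyte + (4 Kbytes x 2) = 9 Kbytes, which corresponds to 18 disk blocks of 512 bytes, as shown in TABLE 5-3.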

In this example, the bitmaps are created on partitions 5 and 6. The bitmap partition sizes are shown in the following table.


TABLE 5-3 Example Bitmap Partitions

File System Name  Bitmap Partition    File System (Mbytes)  Bitmap File (Bytes)  Bitmap Size (Blocks)
----------------  ------------------  --------------------  -------------------  --------------------
/export           /dev/rdsk/c0t0d0s5  2000                  9216                 18
/SUNWcgha/local   /dev/rdsk/c0t0d0s6  1512                  7072                 14


For more information, see the Sun StorEdge Availability Suite 3.1 Remote Mirror Software Installation Guide in the Sun StorEdge Availability Suite 3.1 documentation set.



Note - In a cluster without diskless nodes, the /export file system and the associated bitmap partition are not required.




Installing the Solaris Operating System on the Master-Eligible Nodes

To install the Solaris Operating System on each master-eligible node, use the Solaris JumpStart tool on the installation server. The Solaris JumpStart tool requires the Solaris distribution to be on the installation server. For information about creating a Solaris distribution, see Preparing the Installation Environment.


To Install the Solaris Operating System on the Master-Eligible Nodes

1. Log in to the installation server as superuser.

2. Create the Solaris JumpStart environment on the installation server by using the appropriate document for the Solaris release:

You can access these documents on http://docs.sun.com.

3. In the /etc/hosts file, add the names and IP addresses of the master-eligible nodes.

4. Share the Solaris-distribution-dir and Jumpstart-dir directories by adding these lines to the /etc/dfs/dfstab file:

share -F nfs -o ro,anon=0 Solaris-distribution-dir
share -F nfs -o ro,anon=0 Jumpstart-dir

5. Share the directories that are defined in the /etc/dfs/dfstab file:

# shareall

6. Change to the directory where the add_install_client command is located:

# cd Solaris-dir/Solaris_x/Tools

7. Run the add_install_client command for each master-eligible node.

For information, see the add_install_client(1M) man page.
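The exact options depend on your network and boot configuration. As a minimal sketch, assuming that the client Ethernet address, the server paths, and the sun4u platform group shown here are replaced with your own values, the command for the first master-eligible node might look like the following:

# ./add_install_client -e 8:0:20:fa:3f:70 \
-s installation-server:/Solaris-distribution-dir \
-c installation-server:/Jumpstart-dir \
netraMEN1 sun4u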

8. Connect to the console of each master-eligible node.

9. Boot each master-eligible node with the appropriate command using a network boot.

If you are unsure of the appropriate command, refer to the hardware documentation for your platform. The common command for SPARC systems is shown in the following example:


ok> boot net - install

If the installation server is connected to the second Ethernet interface, type:


ok> boot net2 - install

This command installs the Solaris Operating System on the master-eligible nodes.


Setting Up the Master-Eligible Nodes

To prepare the master-eligible nodes for the installation of the Foundation Services, you must configure the master-eligible nodes. You must also mount the installation server directory that contains the Foundation Services distribution.


To Configure the Master-Eligible Nodes

1. Log in to a master-eligible node as superuser.

2. Create the /etc/notrouter file:

# touch /etc/notrouter

3. Modify the /etc/default/login file so that you can connect to a node from a remote system as superuser:

# mv /etc/default/login /etc/default/login.orig
# chmod 644 /etc/default/login.orig
# sed '1,$s/^CONSOLE/#CONSOLE/' /etc/default/login.orig > /etc/default/login
# chmod 444 /etc/default/login

4. Disable power management:

# touch /noautoshutdown

5. Modify the .rhosts file according to the security policy for your cluster:

# touch /.rhosts
# cp /.rhosts /.rhosts.orig
# echo "+ root" > /.rhosts
# chmod 444 /.rhosts

6. Set the boot parameters:

# /usr/sbin/eeprom local-mac-address?=true
# /usr/sbin/eeprom auto-boot?=true
# /usr/sbin/eeprom diag-switch?=false

7. (Optional) If you are using the Network Time Protocol (NTP) to run an external clock, configure the master-eligible node as an NTP server.

This procedure is described in the Solaris documentation.

8. (Optional) If your master-eligible node has an IDE disk, edit the /usr/kernel/drv/sdbc.conf file.

Change the value of the sdbc_max_fbas parameter from 1024 to 256.
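As an illustration, the relevant entry in the /usr/kernel/drv/sdbc.conf file should look similar to the following after the edit; check the existing line in your file for the exact syntax:

sdbc_max_fbas=256;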

9. Create the data/etc and data/var/dhcp directories in the /SUNWcgha/local/export/ file system on the master-eligible node:

# mkdir -p /SUNWcgha/local/export/data/etc
# mkdir -p /SUNWcgha/local/export/data/var/dhcp

10. Repeat Step 1 through Step 9 on the second master-eligible node.


To Mount an Installation Server Directory on the Master-Eligible Nodes

1. Log in to the installation server as superuser.

2. Check that the mountd and nfsd daemons are running on the installation server.

For example, use the ps command:


# ps -ef | grep mountd
root   184     1  0   Aug 03 ?        0:01 /usr/lib/autofs/automountd
root   290     1  0   Aug 03 ?        0:00 /usr/lib/nfs/mountd
root  2978  2974  0 17:40:34 pts/2    0:00 grep mountd
# ps -ef | grep nfsd
root   292     1  0   Aug 03 ?        0:00 /usr/lib/nfs/nfsd -a 16
root  2980  2974  0 17:40:50 pts/2    0:00 grep nfsd
# 

If a process ID is not returned for the mountd and nfsd daemons, start the NFS daemons:


# /etc/init.d/nfs.server start

3. Share the directory containing the distributions for the Foundation Services and the Solaris Operating System by adding the following lines to the /etc/dfs/dfstab file:

share -F nfs -o ro,anon=0 software-distribution-dir

where software-distribution-dir is the directory that contains the Foundation Services packages and Solaris patches.

4. Share the directories that are defined in the /etc/dfs/dfstab file:

# shareall

5. Log in to a master-eligible node as superuser.

6. Create the mount point directories Solaris and NetraHASuite on the master-eligible node:

# mkdir /NetraHASuite
# mkdir /Solaris

7. Mount the Foundation Services and Solaris distribution directories on the installation server:

# mount -F nfs \
installation-server-IP-address:/software-distribution-dir/\
Product/NetraHASuite_2.1.2/FoundationServices/Solaris_x/sparc \
/NetraHASuite
# mount -F nfs \
installation-server-IP-address:/Solaris-distribution-dir \
/Solaris

8. Repeat Step 5 through Step 7 on the other master-eligible node.


To Install Solaris Patches

After you have completed the Solaris installation, you must install the Solaris patches delivered in the Foundation Services distribution. See the Netra High Availability Suite Foundation Services 2.1 7/05 README for the list of patches.



Note - Some of these patches are required for CGTP. If you do not plan to install CGTP, do not install the CGTP patches. For more information about the impact of not installing CGTP, see Choosing a Cluster Network.



1. Log in to each master-eligible node as superuser.

2. Install the necessary Solaris patches on each master-eligible node:

# patchadd -M /NetraHASuite/Patches/ patch-number


Installing the Man Pages on the Master-Eligible Nodes


To Install the Man Pages on the Master-Eligible Nodes

1. Log in to a master-eligible node as superuser.

2. Add the man page package:

# pkgadd -d /NetraHASuite/Packages/ SUNWnhman

The man pages are installed in the /opt/SUNWcgha/man directory. To access the man pages, see the Netra High Availability Suite Foundation Services 2.1 7/05 Reference Manual.
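For example, assuming a Bourne shell session, you can add the directory to your MANPATH to read the man pages directly:

# MANPATH=$MANPATH:/opt/SUNWcgha/man
# export MANPATH
# man nhadm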

3. Repeat Step 1 and Step 2 on the other master-eligible node.


Installing the Foundation Services on the Master-Eligible Nodes

The following procedures explain how to install the Foundation Services on the master-eligible nodes:


To Install the nhadm Tool

The nhadm tool is a cluster administration tool that can verify that the installation was completed correctly. You can run this tool when your cluster is up and running.

● As superuser, install the nhadm tool package on each master-eligible node:


# pkgadd -d /NetraHASuite/Packages/ SUNWnhadm


To Install the Carrier Grade Transport Protocol

CGTP enables a redundant network for your cluster.



Note - If you do not require CGTP, do not install the CGTP packages. For more information about the impact of not installing CGTP, see Choosing a Cluster Network.



1. Before you install the CGTP packages, make sure that you have installed the Solaris patches for CGTP.

See To Install Solaris Patches.

2. As superuser, install the following CGTP packages on each master-eligible node:

# pkgadd -d /NetraHASuite/Packages/ SUNWnhtpx SUNWnhtux

where x is 8 or 9 depending on the version of the Solaris Operating System you install.
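For example, on a cluster running the Solaris 9 Operating System:

# pkgadd -d /NetraHASuite/Packages/ SUNWnhtp9 SUNWnhtu9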


To Install the Node State Manager

● As superuser, install the Node State Manager packages on each master-eligible node:


# pkgadd -d /NetraHASuite/Packages/ SUNWnhnsa SUNWnhnsb



Note - During the installation of the Node State Manager packages, the /etc/opt/SUNWcgha/not_configured file is created automatically. This file enables you to reboot a cluster node during the installation process without starting the Foundation Services.



For more information about the Node State Manager, see the Netra High Availability Suite Foundation Services 2.1 7/05 Overview.


To Install the External Address Manager

1. Become superuser.

2. Type the following command:

# pkgadd -d /NetraHASuite/Packages/ SUNWnheaa SUNWnheab

For information on configuring the EAM, see the Netra High Availability Suite Foundation Services 2.1 7/05 Overview.


To Install the Cluster Membership Manager

● As superuser, install the following CMM packages on each master-eligible node:


# pkgadd -d /NetraHASuite/Packages/ SUNWnhcdt SUNWnhhb \
SUNWnhcmd SUNWnhcma SUNWnhcmb



Note - During the installation of the CMM packages, the /etc/opt/SUNWcgha/not_configured file is created automatically. This file enables you to reboot a cluster node during the installation process without starting the Foundation Services.



For instructions on configuring the CMM, see Configuring the Foundation Services on the Master-Eligible Nodes.

For information about the CMM, see the Netra High Availability Suite Foundation Services 2.1 7/05 Overview.


To Install the Reliable NFS When Using IP-Based Replication

Install the Reliable NFS packages to enable the Reliable NFS service and data-replication features of Foundation Services. For a description of the Reliable NFS service, see "File Sharing and Data Replication" in Netra High Availability Suite Foundation Services 2.1 7/05 Overview. The Reliable NFS feature is enabled by the StorEdge Network Data Replicator (SNDR), which is provided with the Reliable NFS packages.



Note - SNDR is supplied for use only with the Foundation Services. Any use of this product other than on a Foundation Services cluster is not supported.



1. As superuser, install the following Reliable NFS and SNDR packages on a master-eligible node in the following order:

# pkgadd -d /NetraHASuite/Packages/ SUNWscmr \
SUNWscmu SUNWspsvr SUNWspsvu SUNWrdcr SUNWrdcu SUNWnhfsa SUNWnhfsb



Note - During the installation of the SNDR package SUNWscmu, you might be asked to specify a database configuration location. You can choose to use the SNDR directory that is automatically created. This directory is of the format /sndrxy where x.y is the version of the SNDR release.



2. Repeat Step 1 on the second master-eligible node.

3. Install the SNDR patches on each master-eligible node.

See the Netra High Availability Suite Foundation Services 2.1 7/05 README for a list of SNDR patches.

4. Edit the /usr/kernel/drv/rdc.conf file on each master-eligible node to change the value of the rdc_bitmap_mode parameter.

To have changes to the bitmaps written on the disk at each update, change the value of the rdc_bitmap_mode parameter to 1.

To have changes to the bitmaps stored in memory at each update, change the value of the rdc_bitmap_mode parameter to 2. In this case, changes are written on the disk when the node is shut down. However, if both master-eligible nodes fail, both disks must be synchronized.

For example: rdc_bitmap_mode=2.
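As an illustration, the edited entry in the /usr/kernel/drv/rdc.conf file should look similar to the following; check the existing line in your file for the exact syntax:

rdc_bitmap_mode=2;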


To Install the Reliable NFS When Using Shared Disk

Install the Reliable NFS packages to enable the Reliable NFS service and disk mirroring features of Foundation Services. For a description of the Reliable NFS service, see "File Sharing and Data Replication" in Netra High Availability Suite Foundation Services 2.1 7/05 Overview.

1. As superuser, install the following Reliable NFS packages on a master-eligible node in the following order:

# pkgadd -d /NetraHASuite/Packages/ SUNWnhfsa SUNWnhfsb

2. Repeat Step 1 on the second master-eligible node.


To Install the Node Management Agent

Install the Node Management Agent (NMA) packages to gather statistics on Reliable NFS, CGTP, and CMM. For a description of the NMA, see the Netra High Availability Suite Foundation Services 2.1 7/05 Overview.

The NMA consists of four packages. One NMA package is installed on both master-eligible nodes. Three packages are NFS-mounted as shared middleware software on the first master-eligible node. The first master-eligible node is the node that is booted first after you complete installing and configuring all the services on the master-eligible nodes.

The NMA requires the Java DMK packages, SUNWjsnmp and SUNWjdrt, to run. For information about installing the entire Java DMK software, see the Java Dynamic Management Kit 5.0 Installation Guide.

The following table describes the packages that are required on each type of node.


Package    Description                                                   Installed On
---------  ------------------------------------------------------------  --------------------------
SUNWjsnmp  Java DMK 5.0 Simple Network Management Protocol (SNMP)        Both master-eligible nodes
           manager API classes
SUNWjdrt   Java DMK 5.0 dynamic management runtime classes               First master-eligible node
SUNWnhmas  NMA configuration and startup script                          Both master-eligible nodes
SUNWnhmaj  NMA Java classes                                              First master-eligible node
SUNWnhmal  NMA JNI libraries                                             First master-eligible node
SUNWnhmad  NMA Javadoc files                                             First master-eligible node


Follow this procedure to install and configure the NMA.

1. As superuser, install the following NMA package and Java DMK package on both master-eligible nodes:

# pkgadd -d /NetraHASuite/Packages/ SUNWnhmas SUNWjsnmp



Note - If you plan to use shared disks, do not advance to Step 2 until the metadevice used for shared disks has been created. See Step 2 in To Set Up File Systems on the Master-Eligible Nodes.



2. On the first master-eligible node, install the following shared Java DMK package and NMA packages:

# pkgadd -d /NetraHASuite/Packages/ \
-M -R /SUNWcgha/local/export/services/ha_2.1.2 \
 SUNWjdrt SUNWnhmaj SUNWnhmal SUNWnhmad

The packages are installed with a predefined root path in the /SUNWcgha/local/export/services/ha_2.1.2 directory.



Note - Ignore error messages related to packages that have not been installed. Always answer Y to continue the installation.



3. To configure the NMA, see the Netra High Availability Suite Foundation Services 2.1 7/05 NMA Programming Guide.


To Install the Daemon Monitor

● As superuser, install the following Daemon Monitor packages on each master-eligible node:


# pkgadd -d /NetraHASuite/Packages/ SUNWnhpma SUNWnhpmb \
SUNWnhpms SUNWnhpmn SUNWnhpmm

For a description of the Daemon Monitor, see the Netra High Availability Suite Foundation Services 2.1 7/05 Overview.


To Install the Watchdog Timer

Install and configure the Watchdog Timer provided with the Foundation Services only if you are using Netra servers that have hardware watchdogs at the Lights Out Management (LOM) level. For a list of the types of watchdogs for different Netra servers, see the Netra High Availability Suite Foundation Services 2.1 7/05 Hardware Guide.



Note - If you are using Netra servers with hardware watchdogs at the OBP level, do not install the Watchdog Timer provided with the Foundation Services. These hardware watchdogs are monitored by the server's software.



1. Before installing the Watchdog Timer, do the following:

2. As superuser, install the Watchdog Timer package on each master-eligible node:

# pkgadd -d /NetraHASuite/Packages/ SUNWnhwdt

The Watchdog Timer can be configured differently on each node, depending on your needs. See Configuring the nhfs.conf File.


Configuring the Master-Eligible Node Addresses

Before assigning IP addresses to the network interfaces of the master-eligible nodes, see "Cluster Addressing and Networking" in the Netra High Availability Suite Foundation Services 2.1 7/05 Overview.

In the Foundation Services, three IP addresses must be configured for each master-eligible node:

The IP addresses can be IPv4 addresses of any class with the following structure:


network_id.host_id

When you configure the IP addresses, make sure that the node ID, nodeid, is the decimal equivalent of host_id. You define the nodeid in the cluster_nodes_table file and the nhfs.conf file. For more information, see Configuring the Foundation Services on the Master-Eligible Nodes.
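For example, with the Class C addresses used later in this chapter, the first master-eligible node netraMEN1 uses the addresses 10.250.1.10, 10.250.2.10, and 10.250.3.10. Its host_id is therefore 10, and its nodeid must be set to 10 in the cluster_nodes_table file and the nhfs.conf file.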

The following procedures explain how to create and configure IP addresses for master-eligible nodes.

Examples in these procedures use IPv4 Class C addresses.


To Create the IP Addresses for the Network Interfaces

1. Log in to each master-eligible node as superuser.

2. In the /etc/hosts file on each master-eligible node, add the three IP addresses, followed by the name of each interface:

10.250.1.10     netraMEN1-nic0
10.250.2.10     netraMEN1-nic1
10.250.3.10     netraMEN1-cgtp

10.250.1.20     netraMEN2-nic0
10.250.2.20     netraMEN2-nic1
10.250.3.20     netraMEN2-cgtp

10.250.1.1      master-nic0
10.250.2.1      master-nic1
10.250.3.1      master-cgtp

In the rest of this book, the node netraMEN1 is the first master-eligible node. The first master-eligible node is the node that is booted first after you complete installing the Foundation Services. The node netraMEN2 is the second master-eligible node that is booted after the first master-eligible node has completed booting.


To Update the Network Files

In the /etc directory on each master-eligible node, you must create a hostname file for each of the three interfaces. In addition, update the nodename and netmasks files.

1. Create or update the file /etc/hostname.NIC0 for the NIC0 interface.

This file must contain the name of the master-eligible node on the first interface, for example, netraMEN1-nic0.

2. Create or update the file /etc/hostname.NIC1 for the NIC1 interface.

This file must contain the name of the master-eligible node on the second interface, for example, netraMEN1-nic1.

3. Create or update the file /etc/hostname.cgtp0 for the cgtp0 interface.

This file must contain the name of the master-eligible node on the cgtp0 interface, for example, netraMEN1-cgtp.

4. Update the /etc/nodename file with the node name of the master-eligible node on the cgtp0 interface, for example, netraMEN1-cgtp.

5. Create a /etc/netmasks file with a netmask of 255.255.255.0 for all subnetworks in the cluster.
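As an illustration, assuming that hme0 and hme1 are the NIC0 and NIC1 interfaces (substitute the interface names used on your hardware), the files on the first master-eligible node might contain the following:

# cat /etc/hostname.hme0
netraMEN1-nic0
# cat /etc/hostname.hme1
netraMEN1-nic1
# cat /etc/hostname.cgtp0
netraMEN1-cgtp
# cat /etc/nodename
netraMEN1-cgtp
# cat /etc/netmasks
10.250.1.0      255.255.255.0
10.250.2.0      255.255.255.0
10.250.3.0      255.255.255.0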


To Configure External IP Addresses

To configure external IP addresses for a master-eligible node, the node must have an extra physical network interface or logical network interface. An extra physical network interface is an unused interface on an existing Ethernet card or a supplemental Ethernet card, for example, hme2. A logical network interface is an interface that is configured on an existing Ethernet card, for example, hme1:101.

● Configure an external IP address for the extra network interface based on your public network policy.


To Configure an External Floating Address Using a Single Link

1. Add, if required, the hostname associated with the external floating address in /etc/hosts on each master-eligible node.

129.253.1.13     ext-float

2. Add, if required, the associated netmask for the subnetwork in /etc/netmasks on each master-eligible node.

129.253.1.0     255.255.255.0

3. Create or update the file /etc/hostname.interface for the interface supporting the external floating address on each master-eligible node.

If the file does not exist, create the following lines (the file must contain at least two lines for the arguments to be taken into account):

ext-float netmask + broadcast + 
down

If the file already exists, add the following line:

addif ext-float netmask + broadcast + down

4. Configure the external floating address parameter in the nhfs.conf file on each master-eligible node.

For more information, see the nhfs.conf(4) man page.
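As an illustration, using the ext-float address from Step 1, the entry in the nhfs.conf file might look like the following; see the nhfs.conf(4) man page and the example in Configuring a Floating External Address for the exact parameter name:

Node.External.FloatingAdress.0=129.253.1.13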


To Configure an External Floating Address Using Redundant Links Managed by IPMP

To configure the external floating address, the node must have two network interfaces that are not already used for a CGTP network. If no network interfaces are available, consider using a different VLAN. Each interface must be configured with a test IP address used for monitoring. The external floating address must be configured on one of these interfaces, and all of these IP addresses must be part of the same subnetwork.

1. Add, if required, the hostnames associated with the test IP addresses and the external floating address in /etc/hosts on each master-eligible node.

IP addresses for testing must be different on each node.

129.253.1.11     test-ipmp-1
129.253.1.12     test-ipmp-2
129.253.1.30     ipmp-float

2. Add, if required, the associated netmask for the subnetwork in /etc/netmasks on each master-eligible node.

129.253.1.0     255.255.255.0

3. Create or update the file /etc/hostname.interface for the first interface on each master-eligible node.

The file must contain the definition of the test IP address for this interface and the external floating address in this format:

test IP address #1 netmask + broadcast + -failover deprecated group name up addif floating address netmask + broadcast + failover down

For example:

test-ipmp-1 netmask + broadcast + -failover deprecated group 
ipmp-group up addif ipmp-float netmask + broadcast + failover down

4. Create or update the file /etc/hostname.interface for the second interface on each master-eligible node.

The file must contain the definition of the test IP address for this interface in this format:

test IP address #2 netmask + broadcast + -failover deprecated group name up

For instance:

test-ipmp-2 netmask + broadcast + -failover deprecated group 
ipmp-group up

5. Configure the external floating address parameters (floating address and IPMP group to be monitored) in the nhfs.conf file on each master-eligible node.

For more information, see the nhfs.conf(4) man page.


Configuring the Foundation Services on the Master-Eligible Nodes

Configure the services that are installed on the master-eligible nodes by modifying the nhfs.conf and the cluster_nodes_table files on each master-eligible node in the cluster. Master-eligible nodes have read-write access to these files. Diskless nodes or dataless nodes in the cluster have read-only access to these files.

Configuring the nhfs.conf File

The following procedures describe how to configure the nhfs.conf file.


To Configure the nhfs.conf File Properties

The nhfs.conf file enables you to configure the node after you have installed the Foundation Services on the node. This file provides parameters for configuring the node, CMM, Reliable NFS, the direct link between the master-eligible nodes, the Node State Manager, the Watchdog Timer, and daemon scheduling.

1. As superuser, copy the template /etc/opt/SUNWcgha/nhfs.conf.template file:

# cp /etc/opt/SUNWcgha/nhfs.conf.template \
 /etc/opt/SUNWcgha/nhfs.conf

2. For each property that you want to change, uncomment the associated parameter (delete the comment mark at the beginning of the line).

3. Modify the value of each parameter that you want to change.

For descriptions of each parameter, see the nhfs.conf(4) man page.
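For example, to set the directory that contains the cluster_nodes_table file (a parameter used later in this chapter), you would change the commented template line into an active setting; the parameters present in your template file might differ:

# Commented line in the template:
#CMM.LocalConfig.Dir=/etc/opt/SUNWcgha
# Uncommented and edited line:
CMM.LocalConfig.Dir=/etc/opt/SUNWcgha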

If you have not installed the CGTP patches and packages, do the following:


To Create the Floating Address Triplet Assigned to the Master Role

The floating address triplet is a set of three logical addresses that are active on the node holding the master role. When the cluster is started, the floating address triplet is activated on the master node. In the event of a switchover or a failover, these addresses are activated on the new master node. Simultaneously, the floating address triplet is deactivated automatically on the old master node, that is, the new vice-master node.

● To create the floating address triplet, you must define the master ID in the nhfs.conf file.

The floating address triplet is calculated from the master ID, the netmask, and the network interface addresses.
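For example, assuming a master ID of 1, a netmask of 255.255.255.0, and the 10.250.1.0, 10.250.2.0, and 10.250.3.0 networks used earlier in this chapter, the floating address triplet is 10.250.1.1, 10.250.2.1, and 10.250.3.1, that is, the master-nic0, master-nic1, and master-cgtp addresses in the /etc/hosts example.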

For more information about the floating address triplet of the master node, see "Cluster Addressing and Networking" in Netra High Availability Suite Foundation Services 2.1 7/05 Overview.


To Configure a Direct Link Between the Master-Eligible Nodes

You can configure a direct link between the master-eligible nodes to prevent a split brain cluster. A split brain cluster is a cluster that has two master nodes because the network between the master node and the vice-master node has failed.

1. Connect the serial ports of the master-eligible nodes.

For an illustration of the connection between the master-eligible nodes, see the Netra High Availability Suite Foundation Services 2.1 7/05 Hardware Guide.

2. Configure the direct link parameters.

For more information, see the nhfs.conf(4) man page.

Creating the cluster_nodes_table File

The cluster_nodes_table file contains the configuration data for each node in the cluster. Create this file on each master-eligible node. Once the cluster is running, this file is accessed by all nodes in the cluster. Therefore, the cluster_nodes_table on both master-eligible nodes must be exactly the same.


To Create the cluster_nodes_table File

1. Log in to a master-eligible node as superuser.

2. Copy the template file from /etc/opt/SUNWcgha/cluster_nodes_table.template to /etc/opt/SUNWcgha/cluster_nodes_table.

You can save the cluster_nodes_table file in a directory other than the /etc/opt/SUNWcgha directory. By default, the cluster_nodes_table file is located in the /etc/opt/SUNWcgha directory.

3. Edit the cluster_nodes_table file to add a line for each node in the cluster.

For more information, see the cluster_nodes_table(4) man page.

4. Edit the nhfs.conf file to specify the directory that contains the cluster_nodes_table file:

CMM.LocalConfig.Dir=/etc/opt/SUNWcgha

For more information, see the nhfs.conf(4) man page.

5. Log in to the other master-eligible node as superuser.

6. Copy the /etc/opt/SUNWcgha/cluster_nodes_table file from the first master-eligible node to the same directory on the second master-eligible node.

If you saved the cluster_nodes_table file in a directory other than /etc/opt/SUNWcgha, copy the file to that other directory on the second master-eligible node. The cluster_nodes_table file must be available in the same directory on both master-eligible nodes.

7. Repeat Step 4 on the second master-eligible node.


Note - When there is a change in the attribute of a node, the cluster_nodes_table file is updated by the nhcmmd daemon on each master-eligible node. If a switchover or failover occurs, the diskless nodes or dataless nodes in the cluster access the cluster_nodes_table file on the new master node. Only master-eligible nodes can write information to the cluster_nodes_table file.




Configuring Solaris Volume Manager With Reliable NFS and Shared Disk


To Configure Solaris Volume Manager for Use with Reliable NFS and a Shared Disk

This procedure uses the following values for its code examples:

Detailed information about SVM and how to set up a shared disk can be found in the Solaris Volume Manager Administration Guide.

1. On the first master-eligible node, set the node name to the name of the host associated with the CGTP interface:

# uname -S netraMEN1-cgtp

2. Repeat Step 1 for the second master-eligible node:

# uname -S netraMEN2-cgtp

3. On the first master-eligible node, restart the rpcbind daemon to make it use the new node name:

# pkill -x -u 0 rpcbind
# /usr/sbin/rpcbind -w

4. Repeat Step 3 on the second master-eligible node.

5. Create the database replicas for the dedicated root disk slice on each master-eligible node:

# metadb -a -c 3 -f /dev/rdsk/c0t0d0s7

6. Repeat Step 9 (shown below) for the second master-eligible node:

# cat /etc/nodename
netraMEN2-cgtp

7. (Optional) If you plan to use CGTP, configure a temporary network interface on the first private network and make it match the name and IP address of the CGTP interface on the first master-eligible node:

# ifconfig hme0:111 plumb
# ifconfig hme0:111 10.250.3.10 netmask + broadcast + up
Setting netmask of hme0:111 to 255.255.255.0

8. (Optional) If you plan to use CGTP, repeat Step 7 for the second master-eligible node:

# ifconfig hme0:111 plumb
# ifconfig hme0:111 10.250.3.11 netmask + broadcast + up
Setting netmask of hme0:111 to 255.255.255.0

9. On the first master-eligible node, verify that the /etc/nodename file matches the name of the CGTP interface (or the name of the private network interface, if CGTP is not used) :

# cat /etc/nodename
netraMEN1-cgtp



Note - The rest of the procedure only applies to the first master-eligible node.



10. Create the SVM diskset that manages the shared disks:

# metaset -s nhas_diskset -a -h netraMEN1-cgtp netraMEN2-cgtp

11. Remove any possible existing SCSI3-PGR keys from the shared disks.

In the following example, there are no keys on the disks:

# /opt/SUNWcgha/sbin/nhscsitool /dev/rdsk/c1t8d0s2
Performing a SCSI bus reset ... done.
There are no keys on disk '/dev/rdsk/c1t8d0s2'.
# /opt/SUNWcgha/sbin/nhscsitool /dev/rdsk/c1t9d0s2
Performing a SCSI bus reset ... done.
There are no keys on disk '/dev/rdsk/c1t9d0s2'.

12. Add the names of the shared disks to the previously created diskset:

# metaset -s nhas_diskset -a /dev/rdsk/c1t8d0 /dev/rdsk/c1t9d0



Note - This step will reformat the shared disks, and all existing data on the shared disks will be lost.



13. Verify that the SVM configuration is set up correctly:

# metaset
Set name = nhas_diskset, Set number = 1
Host                Owner
   netraMEN1-cgtp     Yes
   netraMEN2-cgtp
Drive    Dbase
c1t8d0   Yes
c1t9d0   Yes



Note - If you do not plan to install diskless nodes, skip to Step 18.



14. Retrieve disk geometry information using the prtvtoc command.

A known problem in the diskless management tool, smosservice, prevents the creation of the diskless environment on a metadevice. To avoid this problem, mount the /export directory on a physical partition during the diskless environment creation.

To allow /export to be accessed through a metadevice while keeping it accessible on a physical partition, the disk must be repartitioned in a particular way after it has been added to the diskset. This repartitioning preserves the data already stored by SVM, because the newly created partitions are not formatted.

TABLE 5-4 gives an example of the prtvtoc command output after inserting a disk into a diskset.


TABLE 5-4 Example Output From prtvtoc Command
# prtvtoc /dev/rdsk/c1t8d0s0
* /dev/rdsk/c1t8d0s0 partition map
*
* Dimensions:
*     512 bytes/sector
*     107 sectors/track
*      27 tracks/cylinder
*    2889 sectors/cylinder
*   24622 cylinders
*   24620 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First      Sector   Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      4    00       8667  71118513  71127179
       7      4    01          0      8667      8666

15. Create the data file using the fmthard command.

The fmthard command creates physical partitions from a data file that describes the partitions to be created (see the fmthard(1M) man page for more information). The data file contains one entry per partition, in the following format:

slice # tag flag starting sector size in sectors

The starting sector and size in sectors values must be rounded to a cylinder boundary and computed as shown in the example below.

Three particular slices must be created:

Other slices can be added depending on your application requirements. TABLE 5-5 gives an example of partitioning:


TABLE 5-5 Example Slices for Physical Partitions

Slice Number  Usage            Size in MBytes
------------  ---------------  --------------
0             /export          4096
1             /SUNWcgha/local  2048
7             metadb           Not Applicable


The following slice constraints must be respected:

An example of computing the values for slice 0 (located after slice 7):
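For example, slice 7 occupies sectors 0 through 8666, that is, 8667 sectors or 3 cylinders of 2889 sectors, so slice 0 starts at sector 8667. A 4096-Mbyte /export slice requires 4096 x 2048 = 8,388,608 sectors of 512 bytes; rounded up to a cylinder boundary, this becomes 2904 cylinders x 2889 sectors = 8,389,656 sectors.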

These values produce the following content in the data file (datafile.txt):

7 4 01       0     8667
0 0 00    8667  8389656
1 0 00 8398323  4194828
2 5 01       0 71127180

Note that this example leaves some unallocated space on the disk that can be used for user-specific partitions.

16. Re-partition the disk.

Execute the following commands for the primary disk and for the secondary disk:

# fmthard -s datafile.txt /dev/rdsk/c1t8d0s2
# fmthard -s datafile.txt /dev/rdsk/c1t9d0s2

17. Create the metadevices for partition mapping and mirroring.

Create the metadevices on the primary disk:

# metainit -s nhas_diskset d11 1 1 /dev/rdsk/c1t8d0s0
# metainit -s nhas_diskset d12 1 1 /dev/rdsk/c1t8d0s1

Create the metadevices on the secondary disk:

# metainit -s nhas_diskset d21 1 1 /dev/rdsk/c1t9d0s0
# metainit -s nhas_diskset d22 1 1 /dev/rdsk/c1t9d0s1

Create the mirror sets:


# metainit -s nhas_diskset d1 -m d11
# metattach -s nhas_diskset d1 d21
# metainit -s nhas_diskset d2 -m d12
# metattach -s nhas_diskset d2 d22



Note - This ends the section specific to the configuration for diskless installation. To complete the diskless installation, skip to Step 20.



18. Create your specific SVM RAID configuration (refer to the Solaris Volume Manager Administration Guide for information on specific configurations).

In the following example, the two disks form a mirror called d0:


# metainit -s nhas_diskset d18 1 1 /dev/rdsk/c1t8d0s0
# metainit -s nhas_diskset d19 1 1 /dev/rdsk/c1t9d0s0
# metainit -s nhas_diskset d0 -m d18
# metattach -s nhas_diskset d0 d19

19. Create soft partitions to host the shared data.

These soft partitions are the file systems managed by Reliable NFS. In the following example, d1 and d2 are managed by Reliable NFS.


# metainit -s nhas_diskset d1 -p d0 2g
# metainit -s nhas_diskset d2 -p d0 2g

The devices managed by Reliable NFS are now accessible through /dev/md/nhas_diskset/dsk/d1 and /dev/md/nhas_diskset/dsk/d2.

20. Create the file systems on the soft partitions:

# newfs /dev/md/nhas_diskset/rdsk/d1
# newfs /dev/md/nhas_diskset/rdsk/d2

21. Create the following directories on both master-eligible nodes:

# mkdir /SUNWcgha
# mkdir /SUNWcgha/local

22. Mount the file systems on the metadevice on the first node:

# mount /dev/md/nhas_diskset/dsk/d1 /export
# mount /dev/md/nhas_diskset/dsk/d2 /SUNWcgha/local


Setting Up File Systems on the Master-Eligible Nodes


To Set Up File Systems on the Master-Eligible Nodes

1. Ensure that the following directories exist on the first master-eligible node:

# mkdir /SUNWcgha/local/export
# mkdir /SUNWcgha/local/export/data
# mkdir /SUNWcgha/local/export/services
# mkdir /SUNWcgha/local/export/services/NetraHASuite_version/opt

where NetraHASuite_version is the version of the Foundation Services you install, for example, ha_2.1.2.

These directories contain packages and data shared between the master-eligible nodes.

2. If you are using shared disks, install the shared Java DMK package and NMA packages onto the first master-eligible node as explained in Step 2 of To Install the Node Management Agent.

3. Create the following mount points on each master-eligible node:

# mkdir /SUNWcgha/services
# mkdir /SUNWcgha/remote
# mkdir /SUNWcgha/swdb

These directories are used as mount points for the directories that contain shared data.

4. Add the following lines to the /etc/vfstab file on each master-eligible node:



Note - The noac mount option suppresses data and attribute caching. Use the noac option only if the impact on performance is acceptable.



5. Check the following in the /etc/vfstab file:



Note - Only partitions identified in the nhfs.conf file can be managed by RNFS. For more information about the nhfs.conf file, see Configuring the nhfs.conf File.



6. (Only applicable to SNDR) Create the file systems on the replicated partitions:

# newfs /dev/rdsk/c0t0d0s3
# newfs /dev/rdsk/c0t0d0s4


To Verify File Systems Managed by Reliable NFS

The Reliable NFS daemon, nhcrfsd, is installed on each master-eligible node. To determine which partitions are managed by this daemon, do the following:

● Check the RNFS.Slice parameters of the /etc/opt/SUNWcgha/nhfs.conf file.

A parameter entry that names soft partition d1 of diskset nhas_diskset, for example, means that d1 is managed by Reliable NFS.


Starting the Master-Eligible Nodes


To Delete the not_configured File

The /etc/opt/SUNWcgha/not_configured file was installed automatically when you installed the CMM packages. This file enables you to reboot a cluster node during the installation process without starting the Foundation Services.

● After you have installed the Foundation Services packages on each master-eligible node, delete the not_configured file on each master-eligible node.
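For example:

# rm /etc/opt/SUNWcgha/not_configured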


To Boot the Master-Eligible Nodes

1. Unmount the shared file system, /NetraHASuite, on each master-eligible node by using the umount command.

See the umount(1M) man page and To Mount an Installation Server Directory on the Master-Eligible Nodes.

2. Reboot the first master-eligible node, which becomes the master node:

# init 6

3. After the first master-eligible node has completed rebooting, reboot the second master-eligible node:

# init 6

This node becomes the vice-master node. To check the role of each node in the cluster, see the nhcmmrole(1M) man page.

4. Create the INST_RELEASE file to allow patching of shared packages:

# /opt/SUNWcgha/sbin/nhadm confshare


To Verify the Cluster Configuration

Use the nhadm tool to verify that the master-eligible nodes have been configured correctly.

1. Log in to the master-eligible node as superuser.

2. Run the nhadm tool to validate the configuration:

# nhadm check starting

If all checks pass the validation, the installation of the Foundation Services was successful. See the nhadm(1M) man page.


Configuring a Floating External Address

A floating external address is a logical address assigned to an interface that is used to connect the master node to an external network. The External Address Manager (EAM) uses the Cluster Membership Manager (CMM) notifications to determine when a node takes on or loses the master role. When notified that a node has become the master node, the EAM configures the floating external addresses on one of the node's external interfaces. When notified that a node has lost the master role, the EAM unconfigures the floating external addresses.

The EAM can be installed when you first install the software on the cluster or after you have completed the installation process and have a running cluster. The following procedure describes how to install the EAM on a running cluster.

In addition, the floating external addresses can be managed by IP Network Multipathing (IPMP). When a node has two or more NICs connected to the external network, IPMP fails over the floating external addresses from one NIC to another if the interface on which they are configured fails. The EAM can also be configured to monitor the status of those NICs and to trigger a switchover when all NICs in a monitored group have failed.

For more information on IPMP, see the Solaris System Administration Guide: IP Services.



Note - Both IPv4 and IPv6 addresses are now supported. Only IPv4 addresses are used in the examples below.




To Configure a Floating External Address

1. Log in to the vice-master node as superuser.

2. Create a file named not_configured in the /etc/opt/SUNWcgha directory.

# touch /etc/opt/SUNWcgha/not_configured

If the node is rebooted during this procedure, the node does not start the Foundation Services.

3. Reboot the vice-master node.

4. Install the EAM packages, SUNWnheaa and SUNWnheab on the vice-master node:

# pkgadd -d /software-distribution-dir/Product/NetraHASuite_2.1.2/\
FoundationServices/Solaris_x/sparc/Packages/ SUNWnheaa SUNWnheab

where software-distribution-dir is the directory that contains the Foundation Services packages.

5. Edit the /etc/opt/SUNWcgha/nhfs.conf file to define the EAM parameters.

An example entry to configure the EAM is as follows:


Node.External.FloatingAdress.0=192.168.12.39

One floating external address is declared. When the node changes its role, the address is configured UP or DOWN accordingly. For more details on nhfs.conf parameters, see the nhfs.conf(4) man page.

6. Create the interface in the standard Solaris way in DOWN state:

# cat >> /etc/hostname.hme0
addif 192.168.12.39 netmask + broadcast + down
^D

The interface hme0 is configured with the floating external address in DOWN state. This interface must be connected to the public network.

7. Repeat Step 1 through Step 6 on the master node.

8. On both the master node and the vice-master node, delete the /etc/opt/SUNWcgha/not_configured file.

9. Reboot both the master node and the vice-master node.

10. Log in to the master node.

11. Run the ifconfig command on the master node:

# ifconfig -a

lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000 
hme0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 
index 2
        inet 10.25.1.26 netmask ffffff00 broadcast 10.25.1.255
        ether 8:0:20:fa:3f:70 
hme0:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER> mtu 1500 
index 2
        inet 192.168.12.39 netmask ffffff00 broadcast 192.168.12.255
hme0:2: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500
index 2 
	 inet 10.25.1.1 netmask ffffff00 broadcast 10.25.1.255
hme1: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500
index 3
        inet 10.25.2.26 netmask ffffff00 broadcast 10.25.2.255
        ether 8:0:20:fa:3f:71
hme1:1: flags=
9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500
index 3
        inet 10.25.2.1 netmask ffffff00 broadcast 10.25.2.255
cgtp0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 10.25.3.26 netmask ffffff00 broadcast 10.25.3.255
        ether 0:0:0:0:0:0
cgtp0:1: flags=
9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500
index 4
        inet 10.25.3.1 netmask ffffff00 broadcast 10.25.3.255

In this output, you can see the entry for the hme0:1 interface with the floating external address 192.168.12.39 in state UP.

12. Run the ifconfig command on the vice-master node:

# ifconfig -a

lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000 
hme0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 
index 2
        inet 10.25.1.26 netmask ffffff00 broadcast 10.25.1.255
        ether 8:0:20:fa:3f:70 
hme0:1: flags=9040842<BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER> mtu 1500 
index 2
        inet 192.168.12.39 netmask ffffff00 broadcast 192.168.12.255
hme0:2: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500
index 2 
	 inet 10.25.1.1 netmask ffffff00 broadcast 10.25.1.255
hme1: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500
index 3
        inet 10.25.2.26 netmask ffffff00 broadcast 10.25.2.255
        ether 8:0:20:fa:3f:71
hme1:1: flags=
9040842<BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500
index 3
        inet 10.25.2.1 netmask ffffff00 broadcast 10.25.2.255
cgtp0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 10.25.3.26 netmask ffffff00 broadcast 10.25.3.255
        ether 0:0:0:0:0:0
cgtp0:1: flags=
9040842<BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500
index 4
        inet 10.25.3.1 netmask ffffff00 broadcast 10.25.3.255

In this output, the hme0:1 interface is also configured with the floating external address 192.168.12.39, but because the interface is in the DOWN state (the UP flag is not set), the address is not active.

13. Trigger a switchover.

# /opt/SUNWcgha/sbin/nhcmmstat -c so

14. Run the ifconfig command on the new master node.

# ifconfig -a

lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1
         inet 127.0.0.1 netmask ff000000
hme0: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500
index 2
         inet 10.25.1.27 netmask ffffff00 broadcast 10.25.1.255
         ether 8:0:20:b8:d3:f6
hme0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
         inet 192.168.12.39 netmask ffffff00 broadcast 192.168.12.255
hme0:2: flags=
9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500
index 2 
	inet 10.25.1.1 netmask ffffff00 broadcast 10.25.1.255
hme1: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500
index 3
         inet 10.25.2.27 netmask ffffff00 broadcast 10.25.2.255
         ether 8:0:20:b8:d3:f7
hme1:1: flags=
9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500
index 3
         inet 10.25.2.1 netmask ffffff00 broadcast 10.25.2.255
cgtp0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
         inet 10.25.3.27 netmask ffffff00 broadcast 10.25.3.255
         ether 0:0:0:0:0:0
cgtp0:1: flags=
9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 
1500 index 4
         inet 10.25.3.1 netmask ffffff00 broadcast 10.25.3.255

15. From a remote system, ping the master node floating address.

% ping -s 192.168.12.39

For information on how to configure IPMP, refer to the Solaris System Administration Guide: IP Services.