CHAPTER 7

Installing the Software on the Dataless Nodes

After you have installed and configured the master-eligible nodes, you can add diskless nodes and dataless nodes to the cluster.

To add a dataless node to the cluster, see the following sections:

Preparing to Install a Dataless Node
Installing the Solaris Operating System on a Dataless Node
Installing the Foundation Services on a Dataless Node
Configuring the Foundation Services on a Dataless Node
Integrating a Dataless Node Into the Cluster
Starting the Cluster


Preparing to Install a Dataless Node

Perform the following procedures before installing and configuring a dataless node:

To Connect a Dataless Node to the Cluster
To Define Disk Partitions on a Dataless Node


To Connect a Dataless Node to the Cluster

To connect a dataless node to a cluster, connect the two Ethernet interfaces of the dataless node to the two switches of the cluster. Connect NIC0 to switch 1 and NIC1 to switch 2.

For more information, see the Netra High Availability Suite Foundation Services 2.1 7/05 Hardware Guide.



Note - The packages and patches that you install on the dataless node might differ depending on the type of hardware you use on the dataless node. For information about the specific patches and packages required for your hardware configuration, see the Netra High Availability Suite Foundation Services 2.1 7/05 README.




To Define Disk Partitions on a Dataless Node

Create the disk partitions of the dataless node according to the requirements of your cluster.

TABLE 7-1 provides the space requirements for example disk partitions of a dataless node in a cluster. A sample JumpStart profile that declares these partitions follows the table.


TABLE 7-1 Disk Partition Space Requirements for a Dataless Node

Disk Partition  File System Name  Description                                          Example Size
0               /                 The root file system, boot partition, and volume     2 Gbytes
                                  management software. This partition must be
                                  mounted with the logging option.
1               /swap             Minimum size when physical memory is less than       1 Gbyte
                                  1 Gbyte.
2               overlap           Entire disk.                                          Size of entire disk
3               /mypartition      For any additional applications.                      The remaining space
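
The following JumpStart profile fragment is a sketch of how the example partitions in TABLE 7-1 could be declared during installation. The disk device (c0t0d0) and the sizes are placeholders; slice 2 (the overlap slice) is not declared in a profile, and the logging mount option for the root file system can also be set in /etc/vfstab after installation.

install_type    initial_install
system_type     standalone
partitioning    explicit
filesys         c0t0d0s0    2048    /               logging
filesys         c0t0d0s1    1024    swap
filesys         c0t0d0s3    free    /mypartition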



Installing the Solaris Operating System on a Dataless Node

To install the Solaris Operating System on a dataless node, use the Solaris JumpStart tool. The Solaris JumpStart tool requires the Solaris distribution to be on the installation server. For information about creating a Solaris distribution, see Preparing the Installation Environment.


To Install the Solaris Operating System on a Dataless Node

1. Log in to the installation server as superuser.

2. If you have not already done so, create the Solaris JumpStart environment on the installation server by following the installation documentation for your Solaris release.

At the end of this process, you have a Jumpstart-dir directory that contains the Solaris JumpStart files that are needed to install the Solaris Operating System on the node.

3. In the /etc/hosts file, add the name and IP addresses of the dataless node.
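
For example, using the sample addresses shown later in this chapter, the entry for the dataless node's interface that is used for the JumpStart installation might be:

10.250.1.30    netraDATALESS1-nic0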

4. In the /etc/ethers file, add the Ethernet address of the dataless node's network interface that is connected to the same switch as the installation server, for example, NIC0.
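
For example, where the Ethernet address shown is a placeholder for the MAC address of the dataless node's NIC0 interface:

8:0:20:aa:bb:cc    netraDATALESS1-nic0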

5. Share the Solaris-distribution-dir and Jumpstart-dir directories by adding these lines to the /etc/dfs/dfstab file:

share -F nfs -o rw Solaris-distribution-dir
share -F nfs -o rw Jumpstart-dir
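
After you edit the dfstab file, you can make the new shares available without rebooting the installation server, for example:

# shareall

If the NFS server is not already running on the installation server, start it first with /etc/init.d/nfs.server start.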

6. Change to the directory where the add_install_client command is located:

# cd Solaris-distribution-dir/Solaris_x/Tools

7. Run the add_install_client command for each dataless node:

# ./add_install_client -i IP-address \
-e Ethernet-address \
-s iserver:Solaris-distribution-dir \
-c iserver:Jumpstart-dir \
-p iserver:sysidcfg-dir \
-n name-service host-name platform-group

For more details, see the add_install_client(1M) man page.
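
For example, using the sample addresses from this chapter, a placeholder Ethernet address, and the sun4u platform group (the directory placeholders and name service are unchanged from the template above):

# ./add_install_client -i 10.250.1.30 \
-e 8:0:20:aa:bb:cc \
-s iserver:Solaris-distribution-dir \
-c iserver:Jumpstart-dir \
-p iserver:sysidcfg-dir \
-n name-service netraDATALESS1-nic0 sun4u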

8. Connect to the console of the dataless node.

9. At the ok prompt, boot the dataless node by using the net device alias:

ok> boot net - install

If the installation server is connected to the second Ethernet interface, type:


ok> boot net2 - install

This command installs the Solaris Operating System on the dataless node.


To Install Solaris Patches

After you have completed the Solaris installation, install the necessary Solaris patches. The Netra High Availability Suite Foundation Services 2.1 7/05 README contains the list of Solaris patches that you must install, depending on the version of Solaris you installed.



Note - Some of these patches are required for CGTP. If you do not plan to install CGTP, do not install the CGTP patches. For more information about the impact of not installing CGTP, see Choosing a Cluster Network.



1. Log in to the dataless node as superuser.

2. Mount the directory from the installation server that contains the Solaris patches.

See To Mount an Installation Server Directory on the Master-Eligible Nodes.

3. Install the patches on the dataless node:

# patchadd -M /NetraHASuite/Patches/ patch-name


Installing the Foundation Services on a Dataless Node

After the Solaris Operating System has been installed on the dataless node, install the Foundation Services on the dataless node.

The set of services to be installed on the dataless node is a subset of the Foundation Services installed on the master-eligible nodes. Install the packages that are listed as needed for dataless nodes in TABLE 7-2.


TABLE 7-2 Foundation Services Packages for Dataless Nodes

Package Name            Package Description
SUNWnhadm               Cluster administration tool
SUNWnhhb                Probe heartbeat module
SUNWnhcmd               CMM developer package (.h and .so files)
SUNWnhcma               CMM binaries
SUNWnhcmb               CMM binaries
SUNWnhcdt               Trace library
SUNWnhtp8 or SUNWnhtp9  CGTP kernel drivers and modules
SUNWnhtu8 or SUNWnhtu9  CGTP user-space components, configuration scripts, and files
SUNWnhmas               NMA configuration and startup script
SUNWnhsafclm            SAF CLM Service API
SUNWnhpma               Daemon monitor, /opt file system
SUNWnhpmb               Daemon monitor, root file system
SUNWnhpms               Daemon monitor scripts
SUNWnhpmm               Daemon monitor driver
SUNWjsnmp               Java DMK 5.0 SNMP manager API classes
SUNWlomr                LOM package required if you install the Watchdog Timer
SUNWlomu                LOM package required if you install the Watchdog Timer
SUNWnhwdt               Watchdog Timer



To Install the Foundation Services

1. Mount the installation server directory on the dataless node as described in To Mount an Installation Server Directory on the Master-Eligible Nodes.

2. Install the packages by using the pkgadd command:

# pkgadd -d /NetraHASuite/Packages/ package-name

where /NetraHASuite/Packages is the installation server directory that is mounted on the dataless node.
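
The pkgadd command accepts several package names in one invocation, so you can install a group of the packages listed in TABLE 7-2 at once. For example (the exact set of packages, and the choice between the SUNWnhtp8/SUNWnhtu8 and SUNWnhtp9/SUNWnhtu9 CGTP packages, depends on your configuration and Solaris version):

# pkgadd -d /NetraHASuite/Packages/ SUNWnhcmd SUNWnhcma SUNWnhcmb \
SUNWnhtp8 SUNWnhtu8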

CGTP enables a redundant network for your cluster.



Note - If you do not require CGTP, do not install the CGTP packages. For more information about the impact of not installing CGTP, see Choosing a Cluster Network.




Configuring the Foundation Services on a Dataless Node

The following procedures explain how to configure the Foundation Services on a dataless node.


To Configure a Dataless Node

1. Create a /etc/notrouter file:

# touch /etc/notrouter

Because the cluster network is not routable, the dataless nodes must be disabled as routers.

2. Modify the /etc/default/login file so you can connect to the node from a remote system as superuser:

# mv /etc/default/login /etc/default/login.orig
# chmod 644 /etc/default/login.orig
# sed '1,$s/^CONSOLE/#CONSOLE/' /etc/default/login.orig > /etc/default/login
# chmod 444 /etc/default/login

3. Disable power management:

# touch /noautoshutdown

4. Modify the .rhosts file according to the security policy for your cluster:

# cp /.rhosts /.rhosts.orig
# echo "+ root" > /.rhosts
# chmod 444 /.rhosts

5. Set the boot parameters:

# /usr/sbin/eeprom local-mac-address?=true
# /usr/sbin/eeprom auto-boot?=true
# /usr/sbin/eeprom diag-switch?=false

6. (Optional) If you are using the Network Time Protocol (NTP) with an external clock, configure the dataless node as an NTP server.

This procedure is described in the Solaris documentation.
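
As a minimal sketch only, an /etc/inet/ntp.conf file for the dataless node could contain the following entries, where the server address is a placeholder for your external clock; adapt the configuration by following the Solaris NTP documentation and start the daemon with /etc/init.d/xntpd start:

server 192.168.100.5 prefer
driftfile /var/ntp/ntp.drift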


To Configure an External IP Address

To configure external IP addresses for a dataless node, the node must have an extra physical network interface or logical network interface. A physical network interface is an unused interface on an existing Ethernet card or a supplemental HME or QFE Ethernet card, for example, hme2. A logical network interface is an interface configured on an existing Ethernet card, for example, hme1:101.

Configure an external IP address for the extra network interface based on your public network policy.
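
As a sketch, assuming a logical interface hme1:101 and a placeholder public address, you could configure the external address as follows; to make the address persistent across reboots, also create the matching /etc/hostname.hme1:101 file:

# ifconfig hme1:101 plumb
# ifconfig hme1:101 192.168.12.30 netmask 255.255.255.0 up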


To Update the Network Files on a Dataless Node

1. Log in to the dataless node as superuser.

As for the master-eligible nodes, three IP addresses are configured for each dataless node: one for the NIC0 interface, one for the NIC1 interface, and one for the cgtp0 interface.

The IP addresses can be IPv4 addresses of any class. However, the nodeid that you later define in the cluster_nodes_table file and the nhfs.conf file must be a decimal representation of the host part of the node's IP address. For information about the files, see To Create the nhfs.conf File for a Dataless Node and To Update the Cluster Node Table.

2. Create or update the file /etc/hostname.NIC0 for the NIC0 interface.

This file must contain the cluster network name of the dataless node on the first interface, for example, netraDATALESS1-nic0.

3. Create or update the file /etc/hostname.NIC1 for the NIC1 interface.

This file must contain the cluster network name of the dataless node on the second interface, for example, netraDATALESS1-nic1.

4. Create or update the file /etc/hostname.cgtp0 for the cgtp0 interface.

This file must contain the cluster network name of the dataless node on the cgtp0 interface, for example, netraDATALESS1-cgtp.

5. In the /etc/hosts file, add the IP address and node name for the NIC0, NIC1, and cgtp0 network interfaces of all the nodes in the cluster:

127.0.0.1		  localhost
10.250.1.10 netraMEN1
10.250.2.10 netraMEN1-nic1
10.250.3.10 netraMEN1-cgtp

10.250.1.20 netraMEN2
10.250.2.20 netraMEN2-nic1
10.250.3.20 netraMEN2-cgtp

10.250.1.30 netraDATALESS1-nic0 loghost netraDATALESS1.localdomain
10.250.2.30 netraDATALESS1-nic1 netraDATALESS1-nic1.localdomain
10.250.3.30 netraDATALESS1-cgtp netraDATALESS1-cgtp.localdomain

10.250.1.1 		master
10.250.2.1 		master-nic1
10.250.3.1 		master-cgtp

6. Update the /etc/nodename file with the name corresponding to the address of one of the network interfaces, for example, netraDATALESS1-cgtp.

7. Create the /etc/netmasks file by adding one line for each subnet on the cluster:

10.250.1.0    255.255.255.0
10.250.2.0    255.255.255.0
10.250.3.0    255.255.255.0


To Create the nhfs.conf File for a Dataless Node

1. Log in to the dataless node as superuser.

2. Create the nhfs.conf file for the dataless node:

# cp /etc/opt/SUNWcgha/nhfs.conf.template /etc/opt/SUNWcgha/nhfs.conf

3. Edit the nhfs.conf file to suit your cluster configuration.

An example file for a dataless node in a cluster with domain ID 250 and network interfaces eri0, eri1, and cgtp0 is as follows:


Node.NodeId=40
Node.NIC0=eri0
Node.NIC1=eri1
Node.NICCGTP=cgtp0
Node.UseCGTP=True
Node.Type=Dataless
Node.DomainId=250
CMM.IsEligible=False
CMM.LocalConfig.Dir=/etc/opt/SUNWcgha

Choose a unique nodeid and unique node name for the dataless node. To view the nodeid of each node already in the cluster, see the /etc/opt/SUNWcgha/cluster_nodes_table file on the master node. For more information, see the nhfs.conf(4) man page.

If you have not installed the CGTP patches and packages, set the Node.UseCGTP parameter to False in the nhfs.conf file.

To enable the Watchdog Timer, you must modify the nhfs.conf file. The Watchdog Timer can be configured differently on each node according to your requirements. For more information, see the nhfs.conf(4) man page.


To Set Up File Systems for a Dataless Node

Update the /etc/vfstab file in the dataless node's root directory to add the NFS mount points for master node directories that contain middleware data and services.

1. Log in to a dataless node as superuser.

2. Edit the entries in the /etc/vfstab file.

For more information about floating addresses of the master nodes, see To Create the Floating Address Triplet Assigned to the Master Role.

3. Define the mount points /SUNWcgha/remote, /SUNWcgha/services, and /SUNWcgha/swdb in the /etc/vfstab file.

All file systems that you mount by using NFS must be mounted with the options fg, hard, and intr. You can also set the noac mount option, which suppresses data and attribute caching. Use the noac option only if the impact on performance is acceptable.
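
The following /etc/vfstab entries are a sketch only. The directories exported by the master node (shown here as placeholder paths under the master-cgtp logical host name) depend on how the master-eligible nodes were configured; use the logical host name and export paths that apply to your cluster:

master-cgtp:/export/data      -  /SUNWcgha/remote    nfs  -  yes  fg,hard,intr
master-cgtp:/export/services  -  /SUNWcgha/services  nfs  -  yes  fg,hard,intr
master-cgtp:/export/swdb      -  /SUNWcgha/swdb      nfs  -  yes  fg,hard,intr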



Note - Do not use IP addresses in the /etc/vfstab file for the dataless node. Instead, use logical host names. Otherwise, the pkgadd -R command fails and returns the following message:

WARNING: cannot install to or verify on <master_ip>



4. Create the mount points /SUNWcgha/remote, /SUNWcgha/services, and /SUNWcgha/swdb:

# mkdir -p /SUNWcgha/remote
# mkdir -p /SUNWcgha/services
# mkdir -p /SUNWcgha/swdb

5. Repeat Step 1 through Step 4 for all dataless nodes in the cluster.


Integrating a Dataless Node Into the Cluster

The following procedures explain how to integrate a dataless node into the cluster:

To Update the /etc/hosts Files on Each Peer Node
To Update the Cluster Node Table


To Update the /etc/hosts Files on Each Peer Node

1. Log in to the master node as superuser.

2. Edit the /etc/hosts file to add the following lines:

IP-address-NIC0 nic0-dataless-node-name
IP-address-NIC1 nic1-dataless-node-name
IP-address-cgtp0 cgtp0-dataless-node-name

This modification enables the master node to "see" the network interfaces of the dataless node.

3. Log in to the vice-master node as superuser.

4. Repeat Step 2.

This modification enables the vice-master node to "see" the three network interfaces of the dataless node.

5. Log in to a dataless node that is part of the cluster, if a dataless node already exists.

6. Repeat Step 2.

This modification enables the existing dataless node to "see" the three network interfaces of the new dataless node.

7. Repeat Step 5 and Step 6 on all other diskless and dataless nodes that are already part of the cluster.


To Update the Cluster Node Table

1. Log in to the master node as superuser.

2. Edit the cluster_nodes_table file on the master node with the node information for a dataless node:

#NodeId Domain_id Name Attributes
nodeid domainid dataless-node-name -

The nodeid that you define in the cluster_nodes_table file must be the decimal representation of the host part of the node's IP address. For more information about the cluster_nodes_table file, see the cluster_nodes_table(4) man page.
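
For example, an entry that uses the node ID and domain ID from the nhfs.conf example earlier in this chapter (the node name shown is illustrative):

40 250 netraDATALESS1 -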

3. Reload the cluster_nodes_table file so that the master node takes the new entry into account:

# /opt/SUNWcgha/sbin/nhcmmstat -c reload

4. Repeat Step 2 for each dataless node you are adding to the cluster.


Starting the Cluster

To integrate the dataless node into the cluster, delete the not_configured file and reboot all the nodes. After the nodes have completed booting, verify the cluster configuration.


To Delete the not_configured File

During the installation of the CMM packages, the /etc/opt/SUNWcgha/not_configured file is automatically created. This file enables you to reboot a cluster node during the installation and configuration process without starting the Foundation Services.

After you have completed installing and configuring the software on the dataless node, delete this file before starting the cluster.
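
For example, on the dataless node:

# rm /etc/opt/SUNWcgha/not_configured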


To Start the Cluster

1. As superuser, reboot the master node:

# init 6

2. After the master node has completed rebooting, reboot the vice-master node as superuser:

# init 6

3. After the vice-master node has completed rebooting, reboot the master-ineligible nodes as superuser:

# init 6


To Verify the Cluster Configuration

Use the nhadm tool to verify that the dataless nodes have been configured correctly and are integrated into the cluster.

1. Log in to the dataless node as superuser.

2. Run the nhadm tool to validate the configuration:

# nhadm check starting

If all checks pass, the installation of the Foundation Services was successful. For more information, see the nhadm(1M) man page.