CHAPTER 6
Installing the Software for Diskless Nodes
When you have installed and configured the master-eligible nodes of the cluster, you can add diskless nodes and dataless nodes to the cluster.
This chapter pertains to diskless nodes. To add dataless nodes to your cluster, see Chapter 7.
Information about installing software for diskless nodes is provided in the following sections:
Before installing and configuring the software for a diskless node, check that the node is connected to the cluster and that there is enough disk space on the master-eligible nodes.
To connect a diskless node to a cluster, connect the two network interfaces of the diskless node to the two switches of the cluster.
For details on how to connect the diskless node to other nodes in the cluster, see the Netra High Availability Suite Foundation Services 2.1 7/05 Hardware Guide.
Check that an exported file system is configured for the diskless node on a shared partition of the master node.
The number of diskless nodes in a cluster depends on your hardware configuration and the disk space that is available in your shared file system. For each diskless node, there must be a mounted file system on the master node with a capacity of 100 Mbytes. The file system for diskless nodes is in the /export directory. For examples of disk partitions for a master-eligible node, see Defining Disk Partitions on the Master-Eligible Nodes.
Note - Each diskless node must be configured with sufficient physical memory so that swapping is not required. Swapping to a file system across NFS has a serious impact on performance.
Install the Solaris Operating System for diskless nodes by using the smosservice command on the master node. You run this command only the first time you add a diskless node to a cluster, to install the common Solaris services for all diskless nodes. The common Solaris services for the diskless nodes are installed in the /export/exec directory on the master node. You must also install the following packages: SMEvplr.u, SUNWsiox.u, SUNWkvm.u, and SMEvplu.u. Install SMEvplr.u and SUNWsiox.u on the root file system for the diskless nodes. Install SUNWkvm.u and SMEvplu.u in the /usr directory for each diskless node.
For every additional diskless node, you only need to create the root file system for the new node by using the smdiskless command. The root file system is installed in the /export/root/diskless-node-name directory for each diskless node.
To install the Solaris Operating System for the diskless nodes, see the following procedures.
To Install the Common Solaris Services for Diskless Nodes on the Master Node
To Create a Root File System for a Diskless Node on the Master Node
To Install the SMEvplr.u and SUNWsiox.u Solaris Packages for Diskless Nodes
To Configure the Trivial File Transfer Protocol on the Master-Eligible Nodes
1. Ensure that the mount points to the software distributions have been configured.
For more information, see To Mount an Installation Server Directory on the Master-Eligible Nodes.
2. Log in to the master node as superuser.
3. Start the Solaris Management Console.
# smc
# ps -ef | grep smc
    root   474   473  0   Jul 29 ?        0:00 /usr/sadm/lib/smc/bin/smcboot
    root   473     1  0   Jul 29 ?        0:00 /usr/sadm/lib/smc/bin/smcboot
For more information, see the smc(1M) man page.
4. Run the smosservice command:
# /usr/sadm/bin/smosservice add -p root-password -- \
-x mediapath=Solaris-distribution-dir \
-x platform=Solaris-platform \
-x cluster=Solaris-cluster \
-x locale=locale
root-password is the superuser password. By default, this password is sunrules.
Solaris-distribution-dir is the mounted directory on the master node that contains the Solaris distribution.
Solaris-platform is the Solaris platform, for example, sparc.sun4u.Solaris_9.
Solaris-cluster is the Solaris cluster to install, for example, SUNWCuser.
locale is the locale to install. For U.S. English, the value is en_US.
For example, to install the Solaris services for diskless nodes, type:
# /usr/sadm/bin/smosservice add -p sunrules -- \
-x mediapath=/Solaris9-Distribution \
-x platform=sparc.sun4u.Solaris_9 \
-x cluster=SUNWCuser \
-x locale=en_US
The common Solaris services for all diskless nodes are installed in the /export/exec directory on the master node.
For more information, see the smosservice(1M) man page.
Note - Ignore error messages related to packages that have not been installed. Always answer Y to continue the installation.
1. Ensure that the mount points to the software distributions have been configured.
For more information, see To Mount an Installation Server Directory on the Master-Eligible Nodes.
2. Log in to the master node as superuser.
3. Install the SUNWkvm.u package:
4. Install the SMEvplu.u package:
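The pkgadd commands for these two steps depend on where the package directory is mounted and on which Solaris release the shared /usr tree was built for. As a sketch, assuming the packages are available under /Solaris-packages and the shared services tree is /export/exec/Solaris_9_sparc.all (both paths are assumptions to verify against your layout):

```
# pkgadd -d /Solaris-packages -R /export/exec/Solaris_9_sparc.all SUNWkvm.u
# pkgadd -d /Solaris-packages -R /export/exec/Solaris_9_sparc.all SMEvplu.u
```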
After the common Solaris services for the diskless nodes are installed, use the smdiskless command on the master node to create a root file system for each diskless node in the cluster.
1. Log in to the master node as superuser.
2. Create an entry in /etc/hosts for diskless-node-name on the first node.
3. Create the root file system for each diskless node:
# /usr/sadm/bin/smdiskless add -p root-password -- \
-i IP-address-NIC0 \
-e Ethernet-address \
-n diskless-node-name \
-x os=Solaris-platform \
-x locale=locale
IP-address-NIC0 is the IP address of the diskless node on the NIC0 interface, for example, 10.250.1.30.
Solaris-platform is the Solaris platform, for example, sparc.sun4u.Solaris_8 or sparc.sun4u.Solaris_9.
For example, to add a new diskless node that is named netraDISKLESS1 that runs Solaris 9 on a Sun4U machine, type:
# /usr/sadm/bin/smdiskless add -p sunrules -- -i 10.250.1.20 \
-e 08:00:20:01:02:03 -n netraDISKLESS1 \
-x os=sparc.sun4u.Solaris_9 -x locale=en_US
The root file system for the diskless node is created in the /export/root/netraDISKLESS1 directory.
For more information, see the smdiskless(1M) man page.
1. Ensure that the mount points to the software distributions have been configured.
For more information, see To Mount an Installation Server Directory on the Master-Eligible Nodes.
2. Log in to the master node as superuser.
3. Install the SMEvplr.u package for each diskless node:
4. Install the SUNWsiox.u package:
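These two steps target each diskless node's exported root file system. A sketch, assuming the packages are available under /Solaris-packages (the source path is an assumption) and using pkgadd's -R option to point at the node's root:

```
# pkgadd -d /Solaris-packages -R /export/root/diskless-node-name SMEvplr.u
# pkgadd -d /Solaris-packages -R /export/root/diskless-node-name SUNWsiox.u
```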
The smdiskless command creates the directory /tftpboot on the master node. This directory contains the boot image for each diskless node. Create the same directory on the vice-master node. Then, after a switchover, the new master node can boot the diskless nodes.
1. Log in to the master node as superuser.
2. Modify the /etc/inetd.conf file to configure the Trivial File Transfer Protocol (TFTP).
Uncomment the tftp line, by deleting the comment mark at the beginning of the line, for example:
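On Solaris 9 the uncommented tftp line typically reads as follows (on Solaris 8 the protocol field is udp rather than udp6); verify against your own /etc/inetd.conf:

```
tftp    dgram   udp6    wait    root    /usr/sbin/in.tftpd  in.tftpd -s /tftpboot
```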
For more information, see the inetd.conf(4) man page.
3. Copy the /tftpboot directory to the vice-master node:
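One way to copy the directory, assuming remote root access from the master node and that the vice-master node is reachable as vice-master (the host name and the use of rsh are assumptions; ssh works the same way):

```
# cd / ; tar cf - tftpboot | rsh vice-master "cd / ; tar xf -"
```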
4. Log in to the vice-master node.
5. Repeat Step 2 on the vice-master node.
In the root directory for each diskless node on the master node, install the necessary Solaris patches. The Netra High Availability Suite Foundation Services 2.1 7/05 README contains the list of Solaris patches that you must install. The contents of this list depends on the version of the Solaris Operating System you installed.
Note - Some of these patches are required for CGTP. If you do not plan to install CGTP, do not install the CGTP patches. For more information about the impact of not installing CGTP, see Choosing a Cluster Network.
1. Log in to the master node as superuser.
2. Check that the directory containing the Foundation Services software distribution on the installation server is mounted on the master node:
# mount
...
/NetraHASuite on 10.250.1.100:/software-distribution-dir \
  remote/read/write/setuid/dev=3ec0004 on Tue Sep 24 17:06:09 2002
#
10.250.1.100 is the IP address of the installation server network interface that is connected to the cluster.
software-distribution-dir is the directory that contains the Foundation Services product for the hardware architecture.
If the directory is not mounted, mount the directory as described in To Mount an Installation Server Directory on the Master-Eligible Nodes.
3. Install the Solaris services patches for the diskless nodes on the master node:
where x is 8 or 9 depending on the Solaris version installed.
4. Apply the patches for each diskless node:
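As a sketch, patchadd(1M) can target the shared Solaris services with its -S option and each diskless root with its -R option. Here patch-dir and patch-id stand for the patch location and the patch IDs listed in the README (both are placeholders, and Solaris_9 becomes Solaris_8 on that release):

```
# patchadd -S Solaris_9 patch-dir/patch-id
# patchadd -R /export/root/diskless-node-name patch-dir/patch-id
```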
The Reliable Boot Service ensures continuous availability of the DHCP server in a cluster. In the event of a failover of the master node, the vice-master node takes over from the master node. In the event of the failure of a diskless node, the Reliable Boot Service enables the diskless node to reboot automatically. This service also reassigns IP addresses to diskless nodes. For more information, see the Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
The Reliable Boot Service is included in the Foundation Services packages SUNWnhrbs and SUNWnhrbb. These packages contain a DHCP public module, as well as template files for the DHCP service configuration file, the network containers, and the dhcptab containers.
1. Log in to each master-eligible node as superuser.
2. Check that the Solaris DHCP packages are installed on the master-eligible nodes.
The DHCP is delivered in the SUNWdhcm, SUNWdhcsr, and SUNWdhcsu packages.
If not already installed, install the Solaris DHCP packages on each master-eligible node:
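If the packages are missing, they can be added from the mounted Solaris distribution. A sketch, assuming the Product directory of the distribution used earlier in this chapter (the path is an assumption):

```
# pkgadd -d /Solaris9-Distribution/Solaris_9/Product SUNWdhcsr SUNWdhcsu SUNWdhcm
```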
3. Install the SUNWnhrbs and SUNWnhrbb Reliable Boot Service packages on each master-eligible node:
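For example, assuming the Foundation Services distribution is mounted at /NetraHASuite and the packages sit in a Packages subdirectory (the path is an assumption to verify against your distribution layout):

```
# pkgadd -d /NetraHASuite/Packages SUNWnhrbs SUNWnhrbb
```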
To configure the DHCP for a diskless node, create the DHCP configuration table and network table for the node using the dhcpconfig, dhtadm, and pntadm commands. For more information about these commands and files, see the dhcpconfig(1M), dhtadm(1M), and pntadm(1M) man pages.
1. Log in to the master node as superuser.
3. Modify the /etc/inet/dhcpsvc.conf file:
DAEMON_ENABLED=TRUE
RUN_MODE=server
RESOURCE=SUNWnhrbs
PATH=/SUNWcgha/remote/var/dhcp
CONVER=1
INTERFACES=hme0,hme1
OFFER_CACHE_TIMEOUT=30
PATH enables you to specify the path to the DHCP configuration file. This path must be in a shared file system.
INTERFACES enables you to specify the network interfaces on the node, for example, hme0 and hme1.
If you are configuring a single network link for your cluster (that is, you do not plan to install the CGTP), specify only the first network interface, for example, hme0.
OFFER_CACHE_TIMEOUT enables you to specify the number of seconds before OFFER cache timeouts occur, for example, 30.
For more information, see the dhcpsvc.conf(4) man page.
4. Create the DHCP configuration table:
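A hedged sketch using dhcpconfig(1M), assuming the SUNWnhrbs public module and the shared path set in dhcpsvc.conf above (verify the option set against the dhcpconfig(1M) man page):

```
# /usr/sbin/dhcpconfig -D -r SUNWnhrbs -p /SUNWcgha/remote/var/dhcp
```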
5. Modify the DHCP configuration table:
Note - If you are not planning to use CGTP (that is, you plan to configure a single network link for your cluster), do not configure the NhCgtpAddr macro.
vendor-string is an ASCII string that identifies the client class names that are supported by the DHCP. Specify multiple client class names separated by spaces, for example:
floating-master-address is the floating IP address assigned to the CGTP interface of the current master node. For example, 10.250.3.1. For more information, see Configuring the Master-Eligible Node Addresses.
If you are not planning to use the CGTP (that is, you plan to configure a single network link for your cluster), use the IP address assigned to one of the NICs on the current master node, for example, 10.250.1.1.
For more information about the DHCP options, see the dhtadm(1M) man page.
6. Create the DHCP network table:
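For example, to create the network tables for the NIC0 and NIC1 subnets used elsewhere in this chapter (the subnet addresses are examples):

```
# pntadm -C 10.250.1.0
# pntadm -C 10.250.2.0
```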
Configure a DHCP boot policy for the diskless nodes in the cluster by updating the DHCP configuration table and the DHCP network table. The boot policy is a way to assign IP addresses to a diskless node when the node is booted.
Diskless nodes can have a dynamic, static, or client ID boot policy. For more information about the DHCP boot policies, see the Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
Dynamic - The IP address is dynamically assigned from a pool of IP addresses when the diskless node is booted. See To Configure the DHCP Dynamic Boot Policy.
Static - The IP address is statically assigned based on the Ethernet address of the diskless node. See To Configure the DHCP Static Boot Policy.
Client ID - The IP address is generated from the node's client ID. See To Configure the DHCP Client ID Boot Policy.
1. Log in to the master node as superuser.
2. Update the DHCP configuration table for the NIC0 interface of the diskless node:
macro-name is the NIC0 IP address of the node.
local-cgtp-addr, local-nic0-addr, and local-nic1-addr are respectively the IP addresses of the cgtp0, nic0, and nic1 interfaces of the node.
os is the operating system. Specify Solaris_8 or Solaris_9 depending on the Solaris version you installed.
diskless-node-name is the name of the node.
subnet is the NIC0 subnet.
9. Update the DHCP network table for the NIC0 interface of the diskless node:
IP-address is the NIC0 IP address of the node.
macro-name is the NIC0 IP address of the node.
subnet is the NIC0 subnet.
For the diskless node with the NIC0 IP address 10.250.1.30, type:
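A sketch of the corresponding pntadm(1M) command; with no -f option the entry is created with the default dynamic flag:

```
# pntadm -A 10.250.1.30 -m 10.250.1.30 10.250.1.0
```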
10. Update the DHCP configuration table for the NIC1 interface of the diskless node:
macro-name is the NIC1 IP address of the node.
local-cgtp-addr, local-nic0-addr, and local-nic1-addr are respectively the IP addresses of the cgtp0, nic0, and nic1 interfaces of the node.
os is the operating system. Specify Solaris_8 or Solaris_9 depending on the Solaris version you installed.
diskless-node-name is the name of the node.
subnet is the NIC1 subnet.
17. Update the DHCP network table for the NIC1 interface of the diskless node:
IP-address is the NIC1 IP address of the node.
macro-name is the NIC1 IP address of the node.
subnet is the NIC1 subnet.
For the diskless node with the NIC1 IP address 10.250.2.30, type:
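A sketch of the corresponding pntadm(1M) command for the NIC1 subnet (verify the subnet value against your network plan):

```
# pntadm -A 10.250.2.30 -m 10.250.2.30 10.250.2.0
```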
1. Log in to the master node as superuser.
2. Update the DHCP configuration table for the NIC0 interface of the diskless node:
macro-name is the NIC0 IP address of the node.
local-cgtp-addr, local-nic0-addr, and local-nic1-addr are respectively the IP addresses of the cgtp0, nic0, and nic1 interfaces of the node.
os is the operating system. Specify Solaris_8 or Solaris_9 depending on the Solaris version you installed.
diskless-node-name is the name of the node.
subnet is the NIC0 subnet.
9. Update the DHCP container for the NIC0 interface of the diskless node.
IP-address is the NIC0 IP address of the node.
Ethernet-address is the Ethernet address of the node's board. The letters of the address must be in uppercase.
macro-name is the NIC0 IP address of the node.
subnet is the NIC0 subnet.
For the diskless node with the NIC0 IP address 10.250.1.30 and Ethernet address 01080020F9B360, type:
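A sketch using pntadm's -i option to bind the entry to the client ID derived from the Ethernet address (the 01 prefix marks an Ethernet-based client ID), and -f to mark the entry manually assigned and permanent. The flag combination is an assumption to verify against pntadm(1M):

```
# pntadm -A 10.250.1.30 -i 01080020F9B360 -f MANUAL+PERMANENT -m 10.250.1.30 10.250.1.0
```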
10. Update the DHCP configuration table for the NIC1 interface of the diskless node:
macro-name is the NIC1 IP address of the node.
local-cgtp-addr, local-nic0-addr, and local-nic1-addr are respectively the IP addresses of the cgtp0, nic0, and nic1 interfaces of the node.
os is the operating system. Specify Solaris_8 or Solaris_9 depending on the Solaris version you installed.
diskless-node-name is the name of the node.
subnet is the NIC1 subnet.
17. Update the DHCP container for the NIC1 interface of the diskless node:
IP-address is the NIC1 IP address of the node.
Ethernet-address is the Ethernet address of the node's board.
macro-name is the NIC1 IP address of the node.
subnet is the NIC1 subnet.
For the diskless node with the NIC1 IP address 10.250.2.30 and Ethernet address 01080020F9B361, type:
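A sketch of the corresponding pntadm(1M) command for the NIC1 entry (flag combination as above, an assumption to verify):

```
# pntadm -A 10.250.2.30 -i 01080020F9B361 -f MANUAL+PERMANENT -m 10.250.2.30 10.250.2.0
```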
This procedure can only be performed on nodes with CompactPCI technology. For information specific to the hardware you are using, see the corresponding hardware documentation.
1. Create or retrieve the client ID for the diskless node.
a. Log in to the diskless node as superuser.
c. Check for the client ID of the diskless node:
If a client ID is not configured, configure it:
where client-id-name is an ASCII string. In this procedure, test is used as an example client ID.
d. Convert the ASCII string to hexadecimal.
For example, if test is your client ID, the hexadecimal equivalent is 74 65 73 74.
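The ASCII-to-hexadecimal conversion can be done with standard tools. For example, od(1) prints each byte of the client ID string in hexadecimal:

```shell
# Dump each byte of the ASCII client ID "test" in hexadecimal;
# the four bytes print as 74 65 73 74.
printf 'test' | od -An -tx1
```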
3. Declare the diskless node's client ID in the /export/root/diskless-node-name/etc/default/dhcpagent file.
For example, if the hexadecimal equivalent of your client ID is 74 65 73 74 on a Netra CT 810 machine, add the following line to the dhcpagent file:
For information about the format of the CLIENT_ID on the hardware you are using, see the corresponding hardware documentation.
4. Update the DHCP configuration table for the NIC0 interface of the diskless node:
macro-name is the NIC0 IP address of the node.
local-cgtp-addr, local-nic0-addr, and local-nic1-addr are respectively the IP addresses of the cgtp0, nic0, and nic1 interfaces of the node.
os is the operating system. Specify Solaris_8 or Solaris_9 depending on the Solaris version you installed.
diskless-node-name is the name of the node.
subnet is the NIC0 subnet.
11. Update the DHCP network table for the NIC0 interface of the diskless node:
IP-address is the NIC0 IP address of the node.
diskless-node-clientID is the hexadecimal equivalent of the client ID.
macro-name is the NIC0 IP address of the node.
subnet is the subnet of the NIC0 interface.
For a Netra CT 810 diskless node with the NIC0 IP address 10.250.1.30 and client ID 74657374, type:
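A sketch of the pntadm(1M) command; whether the client ID needs a platform-specific prefix depends on your hardware, so verify the -i value and the MANUAL+PERMANENT flag combination against your hardware documentation and the pntadm(1M) man page:

```
# pntadm -A 10.250.1.30 -i 74657374 -f MANUAL+PERMANENT -m 10.250.1.30 10.250.1.0
```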
For information about the format of the CLIENT_ID on the hardware you are using, see the corresponding hardware documentation.
12. Update the DHCP configuration table for the NIC1 interface of the diskless node:
macro-name is the NIC1 IP address of the node.
local-cgtp-addr, local-nic0-addr, and local-nic1-addr are respectively the IP addresses of the cgtp0, nic0, and nic1 interfaces of the node.
os is the operating system. Specify Solaris_8 or Solaris_9 depending on the Solaris version you installed.
diskless-node-name is the name of the node.
subnet is the NIC1 subnet.
19. Update the DHCP container for the NIC1 interface of the diskless node.
IP-address is the NIC1 IP address of the node.
diskless-node-clientID is the hexadecimal equivalent of the client ID.
macro-name is the NIC1 IP address of the node.
subnet is the NIC1 subnet.
For the diskless node with NIC1 IP address 10.250.2.30 and client ID 74657374, type:
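A sketch of the corresponding pntadm(1M) command for the NIC1 entry (same caveats on the -i value and flags as for NIC0):

```
# pntadm -A 10.250.2.30 -i 74657374 -f MANUAL+PERMANENT -m 10.250.2.30 10.250.2.0
```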
For information about the format of the CLIENT_ID on the hardware you are using, see the corresponding hardware documentation.
The packages that are installed in the partitions for diskless nodes are a subset of the Foundation Services packages already installed on the master-eligible nodes. The following Foundation Services must be installed for each diskless node.
CGTP user-space components, configuration scripts, and files
1. Log in to the master node as superuser.
2. Install the Foundation Services packages.
For example, to install the Foundation Services packages and the Java DMK package on Solaris 9, run the following command:
In the preceding command, you also install the Java DMK 5.0 runtime classes in the root directory of each diskless node.
CGTP enables a redundant network for your cluster.
Note - If you do not require CGTP, do not install the CGTP packages. For more information about the impact of not installing CGTP, see Choosing a Cluster Network.
3. Install the Java DMK SNMP manager API classes package in the shared /usr directory for the diskless nodes:
where x is 8 or 9 depending on the Solaris version installed.
4. Install the Watchdog Timer packages appropriate to your hardware.
a. Refer to your hardware guides for the correct package names and installation for your configuration.
b. To enable the Watchdog Timer, modify the nhfs.conf file.
For instructions on how to configure the Watchdog Timer, see the nhfs.conf(4) man page. The Watchdog Timer can be configured differently on each node, according to your requirements.
To configure the Foundation Services for a diskless node, see the following procedures:
1. Log in to the master node as superuser.
2. Create the following files, where diskless-node-name is the hostname of the diskless node:
/export/root/diskless-node-name/etc/hostname.NIC0
/export/root/diskless-node-name/etc/hostname.NIC1
/export/root/diskless-node-name/etc/dhcp.NIC0
/export/root/diskless-node-name/etc/dhcp.NIC1
For example, if you are using a CP2160 board, create the files:
/export/root/diskless-node-name/etc/hostname.eri0
/export/root/diskless-node-name/etc/hostname.eri1
/export/root/diskless-node-name/etc/dhcp.eri0
/export/root/diskless-node-name/etc/dhcp.eri1
3. Create the /export/root/diskless-node-name/etc/hosts file.
4. Edit the /export/root/diskless-node-name/etc/hosts file to include the IP addresses and node names for all the network interfaces of all the nodes.
The interfaces are the NIC0, NIC1, and cgtp0 interfaces.
5. Create the /export/root/diskless-node-name/etc/nodename file.
6. Edit the /export/root/diskless-node-name/etc/nodename file to include the node name that is associated with the IP address of one of the network interfaces.
For example, add the node name associated with the IP address of the cgtp0 interface, that is, netraDISKLESS1-cgtp.
7. Create the /export/root/diskless-node-name/etc/netmasks file.
8. Edit the /export/root/diskless-node-name/etc/netmasks file to include a line for each subnet on the cluster:
To configure external IP addresses for a diskless node, the node must have an extra physical network interface or logical network interface. A physical network interface is an unused interface on an existing Ethernet card or a supplemental HME Ethernet card or QFE Ethernet card, for example, hme2. A logical network interface is an interface that is configured on an existing Ethernet card, for example, hme1:101.
Configure an external IP address for the extra network interface based on your public network policy.
Because the cluster network is not routable, you must disable the diskless node as a router.
1. Log in to the master node as superuser.
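On Solaris, a node does not act as a router if an /etc/notrouter file exists in its root file system; for a diskless node that root lives under /export/root on the master node. A sketch:

```
# touch /export/root/diskless-node-name/etc/notrouter
```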
For a description of the advantages of using a private cluster network, see the "Cluster Addressing and Networking" in Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
To set up file systems for a diskless node, create the mount points /SUNWcgha/remote, /SUNWcgha/services, and /SUNWcgha/swdb. Add the NFS mount points for the directories that contain middleware data and services on the master node. Update the /etc/vfstab file in the root directory for the diskless node. These file systems are then exported from the master node through NFS, and are automatically mounted for the diskless nodes at boot time.
TABLE 6-3 explains the file systems that are exported on the master node and the corresponding mount points for the diskless nodes. For information about how to export these file systems on the master node, see To Set Up File Systems on the Master-Eligible Nodes.
All file systems that you mount using NFS must be mounted with the options fg, hard, and intr. You can also set the noac mount option, which suppresses data and attribute caching. Use the noac option only if the impact on performance is acceptable.
1. Log in to the master node as superuser.
2. Edit the entries in the /export/root/diskless-node-name/etc/vfstab file.
If you have configured the CGTP, replace the host name of the master node with the host name associated with the floating IP address for the cgtp0 interface that is assigned to the master role, for example, master-cgtp.
For more information, see To Create the Floating Address Triplet Assigned to the Master Role.
If you have not configured the CGTP, replace the host name of the master node with the host name associated with the floating IP address for the NIC0 interface that is assigned to the master role, for example, master-nic0.
3. Define the mount points /SUNWcgha/remote, /SUNWcgha/services, and /SUNWcgha/swdb.
If you have configured the CGTP, use the floating IP address for the cgtp0 interface that is assigned to the master role to define the mount points:
If you have not configured the CGTP, use the floating IP address for the NIC0 interface that is assigned to the master role.
4. In the diskless node directory /export/root/diskless-node-name, create the mount points:
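For example, from the diskless node's root directory (a sketch):

```
# cd /export/root/diskless-node-name
# mkdir -p SUNWcgha/remote SUNWcgha/services SUNWcgha/swdb
```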
5. Repeat Step 2 through Step 4 for all diskless nodes.
Each node in the cluster has a cluster configuration file, nhfs.conf. Create this file for the new diskless node by performing the following procedure.
1. Log in to the master node as superuser.
2. Create the nhfs.conf file for the diskless node:
# cp /etc/opt/SUNWcgha/nhfs.conf.template \
/export/root/diskless-node-name/etc/opt/SUNWcgha/nhfs.conf
3. Configure the /export/root/diskless-node-name/etc/opt/SUNWcgha/nhfs.conf file.
An example file for a diskless node on a cluster with the domain ID 250, with network interfaces eri0, eri1, and cgtp0 would be as follows:
Node.NodeId=30
Node.NIC0=eri0
Node.NIC1=eri1
Node.NICCGTP=cgtp0
Node.UseCGTP=True
Node.Type=Diskless
Node.DomainId=250
CMM.IsEligible=False
CMM.LocalConfig.Dir=/etc/opt/SUNWcgha
For more information, see the nhfs.conf(4) man page.
If you have not installed the CGTP patches and packages, do the following:
Disable the Node.NIC1 and Node.NICCGTP parameters.
To disable these parameters, add a comment mark (#) at the beginning of the line containing the parameter if this mark is not already present.
Configure the Node.UseCGTP and the Node.NIC0 parameters:
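For a single-link cluster that uses eri0, for example, the resulting parameters might read (the interface name is an example):

```
Node.UseCGTP=False
Node.NIC0=eri0
```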
4. Repeat Step 2 and Step 3 for all diskless nodes.
You must update the /etc/hosts file on each peer node in the cluster to include the IP addresses of the diskless node. You must also update the nhfs.conf file and the cluster_nodes_table file on the master-eligible nodes to include the diskless node. See the following procedures.
To declare the diskless node to all peer nodes in the cluster, perform the following procedure:
1. Log in to the master node as superuser.
2. Edit the /etc/hosts file to add the following lines:
IP-address-NIC0 nic0-diskless-node-name
IP-address-NIC1 nic1-diskless-node-name
IP-address-cgtp0 cgtp0-diskless-node-name
Now, the master node can "see" the three network interfaces of the new diskless node.
3. Log in to the vice-master node as superuser.
4. Repeat Step 2.
Now, the vice-master node can "see" the three network interfaces of the new diskless node.
5. Log in to a diskless or dataless node that is part of the cluster, if one already exists.
6. Repeat Step 2.
Now, the diskless node can "see" the three network interfaces of the new diskless node.
7. Repeat Step 5 and Step 6 on all other diskless or dataless nodes that are already part of the cluster.
Update the cluster node table file, cluster_nodes_table, and the cluster configuration file, nhfs.conf, with the addressing information for the new diskless node.
1. Log in to the master node as superuser.
2. Using the following format, edit the /etc/opt/SUNWcgha/cluster_nodes_table file to add an entry for the diskless node:
The nodeid that you define in the cluster_nodes_table file must be the decimal representation of the host part of the node's IP address. For more information, see the cluster_nodes_table(4) man page.
3. Create the cluster_nodes_table file on the master node disk:
4. Repeat Step 2 for each diskless node you are adding to the cluster.
Specify the shared directory configuration in the nhfs.conf file on the master node and the vice-master node. Ensure that there is no existing shared directory configuration already specified in the /etc/dfs/dfstab file.
1. Log in to the master node as superuser.
2. Edit the /etc/opt/SUNWcgha/nhfs.conf file to add the following:
3. Update the RNFS.Share.0 parameter that is used to share the /SUNWcgha/local/export directory to include the cgtp0-diskless-node-name of the diskless node.
4. Log in to the vice-master node.
5. Repeat Step 2 and Step 3 on the vice-master node.
6. On the master node, edit the /etc/dfs/dfstab file to remove all uncommented lines.
To integrate the new diskless node into the cluster, delete the not_configured file and reboot the master-eligible nodes. When the Solaris Operating System and the Foundation Services have been booted onto the diskless nodes, verify the new configuration before the cluster is restarted.
The /export/root/diskless-node-name/etc/opt/SUNWcgha/not_configured file is automatically created during the installation of the CMM packages for the diskless node. This file enables you to reboot a cluster node during the installation and configuration process without starting the Foundation Services.
After you complete the installation and configuration procedures, but before starting the cluster, delete this file for the diskless node.
1. Log in to the master node as superuser.
3. After the master node has completed booting, log in to the vice-master node as superuser.
4. Reboot the vice-master node:
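A minimal example using init(1M):

```
# init 6
```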
5. After the vice-master node has completed booting, get the ok prompt on the diskless node:
6. Set your OpenBoot PROM parameters as follows:
ok> setenv local-mac-address? true
ok> setenv auto-boot-retry? true
ok> setenv diag-switch? false
ok> setenv boot-device net:dhcp,,,,,5 net2:dhcp,,,,,5
Note - If you are going to use client_id on a Netra CT diskless node, set the Boot_Devices environment variable. For more information, see the Netra CT Server System Administration Guide.
Use the nhadm tool to verify that the diskless nodes have been configured correctly and are integrated into the cluster.
1. Log in to the diskless node as superuser.
2. Run the nhadm tool to validate the configuration:
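The validation is typically run with the check subcommand (verify against the nhadm(1M) man page):

```
# nhadm check
```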
If all checks pass the validation, the installation of the Foundation Services software was successful. For more information, see the nhadm(1M) man page.
Copyright © 2006, Sun Microsystems, Inc. All Rights Reserved.