CHAPTER 5
Installing the Software on the Master-Eligible Nodes
After you have set up the installation environment, you are ready to manually install the Solaris Operating System and the Foundation Services on the master-eligible nodes of the cluster. The master-eligible nodes take on the roles of master node and vice-master node in the cluster. For more information about the types of nodes, see the Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
To manually install and configure the Foundation Services on the master-eligible nodes of your cluster, see the following sections:
Installing the Solaris Operating System on the Master-Eligible Nodes
Installing the Foundation Services on the Master-Eligible Nodes
Configuring the Foundation Services on the Master-Eligible Nodes
Configuring Solaris Volume Manager With Reliable NFS and Shared Disk
Defining Disk Partitions on the Master-Eligible Nodes
The master-eligible nodes store current data for all nodes in the cluster, whether the cluster has diskless nodes or dataless nodes. One master-eligible node becomes the master node, and the other becomes the vice-master node. The vice-master node takes over the role of master if the master node fails or is taken offline for maintenance. Therefore, the disks of both nodes must have exactly the same partitions. Create the disk partitions of the master-eligible nodes according to the needs of your cluster. For example, the disks of the master-eligible nodes must be configured differently if diskless nodes are part of the cluster.
The following list describes example disk partitions for master-eligible nodes in a cluster with diskless nodes:
/ (root) - The root file system, boot partition, and volume management software. This partition must be mounted with the logging option.
/export - Exported file system reserved for diskless nodes. This partition must be mounted with the logging option. This partition is further partitioned if diskless nodes are added to the cluster.
/SUNWcgha/local - Partition reserved for NFS status files, services, and configuration files. This partition must be mounted with the logging option.
Bitmap partition reserved for the nhcrfsd daemon, associated with the /export file system. For the size, see TABLE 5-3.
Bitmap partition reserved for the nhcrfsd daemon, associated with the /SUNWcgha/local file system. For the size, see TABLE 5-3.
For replication, create a bitmap partition for each partition that contains an exported, replicated file system on the master-eligible nodes. In this example, the bitmaps are created on partitions 5 and 6. For example bitmap partition sizes, see TABLE 5-3.
For information, see the Sun StorEdge Availability Suite 3.1 Remote Mirror Software Installation Guide in the Sun StorEdge Availability Suite 3.1 documentation set.
Note - In a cluster without diskless nodes, the /export file system and the associated bitmap partition are not required.
To install the Solaris Operating System on each master-eligible node, use the Solaris JumpStart tool on the installation server. The Solaris JumpStart tool requires the Solaris distribution to be on the installation server. For information about creating a Solaris distribution, see Preparing the Installation Environment.
To Install the Solaris Operating System Using the Solaris JumpStart Tool
1. Log in to the installation server as superuser.
2. Create the Solaris JumpStart environment on the installation server by using the appropriate document for the Solaris release:
You can access these documents on http://docs.sun.com.
3. In the /etc/hosts file, add the names and IP addresses of the master-eligible nodes.
4. Share the Solaris-distribution-dir and Jumpstart-dir directories by adding these lines to the /etc/dfs/dfstab file:
Solaris-distribution-dir is the directory that contains the Solaris distribution.
Jumpstart-dir is the directory that contains the Solaris JumpStart files.
5. Share the directories that are defined in the /etc/dfs/dfstab file:
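A minimal sketch of the dfstab entries for Step 4 and the command for Step 5, assuming the read-only ro,anon=0 share options commonly used for installation servers:
share -F nfs -o ro,anon=0 /Solaris-distribution-dir
share -F nfs -o ro,anon=0 /Jumpstart-dir
# shareall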
6. Change to the directory where the add_install_client command is located:
Solaris-dir is the directory that contains the Solaris installation software. This directory could be on a CD-ROM or in an NFS-shared directory.
x is 8 or 9 depending on the Solaris version installed.
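A sketch of this step, assuming the standard Solaris media layout with Solaris-dir and x as described above:
# cd /Solaris-dir/Solaris_x/Tools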
7. Run the add_install_client command for each master-eligible node.
For information, see the add_install_client(1M) man page.
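A hedged sketch of Step 7; the Ethernet address, host name, and platform group (for example, sun4u) are placeholders for your environment:
# ./add_install_client -e MEN-MAC-address \
-s installation-server:/Solaris-distribution-dir \
-c installation-server:/Jumpstart-dir \
netraMEN1 sun4u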
8. Connect to the console of each master-eligible node.
9. Boot each master-eligible node with the appropriate command using a network boot.
If you are unsure of the appropriate command, refer to the hardware documentation for your platform. The common command for SPARC systems is shown in the following example:
If the installation server is connected to the second Ethernet interface, type:
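A sketch of typical OpenBoot PROM commands; the net2 alias for the second Ethernet interface is an assumption and varies by platform, so you might need to define a device alias first:
ok boot net - install
ok boot net2 - install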
This command installs the Solaris Operating System on the master-eligible nodes.
To prepare the master-eligible nodes for the installation of the Foundation Services, you must configure the master-eligible nodes. You must also mount the installation server directory that contains the Foundation Services distribution.
To Configure the Master-Eligible Nodes
1. Log in to a master-eligible node as superuser.
2. Create the /etc/notrouter file:
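A minimal sketch of this step:
# touch /etc/notrouter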
3. Modify the /etc/default/login file so that you can connect to a node from a remote system as superuser:
# mv /etc/default/login /etc/default/login.orig
# chmod 644 /etc/default/login.orig
# sed '1,$s/^CONSOLE/#CONSOLE/' /etc/default/login.orig > /etc/default/login
# chmod 444 /etc/default/login
5. Modify the .rhosts file according to the security policy for your cluster:
6. Set the EEPROM parameters on the node:
# /usr/sbin/eeprom local-mac-address?=true
# /usr/sbin/eeprom auto-boot?=true
# /usr/sbin/eeprom diag-switch?=false
7. (Optional) If you are using the Network Time Protocol (NTP) to run an external clock, configure the master-eligible node as an NTP server.
This procedure is described in the Solaris documentation.
8. (Optional) If your master-eligible node has an IDE disk, edit the /usr/kernel/drv/sdbc.conf file.
Change the value of the sdbc_max_fbas parameter from 1024 to 256.
9. Create the data/etc and data/var/dhcp directories in the /SUNWcgha/local/export/ file system on the master-eligible node:
The /SUNWcgha/local/export/data/etc directory is required for the Cluster Membership Manager (CMM).
The /SUNWcgha/local/export/data/var/dhcp directory is required for the Reliable Boot Service.
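A minimal sketch of the commands for Step 9:
# mkdir -p /SUNWcgha/local/export/data/etc
# mkdir -p /SUNWcgha/local/export/data/var/dhcp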
10. Repeat Step 1 through Step 9 on the second master-eligible node.
To Mount an Installation Server Directory on the Master-Eligible Nodes
1. Log in to the installation server as superuser.
2. Check that the mountd and nfsd daemons are running on the installation server.
For example, use the ps command:
If a process ID is not returned for the mountd and nfsd daemons, start the NFS daemons:
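A sketch of the checks and the start command on Solaris 8 or 9:
# ps -ef | grep mountd
# ps -ef | grep nfsd
# /etc/init.d/nfs.server start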
3. Share the directory containing the distributions for the Foundation Services and the Solaris Operating System by adding the following lines to the /etc/dfs/dfstab file:
where software-distribution-dir is the directory that contains the Foundation Services packages and Solaris patches.
4. Share the directories that are defined in the /etc/dfs/dfstab file:
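A minimal sketch of the dfstab entries for Step 3 and the command for Step 4, assuming read-only sharing:
share -F nfs -o ro,anon=0 /software-distribution-dir
share -F nfs -o ro,anon=0 /Solaris-distribution-dir
# shareall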
5. Log in to a master-eligible node as superuser.
6. Create the mount point directories Solaris and NetraHASuite on the master-eligible node:
7. Mount the Foundation Services and Solaris distribution directories on the installation server:
installation-server-IP-address is the IP address of the cluster network interface that is connected to the installation server.
x is the Solaris version.
software-distribution-dir is the directory that contains the Foundation Services packages.
Solaris-distribution-dir is the directory that contains the Solaris distribution.
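A sketch of Steps 6 and 7, assuming read-only NFS mounts:
# mkdir /Solaris /NetraHASuite
# mount -o ro installation-server-IP-address:/software-distribution-dir /NetraHASuite
# mount -o ro installation-server-IP-address:/Solaris-distribution-dir /Solaris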
8. Repeat Step 5 through Step 7 on the other master-eligible node.
To Install Solaris Patches
After you have completed the Solaris installation, you must install the Solaris patches delivered in the Foundation Services distribution. See the Netra High Availability Suite Foundation Services 2.1 7/05 README for the list of patches.
Note - Some of these patches are required for CGTP. If you do not plan to install CGTP, do not install the CGTP patches. For more information about the impact of not installing CGTP, see Choosing a Cluster Network.
1. Log in to each master-eligible node as superuser.
2. Install the necessary Solaris patches on each master-eligible node:
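A sketch of this step, assuming the patches are delivered under a hypothetical /NetraHASuite/Patches directory of the mounted distribution:
# cd /NetraHASuite/Patches
# patchadd patch-id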
To Install the Man Pages
1. Log in to a master-eligible node as superuser.
The man pages are installed in the /opt/SUNWcgha/man directory. To access the man pages, see the Netra High Availability Suite Foundation Services 2.1 7/05 Reference Manual.
3. Repeat Step 1 and Step 2 on the other master-eligible node.
The following procedures explain how to install the Foundation Services on the master-eligible nodes:
To Install the nhadm Tool
The nhadm tool is a cluster administration tool that can verify that the installation was completed correctly. You can run this tool when your cluster is up and running.
As superuser, install the nhadm tool package on each master-eligible node:
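A sketch, assuming the nhadm tool package is named SUNWnhadm; verify the package name against your distribution:
# pkgadd -d /NetraHASuite/Packages/ SUNWnhadm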
To Install CGTP
CGTP enables a redundant network for your cluster.
Note - If you do not require CGTP, do not install the CGTP packages. For more information about the impact of not installing CGTP, see Choosing a Cluster Network.
1. Before you install the CGTP packages, make sure that you have installed the Solaris patches for CGTP.
See To Install Solaris Patches.
2. As superuser, install the following CGTP packages on each master-eligible node:
where x is 8 or 9 depending on the version of the Solaris Operating System you install.
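A sketch, assuming the CGTP package names follow a SUNWnhtpx/SUNWnhtux pattern with x replaced by the Solaris version; verify the names against your distribution:
# pkgadd -d /NetraHASuite/Packages/ SUNWnhtp8 SUNWnhtu8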
To Install the Node State Manager
As superuser, install the Node State Manager packages on each master-eligible node:
For more information about the Node State Manager, see the Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
To Install the External Address Manager
2. Type the following command:
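A sketch for this step, using the EAM package names listed later in this chapter (SUNWnheaa and SUNWnheab) and the mounted distribution directory:
# pkgadd -d /NetraHASuite/Packages/ SUNWnheaa SUNWnheab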
For information on configuring the EAM, see the Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
To Install the Cluster Membership Manager
As superuser, install the following CMM packages on each master-eligible node:
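A sketch of the command form; SUNWnhcdt is referenced later in this chapter, while SUNWnhcma and SUNWnhcmb are hypothetical names used only for illustration, so use the CMM package list from your distribution:
# pkgadd -d /NetraHASuite/Packages/ SUNWnhcma SUNWnhcmb SUNWnhcdt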
For instructions on configuring the CMM, see Configuring the Foundation Services on the Master-Eligible Nodes.
For information about the CMM, see the Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
To Install Reliable NFS With SNDR
Install the Reliable NFS packages to enable the Reliable NFS service and data-replication features of Foundation Services. For a description of the Reliable NFS service, see "File Sharing and Data Replication" in Netra High Availability Suite Foundation Services 2.1 7/05 Overview. The Reliable NFS feature is enabled by the StorEdge Network Data Replicator (SNDR), which is provided with the Reliable NFS packages.
Note - SNDR is supplied for use only with the Foundation Services. Any use of this product other than on a Foundation Services cluster is not supported.
1. As superuser, install the following Reliable NFS and SNDR packages on a master-eligible node in the following order:
# pkgadd -d /NetraHASuite/Packages/ SUNWscmr \
SUNWscmu SUNWspsvr SUNWspsvu SUNWrdcr SUNWrdcu \
SUNWnhfsa SUNWnhfsb
2. Repeat Step 1 on the second master-eligible node.
3. Install the SNDR patches on each master-eligible node.
See the Netra High Availability Suite Foundation Services 2.1 7/05 README for a list of SNDR patches.
4. Edit the /usr/kernel/drv/rdc.conf file on each master-eligible node to change the value of the rdc_bitmap_mode parameter.
To have changes to the bitmaps written on the disk at each update, change the value of the rdc_bitmap_mode parameter to 1.
To have changes to the bitmaps stored in memory at each update, change the value of the rdc_bitmap_mode parameter to 2. In this case, changes are written on the disk when the node is shut down. However, if both master-eligible nodes fail, both disks must be synchronized.
For example: rdc_bitmap_mode=2.
To Install Reliable NFS With Solaris Volume Manager
Install the Reliable NFS packages to enable the Reliable NFS service and disk mirroring features of Foundation Services. For a description of the Reliable NFS service, see "File Sharing and Data Replication" in Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
1. As superuser, install the following Reliable NFS packages on a master-eligible node in the following order:
2. Repeat Step 1 on the second master-eligible node.
To Install the Node Management Agent
Install the Node Management Agent (NMA) packages to gather statistics on Reliable NFS, CGTP, and CMM. For a description of the NMA, see the Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
The NMA consists of four packages. One NMA package is installed on both master-eligible nodes. Three packages are NFS-mounted as shared middleware software on the first master-eligible node. The first master-eligible node is the node that is booted first after you complete installing and configuring all the services on the master-eligible nodes.
The NMA requires the Java DMK packages, SUNWjsnmp and SUNWjdrt, to run. For information about installing the entire Java DMK software, see the Java Dynamic Management Kit 5.0 Installation Guide.
The following table describes the packages that are required on each type of node.
Java DMK 5.0 Simple Network Management Protocol (SNMP) manager API classes (the SUNWjsnmp package)
Follow this procedure to install and configure the NMA.
1. As superuser, install the following NMA package and Java DMK package on both master-eligible nodes:
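A sketch for Step 1; SUNWjsnmp is the Java DMK package named above, and SUNWnhmas is a hypothetical name for the NMA agent package installed on both nodes, so verify both names against your distribution:
# pkgadd -d /NetraHASuite/Packages/ SUNWnhmas SUNWjsnmp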
Note - If you plan to use shared disks, do not advance to Step 2 until the metadevice used for shared disks has been created. See Step 2 in To Set Up File Systems on the Master-Eligible Nodes.
2. On the first master-eligible node, install the following shared Java DMK package and NMA packages:
# pkgadd -d /NetraHASuite/Packages/ \
-M -R /SUNWcgha/local/export/services/ha_2.1.2 \
SUNWjdrt SUNWnhmaj SUNWnhmal SUNWnhmad
The packages are installed with a predefined root path in the /SUNWcgha/local/export/services/ha_2.1.2 directory.
Note - Ignore error messages related to packages that have not been installed. Always answer Y to continue the installation.
3. To configure the NMA, see the Netra High Availability Suite Foundation Services 2.1 7/05 NMA Programming Guide.
To Install the Daemon Monitor
As superuser, install the following Daemon Monitor packages on each master-eligible node:
For a description of the Daemon Monitor, see the Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
To Install the Watchdog Timer
Install and configure the Watchdog Timer provided with the Foundation Services only if you are using Netra servers that have hardware watchdogs at the Lights Out Management (LOM) level. For a list of the types of watchdogs for different Netra servers, see the Netra High Availability Suite Foundation Services 2.1 7/05 Hardware Guide.
1. Before installing the Watchdog Timer, do the following:
Check that the SUNWnhcdt package is installed on each master-eligible node. For more information, see To Install the Cluster Membership Manager.
Check that the following LOM driver packages are installed:
2. As superuser, install the Watchdog Timer package on each master-eligible node:
The Watchdog Timer can be configured differently on each node, depending on your needs. See Configuring the nhfs.conf File.
Before assigning IP addresses to the network interfaces of the master-eligible nodes, see "Cluster Addressing and Networking" in the Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
In the Foundation Services, three IP addresses must be configured for each master-eligible node:
An IP address for the first physical interface, NIC0, corresponding to the first network interface. This interface could be hme0.
An IP address for the second physical interface, NIC1, corresponding to the second network interface. This interface could be hme1.
An IP address for the virtual interface, cgtp0.
The virtual interface, cgtp0, should not be configured on a physical interface. The configuration is done automatically when you configure Reliable NFS. For more information about the cgtp0 interface, see the cgtp(7D) man page.
The IP addresses can be IPv4 addresses of any class, in which the host part of the address (host_id) identifies the node. When you configure the IP addresses, make sure that the node ID, nodeid, is the decimal equivalent of host_id. You define the nodeid in the cluster_nodes_table file and the nhfs.conf file. For more information, see Configuring the Foundation Services on the Master-Eligible Nodes.
The following procedures explain how to create and configure IP addresses for master-eligible nodes.
Examples in these procedures use IPv4 Class C addresses.
To Add the Cluster IP Addresses to the /etc/hosts File
1. Log in to each master-eligible node as superuser.
2. In the /etc/hosts file on each master-eligible node, add the three IP addresses, followed by the name of each interface:
In the rest of this book, the node netraMEN1 is the first master-eligible node. The first master-eligible node is the node that is booted first after you complete installing the Foundation Services. The node netraMEN2 is the second master-eligible node that is booted after the first master-eligible node has completed booting.
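A sketch of the /etc/hosts entries, reusing the 10.250.x.y addressing that appears later in this chapter; the NIC0 and NIC1 subnets are assumptions:
10.250.1.10   netraMEN1-nic0
10.250.2.10   netraMEN1-nic1
10.250.3.10   netraMEN1-cgtp
10.250.1.11   netraMEN2-nic0
10.250.2.11   netraMEN2-nic1
10.250.3.11   netraMEN2-cgtp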
To Create the Network Configuration Files
In the /etc directory on each master-eligible node, you must create a hostname file for each of the three interfaces. In addition, update the nodename and netmasks files.
1. Create or update the file /etc/hostname.NIC0 for the NIC0 interface.
This file must contain the name of the master-eligible node on the first interface, for example, netraMEN1-nic0.
2. Create or update the file /etc/hostname.NIC1 for the NIC1 interface.
This file must contain the name of the master-eligible node on the second interface, for example, netraMEN1-nic1.
3. Create or update the file /etc/hostname.cgtp0 for the cgtp0 interface.
This file must contain the name of the master-eligible node on the cgtp0 interface, for example, netraMEN1-cgtp.
4. Update the /etc/nodename file with the node name of the master-eligible node.
If you have installed CGTP, use the name set on the cgtp0 interface, for example, netraMEN1-cgtp. If you have not installed CGTP, use the name set on the NIC0 interface, for example, netraMEN1-nic0.
5. Create a /etc/netmasks file with a netmask of 255.255.255.0 for all subnetworks in the cluster.
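A sketch of the resulting files on the first master-eligible node, assuming hme0 and hme1 are the physical interfaces and the subnets shown above:
# cat /etc/hostname.hme0
netraMEN1-nic0
# cat /etc/hostname.hme1
netraMEN1-nic1
# cat /etc/hostname.cgtp0
netraMEN1-cgtp
# cat /etc/nodename
netraMEN1-cgtp
# cat /etc/netmasks
10.250.1.0      255.255.255.0
10.250.2.0      255.255.255.0
10.250.3.0      255.255.255.0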
Configuring External IP Addresses on the Master-Eligible Nodes
To configure external IP addresses for a master-eligible node, the node must have an extra physical network interface or logical network interface. An extra physical network interface is an unused interface on an existing Ethernet card or a supplemental Ethernet card, for example, hme2. A logical network interface is an interface that is configured on an existing Ethernet card, for example, hme1:101.
Configure an external IP address for the extra network interface based on your public network policy.
To Configure an External Floating Address Without IPMP
1. Add, if required, the hostname associated with the external floating address in /etc/hosts on each master-eligible node.
2. Add, if required, the associated netmask for the subnetwork in /etc/netmasks on each master-eligible node.
3. Create or update the file /etc/hostname.interface for the interface supporting the external floating address on each master-eligible node.
If the file does not exist, create the following lines (the file must contain at least two lines for the arguments to be taken into account):
If the file already exists, add the following line:
4. Configure the external floating address parameter in the nhfs.conf file on each master-eligible node.
For more information, see the nhfs.conf(4) man page.
To Configure an External Floating Address With IPMP
To configure the external floating address, the node must have two network interfaces that are not already used for a CGTP network. If no network interfaces are available, consider using a different VLAN. Each interface must be configured with a special IP address used for monitoring (a test address). The external floating address must be configured on one of these interfaces, and all of these IP addresses must be part of the same subnetwork.
1. Add, if required, the hostnames associated with the test IP addresses and the external floating address in /etc/hosts on each master-eligible node.
IP addresses for testing must be different on each node.
2. Add, if required, the associated netmask for the subnetwork in /etc/netmasks on each master-eligible node.
3. Create or update the file /etc/hostname.interface for the first interface on each master-eligible node.
The file must contain the definition of the test IP address for this interface and the external floating address in this format:
test-IP-address-1 netmask + broadcast + -failover deprecated group group-name up addif floating-address netmask + broadcast + failover down
test-ipmp-1 netmask + broadcast + -failover deprecated group ipmp-group up addif ipmp-float netmask + broadcast + failover down
4. Create or update the file /etc/hostname.interface for the second interface on each master-eligible node.
The file must contain the definition of the test IP address for this interface in this format:
test-IP-address-2 netmask + broadcast + -failover deprecated group group-name up
5. Configure the external floating address parameters (floating address and IPMP group to be monitored) in the nhfs.conf file on each master-eligible node.
For more information, see the nhfs.conf(4) man page.
Configure the services that are installed on the master-eligible nodes by modifying the nhfs.conf and the cluster_nodes_table files on each master-eligible node in the cluster. Master-eligible nodes have read-write access to these files. Diskless nodes or dataless nodes in the cluster have read-only access to these files.
nhfs.conf
This file contains configurable parameters for each node and for the Foundation Services. This file must be configured on each node in the cluster.
cluster_nodes_table
This file contains information about nodes in the cluster, such as nodeid and domainid. This file is used to elect the master node in the cluster. Therefore, this file must contain the most recent information about the nodes in the cluster.
There is one line in the table for each peer node. When the cluster is running, the table is updated by the nhcmmd daemon on the master node. The file is copied to the vice-master node every time the file is updated. The cluster_nodes_table must be located on a local partition that is not exported. For information about the nhcmmd daemon, see the nhcmmd(1M) man page.
The following procedures describe how to configure the nhfs.conf file.
To Create the Floating Address Triplet Assigned to the Master Role
To Configure a Direct Link Between the Master-Eligible Nodes
For more information, including parameter descriptions, see the nhfs.conf(4) man page.
To Configure the nhfs.conf File
The nhfs.conf file enables you to configure the node after you have installed the Foundation Services on the node. This file provides parameters for configuring the node, CMM, Reliable NFS, the direct link between the master-eligible nodes, the Node State Manager, the Watchdog Timer, and daemon scheduling.
1. As superuser, copy the template /etc/opt/SUNWcgha/nhfs.conf.template file:
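A minimal sketch of Step 1:
# cp /etc/opt/SUNWcgha/nhfs.conf.template /etc/opt/SUNWcgha/nhfs.conf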
2. For each property that you want to change, uncomment the associated parameter (delete the comment mark at the beginning of the line).
3. Modify the value of each parameter that you want to change.
For descriptions of each parameter, see the nhfs.conf(4) man page.
If you have not installed the CGTP patches and packages, do the following:
Disable the Node.NIC1 and Node.NICCGTP parameters.
To disable these parameters, add a comment mark (#) at the beginning of the line containing the parameter if this mark is not already present.
Configure the Node.UseCGTP and the Node.NIC0 parameters:
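A sketch of the resulting nhfs.conf lines when CGTP is not installed; the False value format and the hme0 interface are assumptions to verify against the nhfs.conf(4) man page:
Node.UseCGTP=False
Node.NIC0=hme0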
To Create the Floating Address Triplet Assigned to the Master Role
The floating address triplet is a set of three logical addresses that are active on the node holding the master role. When the cluster is started, the floating address triplet is activated on the master node. In the event of a switchover or a failover, these addresses are activated on the new master node. Simultaneously, the floating address triplet is deactivated automatically on the old master node, that is, the new vice-master node.
To create the floating address triplet, you must define the master ID in the nhfs.conf file.
The floating address triplet is calculated from the master ID, the netmask, and the network interface addresses.
For more information about the floating address triplet of the master node, see "Cluster Addressing and Networking" in Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
To Configure a Direct Link Between the Master-Eligible Nodes
You can configure a direct link between the master-eligible nodes to prevent a split brain cluster. A split brain cluster is a cluster that has two master nodes because the network between the master node and the vice-master node has failed.
1. Connect the serial ports of the master-eligible nodes.
For an illustration of the connection between the master-eligible nodes, see the Netra High Availability Suite Foundation Services 2.1 7/05 Hardware Guide.
2. Configure the direct link parameters.
For more information, see the nhfs.conf(4) man page.
The cluster_nodes_table file contains the configuration data for each node in the cluster. Create this file on each master-eligible node. Once the cluster is running, this file is accessed by all nodes in the cluster. Therefore, the cluster_nodes_table on both master-eligible nodes must be exactly the same.
To Create the cluster_nodes_table File
1. Log in to a master-eligible node as superuser.
2. Copy the template file from /etc/opt/SUNWcgha/cluster_nodes_table.template to /etc/opt/SUNWcgha/cluster_nodes_table.
You can save the cluster_nodes_table file in a directory other than the /etc/opt/SUNWcgha directory. By default, the cluster_nodes_table file is located in the /etc/opt/SUNWcgha directory.
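A minimal sketch of Step 2:
# cp /etc/opt/SUNWcgha/cluster_nodes_table.template /etc/opt/SUNWcgha/cluster_nodes_table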
3. Edit the cluster_nodes_table file to add a line for each node in the cluster.
For more information, see the cluster_nodes_table(4) man page.
4. Edit the nhfs.conf file to specify the directory that contains the cluster_nodes_table file:
For more information, see the nhfs.conf(4) man page.
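A sketch of Step 4, assuming the parameter that points to this directory is CMM.LocalConfig.Dir; verify the parameter name in the nhfs.conf(4) man page:
CMM.LocalConfig.Dir=/etc/opt/SUNWcgha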
5. Log in to the other master-eligible node as superuser.
6. Copy the /etc/opt/SUNWcgha/cluster_nodes_table file from the first master-eligible node to the same directory on the second master-eligible node.
If you saved the cluster_nodes_table file in a directory other than /etc/opt/SUNWcgha, copy the file to that other directory on the second master-eligible node. The cluster_nodes_table file must be available in the same directory on both master-eligible nodes.
7. Repeat Step 4 on the second master-eligible node.
To Configure Solaris Volume Manager With Reliable NFS and Shared Disk
This procedure uses the following values for its code examples:
c0t0d0 is the system disk
c1t8d0 is the primary shared disk
c1t9d0 is the secondary shared disk used to mirror the primary one
Detailed information about SVM and how to set up a shared disk can be found in the Solaris Volume Manager Administration Guide.
1. On the first master-eligible node, change the node name to the name of the host associated with the CGTP interface:
2. Repeat Step 1 for the second master-eligible node:
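A sketch for Steps 1 and 2, assuming the uname -S command is used to set the node name:
On the first master-eligible node:
# uname -S netraMEN1-cgtp
On the second master-eligible node:
# uname -S netraMEN2-cgtp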
3. On the first master-eligible node, restart the rpcbind daemon to make it use the new node name:
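One way to restart rpcbind on Solaris 8 or 9 (a sketch; any equivalent restart method works):
# /etc/init.d/rpc stop
# /etc/init.d/rpc start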
4. Repeat Step 3 on the second master-eligible node.
5. Create the database replicas for the dedicated root disk slice on each master-eligible node:
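A sketch, assuming slice 7 of the system disk is the dedicated replica slice:
# metadb -a -f -c 3 c0t0d0s7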
6. Repeat Step 5 for the second master-eligible node:
7. (Optional) If you plan to use CGTP, configure a temporary network interface on the first private network and make it match the name and IP address of the CGTP interface on the first master-eligible node:
# ifconfig hme0:111 plumb
# ifconfig hme0:111 10.250.3.10 netmask + broadcast + up
Setting netmask of hme0:111 to 255.255.255.0
8. (Optional) If you plan to use CGTP, repeat Step 7 for the second master-eligible node:
# ifconfig hme0:111 plumb
# ifconfig hme0:111 10.250.3.11 netmask + broadcast + up
Setting netmask of hme0:111 to 255.255.255.0
9. On the first master-eligible node, verify that the /etc/nodename file matches the name of the CGTP interface (or the name of the private network interface, if CGTP is not used):
Note - The rest of the procedure only applies to the first master-eligible node.
10. Create the SVM diskset that manages the shared disks:
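A sketch of this step, using the diskset and host names shown in the metaset output below:
# metaset -s nhas_diskset -a -h netraMEN1-cgtp netraMEN2-cgtp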
11. Remove any possible existing SCSI3-PGR keys from the shared disks.
In the following example, there were no existing keys on the disks:
12. Add the names of the shared disks to the previously created diskset:
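A sketch of this step, using the shared disks from this example:
# metaset -s nhas_diskset -a c1t8d0 c1t9d0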
Note - This step will reformat the shared disks, and all existing data on the shared disks will be lost.
13. Verify that the SVM configuration is set up correctly:
# metaset
Set name = nhas_diskset, Set number = 1

Host                 Owner
  netraMEN1-cgtp     Yes
  netraMEN2-cgtp

Drive    Dbase
c1t8d0   Yes
c1t9d0   Yes
Note - If you do not plan to install diskless nodes, jump to Step 18.
14. Retrieve disk geometry information using the prtvtoc command.
A known problem in the diskless management tool, smosservice, prevents the creation of the diskless environment on a metadevice. To avoid this problem, mount the /export directory on a physical partition while you create the diskless environment.
To support access to /export through a metadevice without preventing access to it on a physical partition, the disk must be repartitioned in a particular way after it has been added to a diskset. This repartitioning preserves data already stored by SVM, because the created partitions are not formatted.
TABLE 5-4 gives an example of the prtvtoc command output after inserting a disk into a diskset.
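A sketch of the command for this step, run against the primary shared disk from this example:
# prtvtoc /dev/rdsk/c1t8d0s2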
15. Create the data file using the fmthard command.
The fmthard command (see its man page for more information) is used to create physical partitions. It requires you to input a data file describing the partitions to be created. There is one entry per partition, using the following format:
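The standard fmthard data file fields, one line per slice:
slice-number  tag  flag  starting-sector  size-in-sectors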
The starting sector and size in sectors values must be rounded to a cylinder boundary and are computed as follows:
starting sector = starting sector of the previous slice + size in sectors of the previous slice
size in sectors = the required partition size in bytes divided by bytes per sector, rounded up to a multiple of sectors per cylinder
Three particular slices must be created:
Slice 7 containing the meta-database (also called metadb). This slice must be created the same size as that created by SVM to overlap the existing one (to preserve data).
A slice to support /export (diskless environment)
A slice to support /SUNWcgha/local (shared NHAS packages and files)
Other slices can be added depending on your application requirements. TABLE 5-5 gives an example of the partitioning:
The following slice constraints must be respected:
Slice 7 (metadb) is the first slice of the disk, starting at sector 0, with the same size as the slice 7 created by SVM, with tag 4 (user partition) and flag 0x01 (unmountable).
Slice 2 maps the whole disk: size in bytes = accessible cylinders * sectors per cylinder * bytes per sector, with tag 5 (backup) and flag 0x01 (unmountable).
Other slices use tag 0 (unassigned) and flag 0x00 (mountable read/write).
An example of the computation for slice 0 (located after slice 7):
starting sector = 0 + 8667 = 8667
size in bytes = 4096 * 1024^2 = 4294967296
size in sectors = 4294967296 / 512 = 8388608
size in sectors rounded to a cylinder boundary (2889 sectors per cylinder) = 8389656
These values would display the following content in the data file (datafile.txt):
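A partial sketch of datafile.txt containing only the two slices computed above; slice 1 and slice 2 would be added following the same rules:
7  4  0x01        0     8667
0  0  0x00     8667  8389656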
Note that this example leaves some unallocated spaces on the disk that can be used for user-specific partitions.
16. Execute the following commands for the primary and for the secondary disk:
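A sketch of the fmthard invocations, using the data file and disks from this example:
# fmthard -s datafile.txt /dev/rdsk/c1t8d0s2
# fmthard -s datafile.txt /dev/rdsk/c1t9d0s2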
17. Create the metadevices for partition mapping and mirroring.
Create the metadevices on the primary disk:
# metainit -s nhas_diskset d11 1 1 /dev/rdsk/c1t8d0s0
# metainit -s nhas_diskset d12 1 1 /dev/rdsk/c1t8d0s1
Create the metadevices on the secondary disk:
# metainit -s nhas_diskset d21 1 1 /dev/rdsk/c1t9d0s0
# metainit -s nhas_diskset d22 1 1 /dev/rdsk/c1t9d0s1
Create the mirrors and attach the submirrors:
# metainit -s nhas_diskset d1 -m d11
# metattach -s nhas_diskset d1 d21
# metainit -s nhas_diskset d2 -m d12
# metattach -s nhas_diskset d2 d22
Note - This ends the section specific to the configuration for diskless installation. To complete diskless installation, jump to Step 20.
18. Create your specific SVM RAID configuration (refer to the Solaris Volume Manager Administration Guide for information on specific configurations).
In the following example, the two disks form a mirror called d0:
# metainit -s nhas_diskset d18 1 1 /dev/rdsk/c1t8d0s0
# metainit -s nhas_diskset d19 1 1 /dev/rdsk/c1t9d0s0
# metainit -s nhas_diskset d0 -m d18
# metattach -s nhas_diskset d0 d19
19. Create soft partitions to host the shared data.
These soft partitions are the file systems managed by Reliable NFS. In the following example, d1 and d2 are managed by Reliable NFS.
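A sketch of this step; the 2-Gbyte sizes are placeholders for your own requirements:
# metainit -s nhas_diskset d1 -p d0 2g
# metainit -s nhas_diskset d2 -p d0 2g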
The devices managed by Reliable NFS are now accessible through /dev/md/nhas_diskset/dsk/d1 and /dev/md/nhas_diskset/dsk/d2.
20. Create the file systems on the soft partitions:
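A sketch of this step, using the raw device paths of the soft partitions:
# newfs /dev/md/nhas_diskset/rdsk/d1
# newfs /dev/md/nhas_diskset/rdsk/d2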
21. Create the following directories on both master-eligible nodes:
22. Mount the file systems on the metadevice on the first node:
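A sketch of Steps 21 and 22; the /SUNWcgha/local and /export mount points are assumptions based on the partition descriptions earlier in this chapter:
# mkdir -p /SUNWcgha/local /export
# mount /dev/md/nhas_diskset/dsk/d1 /SUNWcgha/local
# mount /dev/md/nhas_diskset/dsk/d2 /export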
To Set Up File Systems on the Master-Eligible Nodes
1. Ensure that the following directories exist on the first master-eligible node:
# mkdir /SUNWcgha/local/export
# mkdir /SUNWcgha/local/export/data
# mkdir /SUNWcgha/local/export/services
# mkdir /SUNWcgha/local/export/services/NetraHASuite_version/opt
where NetraHASuite_version is the version of the Foundation Services you install, for example, ha_2.1.2.
These directories contain packages and data shared between the master-eligible nodes.
2. If you are using shared disks, install the shared Java DMK package and NMA packages onto the first master-eligible node as explained in Step 2 of To Install the Node Management Agent.
3. Create the following mount points on each master-eligible node:
These directories are used as mount points for the directories that contain shared data.
4. Add the following lines to the /etc/vfstab file on each master-eligible node:
If you have configured the CGTP, use the floating IP address for the cgtp0 interface that is assigned to the master role to define the mount points.
where master-cgtp is the host name associated with the floating address of the cgtp0 interface of the master node. For more information, see To Create the Floating Address Triplet Assigned to the Master Role.
If you have not configured the CGTP, use the floating IP address for the NIC0 interface that is assigned to the master role.
where master-nic0 is the host name associated with the floating address of the NIC0 interface of the master node. For more information, see To Create the Floating Address Triplet Assigned to the Master Role.
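A hedged sketch of possible vfstab entries with CGTP configured; the /SUNWcgha/remote and /SUNWcgha/services mount points are hypothetical names used only for illustration:
master-cgtp:/SUNWcgha/local/export/data - /SUNWcgha/remote nfs - no rw,hard,fg,intr,noac
master-cgtp:/SUNWcgha/local/export/services/ha_2.1.2/opt - /SUNWcgha/services nfs - no rw,hard,fg,intr,noac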
Note - The noac mount option suppresses data and attribute caching. Use the noac option only if the impact on performance is acceptable.
5. Check the following in the /etc/vfstab file:
The mount at boot field is set to no for all RNFS-managed partitions.
The root file system (/) has the logging option.
Note - Only partitions identified in the nhfs.conf file can be managed by RNFS. For more information about the nhfs.conf file, see Configuring the nhfs.conf File.
6. (Only applicable to SNDR) Create the file systems on the replicated partitions:
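A sketch of this step, assuming slice 3 (shown in the Reliable NFS example below) is one of the replicated data partitions; repeat for each replicated partition:
# newfs /dev/rdsk/c0t0d0s3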
Partitions Managed by Reliable NFS
The Reliable NFS daemon, nhcrfsd, is installed on each master-eligible node. To determine which partitions are managed by this daemon, do the following:
Check the RNFS.Slice parameters of the /etc/opt/SUNWcgha/nhfs.conf file.
For SNDR:
# grep -i RNFS.slice /etc/opt/SUNWcgha/nhfs.conf
RNFS.Slice.0=/dev/rdsk/c0t0d0s3 /dev/rdsk/c0t0d0s5 /dev/rdsk/c0t0d0s3 /dev/rdsk/c0t0d0s5 1
This means that slice /dev/rdsk/c0t0d0s3 is being replicated and slice /dev/rdsk/c0t0d0s5 is the corresponding bitmap partition.
For SVM:
This means that soft partition d1 of diskset nhas_diskset is being managed by Reliable NFS.
To Delete the not_configured File
The /etc/opt/SUNWcgha/not_configured file was installed automatically when you installed the CMM packages. This file enables you to reboot a cluster node during the installation process without starting the Foundation Services.
After you have installed the Foundation Services packages on each master-eligible node, delete the not_configured file on each master-eligible node.
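A minimal sketch of this step:
# rm /etc/opt/SUNWcgha/not_configured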
To Boot the Master-Eligible Nodes
1. Unmount the shared file system, /NetraHASuite, on each master-eligible node by using the umount command.
See the umount(1M) man page and To Mount an Installation Server Directory on the Master-Eligible Nodes.
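A minimal sketch of Step 1:
# umount /NetraHASuite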
2. Reboot the first master-eligible node, which becomes the master node:
3. After the first master-eligible node has completed rebooting, reboot the second master-eligible node:
This node becomes the vice-master node. To check the role of each node in the cluster, see the nhcmmrole(1M) man page.
4. Create the INST_RELEASE file to allow patching of shared packages:
To Verify the Installation
Use the nhadm tool to verify that the master-eligible nodes have been configured correctly.
1. Log in to the master-eligible node as superuser.
2. Run the nhadm tool to validate the configuration:
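A sketch, assuming the nhadm check subcommand performs the validation:
# nhadm check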
If all checks pass the validation, the installation of the Foundation Services was successful. See the nhadm(1M) man page.
A floating external address is a logical address assigned to an interface that is used to connect the master node to an external network. The External Address Manager (EAM) uses the Cluster Membership Manager (CMM) notifications to determine when a node takes on or loses the master role. When notified that a node has become the master node, the EAM configures the floating external addresses on one of the node's external interfaces. When notified that a node has lost the master role, the EAM unconfigures the floating external addresses.
The EAM can be installed when you first install the software on the cluster or after you have completed the installation process and have a running cluster. The following procedure describes how to install the EAM on a running cluster.
At the same time, the floating external addresses can be managed by IP Network Multipathing (IPMP). When a node has two or more NICs connected to the external network, IPMP fails over the floating external addresses from one NIC to another if the interface on which they are configured fails. Additionally, the EAM can be configured to monitor the status of those NICs and trigger a switchover when all NICs in a monitored group have failed.
For more information on IPMP, see the Solaris System Administration Guide: IP Services.
Note - Both IPv4 and IPv6 addresses are now supported. Only IPv4 addresses are used in the examples below.
To Install the External Address Manager on a Running Cluster
1. Log in to the vice-master node as superuser.
2. Create a file named not_configured in the /etc/opt/SUNWcgha directory.
If the node is rebooted during this procedure, the node does not start the Foundation Services.
3. Reboot the vice-master node.
4. Install the EAM packages, SUNWnheaa and SUNWnheab on the vice-master node:
# pkgadd -d /software-distribution-dir/Product/NetraHASuite_2.1.2/ \
FoundationServices/Solarisx/sparc/Packages/ SUNWnheaa SUNWnheab
where software-distribution-dir is the directory that contains the Foundation Services packages.
5. Edit the /etc/opt/SUNWcgha/nhfs.conf file to define the EAM parameters.
An example entry to configure the EAM is as follows:
One floating external address is declared. When the node changes its role, the address is configured UP or DOWN accordingly. For more details on nhfs.conf parameters, see the nhfs.conf(4) man page.
6. Create the interface in the standard Solaris way in DOWN state:
The interface hme0 is configured with the floating external address in DOWN state. This interface must be connected to the public network.
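A sketch of Step 6, using the interface and address from the example output in Step 11:
# ifconfig hme0 addif 192.168.12.39 netmask + broadcast + down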
7. Repeat Step 1 through Step 6 on the master node.
8. On both the master node and the vice-master node, delete the /etc/opt/SUNWcgha/not_configured file.
9. Reboot both the master node and the vice-master node.
10. Log in to the master node.
11. Run the ifconfig command on the master node:
In this output, you can see the entry for the hme0:1 interface with the floating external address 192.168.12.39 in state UP.
12. Run the ifconfig command on the vice-master node:
In this output, the entry for the hme0:1 interface with the floating external address 192.168.12.39 is also present, but because the interface is in the DOWN state, it is not active.
14. Run the ifconfig command on the new master node.
15. From a remote system, ping the master node floating address.
For information on how to configure IPMP, refer to the Solaris System Administration Guide: IP Services.
Copyright © 2006, Sun Microsystems, Inc. All Rights Reserved.