CHAPTER 2
Installing and Configuring the nhinstall Tool
The nhinstall tool enables you to install and configure the software and services on your cluster nodes. You install and configure the nhinstall tool on the installation server.
You can use the nhinstall tool to install a cluster that consists of master-eligible nodes, diskless nodes, and dataless nodes.
For information about setting up the installation environment and configuring the nhinstall tool, see the following sections:
The nhinstall tool enables you to install and configure the Foundation Services on the cluster. This tool must be installed on an installation server. The installation server must be connected to your cluster. For details on how to connect nodes of the cluster and the installation server, see the Netra High Availability Suite Foundation Services 2.1 7/05 Hardware Guide.
The nhinstall tool runs on the installation server. This tool installs the Solaris Operating System and the Foundation Services on the cluster nodes. For a description of the types of nodes in a cluster, see the Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
The following table lists the tasks for installing the software with the nhinstall tool. Perform the tasks in the order shown.
Before installing the nhinstall tool on the installation server, you must create a Solaris distribution on the installation server. You must also prepare the installation server to install the Solaris Operating System and the Foundation Services on the cluster nodes.
To install the Solaris Operating System on the cluster, create a Solaris distribution on the installation server. The Solaris distribution is used to install the Solaris Operating System on the cluster nodes. If you are installing more than one Solaris distribution on the cluster, perform the steps in the procedure for each Solaris distribution.
1. Make sure that you have at least 1.5 GBytes of free disk space on the installation server.
2. Log in as superuser on the installation server.
3. Create a directory for the Solaris distribution:
where Solaris-distribution-dir is the directory where the distribution is to be stored on the installation server.
4. Change to the directory where the setup_install_server command is located:
Solaris-dir is the directory that contains the Solaris installation software. This directory could be on a CD-ROM or in an NFS-shared directory.
x is the Solaris version number (for example, 8, 9, or 10) of the distribution that you want to install.
5. Run the setup_install_server command to copy the Solaris distribution into the directory that you created in Step 3. A command sketch for Steps 3 through 5 follows this procedure.
For more information about the setup_install_server command, see the appropriate documentation:
Solaris 8 Advanced Installation Guide and the setup_install_server(1M) man page
Solaris 9 Installation Guide and the setup_install_server(1M) man page
Solaris 10 Release and Installation Collection and the setup_install_server(1M) man page
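As an illustration of Steps 3 through 5, the command sequence might look like the following sketch. The distribution directory and the Solaris 9 media path are placeholders; substitute the values for your environment.

    # Step 3: create the directory for the Solaris distribution
    mkdir -p /export/install/solaris9
    # Step 4: change to the Tools directory on the Solaris media
    cd /cdrom/cdrom0/s0/Solaris_9/Tools
    # Step 5: copy the Solaris media to the distribution directory
    ./setup_install_server /export/install/solaris9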
Before you begin the installation process, make sure that the installation server is configured correctly.
1. Configure the installation server as described in the Netra High Availability Suite Foundation Services 2.1 7/05 Hardware Guide.
2. If you are planning to install remotely from another system, open a shell window to connect to the installation server.
3. Confirm that the Solaris software packages that contain Perl 5 are installed on the installation server.
Use the pkginfo command to check for the SUNWpl5u, SUNWpl5p, and SUNWpl5m Perl packages.
4. Delete any entries for your cluster nodes in the following files:
5. Disable the installation server as a router by creating an /etc/notrouter file:
If a system running the Solaris Operating System has two network interfaces, the system is configured as a router by default. However, for security reasons, a Foundation Services cluster network must not be routed.
6. Modify the /etc/nsswitch.conf file on the installation server so that files is positioned before nis in the hosts, ethers, and bootparams entries, as shown in the sketch after this procedure.
7. From the installation server, open a terminal window to connect to the console of each cluster node.
You can also connect to the consoles from the system that you use to connect to the installation server.
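As an illustration of Steps 3, 5, and 6, the checks and edits might look like the following sketch; verify the package names and the file entries against your own system.

    # Step 3: confirm that the Perl packages are installed
    pkginfo SUNWpl5u SUNWpl5p SUNWpl5m
    # Step 5: disable routing on the installation server
    touch /etc/notrouter
    # Step 6: in /etc/nsswitch.conf, list files before nis, for example:
    #   hosts:      files nis
    #   ethers:     files nis
    #   bootparams: files nis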
Install the package containing the nhinstall tool on the installation server, as described in the following procedure.
1. Log in to the installation server as superuser.
2. Install the nhinstall package, SUNWnhins, with the pkgadd command (a command sketch follows this procedure):
where software-distribution-dir is the directory that contains the Foundation Services packages.
3. To access the man pages on the installation server, install the man page package, SUNWnhman:
where software-distribution-dir is the directory that contains the Foundation Services packages.
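A command sketch for Steps 2 and 3 follows. The Packages subdirectory is an assumption about the layout of your Foundation Services distribution; adjust the -d argument to match it.

    # Step 2: install the nhinstall package
    pkgadd -d software-distribution-dir/Packages SUNWnhins
    # Step 3 (optional): install the man page package
    pkgadd -d software-distribution-dir/Packages SUNWnhman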
After you have installed the package containing the nhinstall tool, configure the tool to install the Foundation Services on your cluster by modifying the following configuration files (a minimal sketch of the two required files follows this list):
env_installation.conf
Use this configuration file to define the installation environment. This file enables you to define the IP address of the installation server and the locations of the software distributions for the Solaris Operating System and Foundation Services. You must modify this configuration file. For details on each available option, see the env_installation.conf(4) man page.
cluster_definition.conf
Use this configuration file to define the nodes, disks, and options in your cluster configuration. You must modify this configuration file. For details on each available option, see the cluster_definition.conf(4) man page.
addon.conf
Use this configuration file to specify additional packages and patches that you want to install during the installation process. You must configure your addon.conf file with packages specific to your hardware. For help with your specific configuration, contact your Foundation Services representative. This file is optional. If this file is not configured, the nhinstall tool does not install any additional patches or packages. For more information, see the addon.conf(4) man page and the Netra High Availability Suite Foundation Services 2.1 7/05 README.
nodeprof.conf
Use this configuration file if you want to specify the set of Solaris packages to be installed on the cluster. The default package set is defined in the nodeprof.conf.template file. For more information, see the nodeprof.conf(4) man page.
dataless_nodeprof.conf
If you want to customize the Solaris installation on the dataless nodes, create the dataless_nodeprof.conf file. If you do not create this file, the same set of Solaris packages is installed on the master-eligible nodes and the dataless nodes. For more information, see the dataless_nodeprof.conf(4) man page.
diskless_nodeprof.conf
If you want to customize the Solaris installation on the diskless nodes, create the diskless_nodeprof.conf file. If you do not create this file, the same set of Solaris packages is installed on the master-eligible nodes and the diskless nodes. For more information, see the diskless_nodeprof.conf(4) man page.
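As a minimal sketch of the two required files, the fragments below show the kind of entries involved. All values are placeholders, the # comments are explanatory only, and only parameters mentioned in this chapter appear; take the authoritative syntax and the complete list of options from the env_installation.conf(4) and cluster_definition.conf(4) man pages.

    # env_installation.conf (fragment)
    SERVER_IP=192.168.12.1                  # IP address of the installation server
    SOLARIS_DIR=/export/install/solaris9    # Solaris distribution on the installation server
    SOLARIS_INSTALL=ALL                     # install the Solaris Operating System and the Foundation Services
    AUTO_REBOOT=YES                         # let nhinstall reboot the master-eligible nodes

    # cluster_definition.conf (fragment)
    # master-eligible node definitions, in the format shown later in this chapter
    MEN=10 08:00:20:f9:c5:54 - - - -
    MEN=20 08:00:20:f9:a8:12 - - - -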
The following sections describe in detail the main configuration options of the nhinstall tool:
Configuring DHCP Configuration Files Locally on Master-Eligible Nodes
Configuring the Floating External Address of the Master Node
Sharing Physical Interfaces Between CGTP and IPMP Using VLAN
Specifying the Version of the Operating System to be Installed on the Cluster
Installing a Different Version of the Operating System on Diskless and Dataless Nodes
Use the SLICE or SHARED_SLICE parameters to specify the disk partitions on the master-eligible nodes.
If you plan to use Netra High Availability Suite for replicating NFS-served data over IP, use the SLICE parameter for all partitions.
If you plan to locate NFS-served data on shared disks, use the SHARED_SLICE parameter for the partition storing this data and use SLICE for the local partitions (the root filesystem, for example).
TABLE 2-2 through TABLE 2-4 list the space requirements for sample disk partitions of master-eligible nodes in a cluster with diskless nodes, either with IP-replicated data or with a shared disk.
/ (root) - The root file system, boot partition, and volume management software. This partition must be mounted with the logging option.
/export - Exported file system reserved for diskless nodes. The /export file system must be mounted with the logging option. This partition is further sliced if diskless nodes are added to the cluster.
/SUNWcgha/local - This partition is reserved for NFS status files, services, and configuration files. The /SUNWcgha/local file system must be mounted with the logging option.
Bitmap partition for /export - Bitmap partition reserved for nhcrfsd. This volume is associated with the /export file system.
Bitmap partition for /SUNWcgha/local - Bitmap partition reserved for nhcrfsd. This partition is associated with the /SUNWcgha/local file system.
replica - If you have configured volume management, this partition must be named replica. This partition is mounted with the logging option. See Configuring Volume Management.
Note - Partition 2 is reserved for overlapping the entire disk. It is automatically created and must not be defined.
Configure the SLICE parameter in the cluster_definition.conf file to specify the disk partitions on the dataless nodes.
TABLE 2-5 lists the space requirements for example disk partitions of dataless nodes.
/ (root) - The root file system, boot partition, and volume management software. This partition must be mounted with the logging option.
Note - Partition 2 is reserved for overlapping the entire disk. It is automatically created and must not be defined.
Configure the MIRROR parameter to mirror a shared disk to another shared disk.
To prevent simultaneous access to the shared data in case of split-brain, SCSI disk reservation is used. The SCSI version is configured by the SHARED_DISK_FENCING parameter. It can be set to SCSI2 or SCSI3.
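A hypothetical cluster_definition.conf fragment follows. The device names are placeholders and the MIRROR field layout is an assumption; see the cluster_definition.conf(4) man page for the exact syntax.

    # mirror one shared disk to another (placeholder device names)
    MIRROR=c2t0d0 c3t0d0
    # use SCSI-3 reservation to fence the shared disk
    SHARED_DISK_FENCING=SCSI3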
You can configure the nhinstall tool to store the scoreboard bitmaps of IP-replicated partitions either in memory or on the disk.
If the BITMAP_IN_MEMORY parameter is set to YES in the cluster_definition.conf file, the bitmaps are configured to be stored in memory. When the master node is shut down gracefully, the scoreboard bitmap is saved on the disk.
If the BITMAP_IN_MEMORY parameter is set to NO, the bitmaps are configured to be written on the disk at each update.
You can configure the nhinstall tool to use the NFS option noac for the directories that are mounted remotely. The noac option suppresses data and attribute caching.
If the NFS_USER_DIR_NOAC parameter is set to YES in the cluster_definition.conf file, the noac option is configured when mounting remote directories.
If the NFS_USER_DIR_NOAC parameter is set to NO, the noac option is not configured, which enables data and attribute caching.
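For example, the following cluster_definition.conf fragment (a sketch; the # comments are explanatory only) keeps the scoreboard bitmaps in memory and suppresses NFS data and attribute caching:

    BITMAP_IN_MEMORY=YES     # save bitmaps to disk only on graceful shutdown
    NFS_USER_DIR_NOAC=YES    # mount remote directories with the noac option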
You can configure the nhinstall tool to set up a direct link between the master-eligible nodes by using the serial port on each master-eligible node. Make sure that you have connected the serial ports with a cable before configuring the direct link. This connection prevents a split brain situation, where there are two master nodes in the cluster because the network between the master node and the vice-master node fails. For an illustration of the connection between the master-eligible nodes, see the Netra High Availability Suite Foundation Services 2.1 7/05 Hardware Guide.
The DIRECT_LINK parameter in the cluster_definition.conf file enables you to define the serial device on each master-eligible node, the speed of the serial line, and the heartbeat (in seconds) checking the link between the two nodes. For example:
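(The values below are placeholders; see the cluster_definition.conf(4) man page for the exact field order.)

    # serial device on each master-eligible node, line speed, heartbeat in seconds
    DIRECT_LINK=/dev/term/b /dev/term/b 115200 5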
You can configure the nhinstall tool to reboot the master-eligible nodes automatically during the installation.
If the AUTO_REBOOT parameter is set to YES in the env_installation.conf file, you are prompted to boot the master-eligible nodes the first time only. After the first boot, the master-eligible nodes are automatically rebooted by the nhinstall tool.
If AUTO_REBOOT is set to NO, the nhinstall tool prompts you to reboot the master-eligible nodes at different stages of the installation. This process requires you to move between console windows to perform tasks directly on the nodes.
You can configure the nhinstall tool to install and configure the Carrier Grade Transport Protocol (CGTP).
If the USE_CGTP parameter is set to YES in the cluster_definition.conf file, the nhinstall tool installs CGTP.
If the USE_CGTP parameter is set to NO, nhinstall does not install the CGTP packages and patches. In this case, your cluster is configured with a single network interface. You do not have a redundant cluster network. For information about the advantages of redundant network interfaces, see the Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
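For example, to install CGTP and obtain a redundant cluster network, the cluster_definition.conf file would contain the following line:

    USE_CGTP=YES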
If you define diskless nodes with the NODE or DISKLESS parameters in the cluster_definition.conf file, the nhinstall tool installs the Solaris services for the diskless nodes. The tool also configures the boot options for each diskless node on the master-eligible nodes.
If you do not define any diskless nodes in the cluster_definition.conf file, the nhinstall tool gives you the choice of installing the Solaris services for diskless nodes anyway. Type y if you plan to add diskless nodes to the cluster at a later date. Otherwise, the nhinstall tool does not install the Solaris services for the diskless nodes on the master-eligible nodes. In this case, you cannot use nhinstall to add diskless nodes to the cluster at a later date without reinstalling the software. Therefore, try to include possible future nodes in your cluster configuration.
Note - You can manually add diskless nodes to a running cluster as described in Chapter 8.
You can configure the nhinstall tool to have the diskless nodes in the cluster boot dynamically, statically, or by using the node's client ID. The DISKLESS_BOOT_POLICY parameter in the cluster_definition.conf configuration file enables you to choose a boot policy for the diskless nodes in your cluster. All diskless nodes in a cluster are configured with the same boot policy.
The following table summarizes the boot policies supported by the nhinstall tool.
For further information about the boot policies for diskless nodes, see the Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
By default, nhinstall configures diskless nodes so that the DHCP configuration files are stored in the highly available directory /SUNWcgha/remote/var/dhcp on the master-eligible nodes. You can configure the cluster to put the DHCP configuration files in a local directory, /var/dhcp, on the master-eligible nodes by adding the following line to the cluster_definition.conf file.
When you install with nhinstall and with this feature enabled, nhinstall copies the DHCP configuration files from the master to the vice-master node.
Note - Do not use this feature if the DHCP configuration is dynamic, that is, if information is stored in the DHCP configuration files at run time.
If you enable this feature, each time you update the DHCP configuration files on the master after initial cluster installation, you must copy these files to the vice-master node. For more information, see the cluster_definition.conf(4) and nhadm(1M) man pages.
You can configure the nhinstall tool to install the Foundation Services Watchdog Timer on each node in the cluster.
Set the USE_WDT parameter to YES in the cluster_definition.conf file only if you are using Netra servers that have hardware watchdogs at the Lights Out Management (LOM) level. You might need to install additional software packages. For further information, see the addon.conf.template file. When this parameter is set to YES, the Foundation Services Watchdog Timer is installed and configured.
Set the USE_WDT parameter to NO if you are using Netra servers with hardware watchdogs at the OpenBoot PROM (OBP) level. These hardware watchdogs are monitored by the server's software. For a list of the types of watchdogs of different Netra servers, see the Netra High Availability Suite Foundation Services 2.1 7/05 Hardware Guide.
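For example, on Netra servers with LOM-level hardware watchdogs, the cluster_definition.conf file would contain:

    USE_WDT=YES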
By default, nhinstall configures the installation server to be the default router to the public network. To choose another machine as the router to the public network, specify the IP address of the default router of your choice in the cluster_definition.conf file.
For more information, see the cluster_definition.conf(4) man page.
You can configure IPv4 addresses of any class for the nodes of your cluster by using the nhinstall tool. The CLUSTER_NETWORK parameter enables you to specify the netmask and the subnets for the NIC0, NIC1, and cgtp0 interfaces of your nodes. For example, to define Class B IP addresses for the nodes, the CLUSTER_NETWORK parameter is defined as follows:
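(The addresses below are placeholders and the field order is an assumption; see the cluster_definition.conf(4) man page for the authoritative syntax.)

    # netmask, then the subnets for the NIC0, NIC1, and cgtp0 interfaces
    CLUSTER_NETWORK=255.255.0.0 172.16.0.0 172.17.0.0 172.18.0.0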
You can configure the nhinstall tool to set a floating external address. A floating external address is an external IP address that is assigned to the master role rather than to a specific node. This IP address enables you to connect to the current master node from systems outside the cluster network.
As an option, IPMP (IP Multipathing) can be used to support a floating external address on dual redundant links.
The EXTERNAL_MASTER_ADDRESS parameter controls an external floating address that is not managed by IPMP. This parameter replaces the former EXTERNAL_ACCESS directive, which is now obsolete.
The EXTERNAL_IPMP_MASTER_ADDRESS parameter controls an external floating address that is managed by IPMP.
If you specify an IP address and a network interface for the external address parameter in the cluster_definition.conf file, the floating external address is configured. The External Address Manager daemon, nheamd, that monitors floating addresses and IPMP groups on master-eligible nodes is also installed. This daemon makes sure that the external IP address is always assigned to the current master node. For more information, see the nheamd(1M) man page.
If you do not configure the external address parameter in the cluster_definition.conf configuration file, the floating external address is not created. Therefore, the master node cannot be accessed by systems outside the cluster network.
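A hypothetical cluster_definition.conf fragment follows; the address and interface are placeholders, and the exact value format is described in the cluster_definition.conf(4) man page. Use one parameter or the other, depending on whether IPMP manages the external links.

    # floating external address on a single external interface (no IPMP)
    EXTERNAL_MASTER_ADDRESS=192.168.12.100 hme1
    # or, when IPMP manages redundant external links:
    # EXTERNAL_IPMP_MASTER_ADDRESS=192.168.12.100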
You can configure the nhinstall tool to set external IP addresses on network interfaces to a public network. Then, the nodes can be accessed from systems outside the cluster network.
1. Set the PUBLIC_NETWORK parameter in the cluster_definition.conf file, specifying the public subnet and its netmask.
This parameter also configures the network interface of the installation server. Therefore, the SERVER_IP parameter must be an IP address on the same subnetwork as that defined for PUBLIC_NETWORK. The SERVER_IP parameter is defined in the env_installation.conf file. For more information, see the env_installation.conf(4) man page. A sketch of these entries follows this procedure.
2. Specify the external IP address, external node name, and the external network interface for each NODE definition. For example:
MEN=10 08:00:20:f9:c5:54 - - - - FSNode1 192.168.12.5 hme1
MEN=20 08:00:20:f9:a8:12 - - - - FSNode2 192.168.12.6 hme1
192.168.12.5 and 192.168.12.6 are the external IP addresses.
FSNode1 and FSNode2 are the external node names.
hme1 is the external network interface.
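Putting Step 1 together with this example, the related entries might look like the following sketch. The subnet, netmask, and server address are placeholders, and the PUBLIC_NETWORK field order is an assumption to be checked against the cluster_definition.conf(4) man page.

    # cluster_definition.conf: public subnet and its netmask
    PUBLIC_NETWORK=192.168.12.0 255.255.255.0
    # env_installation.conf: installation server address on that subnet
    SERVER_IP=192.168.12.1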
Physical links can be shared between CGTP and IPMP only when CGTP is used over a VLAN. Before using this configuration, refer to the detailed information about Solaris VLANs and IPMP in the Solaris System Administration Guide: IP Services. Not all network interfaces support VLANs, so check that your interfaces support this use. Solaris shows VLAN interfaces as separate physical interfaces, even though there is only one physical link. Because VLANs are configured by using special names for the interfaces, you must define the topology and the interface names for that topology. Keep the following points in mind when defining your topology:
Be careful not to set the booting interface on a VLAN. Installation is impossible unless the installation server and boot server are both configured to be part of the VLAN.
Do not set the IPMP interfaces on a VLAN unless all other interfaces on all nodes in the group can belong to the same VLAN (including the clients).
CGTP can be configured with both links on a VLAN, or with only one.
The VLANs on the switches must be configured before starting the installation.
It is important to have a third node (for example, a client or a router) with an address in the same subnetwork as the IPMP test addresses to act as a reference. Several reference nodes should be available in order to avoid single points of failure (SPOFs).
For example, consider the three-node cluster shown in FIGURE 2-1. Three ce NICs are on each MEN. On both MENs, ce0 is connected to switch 1, ce1 to switch 2, and ce2 to switch 3. The external router, to which clients connect, is connected to switches 2 and 3. This restricts external access to ce1 and ce2. CGTP can be used on any two NICs; in this case, ce0 and ce1 were chosen, making ce1 a shared interface.
The VLAN is created with VID 123 over the interface ce1 by plumbing an interface called ce123001. In this example, ce0 and ce123001 are used for CGTP, and ce1 and ce2 for IPMP. Create the tagged VLAN on switch 2 (for information about how to create a VLAN, refer to your switch documentation), create a cluster_definition.conf file that respects these interfaces, and launch the installation as in any other case.
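The interface name encodes the VLAN: on Solaris, the VLAN interface instance is the VID multiplied by 1000 plus the device instance, so VID 123 on ce1 gives ce123001. Outside nhinstall, you can check that the driver accepts the tagged interface with a quick sketch like the following; nhinstall itself configures the cluster interfaces from cluster_definition.conf.

    # plumb the tagged VLAN interface (VID 123 on ce1) to verify driver support
    ifconfig ce123001 plumb
    ifconfig ce123001
    # remove the test interface again
    ifconfig ce123001 unplumb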
The volume management feature enables you to do the following:
Increase data availability, because you can mirror disks locally
Increase the number of available replicated partitions, because you can create multiple soft partitions
The volume management software that is installed depends on the version of the Solaris Operating System that you plan to install. For information on supported software versions, see the Netra High Availability Suite Foundation Services 2.1 7/05 Release Notes.
For a Netra 20 server with a Fibre Channel-Arbitrated Loop (FC-AL) disk as a master-eligible node, you must install the Volume Management feature of the Solaris Operating System. For more information, see the Netra High Availability Suite Foundation Services 2.1 7/05 Hardware Guide.
To install the Volume Management software on the nodes of your cluster, perform one of the following procedures:
You can use the nhinstall tool to install and configure volume management for Netra 20 servers with FC-AL disks. Configure the nhinstall tool to support logical disk partitions for FC-AL disks by installing the volume management feature as follows:
1. In the env_installation.conf file, set SOLARIS_INSTALL to ALL.
2. Configure the cluster_definition.conf file:
For a detailed example, see the cluster_definition.conf(4) man page.
3. Run the nhinstall tool to install the Solaris Operating System and Foundation Services on the master-eligible nodes.
For more information, see To Launch the nhinstall Tool.
The nhinstall tool installs and configures the appropriate volume management software depending on the version of the Solaris Operating System you chose to install.
To configure advanced volume management, install the Solaris Operating System and configure the Volume Management feature to suit your needs. Then configure nhinstall to install only the Foundation Services.
1. Install the Solaris Operating System with volume management on the master-eligible nodes.
For more information, see the documentation for your volume management software:
For Solaris 8, Solstice DiskSuite 4.2.1 Installation and Product Notes
For Solaris 9 or Solaris 10, Solaris Volume Manager Administration Guide
This documentation is available at http://docs.sun.com.
Note - Install the same packages of the same version of the Solaris Operating System on both master-eligible nodes. Create identical disk partitions on the disks of both master-eligible nodes.
2. Configure a physical Ethernet card interface that corresponds to the first network interface, NIC0.
3. Configure the /etc/netmasks file.
4. Configure the sizes of the disk partitions.
For more information, see TABLE 2-2.
5. In the env_installation.conf file, set SOLARIS_INSTALL to DISKLESS_DATALESS_ONLY, as shown in the fragment after this procedure.
The Solaris Operating System is configured on the dataless nodes and the Solaris services are configured for the diskless environment.
6. In the cluster_definition.conf file, do the following:
a. Set the LOGICAL_SLICE_SUPPORT parameter to NO.
b. For the SLICE parameter, specify the metadevice names of the disk partitions.
For details on the SLICE parameter, see the cluster_definition.conf(4) man page.
7. Run the nhinstall tool to install the Foundation Services on the master-eligible nodes.
For more information, see To Launch the nhinstall Tool.
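Steps 5 and 6 correspond to fragments like the following; the values are the ones named in this procedure, and the # comments are explanatory only.

    # env_installation.conf: install Solaris only for the diskless and dataless environments
    SOLARIS_INSTALL=DISKLESS_DATALESS_ONLY

    # cluster_definition.conf: disable logical slice support because volume management is already configured manually
    LOGICAL_SLICE_SUPPORT=NO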
Some hardware types require specific or modified versions of the Solaris Operating System that nhinstall is unable to detect automatically. In these cases, you must explicitly force nhinstall to recognize the version of the operating system you want to install on the cluster. To determine if your cluster hardware requires such action, see the Netra High Availability Suite Foundation Services 2.1 7/05 Hardware Guide.
To install a Solaris package set other than the default package set on the cluster nodes, specify the Solaris package set to be installed. For a list of the contents of the default package set, see the /opt/SUNWcgha/config.standard/nodeprof.conf.template file. For information about installing a Solaris package set on cluster nodes, see the nodeprof.conf(4) man page. For information about installing a customized Solaris package set on the diskless nodes, see the diskless_nodeprof.conf(4) man page. For information about installing a customized Solaris package set on the dataless nodes, see the dataless_nodeprof.conf(4) man page.
To install a version of the Solaris Operating System on the diskless or dataless nodes that is different from the version installed on the master-eligible nodes, specify the location of each Solaris distribution in the env_installation.conf file by using the DISKLESS_SOLARIS_DIR and DATALESS_SOLARIS_DIR parameters. By default, the DISKLESS_SOLARIS_DIR and DATALESS_SOLARIS_DIR parameters are set to the same value as the SOLARIS_DIR parameter. For more information, see the env_installation.conf(4) man page.
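For example, assuming Solaris 9 on the master-eligible nodes and Solaris 10 on the diskless and dataless nodes (the directory paths are placeholders):

    # env_installation.conf (fragment)
    SOLARIS_DIR=/export/install/solaris9              # master-eligible nodes
    DISKLESS_SOLARIS_DIR=/export/install/solaris10    # diskless nodes
    DATALESS_SOLARIS_DIR=/export/install/solaris10    # dataless nodes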
There are three data management policies available with the Foundation Services. By default, the nhinstall tool sets the data management policy to be Integrity for data replication over IP, and Availability when using shared disks. To choose another policy, change the value of the following variable in the cluster_definition.conf file.
For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
By default, diskless and dataless nodes reboot if there is no master in the cluster. If you do not want the diskless and dataless nodes to reboot in this situation, add the following line to the cluster_definition.conf file:
For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
By default nhinstall enables this feature. It reduces the time taken for full synchronization between the master and the vice-master disks by synchronizing only the blocks that contain replicated data.
To disable this feature and have all blocks replicated, add the following line to the cluster_definition.conf file:
For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
To activate the sanity check of replicated slices, add the following line to the cluster_definition.conf file:
By default, the nhinstall tool does not activate this feature. For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
By default, disk synchronization starts automatically when the cluster software is installed. If you want to delay the start of disk synchronization, add the following line to the cluster_definition.conf file:
You can trigger disk synchronization at a time of your choice using the nhenablesync tool. For more information, see the cluster_definition.conf(4) and nhenablesync(1M) man pages and Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
By default, nhinstall configures the cluster so that slices are synchronized in parallel. Synchronizing slices one slice at a time reduces the network and disk overhead but increases the time it takes for the vice-master to synchronize with the master. During this time, the vice-master is not eligible to take on the role of master. To enable serialized slice synchronization, add the following line to the cluster_definition.conf file:
For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite Foundation Services 2.1 7/05 Overview.
By default, the Node Management Agent is installed. Set the INSTALL_NMA parameter to NO to prevent this agent from being installed.
By default, the Node State Manager is not installed. Set the INSTALL_NSM parameter to YES to install the Node State Manager.
By default, the SAF CLM API is not installed. Set the INSTALL_SAFCLM parameter to YES to install the SAF CLM API.
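For example, a sketch of the three parameters (the # comments are explanatory only; see the configuration file man pages for where each parameter is set):

    INSTALL_NMA=NO         # do not install the Node Management Agent
    INSTALL_NSM=YES        # install the Node State Manager
    INSTALL_SAFCLM=YES     # install the SAF CLM API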
Copyright © 2006, Sun Microsystems, Inc. All Rights Reserved.