## Configuring the Disk Partitions

Configure the SLICE parameter in the cluster_definition.conf file to specify the disk partitions on the master-eligible nodes. The following table lists the space requirements for example disk partitions of master-eligible nodes in a cluster with diskless nodes.

Table 3-2 Example Disk Partitions of Master-Eligible Nodes
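As an illustrative sketch only, SLICE entries in cluster_definition.conf might look like the following. The slice numbers, sizes, and mount points below are assumptions, not the documented recommendations; use the sizes from Table 3-2 and check the cluster_definition.conf(4) man page for the exact syntax.

```
# Hypothetical layout; adjust slice numbers, sizes, and mount points
# to match Table 3-2 and the cluster_definition.conf(4) man page.
SLICE=0 2048 /
SLICE=1 1024 swap
SLICE=3 2048 /export
SLICE=5 512 /SUNWcgha/local
```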
Note - In a two-node cluster without diskless nodes, partition 3 and partition 5 in the preceding table are not required.

## Configuring the Scoreboard Bitmaps

You can configure the nhinstall tool to store the scoreboard bitmaps of shared partitions either in memory or on the disk. If the BITMAP_IN_MEMORY parameter is set to YES in the cluster_definition.conf file, the bitmaps are stored in memory. When the master node is shut down gracefully, the scoreboard bitmap is saved to disk. If the BITMAP_IN_MEMORY parameter is set to NO, the bitmaps are written to disk at each update.

## Configuring the NFS Option noac

You can configure the nhinstall tool to use the NFS option noac for directories that are mounted remotely. The noac option suppresses data and attribute caching. If the NFS_USER_DIR_NOAC parameter is set to YES in the cluster_definition.conf file, the noac option is used when mounting remote directories. If the NFS_USER_DIR_NOAC parameter is set to NO, the noac option is not used, which enables data and attribute caching.

## Configuring a Direct Link Between the Master-Eligible Nodes

You can configure the nhinstall tool to set up a direct link between the master-eligible nodes by using the serial port on each master-eligible node. Make sure that you have connected the serial ports with a cable before configuring the direct link. This connection prevents a split-brain situation, in which the cluster has two master nodes because the network between the master node and the vice-master node has failed. For an illustration of the connection between the master-eligible nodes, see the Netra High Availability Suite Foundation Services 2.1 6/03 Hardware Guide.

The DIRECT_LINK parameter in the cluster_definition.conf file enables you to define the serial device on each master-eligible node, the speed of the serial line, and the heartbeat (in seconds) for checking the link between the two nodes. For example:
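The example below is a sketch. The field order (device on the first node, device on the second node, line speed, heartbeat interval) is an assumption based on the parameters listed above; the device names and values are placeholders. Verify the exact syntax in the cluster_definition.conf(4) man page.

```
# Assumed format: DIRECT_LINK=<device-node-1> <device-node-2> <speed> <heartbeat>
DIRECT_LINK=/dev/term/a /dev/term/a 115200 20
```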
## Configuring Automatic Reboot for the Master-Eligible Nodes

You can configure the nhinstall tool to reboot the master-eligible nodes automatically during the installation. If the AUTO_REBOOT parameter is set to YES in the env_installation.conf file, you are prompted to boot the master-eligible nodes only the first time. After the first boot, the nhinstall tool reboots the master-eligible nodes automatically. If AUTO_REBOOT is set to NO, the nhinstall tool prompts you to reboot the master-eligible nodes at different stages of the installation. This process requires you to move between console windows to perform tasks directly on the nodes.

## Configuring the Carrier Grade Transport Protocol

You can configure the nhinstall tool to install and configure the Carrier Grade Transport Protocol (CGTP). If the USE_CGTP parameter is set to YES in the cluster_definition.conf file, the nhinstall tool installs CGTP. If the USE_CGTP parameter is set to NO, nhinstall does not install the CGTP packages and patches. In this case, your cluster is configured with a single network interface and does not have a redundant cluster network. For information about the advantages of redundant network interfaces, see the Netra High Availability Suite Foundation Services 2.1 6/03 Overview.

## Configuring the Environment for Diskless Nodes

You can configure the nhinstall tool to install diskless nodes. If you define diskless nodes with the NODE parameter in the cluster_definition.conf file, the nhinstall tool installs the Solaris services for the diskless nodes. The tool also configures the boot options for each diskless node on the master-eligible nodes. If you do not define any diskless nodes in the cluster_definition.conf file, the nhinstall tool gives you the choice of installing the Solaris services for diskless nodes anyway. Type y if you plan to add diskless nodes to the cluster at a later date.
Otherwise, the nhinstall tool does not install the Solaris services for the diskless nodes on the master-eligible nodes. In this case, you cannot use nhinstall to add diskless nodes to the cluster at a later date without reinstalling the software on all nodes of the cluster. Therefore, try to include possible future nodes in your cluster configuration.

Note - You can manually add diskless nodes to a running cluster as described in Chapter 10, Adding a Diskless or a Dataless Node to a Cluster Originally Created Manually.

## Configuring the Boot Policy for Diskless Nodes

You can configure the nhinstall tool to have the diskless nodes in the cluster boot dynamically, statically, or by using the node's client ID. The DISKLESS_BOOT_POLICY parameter in the cluster_definition.conf file enables you to choose a boot policy for the diskless nodes in your cluster. All diskless nodes in a cluster are configured with the same boot policy. The following table summarizes the boot policies supported by the nhinstall tool.

Table 3-3 Boot Policies for Diskless Nodes
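Because the contents of Table 3-3 are not reproduced in this excerpt, the following is only a sketch of the setting. The value names are assumptions reflecting the three policies named above (dynamic, static, client ID); take the real values from Table 3-3 or the cluster_definition.conf(4) man page.

```
# Assumed values: DHCP_DYNAMIC (dynamic boot), DHCP_STATIC (static boot),
# or DHCP_CLIENT_ID (boot using the node's client ID)
DISKLESS_BOOT_POLICY=DHCP_CLIENT_ID
```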
For further information about the boot policies for diskless nodes, see the Netra High Availability Suite Foundation Services 2.1 6/03 Overview.

## Configuring DHCP Configuration Files Locally on Master-Eligible Nodes

By default, nhinstall configures diskless nodes so that the DHCP configuration files are stored in the replicated directory /SUNWcgha/remote/var/dhcp on the master-eligible nodes. You can configure the cluster to put the DHCP configuration files in a local directory, /var/dhcp, on the master-eligible nodes by adding the following line to the cluster_definition.conf file.
When you install with nhinstall and with this feature enabled, nhinstall copies the DHCP configuration files from the master node to the vice-master node.

Note - Do not use this feature if the DHCP configuration is dynamic, that is, if information is stored in the DHCP configuration files at run time. If you enable this feature, each time you update the DHCP configuration files on the master node after initial cluster installation, you must copy these files to the vice-master node. For more information, see the cluster_definition.conf(4) and nhadm(1M) man pages.

## Configuring the Watchdog Timer

You can configure the nhinstall tool to install the Foundation Services Watchdog Timer on each node in the cluster. Set the USE_WDT parameter to YES in the cluster_definition.conf file only if you are using Netra servers that have hardware watchdogs at the Lights-Off Management (LOM) level. You might need to install additional software packages; for further information, see the addon.conf.template file. When this parameter is set to YES, the Foundation Services Watchdog Timer is installed and configured.

Set the USE_WDT parameter to NO if you are using Netra servers with hardware watchdogs at the OpenBoot PROM (OBP) level. These hardware watchdogs are monitored by the server's software. For a list of the types of watchdogs on different Netra servers, see the Netra High Availability Suite Foundation Services 2.1 6/03 Hardware Guide.

## Configuring the Cluster IP Addresses

You can configure IPv4 addresses of any class for the nodes of your cluster by using the nhinstall tool. The CLUSTER_NETWORK parameter enables you to specify the netmask and the subnets for the NIC0, NIC1, and cgtp0 interfaces of your nodes. For example, to define Class B IP addresses for the nodes, the CLUSTER_NETWORK parameter is defined as follows:
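The line below is a sketch of such a definition. The field order (netmask, then the NIC0, NIC1, and cgtp0 subnets) and the addresses themselves are assumptions chosen to illustrate a Class B layout; verify the exact syntax in the cluster_definition.conf(4) man page.

```
# Assumed format: CLUSTER_NETWORK=<netmask> <NIC0-subnet> <NIC1-subnet> <cgtp0-subnet>
CLUSTER_NETWORK=255.255.0.0 172.16.0.0 172.17.0.0 172.18.0.0
```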
## Configuring the Floating External Address of the Master Node

You can configure the nhinstall tool to set a floating external address. A floating external address is an external IP address that is assigned to the master role rather than to a specific node. This IP address enables you to connect to the current master node from systems outside the cluster network.

If you specify an IP address and a network interface for the EXTERNAL_ACCESS parameter in the cluster_definition.conf file, the floating external address is configured. The Node State Manager daemon, nhnsmd, which monitors the master-eligible nodes for switchovers and failovers, is also installed. This daemon ensures that the external IP address is always assigned to the current master node. For more information, see the nhnsmd(1M) man page. If you do not configure the EXTERNAL_ACCESS parameter in the cluster_definition.conf file, the floating external address is not created, and the master node cannot be accessed by systems outside the cluster network.

## Configuring External IP Addresses for Cluster Nodes

You can configure the nhinstall tool to set external IP addresses on network interfaces to a public network. The nodes can then be accessed from systems outside the cluster network. To create an external IP address for a node, the node must have an extra physical or logical network interface. A physical network interface is an unused interface on an existing Ethernet card or on a supplemental HME or QFE Ethernet card, for example, hme2. A logical network interface is an interface that is configured on an existing Ethernet card, for example, hme1:101.
```
NODE=10 08:00:20:f9:c5:54 - - - - FSNode1 192.168.12.5 hme1:5
NODE=20 08:00:20:f9:a8:12 - - - - FSNode2 192.168.12.6 hme1:101
```
In this example:

- 192.168.12.5 and 192.168.12.6 are the external IP addresses.
- FSNode1 and FSNode2 are the external node names.
- hme1:5 and hme1:101 are the external network interfaces.
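The floating external address of the master node, described earlier, is set in the same file through the EXTERNAL_ACCESS parameter. The line below is a sketch only: the field order (IP address, then network interface) and the values themselves are assumptions; verify the exact syntax in the cluster_definition.conf(4) man page.

```
# Assumed format: EXTERNAL_ACCESS=<floating-IP> <network-interface>
EXTERNAL_ACCESS=192.168.12.40 hme1:102
```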