Sun Microsystems

Configuring the Disk Partitions

Configure the SLICE parameter in the cluster_definition.conf file to specify the disk partitions on the master-eligible nodes.

The following table lists the space requirements for example disk partitions of master-eligible nodes in a cluster with diskless nodes.

Table 3-2 Example Disk Partitions of Master-Eligible Nodes

Partition 0
    File system name: /
    Description: The root file system, boot partition, and volume management software. This partition must be mounted with the logging option.
    Example size: 2 Gbytes minimum

Partition 1
    File system name: /swap
    Description: Swap space. The example size is the minimum when physical memory is less than 1 Gbyte.
    Example size: 1 Gbyte

Partition 2
    File system name: overlap
    Description: The entire disk.
    Example size: Size of the entire disk

Partition 3
    File system name: /export
    Description: Exported file system reserved for diskless nodes. The /export file system must be mounted with the logging option. This partition is further sliced if diskless nodes are added to the cluster.
    Example size: 1 Gbyte + 100 Mbytes per diskless node

Partition 4
    File system name: /SUNWcgha/local
    Description: Reserved for NFS status files, services, and configuration files. The /SUNWcgha/local file system must be mounted with the logging option.
    Example size: 2 Gbytes

Partition 5
    File system name: Reserved for Reliable NFS internal use
    Description: Bitmap partition reserved for nhcrfsd. This partition is associated with the /export file system.
    Example size: 1 Mbyte

Partition 6
    File system name: Reserved for Reliable NFS internal use
    Description: Bitmap partition reserved for nhcrfsd. This partition is associated with the /SUNWcgha/local file system.
    Example size: 1 Mbyte

Partition 7
    File system name: replica or /test1
    Description: If you have configured volume management, this partition must be named replica. This partition is mounted with the logging option. See Configuring Volume Management.
    Example size: The remaining space


Note - In a two-node cluster without diskless nodes, partition 3 and partition 5 in the preceding table are not required.
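The partition layout in Table 3-2 is declared through SLICE entries in the cluster_definition.conf file. The exact field syntax is defined in the cluster_definition.conf(4) man page; the fragment below is only a hypothetical sketch of such entries, with slice numbers, mount points, and sizes taken from the table above:

```
# Hypothetical sketch only -- see cluster_definition.conf(4) for the
# actual SLICE field syntax. Slice numbers and mount points follow
# Table 3-2; sizes are the example values from the table.
SLICE=0 / 2GB
SLICE=3 /export 1GB
SLICE=4 /SUNWcgha/local 2GB
```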


Configuring the Scoreboard Bitmaps

You can configure the nhinstall tool to store the scoreboard bitmaps of shared partitions either in memory or on the disk.

If the BITMAP_IN_MEMORY parameter is set to YES in the cluster_definition.conf file, the bitmaps are configured to be stored in memory. When the master node is shut down gracefully, the scoreboard bitmap is saved on the disk.

If the BITMAP_IN_MEMORY parameter is set to NO, the bitmaps are configured to be written on the disk at each update.
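For example, to store the scoreboard bitmaps in memory, add the following line to the cluster_definition.conf file:

```
BITMAP_IN_MEMORY=YES    # bitmaps kept in memory; saved to disk on graceful shutdown
```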

Configuring the NFS Option noac

You can configure the nhinstall tool to use the NFS option noac for the directories that are mounted remotely. The noac option suppresses data and attribute caching.

If the NFS_USER_DIR_NOAC parameter is set to YES in the cluster_definition.conf file, the noac option is configured when mounting remote directories.

If the NFS_USER_DIR_NOAC parameter is set to NO, the noac option is not configured, which enables data and attribute caching.
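For example, to suppress data and attribute caching on remotely mounted directories, add the following line to the cluster_definition.conf file:

```
NFS_USER_DIR_NOAC=YES    # remote directories are mounted with the NFS noac option
```

The noac option is the standard NFS mount option described in the mount_nfs(1M) man page; it improves consistency between clients at the cost of additional NFS traffic.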

Configuring a Direct Link Between the Master-Eligible Nodes

You can configure the nhinstall tool to set up a direct link between the master-eligible nodes by using the serial port on each master-eligible node. Make sure that you have connected the serial ports with a cable before configuring the direct link. This connection prevents a split brain situation, where there are two master nodes in the cluster because the network between the master node and the vice-master node fails. For an illustration of the connection between the master-eligible nodes, see the Netra High Availability Suite Foundation Services 2.1 6/03 Hardware Guide.

The DIRECT_LINK parameter in the cluster_definition.conf file enables you to define the serial device on each master-eligible node, the speed of the serial line, and the heartbeat interval (in seconds) for checking the link between the two nodes. For example:

DIRECT_LINK=/dev/ttya /dev/ttya 115200 20

Configuring Automatic Reboot for the Master-Eligible Nodes

You can configure the nhinstall tool to reboot the master-eligible nodes automatically during the installation.

If the AUTO_REBOOT parameter is set to YES in the env_installation.conf file, you are prompted to boot the master-eligible nodes the first time only. After the first boot, the master-eligible nodes are automatically rebooted by the nhinstall tool.

If AUTO_REBOOT is set to NO, the nhinstall tool prompts you to reboot the master-eligible nodes at different stages of the installation. This process requires you to move between console windows to perform tasks directly on the nodes.
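For example, to have nhinstall reboot the master-eligible nodes automatically after the first boot, add the following line to the env_installation.conf file:

```
AUTO_REBOOT=YES
```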

Configuring the Carrier Grade Transport Protocol

You can configure the nhinstall tool to install and configure the Carrier Grade Transport Protocol (CGTP).

If the USE_CGTP parameter is set to YES in the cluster_definition.conf file, the nhinstall tool installs CGTP.

If the USE_CGTP parameter is set to NO, nhinstall does not install the CGTP packages and patches. In this case, your cluster is configured with a single network interface. You do not have a redundant cluster network. For information about the advantages of redundant network interfaces, see the Netra High Availability Suite Foundation Services 2.1 6/03 Overview.
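For example, to install CGTP and obtain a redundant cluster network, add the following line to the cluster_definition.conf file:

```
USE_CGTP=YES
```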

Configuring the Environment for Diskless Nodes

You can configure the nhinstall tool to install diskless nodes.

If you define diskless nodes with the NODE parameter in the cluster_definition.conf file, the nhinstall tool installs the Solaris services for the diskless nodes. The tool also configures the boot options for each diskless node on the master-eligible nodes.

If you do not define any diskless nodes in the cluster_definition.conf file, the nhinstall tool gives you the choice of installing the Solaris services for diskless nodes anyway. Type y if you plan to add diskless nodes to the cluster at a later date. Otherwise, the nhinstall tool does not install the Solaris services for the diskless nodes on the master-eligible nodes. In this case, you cannot use nhinstall to add diskless nodes to the cluster at a later date without reinstalling the software on all nodes of the cluster. Therefore, try to include possible future nodes in your cluster configuration.


Note - You can manually add diskless nodes to a running cluster as described in Chapter 10, Adding a Diskless or a Dataless Node to a Cluster Originally Created Manually.


Configuring the Boot Policy for Diskless Nodes

You can configure the nhinstall tool to have the diskless nodes in the cluster boot dynamically, statically, or by using the node's client ID. The DISKLESS_BOOT_POLICY parameter in the cluster_definition.conf configuration file enables you to choose a boot policy for the diskless nodes in your cluster. All diskless nodes in a cluster are configured with the same boot policy.

The following table summarizes the boot policies supported by the nhinstall tool.

Table 3-3 Boot Policies for Diskless Nodes

DHCP dynamic boot policy
    An IP address is dynamically assigned from a pool of IP addresses when the diskless node is booted. If you set the DISKLESS_BOOT_POLICY parameter to DHCP_DYNAMIC, nhinstall configures the diskless nodes with a dynamic boot policy. This policy is configured by default if you do not define the DISKLESS_BOOT_POLICY parameter.

DHCP static boot policy
    The IP address is based on the Ethernet address of the diskless node. The Ethernet address is specified in the cluster_definition.conf file. If you set the DISKLESS_BOOT_POLICY parameter to DHCP_STATIC, nhinstall configures the diskless nodes with a static boot policy.

DHCP client ID boot policy
    The IP address is generated from the diskless node's client ID in a CompactPCI server. If you set the DISKLESS_BOOT_POLICY parameter to DHCP_CLIENT_ID, nhinstall configures the diskless nodes to use the client ID to generate the IP address.

For further information about the boot policies for diskless nodes, see the Netra High Availability Suite Foundation Services 2.1 6/03 Overview.
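For example, to tie each diskless node's IP address to its Ethernet address, add the following line to the cluster_definition.conf file:

```
DISKLESS_BOOT_POLICY=DHCP_STATIC
```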

Configuring DHCP Configuration Files Locally on Master-Eligible Nodes

By default, nhinstall configures diskless nodes so that the DHCP configuration files are stored in the replicated directory /SUNWcgha/remote/var/dhcp on the master-eligible nodes. You can configure the cluster to put the DHCP configuration files in a local directory, /var/dhcp, on the master-eligible nodes by adding the following line to the cluster_definition.conf file.

REPLICATED_DHCP_FILES=NO

When you install with nhinstall and with this feature enabled, nhinstall copies the DHCP configuration files from the master to the vice-master node.


Note - Do not use this feature if the DHCP configuration is dynamic, that is, if information is stored in the DHCP configuration files at run time.


If you enable this feature, each time you update the DHCP configuration files on the master after initial cluster installation, you must copy these files to the vice-master node. For more information, see the cluster_definition.conf(4) and nhadm(1M) man pages.

Configuring the Watchdog Timer

You can configure the nhinstall tool to install the Foundation Services Watchdog Timer on each node in the cluster.

Set the USE_WDT parameter to YES in the cluster_definition.conf file only if you are using Netra servers that have hardware watchdogs at the Lights Out Management (LOM) level. You might need to install additional software packages. For further information, see the addon.conf.template file. When this parameter is set to YES, the Foundation Services Watchdog Timer is installed and configured.

Set the USE_WDT parameter to NO if you are using Netra servers with hardware watchdogs at the OpenBoot™ PROM (OBP) level. These hardware watchdogs are monitored by the server's software. For a list of the types of watchdogs of different Netra servers, see the Netra High Availability Suite Foundation Services 2.1 6/03 Hardware Guide.
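For example, on Netra servers that provide a hardware watchdog at the LOM level, add the following line to the cluster_definition.conf file:

```
USE_WDT=YES
```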

Configuring the Cluster IP Addresses

You can configure IPv4 addresses of any class for the nodes of your cluster by using the nhinstall tool. The CLUSTER_NETWORK parameter enables you to specify the netmask and the subnets for the NIC0, NIC1, and cgtp0 interfaces of your nodes. For example, to define Class B IP addresses for the nodes, the CLUSTER_NETWORK parameter is defined as follows:

CLUSTER_NETWORK=255.255.0.0 192.168.0.0 192.169.0.0 192.170.0.0
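With this definition, each node receives one address per subnet for its NIC0, NIC1, and cgtp0 interfaces. As an illustration only, assuming that the node ID supplies the host part of each address (the actual assignment is performed by nhinstall), the addresses for a node with ID 10 can be sketched as:

```shell
# Illustration only: derive per-interface addresses for a node, assuming
# the node ID is used as the host part on each CLUSTER_NETWORK subnet.
node_addresses() {
    nodeid=$1
    for subnet in 192.168.0.0 192.169.0.0 192.170.0.0; do
        # strip the trailing host part (.0) and append the node ID
        echo "${subnet%.0}.${nodeid}"
    done
}

node_addresses 10    # prints 192.168.0.10, 192.169.0.10, 192.170.0.10
```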

Configuring the Floating External Address of the Master Node

You can configure the nhinstall tool to set a floating external address. A floating external address is an external IP address that is assigned to the master role rather than to a specific node. This IP address enables you to connect to the current master node from systems outside the cluster network.

If you specify an IP address and a network interface for the EXTERNAL_ACCESS parameter in the cluster_definition.conf file, the floating external address is configured. The Node State Manager daemon, nhnsmd, that monitors the master-eligible nodes for switchovers and failovers is also installed. This daemon makes sure that the external IP address is always assigned to the current master node. For more information, see the nhnsmd(1M) man page.

If you do not configure the EXTERNAL_ACCESS parameter in the cluster_definition.conf configuration file, the floating external address is not created. Therefore, the master node cannot be accessed by systems outside the cluster network.
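For example, assuming a field order of address followed by interface (the exact syntax is defined in the cluster_definition.conf(4) man page), a floating external address might be declared as follows. The address 192.168.12.100 and the interface hme1 are placeholders for your own values:

```
EXTERNAL_ACCESS=192.168.12.100 hme1    # hypothetical sketch; verify syntax in cluster_definition.conf(4)
```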

Configuring External IP Addresses for Cluster Nodes

You can configure the nhinstall tool to set external IP addresses on network interfaces to a public network. Then, the nodes can be accessed from systems outside the cluster network.

To create an external IP address for a node, the node must have an extra physical network interface or logical network interface. A physical network interface is an unused interface on an existing Ethernet card or a supplemental HME Ethernet card or QFE Ethernet card, for example, hme2. A logical network interface is an interface that is configured on an existing Ethernet card, for example, hme1:101.

To Configure External IP Addresses for Cluster Nodes

  1. Set the PUBLIC_NETWORK parameter in the cluster_definition.conf file, specifying the subnet and its netmask.

    This parameter also configures the network interface of the installation server. Therefore, the SERVER_IP parameter is an IP address that is on the same subnetwork as defined for PUBLIC_NETWORK. The SERVER_IP parameter is defined in the env_installation.conf file. For more information, see the env_installation.conf(4) man page.

  2. Specify the external IP address, external node name, and the external network interface for each NODE definition. For example:

    NODE=10 08:00:20:f9:c5:54 - - - - FSNode1 192.168.12.5 hme1:5
    NODE=20 08:00:20:f9:a8:12 - - - - FSNode2 192.168.12.6 hme1:101

    • 192.168.12.5 and 192.168.12.6 are the external IP addresses.

    • FSNode1 and FSNode2 are the external node names.

    • hme1:5 and hme1:101 are the external network interfaces.
