3    Software Installation

This chapter describes the interconnect-specific issues that are involved in creating a cluster or adding a member to an existing cluster. It discusses the following topics: preparation (Section 3.1), creating a single-member cluster (Section 3.2), and adding members (Section 3.3).

Note

This information complements information in chapters 1 through 5 of the Cluster Installation manual. See that manual for full procedural discussions of cluster installation.

In this release, the cluster software's ability to facilitate the configuration of a LAN interconnect is limited: it does not fully probe the hardware, enforce configuration restrictions (such as the 100 Mb/s speed requirement), or detect illegal and unwise configurations. For example, the cluster configuration scripts do not probe for all existing network adapters on a member. Rather, the clu_create command prompts you for the name of an adapter and verifies:

The clu_add_member command prompts you for an adapter name and verifies its syntax. Because clu_add_member runs on an existing cluster member before the new member has been booted, it cannot verify the existence of an adapter on the member that it is adding.

Finally, there are no configuration tests for the LAN interconnect in the clu_check_config utility. If you misconfigure the LAN interconnect in a cluster (for example, by specifying nonexistent adapters or NetRAIN virtual interfaces), the system may not be able to boot and form or join a cluster. (See Section 4.7 for information on how to detect and resolve such problems.)

3.1    Preparation

Before running the clu_create and clu_add_member commands to configure a cluster using a LAN interconnect, perform the following tasks. (If you are migrating from Memory Channel to a LAN interconnect, see Section 4.5.)

3.1.1    Obtain the Device Names of the Network Adapters

Obtain the names of eligible Ethernet network adapters on the member to be configured before issuing the clu_create or clu_add_member command. To be eligible, an adapter must:

The cluster installation commands accept the names of either physical Ethernet network adapters or NetRAIN virtual interfaces.

Caution

The cluster installation commands automatically configure the NetRAIN virtual interfaces for the LAN interconnect. Do not manually create the NetRAIN devices prior to running the clu_create script. See Section 4.1 for a discussion of the consequences of doing so.

To learn the device names of eligible network adapters, run the ifconfig -a command on the system that will become the first member of the cluster. Use the hwmgr -get attr -cat network command to determine their speed and transmission mode.
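For example, output similar to the following shows a system with two ee Ethernet adapters, ee0 and ee1, that are candidates for the LAN interconnect. (The adapter names, flags, and addresses are illustrative only; your output will differ.)

# ifconfig -a
ee0: flags=c63<UP,BROADCAST,NOTRAILERS,RUNNING,MULTICAST,SIMPLEX>
     inet 16.140.112.4 netmask fffffc00 broadcast 16.140.115.255 ipmtu 1500
ee1: flags=c23<UP,BROADCAST,NOTRAILERS,SIMPLEX>
lo0: flags=100c89<UP,LOOPBACK,NOARP,MULTICAST,SIMPLEX,NOCHECKSUM>
     inet 127.0.0.1 netmask ff000000 ipmtu 4096

After identifying the candidate adapters, run the hwmgr -get attr -cat network command to confirm the speed (for example, 100 Mb/s) and transmission mode of each one.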

To learn the device names for systems that you intend to add to the cluster, you must first boot the system from the Tru64 UNIX Operating System Volume 1 CD-ROM. The UNIX device names of the Ethernet adapters scroll on the console during the boot process. If you enter a UNIX shell after the system boots, you can enter an ifconfig -a command to list the network adapter device names and the hwmgr -get attr -cat network command to list their properties.

3.1.2    Obtain IP Names and IP Addresses for Each Member's Cluster Interconnect

To allow cluster members to use TCP/IP mechanisms to communicate over the cluster interconnect, regardless of its underlying hardware, the cluster software creates a virtual network within the cluster (Figure 3-1).

Figure 3-1:  Cluster Virtual Network and Physical Communication Channel

This virtual network exists side-by-side with the physical communications channel provided by the cluster interconnect.

For each member, the cluster software establishes a virtual network device for the cluster interconnect. This device is named ics0 and its IP name and IP address are used when establishing the system's membership in the cluster. This name and address represent a member's cluster interconnect in the IFCONFIG and NETDEV entries in its /etc/rc.config file.
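For example, a member whose cluster interconnect virtual device uses the IP address 10.0.0.1 (as in the examples later in this section) might have entries in its /etc/rc.config file similar to the following. The numeric suffix on the NETDEV and IFCONFIG variables is illustrative; it depends on the member's other network configuration.

NETDEV_0="ics0"
IFCONFIG_0="10.0.0.1 netmask 255.255.255.0"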

Note

For TruCluster Server Version 5.1A, the name of a member's cluster interconnect virtual device has changed from mc0 to ics0. If you perform a full installation of Version 5.1A, or perform a rolling upgrade (as described in the Cluster Installation manual) from Version 5.1, the NETDEV_x configuration variable in each member's /etc/rc.config file that corresponds to this device will be defined as ics0.

Similarly, the form of a member's default cluster interconnect IP name offered by the cluster installation scripts (clu_create and clu_add_member) has also changed. The default cluster interconnect IP name is visible in the value of the CLUSTER_NET configuration variable in each member's /etc/rc.config file and in the value of the cluster_node_inter_name attribute of the clubase kernel subsystem in each member's /etc/sysconfigtab file. If you perform a full installation of Version 5.1A, the default for these attributes (formerly member-name-mc0) is offered as member-name-ics0. If you perform a rolling upgrade to Version 5.1A, their values in these files remain member-name-mc0.
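For example, after a full installation of Version 5.1A on a member named pepicelli, the default cluster interconnect IP name appears in both files, as in the following illustrative excerpts:

# Excerpt from /etc/rc.config
CLUSTER_NET="pepicelli-ics0"

# Excerpt from /etc/sysconfigtab
clubase:
        cluster_node_inter_name = pepicelli-ics0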

The number of IP names and IP addresses required for the cluster interconnect thus depends upon the type of cluster interconnect. A cluster using a Memory Channel interconnect requires one IP name and IP address for each member: that of its cluster interconnect virtual interface (ics0). A cluster using a LAN interconnect requires two IP names and IP addresses for each member: one for its cluster interconnect virtual interface (ics0) and one for its cluster interconnect physical interface (whose IP name has the form membermember-ID-icstcp0).

The following example shows the cluster interconnect IP names and IP addresses for two members, pepicelli and polishham, of the deli cluster, which runs on a Memory Channel cluster interconnect:

10.0.0.1 pepicelli-ics0  # first member's  virtual interconnect IP name and address
10.0.0.2 polishham-ics0  # second member's virtual interconnect IP name and address
 

The following example shows the cluster interconnect IP names and IP addresses for two members of the same cluster running on a LAN interconnect:

# first member's cluster interconnect virtual interface IP name and address
10.0.0.1 pepicelli-ics0 
# first member's cluster interconnect physical interface IP name and address
10.1.0.1 member1-icstcp0
# second member's cluster interconnect virtual interface IP name and address
10.0.0.2 polishham-ics0
# second member's cluster interconnect physical interface IP name and address
10.1.0.2 member2-icstcp0
 

The cluster installation scripts mark both the cluster interconnect virtual interface and physical interface with the cluster interface (CLUIF) flag. For example, the following output of the ifconfig -a command shows the cluster interconnect virtual interface (ics0) and the cluster interconnect physical interface (ee0):

# ifconfig -a | grep -p CLUIF
ee0: flags=1000c63<UP,BROADCAST,NOTRAILERS,RUNNING,MULTICAST,SIMPLEX,CLUIF>
     inet 10.1.0.2 netmask ffffff00 broadcast 10.1.0.255 ipmtu 1500
ics0: flags=1100063<UP,BROADCAST,NOTRAILERS,RUNNING,NOCHECKSUM,CLUIF>
     inet 10.0.0.2 netmask ffffff00 broadcast 10.0.0.255 ipmtu 1500
 

The following example shows a cluster interconnect physical interface (nr0) that is a NetRAIN virtual interface:

# ifconfig -a | grep -p CLUIF
ee0: flags=1000c63<UP,BROADCAST,NOTRAILERS,RUNNING,MULTICAST,SIMPLEX,CLUIF>
     NetRAIN Virtual Interface: nr0 
     NetRAIN Attached Interfaces: ( ee1 ee0 ) Active Interface: ( ee1 )
ee1: flags=1000c63<UP,BROADCAST,NOTRAILERS,RUNNING,MULTICAST,SIMPLEX,CLUIF>
     NetRAIN Virtual Interface: nr0 
     NetRAIN Attached Interfaces: ( ee1 ee0 ) Active Interface: ( ee1 )
ics0: flags=11000c63<BROADCAST,NOTRAILERS,NOCHECKSUM,CLUIF>
     inet 10.0.0.2 netmask ffffff00 broadcast 10.0.0.255 ipmtu 1500
nr0: flags=1000c63<UP,BROADCAST,NOTRAILERS,RUNNING,MULTICAST,SIMPLEX,CLUIF>
     NetRAIN Attached Interfaces: ( ee1 ee0 ) Active Interface: ( ee1 )
     inet 10.1.0.2 netmask ffffff00 broadcast 10.1.0.255 ipmtu 1500 

3.2    Create a Single-Member Cluster

When you create a cluster, the clu_create command prompts for the cluster interconnect type (LAN or Memory Channel), offering Memory Channel as a default if a Memory Channel adapter is installed:

See Appendix C for a list of /etc/sysconfigtab attributes written by the clu_create command to define the cluster interconnect.

3.3    Add Members

When you add a member to an existing cluster, the clu_add_member command prompts you for a physical cluster interconnect device name for the LAN interconnect (if the current cluster member was not configured to use Memory Channel). You have the following options:

Note

If you specify the device name of a NetRAIN device that is defined as the physical cluster interconnect device for the member on which you are running the clu_add_member command, the command prompts you to indicate whether you intend to use an identical NetRAIN device (same device name and same participating adapters) on the member you are adding. If you respond "yes," the clu_add_member command defines the device as the cluster interconnect device in the ics_ll_tcp stanza of the /etc/sysconfigtab file.

The clu_add_member command then creates an IP name for the physical cluster interconnect device of the form membermember-ID-icstcp0, where member-ID is the member ID of the new member, and by default offers an IP address of 10.1.0.member-ID for this device.
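For example, if the member being added is assigned member ID 3 (a hypothetical value used here for illustration), the defaults offered by clu_add_member correspond to an /etc/hosts entry similar to the following:

10.1.0.3 member3-icstcp0  # third member's cluster interconnect physical interface IP name and address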

See Appendix C for a list of /etc/sysconfigtab attributes written by the clu_add_member command to define the cluster interconnect.