This chapter describes the interconnect-specific issues that are involved in creating a cluster or adding a member to an existing cluster. It discusses the following topics:
Preparing to configure a LAN interconnect (Section 3.1)
Creating a single-member cluster using the clu_create command (Section 3.2)
Adding members to the cluster using the clu_add_member command (Section 3.3)
Note
This information complements the information in Chapters 1 through 5 of the Cluster Installation manual. See that manual for full procedural discussions of cluster installation.
In this release, there are some limitations on the cluster software's ability to facilitate the configuration of a LAN interconnect by performing hardware probes, enforcing configuration restrictions (such as 100 Mb/s speed), and detecting illegal and unwise configurations. For example, the cluster configuration scripts do not probe for all existing network adapters on a member. Rather, the clu_create command prompts you for the name of an adapter and verifies:
That it exists
That it is unconfigured
The clu_add_member command prompts you for an adapter name and verifies its syntax. Because clu_add_member runs on an existing cluster member before the new member has been booted, it cannot verify the existence of an adapter on the member that it is adding. Finally, there are no configuration tests for the LAN interconnect in the clu_check_config utility.
If you misconfigure the LAN interconnect in a cluster (for example, by specifying nonexistent adapters or NetRAIN virtual interfaces), the system may not be able to boot and form or join a cluster. (See Section 4.7 for information on how to detect and resolve such problems.)
3.1 Preparation
Before running the clu_create and clu_add_member commands to configure a cluster using a LAN interconnect, perform the following tasks. (If you are migrating from Memory Channel to a LAN interconnect, see Section 4.5.)
Make sure that the /vmunix kernel contains support for the Ethernet hardware you have connected. If it does not, you must boot /genvmunix and rebuild the /vmunix kernel using the doconfig -c command. (A brief sketch of this procedure follows the task list.)
Configure the Ethernet hardware intended for use as the LAN interconnect so that it can be used as a standard network. Use networking utilities such as ifconfig, ping, ftp, and telnet to verify that the hardware is set up correctly. (A brief connectivity-check sketch follows the task list.)
Obtain the device names of the physical Ethernet network adapters on each member system to be used for the LAN interconnect (Section 3.1.1).
Obtain IP names and addresses for the cluster interconnect virtual interface (ics0) and the cluster interconnect physical interface (Section 3.1.2).
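The following sketches illustrate the first two tasks. They are illustrative only: the console boot syntax, the kernel configuration name SWIFT, the adapter name tu1, and the test IP addresses are assumptions that will differ on your systems, and doconfig reports the actual location of the newly built kernel.
>>> boot -file genvmunix               # at the console, boot the generic kernel
# doconfig -c SWIFT                    # rebuild the kernel configuration (SWIFT is a placeholder name)
# cp /usr/sys/SWIFT/vmunix /vmunix     # copy the new kernel from the path that doconfig reports
# shutdown -r now                      # reboot on the new /vmunix
To verify that a candidate adapter works as an ordinary network interface, assign it a temporary address, check connectivity to the corresponding adapter on another prospective member, and then bring the temporary configuration back down:
# ifconfig tu1 192.168.5.1 netmask 255.255.255.0 up    # temporary test address
# ifconfig tu1                                         # confirm that the interface is UP and RUNNING
# ping 192.168.5.2                                     # reach the test address on the other system
# ifconfig tu1 down                                    # mark the interface down when the test is complete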
3.1.1 Obtain the Device Names of the Network Adapters
Obtain the names of eligible Ethernet network adapters on the member to be configured before issuing the clu_create or clu_add_member command.
To be eligible, an adapter must:
Be installed
Not be configured
Be set to run at 100 Mb/s
The cluster installation commands accept the names of either physical Ethernet network adapters or NetRAIN virtual interfaces.
Caution
The cluster installation commands automatically configure the NetRAIN virtual interfaces for the LAN interconnect. Do not manually create the NetRAIN devices prior to running the clu_create script. See Section 4.1 for a discussion of the consequences of doing so.
To learn the device names of eligible network adapters, run the ifconfig -a command on the system that will become the first member of the cluster. Use the hwmgr -get attr -cat network command to determine their speed and transmission mode.
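For example, run the following commands as root on the prospective first member. Adapters that show no inet address in the ifconfig output and whose attributes report a speed of 100 are candidates; the exact attribute names in the hwmgr output depend on the adapter driver.
# ifconfig -a                     # list all network interfaces; eligible adapters have no address configured
# hwmgr -get attr -cat network    # display each network adapter's attributes, including speed and duplex mode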
To learn the device names for systems that you intend to add to the cluster, you must first boot the system from the Tru64 UNIX Operating System Volume 1 CD-ROM. The UNIX device names of the Ethernet adapters scroll on the console during the boot process. If you enter a UNIX shell after the system boots, you can enter an ifconfig -a command to list the network adapter device names and the hwmgr -get attr -cat network command to list their properties.
3.1.2 Obtain IP Names and IP Addresses for Each Member's Cluster Interconnect
To allow cluster members to use TCP/IP mechanisms to communicate over the cluster interconnect, regardless of its underlying hardware, the cluster software creates a virtual network within the cluster (Figure 3-1).
Figure 3-1: Cluster Virtual Network and Physical Communication Channel
This virtual network exists side-by-side with the physical communications channel provided by the cluster interconnect.
For each member, the cluster software establishes a virtual network device for the cluster interconnect. This device is named ics0, and its IP name and IP address are used when establishing the system's membership in the cluster. This name and address represent a member's cluster interconnect in the IFCONFIG and NETDEV entries in its /etc/rc.config file.
Note
For TruCluster Server Version 5.1A, the name of a member's cluster interconnect virtual device has changed from mc0 to ics0. If you perform a full installation of Version 5.1A, or perform a rolling upgrade (as described in the Cluster Installation manual) from Version 5.1, the NETDEV_x configuration variable in each member's /etc/rc.config file that corresponds to this device will be defined as ics0.
Similarly, the form of a member's default cluster interconnect IP name offered by the cluster installation scripts (clu_create and clu_add_member) has also changed. The default cluster interconnect IP name is visible in the value of the CLUSTER_NET configuration variable in each member's /etc/rc.config file and in the value of the cluster_node_inter_name variable of the clubase kernel subsystem in each member's /etc/sysconfigtab file. If you perform a full installation of Version 5.1A, the default for these attributes (formerly member-name-mc0) will be offered as member-name-ics0. If you perform a rolling upgrade to Version 5.1A, their file values remain member-name-mc0.
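To see which of these names and variables a member is currently using, you can query the files on that member. The rcmgr and sysconfig commands shown are standard Tru64 UNIX utilities; the variables are those named in this section.
# rcmgr get CLUSTER_NET                           # the member's cluster interconnect IP name
# grep ics0 /etc/rc.config                        # the NETDEV_x entry that names the ics0 device
# sysconfig -q clubase cluster_node_inter_name    # the value loaded from /etc/sysconfigtab at boot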
The number of IP names and IP addresses required for the cluster interconnect thus depends upon the type of cluster interconnect:
For a cluster using Memory Channel, you need an IP name and IP address for the virtual cluster interconnect device on each member system.
By default, the installation programs offer IP addresses on the 10.0.0 subnet for the virtual cluster interconnect, with the host portion of the address set to the member ID and the IP name set to the short form of the member's host name followed by -ics0.
For a cluster using a LAN interconnect, where communications between members traverse two TCP/IP layers, you need:
An IP name and IP address for the virtual cluster interconnect device on each member system.
By default, the installation programs offer IP addresses on the 10.0.0 subnet for the virtual cluster interconnect, with the host portion of the address set to the member ID and the IP name set to the short form of the member's host name followed by -ics0.
An IP name and IP address on a different subnet for the physical LAN interface.
By default, the cluster installation programs offer IP addresses on the 10.1.0 subnet for the physical cluster interconnect, with the host portion of the address set to the member ID. The IP name is set to memberN-icstcp0, where N is the member ID (for example, member1-icstcp0).
Notes
Manufacturers typically associate a default address with an Ethernet switch to facilitate its management. This address may conflict with the default IP address the cluster installation scripts provide for the virtual cluster interconnect device or the physical LAN interface. In this case, you must ensure that the IP addresses selected for the cluster interconnect differ from that used by the switch. For example, in Figure 2-3, because the switch is addressable by the IP address 10.1.0.1, we have assigned the address 10.1.0.100 to member 1's physical LAN interface.
By default, the installation programs use Class C subnet masks for the IP addresses of both the virtual and physical LAN interconnect interfaces.
Cluster interconnect IP addresses cannot end with either .0 or .255. Addresses of this type are considered broadcast addresses. A system with this type of address cannot join a cluster.
The following example shows the cluster interconnect IP names and IP addresses for two members of the deli cluster, pepicelli and polishham, which is running on a Memory Channel cluster interconnect:
10.0.0.1    pepicelli-ics0     # first member's virtual interconnect IP name and address
10.0.0.2    polishham-ics0     # second member's virtual interconnect IP name and address
The following example shows the cluster interconnect IP names and IP addresses for two members of the same cluster running on a LAN interconnect:
# first member's cluster interconnect virtual interface IP name and address
10.0.0.1    pepicelli-ics0
# first member's cluster interconnect physical interface IP name and address
10.1.0.1    member1-icstcp0
# second member's cluster interconnect virtual interface IP name and address
10.0.0.2    polishham-ics0
# second member's cluster interconnect physical interface IP name and address
10.1.0.2    member2-icstcp0
The cluster installation scripts mark both the cluster interconnect virtual interface and the physical interface with the cluster interface (CLUIF) flag. For example, the following output of the ifconfig -a command shows the cluster interconnect virtual interface (ics0) and the cluster interconnect physical interface (ee0):
# ifconfig -a | grep -p CLUIF
ee0: flags=1000c63<UP,BROADCAST,NOTRAILERS,RUNNING,MULTICAST,SIMPLEX,CLUIF>
     inet 10.1.0.2 netmask ffffff00 broadcast 10.1.0.255 ipmtu 1500
ics0: flags=1100063<UP,BROADCAST,NOTRAILERS,RUNNING,NOCHECKSUM,CLUIF>
     inet 10.0.0.2 netmask ffffff00 broadcast 10.0.0.255 ipmtu 1500
The following example shows a cluster interconnect physical interface (nr0) that is a NetRAIN virtual interface:
# ifconfig -a | grep -p CLUIF
ee0: flags=1000c63<UP,BROADCAST,NOTRAILERS,RUNNING,MULTICAST,SIMPLEX,CLUIF>
     NetRAIN Virtual Interface: nr0
        NetRAIN Attached Interfaces: ( ee1 ee0 )  Active Interface: ( ee1 )
ee1: flags=1000c63<UP,BROADCAST,NOTRAILERS,RUNNING,MULTICAST,SIMPLEX,CLUIF>
     NetRAIN Virtual Interface: nr0
        NetRAIN Attached Interfaces: ( ee1 ee0 )  Active Interface: ( ee1 )
ics0: flags=11000c63<BROADCAST,NOTRAILERS,NOCHECKSUM,CLUIF>
     inet 10.0.0.2 netmask ffffff00 broadcast 10.0.0.255 ipmtu 1500
nr0: flags=1000c63<UP,BROADCAST,NOTRAILERS,RUNNING,MULTICAST,SIMPLEX,CLUIF>
     NetRAIN Attached Interfaces: ( ee1 ee0 )  Active Interface: ( ee1 )
     inet 10.1.0.2 netmask ffffff00 broadcast 10.1.0.255 ipmtu 1500
3.2 Create a Single-Member Cluster
When you create a cluster, the clu_create command prompts for the cluster interconnect type (LAN or Memory Channel), offering Memory Channel as the default if a Memory Channel adapter is installed:
If you specify Memory Channel as the cluster interconnect type, clu_create offers a default physical cluster interconnect interface of mchan0.
If you specify LAN, the clu_create command prompts you for a physical cluster interconnect device name. You have the following options:
Specify the device name of a single network interface, such as tu0 or ee0.
Iteratively specify the device names of multiple network interfaces. The clu_create command allows you to specify that these interfaces be configured in a NetRAIN virtual interface.
Note
If you specify the device name of an existing NetRAIN device (for example, one defined in the /etc/rc.config file), the clu_create command prompts you to confirm that you want to redefine this NetRAIN device as a physical cluster interconnect device. If you respond "yes," the clu_create command removes the definition of the NetRAIN device from the /etc/rc.config file and defines the device as the cluster interconnect device in the ics_ll_tcp stanza of the /etc/sysconfigtab file.
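If you are not sure whether a NetRAIN device is already defined on the system, you can check before running clu_create. In this sketch, nr0 is an illustrative NetRAIN device name:
# grep nr0 /etc/rc.config     # look for an existing NetRAIN definition in the rc.config file
# ifconfig -a | grep nr0      # show whether an nr0 interface is currently configured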
The clu_create command then creates an IP name for the physical cluster interconnect device of the form memberN-icstcp0 and, by default, offers an IP address of 10.1.0.N for this device, where N is the member ID.
See Appendix C for a list of /etc/sysconfigtab attributes written by the clu_create command to define the cluster interconnect.
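Appendix C lists the attributes themselves. To inspect the values in effect on a running member, you can query the two kernel subsystems named in this chapter with the sysconfig command; for example:
# sysconfig -q clubase        # clubase attributes, including cluster_node_inter_name
# sysconfig -q ics_ll_tcp     # low-level TCP transport attributes for the LAN interconnect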
3.3 Add Members
When you add a member to an existing cluster, the clu_add_member command prompts you for a physical cluster interconnect device name for the LAN interconnect (if the current cluster member was not configured to use Memory Channel). You have the following options:
Specify the device name of a single network interface, such as tu0 or ee0.
Iteratively specify the device names of multiple network interfaces. The clu_add_member command allows you to specify that these interfaces be configured in a NetRAIN virtual interface.
Note
If you specify the device name of a NetRAIN device that is defined as the physical cluster interconnect device for the member on which you are running the clu_add_member command, the command prompts you to indicate whether you intend to use an identical NetRAIN device (same device name and same participating adapters) on the member you are adding. If you respond "yes," the clu_add_member command defines the device as the cluster interconnect device in the ics_ll_tcp stanza of the /etc/sysconfigtab file.
The clu_add_member command then creates an IP name for the physical cluster interconnect device of the form memberN-icstcp0 and, by default, offers an IP address of 10.1.0.N for this device, where N is the member ID.
See Appendix C for a list of /etc/sysconfigtab attributes written by the clu_add_member command to define the cluster interconnect.
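After the new member boots and joins the cluster, you can confirm that its interconnect interfaces were configured as intended by checking for the CLUIF flag, as described in Section 3.1.2, and by querying the ics_ll_tcp subsystem:
# ifconfig -a | grep -p CLUIF    # ics0 and the physical (or NetRAIN) interface appear with CLUIF set
# sysconfig -q ics_ll_tcp        # the adapter names recorded for the LAN interconnect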