2    Hardware Configuration

This chapter provides basic information on how to configure local area network (LAN) hardware for use as a cluster interconnect. It discusses the following topics:

•  Configuration guidelines (Section 2.1)

•  Supported LAN interconnect configurations (Section 2.2)

This chapter focuses on configuring LAN hardware as a cluster interconnect. For full cluster and storage configuration information, see the Cluster Hardware Configuration manual.

2.1    Configuration Guidelines

Any Ethernet adapter, switch, or hub that works in a standard LAN at 100 Mb/s should work within a LAN interconnect.

Note

Fiber Distributed Data Interface (FDDI), ATM LAN Emulation (LANE), 10 Mb/s Ethernet, and Gigabit Ethernet are not supported in a LAN interconnect.

The following features are required of Ethernet hardware participating in a cluster LAN interconnect:

2.2    Supported LAN Interconnect Configurations

TruCluster Server currently supports up to eight members in a cluster, regardless of whether its cluster interconnect is based on LAN or Memory Channel. Chapter 1 of the Cluster Hardware Configuration manual illustrates some cluster configurations using Memory Channel. The following sections supplement that chapter by discussing the supported LAN interconnect configurations:

2.2.1    Two Cluster Members Directly Connected by a Single Crossover Cable

You can configure a LAN interconnect in a two-member cluster by using a single crossover cable to connect the Ethernet adapter of one member to that of the other, as shown in Figure 2-1. (See Section 3.1.2 for an explanation of the IP addresses shown in the figure.)

Figure 2-1:  Two Cluster Members Directly Connected by a Single Crossover Cable

Note

A crossover cable is required for a point-to-point Ethernet connection between the network adapters of two members when no switch or hub is configured between them.

Because this cluster does not employ redundant LAN interconnect components (each member has a single Ethernet adapter, and a single cable connects the two members), a break in the LAN interconnect connection (for example, the servicing of a member's Ethernet adapter or a detached cable) causes the affected member to leave the cluster. However, if you configure a voting quorum disk in this cluster, the cluster itself will survive the failure of either member, the failure of the quorum disk, or a break in the LAN interconnect connection. Similarly, if you configure one member with a vote and the other with no votes, the cluster will survive the failure of the nonvoting member or of its LAN interconnect connection.
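The arithmetic behind these statements is straightforward, assuming the usual quorum calculation in which the number of votes required for quorum is round_down((expected votes + 2)/2). With one vote per member and a one-vote quorum disk, expected votes is 3 and quorum is round_down(5/2) = 2, so the two votes that remain after the loss of either member or of the quorum disk are enough to maintain quorum. With one voting member and one nonvoting member, expected votes is 1 and quorum is 1, so the cluster continues to run as long as the voting member does. (This formula is shown here for illustration only; see the cluster administration documentation for the complete quorum algorithm.)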

You can expand this configuration by adding a switch between the two members. A switch is required in the following cases:

2.2.2    Cluster Using a Single Ethernet Switch

You can configure a cluster with a single Ethernet hub or switch connecting from two to eight members. For optimal performance, we recommend a switch rather than a hub for clusters of three or more members.

On any member that has multiple Ethernet adapters, you can configure those adapters as a NetRAIN set and use the resulting virtual interface as the member's LAN interconnect interface. Doing so allows the member to remain in the cluster even if it loses one of its connections to the LAN interconnect.
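For illustration, on a Tru64 UNIX system a NetRAIN set is created by grouping existing Ethernet adapters into a single virtual interface with the ifconfig command. A command similar to the following groups two adapters (the adapter names tu0 and tu1 and the nrdev syntax are examples only; verify the exact syntax in the ifconfig(8) reference page for your system):

# ifconfig nr0 nrdev tu0,tu1

In a cluster, the NetRAIN set used for the LAN interconnect is normally defined as part of the cluster installation procedure rather than configured by hand; see the cluster installation documentation for the supported procedure.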

The three-member cluster in Figure 2-2 uses a LAN interconnect incorporating a single Ethernet switch. Each member's cluster interconnect is a NetRAIN virtual interface consisting of two network adapters. (See Section 3.1.2 for an explanation of the IP addresses shown in the figure.)

Figure 2-2:  Three-Member Cluster Using a Single Ethernet Switch

Assuming that each member has one vote, this cluster can survive the failure of a single member or a single break in a member's LAN interconnect connection (for example, the servicing of an Ethernet adapter or a detached cable); because each member's NetRAIN set includes a second adapter, an individual member also survives a single break in its own connection. However, the servicing or failure of the switch makes the cluster nonoperational. The switch remains a single point of failure in a cluster of any size, except in the recommended two-member configurations with a quorum disk discussed in Section 2.2.1. For this reason, the cluster in Figure 2-2 is not a recommended configuration.
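Using the same illustrative quorum arithmetic described in Section 2.2.1, three one-vote members give an expected votes value of 3 and a quorum of round_down(5/2) = 2, so the two members that remain after a single member failure retain quorum and keep the cluster running. A switch failure, by contrast, breaks every member's LAN interconnect connection at once, which is why the switch itself remains a single point of failure.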

By adding a second switch to this cluster, and connecting a LAN interconnect adapter from each member to each switch (as discussed in Section 2.2.3), you can eliminate the switch as a single point of failure and increase cluster reliability.

2.2.3    Cluster Using Fully Redundant LAN Interconnect Hardware

You can achieve a fully redundant LAN interconnect configuration by using NetRAIN and redundant paths from each member through interconnected switches. In the four-member cluster in Figure 2-3 and Figure 2-4, two Ethernet adapters on each member are configured as a NetRAIN virtual interface, two switches are interconnected by two crossover cables, and the Ethernet connections from each member are split across the switches.

Figure 2-3:  Recommended Fully Redundant LAN Interconnect Configuration Using Link Aggregation or Link Resiliency

Figure 2-4:  Recommended Fully Redundant LAN Interconnect Configuration Using the Spanning Tree Protocol

Note

If you are mixing switches from different manufacturers, consult the switch manufacturers to verify compatibility between them.

Like the three-member cluster discussed in Section 2.2.2, this cluster can tolerate the failure of a single member or a single break in a member's LAN interconnect connection (for example, the servicing of an Ethernet adapter or a detached cable). (This assumes that each member has one vote and no quorum disk is configured.) However, this cluster can also survive the failure of a single switch or the loss of one of the crossover cables between the switches.
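By the same illustrative quorum arithmetic (see Section 2.2.1), four one-vote members give an expected votes value of 4 and a quorum of round_down(6/2) = 3, so the three members that remain after a single member failure retain quorum.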

Because NetRAIN must probe the inactive LAN interconnect adapters across switches, the crossover cable connection between the switches is important. Two crossover cables are strongly recommended. When two crossover cables are used, as shown in Figure 2-3 and Figure 2-4, the loss of one of the cables is transparent to the cluster. As discussed in Appendix A, when using parallel interswitch links in this manner, it is important to employ one of the methods provided by the switches for detecting or avoiding routing loops between the switches. These figures indicate the appropriate port settings for the most common methods provided by switches: link aggregation (also known as port trunking) and link resiliency (both shown in Figure 2-3), and the Spanning Tree Protocol (STP) (shown in Figure 2-4). (See Section 3.1.2 for an explanation of the IP addresses shown in the figures.)

In some circumstances (such as the nonrecommended configuration shown in Figure 2-5, which uses a single crossover cable), a broken crossover connection can result in a network partition. If the crossover connection is completely broken, its loss prevents NetRAIN from sending packets to the inactive adapters across the crossover connection. As long as the active adapters of all members remain on the same switch, this situation will not cause the cluster to fail, but it will disable failover between the adapters in the NetRAIN sets.

For example, in the configuration shown in Figure 2-5, the active LAN interconnect adapters of Members 1 and 2 are currently on Switch 1; those of Members 3 and 4 are on Switch 2. If the crossover connection is broken while the cluster is in this state, Members 1 and 2 can see each other but cannot see Members 3 and 4 (and thus will attempt to remove them from the cluster). Likewise, Members 3 and 4 can see each other but cannot see Members 1 and 2 (and thus will attempt to remove them from the cluster). By design, neither partition can achieve quorum: each has two votes out of a required three, and both will hang in quorum loss.

Figure 2-5:  Nonrecommended Redundant LAN Interconnect Configuration

To decrease a cluster's vulnerability to network partitions in a dual-switched configuration, take any or all of the following steps:

2.2.4    Clustering AlphaServer DS10L Systems

Support for the LAN interconnect makes it possible to cluster more basic AlphaServer systems, such as the Compaq AlphaServer DS10L. The AlphaServer DS10L is an entry-level system that ships with two 10/100 Mb/s Ethernet ports, one 64-bit PCI expansion slot, and a fixed internal IDE disk. The 44.7 x 52.1 x 4.5-centimeter (17.6 x 20.5 x 1.75-inch (1U)) size of the AlphaServer DS10L, and the ability to rackmount large numbers of them in a single M-series cabinet, make clustering them an attractive option, especially for Web-based applications.

When you configure an AlphaServer DS10L in a cluster, we recommend that you use the single PCI expansion slot for the host bus adapter for shared storage (where the cluster root, member boot disks, and optional quorum disk reside), one Ethernet port for the external network, and the other Ethernet port for the LAN interconnect. Figure 2-6 shows a very basic low-end cluster of this type consisting of four AlphaServer DS10Ls.

Figure 2-6:  Low-End AlphaServer DS10L Cluster

Although the configuration shown in Figure 2-6 represents an inexpensive and useful entry-level cluster, its LAN interconnect and shared SCSI storage bus present single points of failure. That is, if the shared storage bus or the LAN interconnect switch fails, the cluster becomes unusable.

To eliminate these single points of failure, the configuration in Figure 2-7 adds two AlphaServer ES40 members to the cluster, plus two parallel interswitch connections. Two AlphaServer DS10L members are connected via Ethernet ports to one switch on the LAN interconnect; two are connected to the other switch. A Fibre Channel fabric employing redundant Fibre Channel switches replaces the shared SCSI storage in the previous configuration.

Although not explicitly shown in the figure, the host bus adapters of two of the DS10Ls are connected to one Fibre Channel switch; those of the other two DS10Ls are connected to the other Fibre Channel switch.

Figure 2-7:  Cluster Including Both AlphaServer DS10L and AlphaServer ES40 Members

The physical LAN interconnect device on each of the two AlphaServer ES40 members consists of two Ethernet adapters configured as a NetRAIN virtual interface. On each ES40, one adapter is cabled to the first Ethernet switch and the other to the second. Similarly, each ES40 contains two host bus adapters connected to the Fibre Channel fabric: one is connected to the first Fibre Channel switch and the other to the second.

When assigning votes in this cluster, you have a number of possibilities: