Chapter 2

Choosing the Cluster Hardware

Plan the installation and configuration of your cluster thoroughly to avoid setbacks and delays. Before you start to install your cluster, define what type of cluster you require. For example, you can install a small cluster to test new applications or existing applications. Choose the size and hardware configuration of your cluster to suit your purpose.

For example hardware configurations for your cluster, see Example Cluster Configurations.

Supported Hardware

The following list summarizes the hardware supported with the Foundation Services 2.1 6/03 at the time of publication of this guide:

Servers

Netra T1 105 servers

Netra T1 AC200 servers

Netra T1 DC200 servers

Netra 120 servers

Netra 20 servers

Sun Fire™ V210 servers

Sun Fire V240 servers

Netra 240 servers

Netra CT 410 servers

Netra CT 810 servers

Netra CT 820 servers

Boards

Netra CP2140 boards with Netra CT 410 and Netra CT 810 servers

Netra CP2160 boards with Netra CT 410 and Netra CT 810 servers

Netra CP2300 boards with Netra CT 820 servers

Netra CP2300 boards with Rapid Development Kit (RDK)

Ethernet Cards

10/100 Ethernet cards

1 Gbit Ethernet cards

Disks

SCSI disks

FC-AL disks

IDE disks

Sun StorEdge 3310 disk array

Example Cluster Configurations

The example hardware configurations provided in this section are for clusters with different Sun hardware. Each configuration can be used with the SPARC™ Solaris operating system. You can use these example configurations to design your cluster. For information on the versions of the Solaris operating system and other software supported for different Sun hardware, see the Netra High Availability Suite Foundation Services 2.1 6/03 Release Notes.

Each cluster must have two master-eligible nodes. You can have a mix of diskless nodes and dataless nodes in a cluster. For definitions of the types of nodes, see the Netra High Availability Suite Foundation Services 2.1 6/03 Glossary.

Follow the examples in this section to ensure that the mix of hardware you choose is supported. You can also design a hardware configuration other than those listed below, for example, with Netra 20, Netra 240, Sun Fire V240, or Sun Fire V210 servers as the master-eligible nodes and Netra T1 servers as the master-ineligible nodes. However, there are limits on the hardware that you can mix in a cluster configuration. For example, you cannot have Netra 20, Netra 240, Sun Fire V240, or Sun Fire V210 servers in a cluster containing Netra CT 410, Netra CT 810, or Netra CT 820 servers. For this reason, you are more certain of having a working configuration if you choose one of the example configurations described below.

Two-Node Cluster

Following are examples of hardware configurations for two-node clusters.

Two-Node Cluster With Netra 120 Servers

  • Two Netra 120 servers configured as master-eligible nodes

  • Two Ethernet switches

  • A terminal server to manage the consoles

Two-Node Cluster With Netra 240 Servers

  • Two Netra 240 servers configured as master-eligible nodes

    Each server is fitted with Gigabit Ethernet cards to configure the cluster network connection.

  • Two Ethernet switches

  • A terminal server to manage the consoles

Two-Node Cluster With Netra CP2300 Boards

  • Two Netra CP2300 boards with the Rapid Development Kit (RDK) configured as master-eligible nodes

  • Two Ethernet switches

  • A terminal server to manage the consoles
