This chapter lists restrictions on the use of TruCluster Server
Version 5.1A features.
2.1 Hardware Restrictions
This section describes hardware restrictions in a cluster.
2.1.1 Restrictions on Memory Channel 2 on a GS80, GS160, or GS320 System
If you have a Memory Channel 2 (MC2) module installed on a peripheral component interconnect (PCI) bus of a GS80, GS160, or GS320 system, that bus can contain only another MC2 module or the CCMFB fiber-optic module. No other module can be installed on that PCI bus, not even the standard I/O module.
The section on Memory Channel restrictions in the Cluster Hardware Configuration manual incorrectly states that this restriction applies only to redundant MC2 modules jumpered for 512 MB. In fact, the restriction applies to all configurations of MC2 on a GS80, GS160, or GS320 system.
2.2 CFS Restrictions
The Cluster File System (CFS) has the following restrictions in TruCluster Server Version 5.1A:
CFS supports the Network File System (NFS) client for read/write access.
When a file system is NFS-mounted in a cluster, CFS makes it available for read/write access from all cluster members. The member that has actually mounted it serves the file system to other cluster members.
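For example, the cfsmgr command reports which member is currently serving a mounted file system (the mount point /mnt/projects is hypothetical):

   # cfsmgr /mnt/projects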
If the member that has mounted the NFS file system shuts down or fails, the file system is automatically unmounted and CFS begins to clean up the mount points. During the cleanup process, members that access these mount points may see various types of behavior, depending upon how far the cleanup has progressed:
If members still have files open on that file system, their writes will be sent to a local cache instead of to the actual NFS-mounted file system.
After all of the files on that file system have been closed, attempts to open a file on that file system will fail with an EIO error until the file system is remounted.
Applications may encounter "Stale NFS handle" messages. This is normal behavior on a standalone system, as well as in a cluster.
Until the CFS cleanup is complete, members may still be able to create new files at the NFS file system's local mount point (or in any directories that were created locally beneath that mount point).
An NFS file system does not automatically fail over to another cluster member. Rather, you must manually remount it, on the same mount point or another, from another cluster member to make it available again. Alternatively, booting a cluster member remounts any file systems that are listed in the /etc/fstab file but are not currently mounted and served in the cluster. (If you are using AutoFS or automount, the remount happens automatically.)
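As a sketch of the manual recovery, using a hypothetical export nfsserver:/export/projects and mount point /mnt/projects, a surviving member can remount the file system directly:

   # mount -t nfs nfsserver:/export/projects /mnt/projects

If the file system is already listed in /etc/fstab, giving just the mount point is enough:

   # mount /mnt/projects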
CFS does not support a clusterwide /proc file system. A /proc file system can be mounted and accessed only on the local member.
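On Tru64 UNIX, /proc is mounted through the standard procfs entry in /etc/fstab, and in a cluster each member's /proc mount is visible only on that member. A typical entry (standard Tru64 UNIX syntax) looks like this:

   /proc   /proc   procfs  rw 0 0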
CFS does not support a clusterwide File-on-File Mount (FFM) file system. An FFM file system can be mounted and accessed only on the local member.
CFS does not support clusterwide named pipes. The reader and writer of a named pipe must reside on the same member.
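As an illustration, the following reader and writer work only because both run on the same member (the path is hypothetical):

   # mkfifo /tmp/myfifo
   # cat /tmp/myfifo &
   # echo hello > /tmp/myfifo

Running the reader and the writer on different members is not supported.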
2.3 AdvFS Restrictions
The following restrictions apply to the Advanced File System (AdvFS) in a cluster:
The cluster_root domain should contain only one fileset, root. The software does not prevent you from adding filesets to cluster_root, but additional filesets can cause a panic if cluster_root has to fail over.
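You can confirm the domain's contents with the AdvFS showfsets command; in a correctly configured cluster it should report only the root fileset:

   # showfsets cluster_root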
If you create a clone of /, the clusterwide root, and mount it, the cloned fileset is added to the cluster_root domain. If cluster_root has to fail over while the cloned fileset is mounted, the cluster will panic.
If you make backups of the clusterwide root from a cloned fileset, keep to a minimum the time during which the clone is mounted. Mount the cloned fileset, perform the backup, and unmount the clone as quickly as possible.
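The following is one possible sequence, assuming the AdvFS clonefset, vdump, and rmfset commands; the clone name root_clone, the mount point /clone_mnt, and the tape device are hypothetical:

   # clonefset cluster_root root root_clone
   # mkdir /clone_mnt
   # mount -t advfs cluster_root#root_clone /clone_mnt
   # vdump -0 -f /dev/tape/tape0 /clone_mnt
   # umount /clone_mnt
   # rmfset cluster_root root_clone

Removing the clone immediately after the backup keeps the failover exposure window as short as possible.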
You cannot use the addvol command to add volumes to a member's root domain (the a partition on the member's boot disk). Instead, you must delete the member from the cluster, use diskconfig or SysMan to configure the disk appropriately, and then add the member back into the cluster. For the configuration requirements for a member boot disk, see the TruCluster Server Cluster Installation manual.
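In outline, and using member ID 2 purely as an example, the procedure relies on the standard cluster commands:

   # clu_delete_member -m 2
   (configure the member's boot disk with diskconfig or SysMan)
   # clu_add_member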
2.4 Restrictions on Use of Internet Protocol Version 6 (IPv6)
The following restrictions apply to using IPv6 addresses in a cluster:
An IPv6 address cannot be associated with a cluster alias.
An IPv6 address cannot be used for a cluster interconnect address; clu_create and clu_add_member do not accept IPv6 addresses for the interconnect address.
Cluster members can use and advertise IPv6 addresses.
2.5 Restrictions on Multiple Network Interfaces
Tru64 UNIX supports multiple network interfaces in the following configurations:
Multiple network interfaces in the same subnet
Redundant Array of Independent Network Adapters (NetRAIN)
Cluster alias supports NetRAIN; it does not support multiple network interfaces in the same subnet.
The LAN interconnect supports only NetRAIN; it does not support NIC-based link aggregation or multiple network interfaces in the same subnet.
(Switch-to-switch link aggregation is supported, as discussed in the Cluster LAN Interconnect manual.)
For more information about multiple network interfaces, see the section on network interfaces in the Tru64 UNIX Network Administration: Connections manual.