Chapter 7

File Sharing and Data Replication

File sharing and data replication on a cluster are provided by the highly available NFS service, Reliable NFS. This chapter describes how the disks on the master node and the vice-master node are partitioned and mirrored. In this chapter, the master node disk and the vice-master node disk are the disks that contain the shared cluster configuration data. This chapter contains the following sections:

Introduction to Reliable NFS

Reliable NFS provides the following services:

  • A reliable file system that gives the vice-master node, the diskless nodes, and the dataless nodes access to data on the master node

  • IP mirroring of disk-based data from the master node to the vice-master node

  • Failover of the IP addresses associated with the master role

The Reliable NFS service is controlled by the nhcrfsd daemon. The nhcrfsd daemon runs on the master node and vice-master node. It controls the failover or switchover from the master node to the vice-master node. If the master node fails, the vice-master node becomes master and the NFS server on the new master node becomes active.

The nhcrfsd daemon responds to changes in the cluster state as it receives notifications from the Cluster Membership Manager. For further information about the Cluster Membership Manager, see Chapter 8, Cluster Membership Manager. The Reliable NFS daemon is monitored by the Daemon Monitor, nhpmd. For further information about the Daemon Monitor, see Chapter 10, Daemon Monitor.

If the impact on performance is acceptable, do not use data and attribute caches when writing to shared file systems. If it is necessary to use data caches to improve performance, ensure that your applications minimize the risk of using inconsistent data. For guidelines on how to use data and attribute caches when writing to shared file systems, see "Using Data Caches in Shared File Systems" in the Netra High Availability Suite Foundation Services 2.1 6/03 Cluster Administration Guide.

For reference information about network tunable parameters and the Solaris kernel, see the Solaris Tunable Parameters Reference Manual for your version of the Solaris operating system.

Volume Management

This section describes how the master node disk and vice-master node disk are partitioned.

The master node, vice-master node, and dataless nodes access their local disks. The vice-master node and dataless nodes also access some disk partitions of the master node. Diskless nodes do not have, or are not configured to use, local disks. Diskless nodes rely entirely on the master node to boot and access services and data.

You can partition your disks as described in Standard Disk Partitioning, or as described in Virtual Disk Partitioning.

Standard Disk Partitioning

To use standard disk partitioning, you must specify your disk partitions in the cluster configuration files. During installation, the nhinstall tool partitions the disks according to the specifications in the cluster configuration files. If you manually install the Foundation Services, you must partition the system disk and create the required file systems manually.
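For a manual installation, the partitioning and file system creation can be done with standard Solaris commands. The following sketch uses illustrative device and slice names (c0t0d0 and the slice numbers from Table 7-1); these are assumptions, so adapt them to your hardware and chosen layout.

```shell
# Illustrative sketch for a manual installation; device names are
# assumptions -- substitute your own. Partition the disk interactively:
format c0t0d0

# Create file systems on the data partitions (here s3 and s4, as in
# the example layout in Table 7-1):
newfs /dev/rdsk/c0t0d0s3
newfs /dev/rdsk/c0t0d0s4

# The scoreboard bitmap partitions (here s5 and s6) are used as raw
# partitions and do not need file systems.
```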

The master node disk and vice-master node disk can be split identically into a maximum of eight partitions. For a cluster containing diskless nodes, the partitions can be arranged as follows:

  • Three partitions for the system configuration

  • Two partitions for data

  • Two partitions for scoreboard bitmaps

  • One free partition

Partitions that contain data are called data partitions. One data partition might typically contain the exported file system for diskless nodes. The other data partition might contain configuration and status files for the Foundation Services. Data partitions are mirrored from the master node to the vice-master node.

To be mirrored, a data partition must have a corresponding scoreboard bitmap partition; a data partition without one cannot be mirrored. For information about the scoreboard bitmap, see IP Mirroring.

Table 7-1 shows an example disk partition for a cluster containing master-eligible nodes and diskless nodes. This example indicates which partitions are mirrored.

Table 7-1 Example Disk Partition for a Cluster of Master-Eligible Nodes and Diskless Nodes

Partition   Use                                               Mirrored
---------   -----------------------------------------------   ------------------------------------------
s0          Solaris boot                                      Not mirrored
s1          Swap                                              Not mirrored
s2          Whole disk                                        Not applicable
s3          Data partition for diskless Solaris images        Mirrored read/write for the diskless nodes
s4          Data partition for middleware data and binaries   Mirrored read/write for applications
s5          Scoreboard bitmap partition                       Used to mirror partition s3
s6          Scoreboard bitmap partition                       Used to mirror partition s4
s7          Free

Master-eligible nodes in a cluster that does not contain diskless nodes do not require partitions s3 and s5.

Virtual Disk Partitioning

Virtual disk partitioning is provided by Solstice DiskSuite 4.2.1 on Solaris 8, and by the Solaris Volume Manager software integrated into Solaris 9.

One of the partitions of a physical disk can be configured as a virtual disk by using Solstice DiskSuite or Solaris Volume Manager. A virtual disk can be partitioned into a maximum of 128 soft partitions. To an application, a virtual disk is functionally identical to a physical disk. The following figure shows one partition of a physical disk configured as a virtual disk with soft partitions.

Figure 7-1 One Partition of a Physical Disk Configured as a Virtual Disk


In Solaris Volume Manager, a virtual disk is called a volume. In Solstice DiskSuite, a virtual disk is called a metadevice.

To use virtual disk partitioning, you must install and configure the Solaris operating system and virtual disk partitioning manually on your cluster. You can then configure the nhinstall tool to install the Foundation Services only, or you can install the Foundation Services manually.
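As a sketch of what manual configuration might look like, the following Solstice DiskSuite or Solaris Volume Manager commands create soft partitions on an existing slice. The metadevice names, slice, and sizes are illustrative assumptions, not a prescribed layout.

```shell
# Illustrative only: create a 2-Gbyte and a 4-Gbyte soft partition
# (d1 and d2) on physical slice c0t0d0s4. Names, slice, and sizes
# are assumptions.
metainit d1 -p c0t0d0s4 2g
metainit d2 -p c0t0d0s4 4g

# Applications then use /dev/md/dsk/d1 and /dev/md/dsk/d2 exactly as
# they would use physical disk slices.
```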

For more information about virtual disk partitioning, see the Solaris documentation.

Logical Mirroring

Logical mirroring is provided by Solstice DiskSuite 4.2.1 for Solaris 8, and by Solaris Volume Manager for Solaris 9.

Logical mirroring can be used on master-eligible nodes with two or more disks. The disks are mirrored locally on the master-eligible nodes, so they always contain identical information. If a disk on the master node crashes or is replaced, the second local disk takes over without a failover.
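A local mirror of this kind might be set up as follows with Solaris Volume Manager, or with the equivalent Solstice DiskSuite commands. The metadevice names and disk slices are illustrative assumptions.

```shell
# Illustrative only: mirror slice s4 of two local disks. Device and
# metadevice names are assumptions.
metainit d11 1 1 c0t0d0s4   # submirror on the first disk
metainit d12 1 1 c1t0d0s4   # submirror on the second disk
metainit d10 -m d11         # one-way mirror using the first submirror
metattach d10 d12           # attach the second submirror; a resync starts
```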

For more information about logical mirroring, see the Solaris documentation.
