## IP Mirroring

This section describes how data is replicated from the master node to the vice-master node, and how these nodes are resynchronized after a failover or switchover.

### Data Partitions and Scoreboard Bitmaps

When data is written to a replicated partition on the master node disk, the corresponding scoreboard bitmap is updated. The scoreboard bitmap maps one bit to each block of data on a replicated partition. When a block of data is changed, the corresponding bit in the scoreboard bitmap is set to 1. When the data has been replicated to the vice-master node, the corresponding bit in the scoreboard bitmap is set to 0. The scoreboard bitmap can reside on a partition on the master node disk or in memory. Each location involves a trade-off: a bitmap on disk survives a restart of the master node, whereas a bitmap in memory avoids the additional disk writes but is lost if the master node restarts.
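The bit-per-block bookkeeping described above can be sketched as follows. This is an illustrative model only, not the Reliable NFS implementation; the block count and method names are assumptions for the example.

```python
# Illustrative sketch of a scoreboard bitmap: one bit per data block on a
# replicated partition. 1 = modified but not yet replicated, 0 = in sync.
# NUM_BLOCKS and all names here are assumptions for the example.

NUM_BLOCKS = 1024  # blocks on the replicated partition (assumed value)

class ScoreboardBitmap:
    """One bit per block of a replicated partition."""

    def __init__(self, num_blocks):
        self.bits = bytearray((num_blocks + 7) // 8)

    def mark_dirty(self, block):
        """Called when a block is written on the master node disk."""
        self.bits[block // 8] |= 1 << (block % 8)

    def mark_clean(self, block):
        """Called when the block has been replicated to the vice-master."""
        self.bits[block // 8] &= ~(1 << (block % 8))

    def is_dirty(self, block):
        return bool(self.bits[block // 8] & (1 << (block % 8)))

    def dirty_blocks(self):
        """Blocks that still need to be copied to the vice-master node."""
        return [b for b in range(len(self.bits) * 8) if self.is_dirty(b)]

bitmap = ScoreboardBitmap(NUM_BLOCKS)
bitmap.mark_dirty(42)         # a write lands in block 42: bit set to 1
print(bitmap.dirty_blocks())  # -> [42]
bitmap.mark_clean(42)         # block 42 replicated: bit back to 0
print(bitmap.dirty_blocks())  # -> []
```

Because the bitmap records only *which* blocks changed, not their contents, it stays small regardless of how much data is written.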
For information about how to configure the scoreboard bitmap in memory or on disk, see "Changing the Location of the Scoreboard Bitmap" in the Netra High Availability Suite Foundation Services 2.1 6/03 Cluster Administration Guide.

### Replication During Normal Operation

Replication is the copying of data from the master node to the vice-master node. Through replication, the vice-master node keeps an up-to-date copy of the data on the master node, which enables it to take over the master role at any time, transparently. After replication, the master node disk and vice-master node disk are synchronized, that is, the mirrored partitions contain exactly the same data. Replication occurs whenever data is written to a replicated partition, and when the cluster is resynchronized after a failover or switchover.
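The normal-operation write path described above can be sketched as follows. This is a simplified model under stated assumptions: dicts stand in for the two disks, a set stands in for the bitmap, and `replicate` stands in for the network transfer.

```python
# Hypothetical sketch of the write path during normal operation: every write
# to the master node disk is mirrored to the vice-master node, and the
# corresponding scoreboard bit is cleared once the copy has completed.

master_disk = {}
vice_master_disk = {}
scoreboard = set()  # dirty block numbers (stand-in for the scoreboard bitmap)

def replicate(block, data):
    """Copy one block to the vice-master node (stand-in for network I/O)."""
    vice_master_disk[block] = data

def write_block(block, data):
    master_disk[block] = data
    scoreboard.add(block)      # bit set to 1: modified, not yet replicated
    replicate(block, data)
    scoreboard.discard(block)  # bit back to 0: the disks are synchronized

write_block(7, b"payload")
print(master_disk == vice_master_disk)  # True: mirrored partitions match
print(scoreboard)                       # set(): no unsynchronized blocks
```

When replication keeps pace with writes, the scoreboard stays empty, which is exactly the synchronized state described above.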
The following figure illustrates a diskless node writing data to the master node, and that data being replicated to the vice-master node.

Figure 7-2 Data Replication

### Replication During Failover and Switchover

During a failover or switchover, the master node goes out of service for a time before being re-established as the vice-master node. During this time, changes made to the new master node disk cannot be replicated to the vice-master node, so the cluster becomes unsynchronized. While the vice-master node is out of service, data continues to be updated on the master node disk, and the modified data blocks are flagged in the scoreboard bitmap. Figure 7-3 illustrates Reliable NFS during failover or switchover.

Figure 7-3 Reliable NFS During Failover or Switchover

When the vice-master node is re-established, replication resumes. Any data written to the master node is replicated to the vice-master node. In addition, the scoreboard bitmap is examined to determine which data blocks were changed while the vice-master node was out of service; these blocks are also replicated to the vice-master node. In this way, the cluster becomes synchronized again. The following figure illustrates the restoration of the synchronized state.

Figure 7-4 Restoration of the Synchronized State

While a cluster is unsynchronized, the data on the master node disk is not fully backed up. Do not schedule major tasks while a cluster is unsynchronized. You can verify whether a cluster is synchronized, as described in "To Verify That the Master Node and Vice-Master Node Are Synchronized" in the Netra High Availability Suite Foundation Services 2.1 6/03 Cluster Administration Guide. You can collect replication statistics by using the Node Management Agent, as described in the Netra High Availability Suite Foundation Services 2.1 6/03 NMA Programming Guide.
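The resynchronization step described above can be sketched as follows. The key point is that only the blocks flagged in the scoreboard bitmap are copied, not the whole partition; the data structures and names are illustrative assumptions.

```python
# Sketch of resynchronization after the vice-master node returns to service:
# the scoreboard identifies the blocks modified while the vice-master was
# out of service, and only those blocks are copied.

def resynchronize(master_disk, vice_master_disk, dirty_blocks):
    """Copy every block modified while the vice-master was out of service."""
    for block in sorted(dirty_blocks):
        vice_master_disk[block] = master_disk[block]
    dirty_blocks.clear()  # all bits back to 0: the cluster is synchronized

master_disk = {0: b"a", 1: b"b2", 2: b"c", 3: b"d2"}
vice_master_disk = {0: b"a", 1: b"b", 2: b"c", 3: b"d"}  # stale copies
dirty = {1, 3}  # blocks changed while the vice-master was down

resynchronize(master_disk, vice_master_disk, dirty)
print(vice_master_disk == master_disk)  # True: synchronized state restored
```

Copying only the flagged blocks is what makes recovery fast: the cost of resynchronization is proportional to the amount of data changed during the outage, not to the size of the partition.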
### Master Node IP Address Failover

For a failover to be transparent to a diskless node or dataless node, these nodes must access the master node through its floating address rather than through the physical address of an individual peer node.
For further information about the floating address of the master node, see Floating Address Triplet.
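The value of the floating address can be illustrated with a minimal client sketch. Because clients always target one cluster-wide address, they never need to know which physical node currently holds the master role; after a failover, the new master answers on the same address. The address, port, and retry policy below are assumptions for the example.

```python
# Illustrative sketch, assuming a floating address of 10.0.0.100 and the
# standard NFS port 2049: a client that simply retries the same address
# reconnects transparently once the new master node takes over.

import socket
import time

FLOATING_ADDR = ("10.0.0.100", 2049)  # assumed floating address and port

def connect_with_retry(addr, attempts=5, delay=1.0):
    """Connect to the floating address, retrying while the master role
    may be moving between nodes. The client code never changes addresses."""
    for _ in range(attempts):
        try:
            return socket.create_connection(addr, timeout=2.0)
        except OSError:
            time.sleep(delay)  # failover in progress; try the same address
    raise ConnectionError("cluster did not answer on the floating address")
```

A client written this way sees a failover only as a brief connection retry, which is what "transparent to a diskless node or dataless node" means in practice.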