- Already mounted. In order to ensure data integrity, please proceed to a 'full sync' using 'nhcrfsadm -f %s' command
This error occurs if a partition is already mounted at boot time.
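To recover, run the full synchronization command given in the message, substituting the partition that was reported. The device path below is only a placeholder:
# nhcrfsadm -f /dev/rdsk/c0t0d0s4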
- Could not disable previous SNDR configuration
Reliable NFS could not disable the SNDR boot time configuration. This can happen if the replication configuration is broken. Flush the replication configuration manually by performing the following steps:
1. Boot the master-eligible nodes in single-user mode (see the boot example after these procedures).
2. Reset the replication configuration on both nodes:
# /usr/opt/SUNWscm/sbin/dscfg -i
3. Re-create an empty replicated configuration file by typing Y at this prompt on both nodes:
# /usr/opt/SUNWscm/sbin/dscfg -i -p /etc/opt/SUNWesm/pconfig
4. Reboot the nodes.
If the problem persists:
1. Boot both master-eligible nodes in single-user mode.
2. On each master-eligible node, edit the /etc/opt/SUNWcgha/target.conf file by setting the attributes field to "--" and the role field to "--". For information about the target.conf file, see the target.conf(4) man page.
3. Repeat Steps 2 to 4 of the previous procedure on each master-eligible node.
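Both procedures start by booting the master-eligible nodes in single-user mode. On SPARC-based Netra hardware, this can be done from the OpenBoot PROM prompt:
ok boot -s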
- Could not export some directories
Reliable NFS could not share some directories. Verify that the directories to be shared exist and that /usr/bin/share exists.
- Could not get port number for server <port number>
Reliable NFS could not bind to the specified service. Check whether the service is defined in the /etc/services file. If it is not defined, add an entry to the /etc/services file to define the service.
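For example, if the failing service is named nhfs-server on port 5412 (both values are placeholders; use the service name and port from your configuration), the /etc/services entry would look like this:
nhfs-server    5412/udp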
- Could not put SNDR into logging mode
Reliable NFS could not stop SNDR replication by putting it into logging mode. Examine the disk configuration.
- Could not reverse SNDR configuration
Reliable NFS could not reverse the SNDR configuration during a switchover. A switchover swaps the primary and secondary SNDR roles. Examine the disk configuration.
- Could not set master dynamic address(es)
Reliable NFS could not set the master node floating address triplet. Verify that the interfaces exist.
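To list the interfaces that are currently plumbed on the node and verify that the required ones exist:
# ifconfig -a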
- Could not start <command name>
Reliable NFS could not execute the specified command. Verify that the command is available on the cluster and that its execution rights are correct.
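To check availability and execution rights (the path below is a placeholder for the command reported in the message):
# ls -l /usr/bin/share
If the execute bits are missing, restore them, for example:
# chmod 555 /usr/bin/share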
- Could not start SNDR
The SNDR service failed to start. Verify that the node has
a valid disk partition configuration. For information about disk partitions,
see the Netra High Availability Suite Foundation Services 2.1 6/03 Custom Installation Guide.
- Could not stop <command name>
Reliable NFS could not execute the specified command. Verify that the command is available on the cluster, and that its execution rights are correct.
- Could not unexport some directories
Reliable NFS could not unshare some directories. Verify that the directories to be unshared exist, and that /usr/bin/unshare exists.
- Could not unset master dynamic address(es)
Reliable NFS could not unset the master node floating address triplet. The specified interfaces might be unknown or unplumbed.
- Emergency reboot of the node
Reliable NFS rebooted the node because the Reliable NFS service did not restart correctly. This can occur if the nhcrfsd daemon dies during a switchover or a failover.
- Error in configuration
The Reliable NFS configuration is incorrect. The text following
the message should indicate the type of configuration error. Verify that the
configuration of the nhfs.conf file for the failing node
is consistent with the nhfs.conf(4) man page.
- Illegal startup case: we are 'master'
but were 'vice-master unsynchronized'. Please restart the node.
The vice-master node was rebooted and became the master node.
This scenario is not allowed. Nodes must be restarted with the same role as
they had before shutdown.
- Mount of local filesystems failed
Reliable NFS could not mount or unmount local file systems and aborted. Verify that mount points and file systems are coherent, and check the device listed in the error message.
- No canonical name found for address <IP address>
No canonical name was found to correspond to the specified address. Canonical names are required for every address. Specify the canonical name for this address in /etc/hosts.
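For example, assuming the reported address is 10.1.1.10 and the node's canonical name is netraMEN1 (both values are placeholders), add a line such as the following to /etc/hosts:
10.1.1.10   netraMEN1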
- Node's CMM mastership and RNFS one is not coherent
Reliable NFS believes that the current node is the master, but the nhcmmd daemon does not consider the current node to be the master.
- Number of SNDR slices is greater than configuration file one
SNDR slices are configured but not managed through Reliable NFS.
- Unable to read kstat data
A partition managed by Reliable NFS disappeared while the cluster was running. This might happen if you change the SNDR configuration while the cluster is running. This scenario is not allowed. Reboot the node.
- Unmount of local filesystems failed
See "Mount of local filesystems failed".
- Vice master has <number> slices, we have <number>: refusing vice master to follow
The master disk and vice-master disks do not have the same disk partition configuration. This is not allowed. Stop the vice-master and change its disk partition configuration to be the same as that on the master. See "Modifying and Adding Disk Partitions" in the Netra High Availability Suite Foundation Services 2.1 6/03 Cluster Administration Guide.
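To compare the partition layouts, run prtvtoc against the replicated disk on each node and check that the slice tables match (the device path is a placeholder):
# prtvtoc /dev/rdsk/c0t0d0s2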
- Vice master has a wrong configuration: refusing vice master to follow
The master's view of the disk configuration on the vice-master disk is not current. Check that the nhfs.conf file is consistent between the two nodes.
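One way to check, assuming the configuration file is /etc/opt/SUNWcgha/nhfs.conf and the vice-master is reachable as netraMEN2 (both are assumptions), is to copy the peer's file and compare:
# rcp netraMEN2:/etc/opt/SUNWcgha/nhfs.conf /tmp/nhfs.conf.vice
# diff /etc/opt/SUNWcgha/nhfs.conf /tmp/nhfs.conf.vice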
- Wrong slice configured in SNDR
This problem occurs if SNDR slices are configured but not managed through Reliable NFS. Do not configure SNDR manually on behalf of Reliable NFS.