
Error Messages Written by Reliable NFS

Already mounted. In order to ensure data integrity, please proceed to a 'full sync' using 'nhcrfsadm -f %s' command

This error occurs if a partition is already mounted at boot time.

Could not disable previous SNDR configuration

Reliable NFS could not disable the SNDR boot time configuration. This can happen if the replication configuration is broken. Flush the replication configuration manually by performing the following steps:

  1. Boot the master-eligible nodes in single-user mode:

    ok> boot -s

  2. Reset the replication configuration on both nodes:

    # /usr/opt/SUNWscm/sbin/dscfg -i

  3. Re-create an empty replicated configuration file on both nodes, typing Y at the confirmation prompt:

    # /usr/opt/SUNWscm/sbin/dscfg -i -p /etc/opt/SUNWesm/pconfig
    (Type Y for YES) Y

  4. Reboot the nodes.

  5. If the problem persists:

    1. Boot both master-eligible nodes in single user mode.

    2. On each master-eligible node, edit the /etc/opt/SUNWcgha/target.conf file by setting the attributes field to "--" and the role field to "--".

      For information about the target.conf file, see the target.conf(4) man page.

    3. Repeat Steps 2 to 4 on each master-eligible node.

Could not export some directories

Reliable NFS could not share some directories. Verify that the directories listed to be shared exist and that /usr/bin/share exists.

Could not get port number for server <port number>

Reliable NFS could not bind to the specified service. Verify that the service is defined in the /etc/services file. If it is not defined, add an entry to the /etc/services file to define the service.
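The check-and-add step can be sketched as follows. The service name nhfs-server and port 5678/udp are placeholder values, not part of the product; substitute the service named in the error message.

```shell
# Check whether a service is defined in a services file and append a
# definition if it is missing. The service name and port used in the
# examples below are placeholders.
add_service() {
    svc=$1; port=$2; file=${3:-/etc/services}
    if grep -q "^${svc}[[:space:]]" "$file"; then
        echo "${svc}: already defined"
    else
        printf '%s\t%s\n' "$svc" "$port" >> "$file"
        echo "${svc}: added"
    fi
}

# On the cluster node (as root):
#   add_service nhfs-server 5678/udp
```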

Could not put SNDR into logging mode

You cannot stop SNDR. Examine the disk configuration.

Could not reverse SNDR configuration

You cannot reverse SNDR during a switchover. This role reversal is handled by switching the primary and secondary SNDR roles. Examine the disk configuration.

Could not set master dynamic address(es)

Reliable NFS could not set the master node floating address triplet. Verify that the interfaces exist.

Could not start <command name>

Reliable NFS could not execute the specified command. Verify that the command is available on the cluster and that its execution rights are correct.
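Both checks can be combined in a small helper such as the sketch below; /usr/bin/share stands in for whatever command the message names.

```shell
# Report whether a command exists and has execute permission.
check_cmd() {
    if [ -x "$1" ]; then
        echo "$1: ok"
    else
        echo "$1: missing or not executable"
        ls -l "$1" 2>/dev/null    # show current ownership and mode, if any
    fi
}

check_cmd /bin/sh
# On the failing node:
#   check_cmd /usr/bin/share
```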

Could not start SNDR

The SNDR service failed to start. Verify that the node has a valid disk partition configuration. For information about disk partitions, see the Netra High Availability Suite Foundation Services 2.1 6/03 Custom Installation Guide.

Could not stop <command name>

Reliable NFS could not execute the specified command. Verify that the command is available on the cluster, and that its execution rights are correct.

Could not unexport some directories

Reliable NFS could not unshare some directories. Verify that the directories listed to be unshared exist, and that /usr/bin/unshare exists.

Could not unset master dynamic address(es)

Reliable NFS could not unset the master node floating address triplet. The specified interfaces might be unknown or unplumbed.

Emergency reboot of the node

Reliable NFS rebooted the node because it did not restart correctly. This can occur if the nhcrfsd daemon dies during a switchover or a failover.

Error in configuration

The Reliable NFS configuration is incorrect. The text following the message should indicate the type of configuration error. Verify that the configuration of the nhfs.conf file for the failing node is consistent with the nhfs.conf(4) man page.

Illegal startup case: we are 'master' but were 'vice-master unsynchronized'. Please restart the node.

The vice-master node was rebooted and became the master node. This scenario is not allowed. Nodes must be restarted with the same role as they had before shutdown.

Mount of local filesystems failed

Reliable NFS could not mount or unmount local filesystems, and aborted. Verify that mount points and file systems are coherent. For the device listed in the error, check:

  • The associated mount point in the /etc/vfstab file

  • The access permission of this mount point
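The two checks above can be sketched as a short script, assuming a Solaris-style /etc/vfstab; the device path shown is a placeholder for the one in the error message.

```shell
# Look up the mount point recorded for a device in a vfstab file and
# show its access permissions. In vfstab, field 3 of each line is the
# mount point for the device named in field 1.
check_vfstab() {
    dev=$1; vfstab=${2:-/etc/vfstab}
    mp=$(awk -v d="$dev" '$1 == d { print $3 }' "$vfstab")
    if [ -z "$mp" ]; then
        echo "no vfstab entry for $dev"
        return 1
    fi
    echo "mount point: $mp"
    ls -ld "$mp"    # verify the mount point's access permissions
}

# On the failing node (device path is illustrative):
#   check_vfstab /dev/dsk/c0t0d0s4
```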

No canonical name found for address <IP address>

No canonical name was found to correspond to the specified address. Canonical names are required for every address. Specify the canonical name for this address in /etc/hosts.
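In a hosts file, the first name after the address on a line is the canonical name and any later names are aliases. A minimal lookup sketch, with an illustrative address:

```shell
# Print the canonical name (second field) recorded for an address in a
# hosts file; prints nothing if the address has no entry.
canonical_name() {
    awk -v a="$1" '$1 == a { print $2; exit }' "${2:-/etc/hosts}"
}

# On the failing node (the address is a placeholder):
#   canonical_name 10.250.1.10
```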

Node's CMM mastership and RNFS one is not coherent

Reliable NFS believes that the current node is the master, but the nhcmmd daemon does not consider the current node to be the master.

Number of SNDR slices is greater than configuration file one

SNDR slices are configured but not managed through Reliable NFS.

Unable to read kstat data

A partition managed by Reliable NFS disappeared while the cluster was running. This might happen if you change the SNDR configuration while the cluster is running. This scenario is not allowed. Reboot the node.

Unmount of local filesystems failed

See "Mount of local filesystems failed".

Vice master has <number> slices, we have <number>: refusing vice master to follow

The master and vice-master disks do not have the same disk partition configuration. This is not allowed. Stop the vice-master and change its disk partition configuration to be the same as that on the master. See "Modifying and Adding Disk Partitions" in the Netra High Availability Suite Foundation Services 2.1 6/03 Cluster Administration Guide.

Vice master has a wrong configuration: refusing vice master to follow

The master's view of the disk configuration on the vice-master disk is not current. Check that the nhfs.conf file is consistent between the two nodes.

Wrong slice configured in SNDR

This problem occurs if SNDR slices are configured but not managed through Reliable NFS. Do not use SNDR on behalf of Reliable NFS.

Error Messages Written by the Reliable Boot Service

cmm_connect() failed (#)

Examine log files for messages from nhpmd saying that the nhcmmd daemon was stopped. If necessary, reboot the node to restart the nhcmmd daemon.
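The log search can be sketched as follows, assuming the usual Solaris system log location /var/adm/messages:

```shell
# Search a system log for nhpmd messages that mention the nhcmmd daemon.
find_nhpmd_msgs() {
    grep nhpmd "${1:-/var/adm/messages}" | grep nhcmmd
}

# On the failing node:
#   find_nhpmd_msgs
```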

Error Messages Written by the Watchdog Timer

CPCI: cannot find 'watchdog-level1' node in PICL tree

Verify the patch level of the SUNWpiclu package against the required patches listed in the release notes of the hardware platform.

CPCI: configure: cannot connect to PICL daemon

The picl daemon is not running. Reboot the node to restart this daemon.

LOM: cannot stat /dev/lom

Verify that the LOM driver packages, SUNWlomu and SUNWlomr, are installed. If these packages are not installed, install them.
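Both the package check and the device check can be sketched as below; pkginfo is Solaris-specific, so this only makes sense on a cluster node.

```shell
# Report missing LOM driver packages and a missing /dev/lom device node.
check_lom() {
    for p in SUNWlomu SUNWlomr; do
        pkginfo -q "$p" 2>/dev/null || echo "$p is not installed"
    done
    [ -c /dev/lom ] || echo "/dev/lom is missing"
}

# On the failing node:
#   check_lom
```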

nhwdtd could not read config file, exiting

The nhwdtd daemon cannot find the nhfs.conf file, or the contents of this file are invalid. Compare the contents of the file with the requirements described in the nhfs.conf(4) man page.

Error Messages Written by the Node Management Agent

CMM statistics (JNI) Failed to get stats from CMM: [CMM status]

A call to the CMM succeeded from an RPC point of view. However, the CMM internals were unable to return valid statistics. Check the status of the nhcmmd daemon and its processes.

CMM statistics (JNI) Failed to get stats from CMM: [rpc return code]

An RPC error occurred during an access to the CMM statistics. Use the RPC return code to diagnose and correct the problem.

CMM statistics (JNI). Unable to access CMM statistics (can't access cmm-api service port number)

The CMM is incorrectly configured. Confirm that /etc/services contains an entry for cmm-api.
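A minimal sketch of the check:

```shell
# Succeed if the named service is defined in a services file.
check_service() {
    if grep -q "^$1[[:space:]]" "${2:-/etc/services}"; then
        echo "$1: defined"
    else
        echo "$1: missing"
    fi
}

# On the failing node:
#   check_service cmm-api
```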

CMM statistics (JNI). Unable to access CMM statistics (can't access tcp netconfig).

The netconfig database is incorrectly configured for TCP. Correct the /etc/netconfig configuration.

CMM statistics (JNI) rpc call failed

RPC failed while attempting to access Cluster Membership Manager statistics. Correct the RPC configuration.

KSTAT (JNI). Unable to launch CGTP

CGTP statistics are not available. Confirm that the redundant network is available and that the network configuration is correct.
