PROBLEM: (89939) (PATCH ID: TCR520-024)
********
If clu_common returns the error "disk is in use by an LSM volume" and you have
an LSM volume with the same name as the disk media name, you need this patch
in order to install on that disk.

PROBLEM: (90611, 79976) (PATCH ID: TCR520-057)
********
When attempting to install a cluster, if the member disk is larger than 10 GB,
the clu_create program will crash. Likewise, if a quorum disk larger than
10 GB is used, the clu_quorum program will crash.

PROBLEM: (92920, 93306) (PATCH ID: TCR520-154)
********
If the customer has not run the versw command, one major problem may occur:
when rolling the cluster to a newer version, clu_upgrade may conclude that it
is rolling two levels instead of one and prevent the roll. Beyond that, the
only "problem" is that any new functionality the user hoped to enable by
upgrading remains unavailable. This patch helps the customer only if it is
installed on the base system. A user who is already running in a cluster will
have to run the versw commands by hand.

PROBLEM: (90192, 90194, 90436, 92183, 92191, 92316, 92551, 92663, 92228) (PATCH ID: TCR520-128)
********
After cluster creation, the /etc/ifaccess.conf file will contain deny entries
for the cluster interconnect. After a member is added, the member's
/etc/ifaccess.conf file will not contain entries for the cluster physical
address if this is a LAN cluster. When checking for filtering,
netstat -I{interface_name} -c will report that filtering is disabled.
Filtering can be enabled by placing a filter flag in the rc.config file after
the device entry (see the example commands at the end of this section). This
fix will automatically enable filtering on cluster installation and member
addition.

Layered product kits located in places other than /usr/var are not propagated
to added members.

Any installation or addition of members using disks that have been zeroed
with disklabel -z, or disks that are brand new, will fail. An error is
returned stating that the tool was unable to label the disk (see the example
commands at the end of this section).

PROBLEM: (89540, 93066) (PATCH ID: TCR520-207)
********
PROBLEM: clu_quorum returns an error about a mismatch between the
clu_bdmgr.conf file and the CNX partition. The cluster is running LSM, and
one of the members with private LSM disks has been deleted or is down.

PROBLEM: clu_recoverymgr prints "Unable to retrieve unique identifier for
hardware id #". The cluster is running LSM, and one of the members with
private LSM disks has been deleted or is down.
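
The following is a minimal command sketch related to the filtering notes under
TCR520-128 above. It assumes a member whose interconnect interface is tu0 and
whose rc.config variables for that device are NETDEV_0/IFCONFIG_0; all three
names are placeholders, and the "filter" ifconfig argument is inferred from
the problem text, so verify against the member's actual configuration before
making changes.

    # Report whether address filtering is active on the interconnect
    # interface (tu0 is a placeholder device name):
    netstat -Itu0 -c

    # Enable filtering on the running interface (the "filter" argument
    # is the flag the problem text refers to):
    ifconfig tu0 filter

    # Persist it by appending "filter" to the device's ifconfig
    # arguments in rc.config (variable index 0 is assumed; check which
    # NETDEV_n entry names the interconnect device first):
    rcmgr get NETDEV_0
    rcmgr get IFCONFIG_0
    rcmgr set IFCONFIG_0 "<existing arguments> filter"

With the patch installed, cluster installation and member addition enable
filtering automatically, so the manual steps above matter mainly for members
that were configured without it.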
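
For the zeroed or brand-new disk failure under TCR520-128, the sketch below
shows one way to confirm the symptom and a possible manual workaround; it is
not taken from the patch text. The device name dsk5 and the <disk_type>
argument are placeholders, and some versions accept disklabel -rw without a
disk type; see disklabel(8).

    # A disk zeroed with "disklabel -z" (or a brand-new disk) has no
    # label, and reading it back reports that fact:
    disklabel -r dsk5

    # Writing a default label before running clu_create or
    # clu_add_member is one manual workaround; supply the disk type
    # if your version requires it:
    disklabel -rw dsk5 <disk_type>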