CHAPTER 9

Upgrading the Cluster

The procedures described in this chapter assume the following conditions for a successful upgrade:

If these assumptions are correct, see the following sections to start upgrading your cluster:

If these assumptions cannot be fulfilled, reinstall the cluster. For instructions, see one of the following:


Preparing the Installation Server


To Prepare the Installation Server

1. Log in to your installation server as superuser.

2. Check that the installation server is connected to the cluster network.

For more information, see To Connect the Installation Server to the Cluster Network.
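
For example, a quick connectivity check is to ping one of the cluster nodes from the installation server (the node name netraMEN1 is illustrative):

# ping netraMEN1
netraMEN1 is alive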

3. Check that the mountd and nfsd daemons are running on the installation server.

For example, use the ps command:


# ps -ef | grep mountd
root 184 1 0 Aug 03 ? 0:01 /usr/lib/autofs/automountd
root 290 1 0 Aug 03 ? 0:00 /usr/lib/nfs/mountd
root 2978 2974 0 17:40:34 pts/2 0:00 grep mountd
#
# ps -ef | grep nfsd
root 292 1 0 Aug 03 ? 0:00 /usr/lib/nfs/nfsd -a 16
root 2980 2974 0 17:40:50 pts/2 0:00 grep nfsd
#

If a process ID is not returned for the mountd and nfsd daemons, start the NFS daemons:


# /etc/init.d/nfs.server start

4. Add the following lines to the /etc/dfs/dfstab file to share the directories containing the software distributions for the Foundation Services 2.1 7/05 release and the Solaris Operating System:

share -F nfs -o ro,anon=0 software-distribution-dir
share -F nfs -o ro,anon=0 Solaris-distribution-dir
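
For example, if the two distributions were copied to /export/NetraHASuite and /export/Solaris (illustrative paths), the entries would read:

share -F nfs -o ro,anon=0 /export/NetraHASuite
share -F nfs -o ro,anon=0 /export/Solaris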

5. Share the directories defined in the /etc/dfs/dfstab file:

# shareall

6. Log in to every cluster node as superuser and create the mount point directory /NetraHASuite:

# mkdir /NetraHASuite


Rolling Upgrade for a Cluster From Netra HA Suite 2.1 6/03 to 2.1 7/05

The upgrade is done one node at a time, so that the cluster never stops providing service.


To Upgrade the Cluster

The following steps summarize the whole procedure. Each step is described in more detail in the referenced procedures.

1. Upgrade the diskless nodes one by one, as described in To Upgrade a Diskless Node.

The rest of the nodes go on providing services.

2. Upgrade the dataless nodes one by one, as described in To Upgrade a Dataless Node.

The rest of the nodes go on providing services.

3. If you use a serial cable, unplug it.

The new release implements a protocol that is incompatible with the old one.

4. Upgrade one master-eligible node, as described in To Upgrade a Master-Eligible Node.

5. Upgrade the other master-eligible node, as described in To Upgrade a Master-Eligible Node.

6. If you use a serial cable, plug it in again.

7. Wait for the synchronization to finish.

Use the following command to see the synchronization status, and wait until it reads READY:

# /opt/SUNWcgha/sbin/nhcmmstat -c master

8. If you use the Node Management Agent, upgrade it as described in To Upgrade the Node Management Agent.


To Upgrade a Diskless Node

Perform this procedure on each diskless node, one node at a time.

1. Log in to the node as superuser.

2. List the installed packages:

# /opt/SUNWcgha/sbin/nhadm check installation

Output similar to the following is produced:

TABLE 9-1 Installed Packages List

OS and Software checking

 64-bit kernel mode                        OK
 OS release                                OK

Local packages

 Package SUNWnhtp9                         OK
 Package SUNWnhtu9                         OK
 Package SUNWnhadm                         OK
 Package SUNWnhcdt                         OK
 Package SUNWnhcmd                         OK
 Package SUNWnhcma                         OK
 Package SUNWnhcmb                         OK
 Package SUNWnhpma                         OK
 Package SUNWnhpmb                         OK
 Package SUNWnhpmn                         OK
 Package SUNWnhpms                         OK
 Package SUNWnhmas                         OK
 Package SUNWnhsms                         OK

Shared packages (on /SUNWcgha/swdb)

 Package SUNWjdrt                          OK
 Package SUNWnhmaj                         OK
 Package SUNWnhmal                         OK
 Package SUNWnhmad                         OK

Patches
Can take a long time ...

 Patch 112233-03                           OK
 Patch 112902-06                           OK
 Patch 112904-01                           OK
 Patch 112917-01                           OK
 Patch 112918-01                           OK
 Patch 112919-01                           OK
 INST_RELEASE file for shared packages     OK

Keep the list of packages under the "Local packages" heading. This is the list of packages installed on the node; you will need it in later steps.
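
For example, you can save the output to a file for later reference (the file name is illustrative):

# /opt/SUNWcgha/sbin/nhadm check installation > /var/tmp/pkglist.before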

3. Stop the node.

# sync
# uadmin 1 0

4. Log in to the master node as superuser.

5. Mount the Netra HA Suite 2.1 7/05 distribution on /NetraHASuite.

# mount -F nfs installation-server-IP-address:/software-distribution-dir \
/NetraHASuite

installation-server-IP-address is the IP address of the cluster network interface that is connected to the installation server.

software-distribution-dir is the directory that contains the Foundation Services 2.1 7/05 packages.

6. Remove the old local packages.

The following command is an example that includes all possible affected packages. Some of them might not be installed on your system; do not try to remove packages that are not installed. To determine which packages are actually installed, use the list generated in Step 2: the packages to remove are those that appear both in the example and in that list. Be careful to replace the diskless-node-name tag with the name of the current node.

# pkgrm -R /export/root/diskless-node-name \
SUNWnhadm SUNWnhtp9 SUNWnhtu9 SUNWnhcdt SUNWnhcma SUNWnhcmb \
SUNWnhcmd SUNWnhhb SUNWnhpma SUNWnhpmb SUNWnhpmm SUNWnhpmn \
SUNWnhpms SUNWnhmas
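
If you want to double-check the list before removing anything, the following Bourne shell sketch prints only those candidate packages that are actually installed under the node root (the node name netraDiskless1 is illustrative):

# for pkg in SUNWnhadm SUNWnhtp9 SUNWnhtu9 SUNWnhcdt SUNWnhcma \
SUNWnhcmb SUNWnhcmd SUNWnhhb SUNWnhpma SUNWnhpmb SUNWnhpmm \
SUNWnhpmn SUNWnhpms SUNWnhmas
do pkginfo -q -R /export/root/netraDiskless1 $pkg && echo $pkg
done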

7. Install the new packages.

Install only the packages that were already installed for the previous version of the software, as recorded in the list generated in Step 2. The following command is an example.

Be careful to replace the version tag in the path with the Solaris version you are using, and the diskless-node-name tag with the name of the current node.

# pkgadd -M -R /export/root/diskless-node-name \
-d /NetraHASuite/Product/NetraHASuite_2.1.2/\
FoundationServices/Solaris_version/sparc/Packages \
SUNWnhadm SUNWnhtp9 SUNWnhtu9 SUNWnhcdt SUNWnhcma \
SUNWnhcmb SUNWnhcmd SUNWnhhb SUNWnhpma SUNWnhpmb \
SUNWnhpmm SUNWnhpmn SUNWnhpms SUNWnhmas

Do not worry about dependencies on Solaris packages; they are already installed but temporarily inaccessible.

8. Update the Minimal Configuration File:

/export/root/diskless-node-name/etc/opt/SUNWcgha/target.conf

Be sure to replace diskless-node-name with the correct value. Add the line VERSION : 2 at the beginning of the file. The file should now look like this:

VERSION : 2
domain_id: 1                    # Cluster domain_id
attributes : -                  # Local nodes attributes
election : 0                    # Election round number
role : NONE                     # Previous role
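
One way to prepend the line is the following sketch, which keeps a backup copy (the node name netraDiskless1 is illustrative):

# cd /export/root/netraDiskless1/etc/opt/SUNWcgha
# cp target.conf target.conf.orig
# ( echo "VERSION : 2"; cat target.conf.orig ) > target.conf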

9. Configure the Boot Process.


Note - Skip this step if you use Netra HA Suite 2.1 6/03 Patch 3, 4 or 5.



Replace NIC0 and NIC1 with the actual names of the interfaces mentioned in the file /etc/opt/SUNWcgha/nhfs.conf:

Node.Nic0=hme0
Node.Nic1=hme1

    a. Empty the following files by deleting all text in them:

    /export/root/diskless-node-name/etc/hostname.NIC0
    
    /export/root/diskless-node-name/etc/hostname.NIC1
    

    Be sure to replace diskless-node-name with the correct value.

    b. Create the following empty files:

    /etc/dhcp.NIC0
    
    /etc/dhcp.NIC1
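
    As a concrete sketch for substeps a and b, assuming the interfaces are hme0 and hme1 and the node is named netraDiskless1 (both illustrative), with the paths as given above:

    # cp /dev/null /export/root/netraDiskless1/etc/hostname.hme0
    # cp /dev/null /export/root/netraDiskless1/etc/hostname.hme1
    # touch /etc/dhcp.hme0
    # touch /etc/dhcp.hme1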
    

10. Rejoin the cluster.

    a. Remove the not_configured file so that Netra HA Suite starts when the node is booted.

    # rm /export/root/diskless-node-name/etc/opt/\
    SUNWcgha/not_configured
    

    b. Boot the diskless node from the OpenBoot prompt:

    ok> boot
    

11. Check the node.

Once the node has booted, log in to the node as superuser and run the following command to verify that everything went well. If not, check that all previous steps were executed correctly:

# /opt/SUNWcgha/sbin/nhadm check


To Upgrade a Dataless Node

Perform this procedure on each dataless node, one node at a time.

1. Log in to the node as superuser.

2. List the installed packages:

# /opt/SUNWcgha/sbin/nhadm check installation

Output similar to the following is produced:


TABLE 9-2 Installed Packages List

OS and Software checking

 64-bit kernel mode                        OK
 OS release                                OK

Local packages

 Package SUNWnhtp9                         OK
 Package SUNWnhtu9                         OK
 Package SUNWnhadm                         OK
 Package SUNWnhcdt                         OK
 Package SUNWnhcmd                         OK
 Package SUNWnhcma                         OK
 Package SUNWnhcmb                         OK
 Package SUNWnhpma                         OK
 Package SUNWnhpmb                         OK
 Package SUNWnhpmn                         OK
 Package SUNWnhpms                         OK
 Package SUNWnhmas                         OK
 Package SUNWnhsms                         OK

Shared packages (on /SUNWcgha/swdb)

 Package SUNWjdrt                          OK
 Package SUNWnhmaj                         OK
 Package SUNWnhmal                         OK
 Package SUNWnhmad                         OK

Patches
Can take a long time ...

 Patch 112233-03                           OK
 Patch 112902-06                           OK
 Patch 112904-01                           OK
 Patch 112917-01                           OK
 Patch 112918-01                           OK
 Patch 112919-01                           OK
 INST_RELEASE file for shared packages     OK

Keep the list of packages under the "Local packages" heading. This is the list of packages installed on the node; you will need it in later steps.

3. Take the node out of the cluster.

To work around BugID 6269249, "Dataless nodes cannot boot when the not_configured file is present", edit the /etc/vfstab file and set the "mount at boot" field to no for all lines whose device begins with master-cgtp:. A sketch of the change is shown below.
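
A hedged example of a modified line (the mount point and mount options are illustrative; only the sixth, "mount at boot", field changes from yes to no):

master-cgtp:/SUNWcgha/local/export/services  -  /services  nfs  -  no  rw,hard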

Then create the not_configured file and reboot:

# touch /etc/opt/SUNWcgha/not_configured
# sync
# uadmin 1 1

4. Once the node has booted, log in as superuser.

5. Mount the Netra HA Suite 2.1 7/05 distribution on /NetraHASuite.

# mount -F nfs installation-server-IP-address:/software-distribution-dir \
/NetraHASuite

installation-server-IP-address is the IP address of the cluster network interface that is connected to the installation server.

software-distribution-dir is the directory that contains the Foundation Services 2.1 7/05 packages.

6. Remove the old local packages.

The following command is an example that includes all possible affected packages. Some of them might not be installed on your system; do not try to remove packages that are not installed. To determine which packages are actually installed, use the list generated in Step 2: the packages to remove are those that appear both in the example and in that list.

# pkgrm SUNWnhadm SUNWnhtp9 SUNWnhtu9 SUNWnhcdt SUNWnhcma \
SUNWnhcmb SUNWnhcmd SUNWnhhb SUNWnhpma SUNWnhpmb SUNWnhpmm \
SUNWnhpmn SUNWnhpms SUNWnhmas

7. Install the new packages.

Install only the packages that were already installed for the previous version of the software, as recorded in the list generated in Step 2. The following command is an example.

Be careful to replace the version tag in the path with the Solaris version you are using.

# pkgadd -M -d /NetraHASuite/Product/NetraHASuite_2.1.2/\
FoundationServices/Solaris_version/sparc/Packages \
SUNWnhadm SUNWnhtp9 SUNWnhtu9 SUNWnhcdt SUNWnhcma \
SUNWnhcmb SUNWnhcmd SUNWnhhb SUNWnhpma SUNWnhpmb \
SUNWnhpmm SUNWnhpmn SUNWnhpms SUNWnhmas

8. Update the Minimal Configuration File:

/etc/opt/SUNWcgha/target.conf

Add the line VERSION : 2 at the beginning of the file. The file should now look like this:

VERSION : 2
domain_id: 1                    # Cluster domain_id
attributes : -                  # Local nodes attributes
election : 0                    # Election round number
role : NONE                     # Previous role

9. Update the hostname files.

The files to be updated are:

/etc/hostname.NIC0
/etc/hostname.NIC1
/etc/hostname.cgtp0

Replace NIC0 and NIC1 with the actual names of the interfaces mentioned in the file /etc/opt/SUNWcgha/nhfs.conf.

Each file contains a single line with the node name. Append the following text to that line:

netmask + broadcast + -failover up
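
For example, if /etc/hostname.cgtp0 originally contained only the node name netraDataless2-cgtp, the updated file reads:

netraDataless2-cgtp netmask + broadcast + -failover up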

10. Configure the External Addresses.


Note - If you do not have an external address configured, skip this step.



The configuration is in a file named /etc/hostname.NICX:Y, where X and Y are numbers. This file contains the external node name.

    a. Edit the file /etc/hostname.NICX (which you might have updated in the previous steps) and add the following line:

    addif external-node-name netmask + broadcast + -failover up
    

    For example:

    netraDataless2-cgtp netmask + broadcast + -failover up
    
    addif extNode2 netmask + broadcast + -failover up
    

    b. Delete the file /etc/hostname.NICX:Y.

11. Rejoin the Cluster.

Undo the workaround from Step 3: in the /etc/vfstab file, set the "mount at boot" field back to yes for all lines whose device begins with master-cgtp:.

Remove the not_configured file and reboot the node:

# rm /etc/opt/SUNWcgha/not_configured
# sync
# reboot

12. Check the node.

Once the node has booted, log in to the node as superuser and run the following command to verify that everything went well. If not, check that all previous steps were executed correctly:

# /opt/SUNWcgha/sbin/nhadm check


To Upgrade a Master-Eligible Node

1. Log in to the node as superuser.

2. If this is the first master-eligible node (MEN) that you upgrade, unplug the serial cable.

3. List the installed packages:

# /opt/SUNWcgha/sbin/nhadm check installation

Output similar to the following is produced:


TABLE 9-3 Installed Packages List


OS and Software checking

 64-bit kernel mode                      OK
 OS release                              OK

Local packages

 Package SUNWnhtp9                       OK
 Package SUNWnhtu9                       OK
 Package SUNWnhadm                       OK
 Package SUNWnhcdt                       OK
 Package SUNWnhcmd                       OK
 Package SUNWnhcma                       OK
 Package SUNWnhcmb                       OK
 Package SUNWscmr                        OK
 Package SUNWscmu                        OK
 Package SUNWspsvr                       OK
 Package SUNWspsvu                       OK
 Package SUNWrdcr                        OK
 Package SUNWrdcu                        OK
 Package SUNWnhfsa                       OK
 Package SUNWnhfsb                       OK
 Package SUNWnhpma                       OK
 Package SUNWnhpmb                       OK
 Package SUNWnhpmn                       OK
 Package SUNWnhpms                       OK
 Package SUNWnhnsa                       OK
 Package SUNWnhnsb                       OK
 Package SUNWj3rt                        OK
 Package SUNWnhmas                       OK
 Package SUNWjsnmp                       OK
 Package SUNWnhrbb                       OK
 Package SUNWnhrbs                       OK
 Package SUNWnhsms                       OK

Shared packages (on /SUNWcgha/swdb)

 Package SUNWjdrt                        OK
 Package SUNWnhmaj                       OK
 Package SUNWnhmal                       OK
 Package SUNWnhmad                       OK

Patches
Can take a long time ...

 Patch 112233-03                         OK
 Patch 112902-06                         OK
 Patch 112904-01                         OK
 Patch 112917-01                         OK
 Patch 112918-01                         OK
 Patch 112919-01                         OK
 INST_RELEASE file for shared packages   OK

Take note of the list of packages under the "Local packages" heading; this is the list of packages installed on the node. Also take note of the list of patches installed on the node. You will need these lists of packages and patches in later steps.

4. Be sure the node is not the master node.

To determine the role of the node, type:

# /opt/SUNWcgha/sbin/nhcmmrole -v
nhcmmrole: current role MASTER

If the node is master, trigger a switchover:

# /opt/SUNWcgha/sbin/nhcmmstat -c so

5. Prevent the Netra HA Suite software from starting, and reboot the node.

# touch /etc/opt/SUNWcgha/not_configured
# sync
# uadmin 1 1

6. Mount the Netra HA Suite 2.1 7/05 distribution on /NetraHASuite.

# mount -F nfs installation-server-IP-address:\
/software-distribution-dir/NetraHAS2.1 /NetraHASuite

installation-server-IP-address is the IP address of the cluster network interface that is connected to the installation server.

software-distribution-dir is the directory that contains the Foundation Services 2.1 7/05 packages.

7. Remove the old local packages.

The following command is an example; do not try to remove packages that are not installed. To determine which packages are actually installed, use the list generated in Step 3.

# pkgrm SUNWnhadm SUNWnhtp9 SUNWnhtu9 SUNWnhcdt SUNWnhcma \
SUNWnhcmb SUNWnhcmd SUNWnhhb SUNWnhpma SUNWnhpmb SUNWnhpmm \
SUNWnhpmn SUNWnhpms SUNWnhmas SUNWscmr SUNWscmu SUNWspsvr \
SUNWspsvu SUNWrdcr SUNWrdcu SUNWnhfsa SUNWnhfsb SUNWnhnsa \
SUNWnhnsb SUNWjsnmp SUNWnhrbb SUNWnhrbs

8. Install the new packages.

Install only the packages that were already installed for the previous version of the software, as recorded in the list generated in Step 3. The following command is an example.

Be careful to replace the version tag in the path with the Solaris version you are using.

# pkgadd -M -d /NetraHASuite/Product/NetraHASuite_2.1.2/\
FoundationServices/Solaris_version/sparc/Packages \
SUNWnhadm SUNWnhtp9 SUNWnhtu9 SUNWnhcdt SUNWnhcma \
SUNWnhcmb SUNWnhcmd SUNWnhhb SUNWnhpma SUNWnhpmb \
SUNWnhpmm SUNWnhpmn SUNWnhpms SUNWnhmas SUNWscmr \
SUNWscmu SUNWspsvr SUNWspsvu SUNWrdcr SUNWrdcu \
SUNWnhfsa SUNWnhfsb SUNWnhnsa SUNWnhnsb SUNWjsnmp \
SUNWnhrbb SUNWnhrbs SUNWnheaa SUNWnheab

The new packages (only needed if you use a floating external address) are SUNWnheaa and SUNWnheab.

If you used NSM only to manage the external address, NSM is no longer needed: you can skip the installation of the SUNWnhnsa and SUNWnhnsb packages, because NSM is no longer responsible for this task.

9. Update the patches.


Note - If you use Netra HA Suite 2.1 6/03 Patch 2 or later, you can skip this step.



    a. Remove the obsolete patches (remove only those that appear in the list generated in Step 3):

    # patchrm 113054-04
    
    # patchrm 113055-01
    
    # patchrm 113057-03
    

    b. Install the new patches:

    # patchadd -M /NetraHASuite/Product/NetraHASuite_2.1.2/\
    FoundationServices/Solaris_version/sparc/Patches/ 116710-01
    

    Be careful to replace the version tag in the path with the Solaris version you are using. The list of patches is found in the Netra High Availability Suite Foundation Services 2.1 7/05 README.

10. Update the Minimal Configuration File:

/etc/opt/SUNWcgha/target.conf

Add the line VERSION : 2 at the beginning of the file. The file should now look like this:

VERSION : 2
domain_id: 1                    # Cluster domain_id
attributes : -                  # Local nodes attributes
election : 26                   # Election round number
role : VICEMASTER               # Previous role

11. Update the Node Table.

Update the /etc/opt/SUNWcgha/cluster_nodes_table file by adding the line VERSION 2 at the very beginning.

The file should now look like this:

VERSION 2
#NodeId Domain_id       Name    Attributes
30      1      netraMEN1       -
31      1      netraMEN2       -
32      1      netraDiskless1  -
33      1      netraDataless2  -
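
One way to prepend the line is the following sketch, which keeps a backup copy:

# cd /etc/opt/SUNWcgha
# cp cluster_nodes_table cluster_nodes_table.orig
# ( echo "VERSION 2"; cat cluster_nodes_table.orig ) > cluster_nodes_table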

12. Update the hostname files.

The files to be updated are:

/etc/hostname.NIC0
/etc/hostname.NIC1
/etc/hostname.cgtp0

Replace NIC0 and NIC1 with the actual names of the interfaces mentioned in the file /etc/opt/SUNWcgha/nhfs.conf.

Each file contains a single line with the node name. Append the following text to that line:

netmask + broadcast + -failover up
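
For example, if /etc/hostname.cgtp0 originally contained only the node name netraMEN1-cgtp, the updated file reads:

netraMEN1-cgtp netmask + broadcast + -failover up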

13. Configure the External Addresses.


Note - If you do not have an external address configured, skip this step.



The configuration is in a file named /etc/hostname.NICX:Y, where X and Y are numbers. This file contains the external node name.

    a. Edit the file /etc/hostname.NICX (which you might have updated in the previous steps) and add the following line:

    addif external-node-name netmask + broadcast + -failover up
    

    For example:

    netraMEN1-cgtp netmask + broadcast + -failover up
    
    addif extMEN1 netmask + broadcast + -failover up
    

    b. Delete the file /etc/hostname.NICX:Y.

14. Configure the External Floating Address.


Note - If you do not have an external floating address configured, skip the rest of this step.



    a. Get the physical interface and floating IP address from the file /etc/opt/SUNWcgha/nhfs.conf:

    NSM.External.Master.Address=floating address
    
    NSM.External.Master.Nic=hmeX:Y
    

    b. Edit the file /etc/hostname.NICX (which you might have already updated in the previous steps) and add the following line:

    addif floating-address netmask + broadcast + failover down
    

    For example:

    netraMEN1-cgtp netmask + broadcast + -failover up
    
    addif extMEN1 netmask + broadcast + -failover up
    
    addif extFloat netmask + broadcast + failover down
    

15. Update the /etc/opt/SUNWcgha/nhfs.conf file.

If you used NSM only to manage the external address, remove all the properties whose names begin with NSM. If NSM is used for other purposes, remove only the properties whose names begin with NSM.External.

If you use an external floating address, add the following property:

Node.External.FloatingAddress.0=floating address

The value of floating address is the same as in the previous step.
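
For example, assuming the floating address is 192.168.12.39 (illustrative) and NSM was used only for the external address, the old entries

NSM.External.Master.Address=192.168.12.39
NSM.External.Master.Nic=hme0:1

are removed and replaced by:

Node.External.FloatingAddress.0=192.168.12.39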

16. Rejoin the Cluster.

Remove the not_configured file and reboot the node:

# rm /etc/opt/SUNWcgha/not_configured
# sync
# reboot

17. Check the node.

Once the node has booted, log in to the node as superuser and run the following command to verify that everything went well. If not, check that all previous steps were executed correctly:

# /opt/SUNWcgha/sbin/nhadm check

18. If both MENs are now upgraded, plug the serial cable in again.


To Upgrade the Node Management Agent

You can skip this procedure if you do not use the Node Management Agent.

1. Log in to the master node as superuser.

2. Mount the Netra HA Suite 2.1 7/05 distribution on /NetraHASuite.

# mount -F nfs installation-server-IP-address:\
/software-distribution-dir/NetraHAS2.1 /NetraHASuite

installation-server-IP-address is the IP address of the cluster network interface that is connected to the installation server.

software-distribution-dir is the directory that contains the Foundation Services 2.1 7/05 packages.

3. Configure the package installation.

To tell pkgadd how and where to install the packages, create the file /tmp/admin with the following contents:

mail=
instance=unique
partial=nocheck
runlevel=quit
idepend=nocheck
rdepend=nocheck
space=quit
setuid=nocheck
conflict=nocheck
action=nocheck
basedir=/

The text must begin in the first column.

4. Install the packages.

Use the following commands to install the needed packages. Note that from version 2.1.2, each installation of the NMA has its own JDMK package.

Be careful to replace the version tag in the path with the Solaris version you are using.

# pkgadd -a /tmp/admin -M -R /SUNWcgha/local/export/\
services/ha_2.1.2 -d /NetraHASuite/Product/\
NetraHASuite_2.1.2/FoundationServices/Solaris_version/\
sparc/Packages SUNWnhmaj SUNWnhmal SUNWnhmad
# pkgadd -M -R /SUNWcgha/local/export/services/ha_2.1.2 \
-d /NetraHASuite/Product/NetraHASuite_2.1.2/\
FoundationServices/Solaris_version/sparc/Packages \
SUNWjdrt

5. Create the INST_RELEASE file.

To be able to patch NMA or JDMK (if ever needed), you need to create the INST_RELEASE file.


# mkdir -p /SUNWcgha/local/export/services/ha_2.1.2/\
var/sadm/system/admin
# cp /SUNWcgha/local/export/services/var/sadm/system/\
admin/INST_RELEASE /SUNWcgha/local/export/services/\
ha_2.1.2/var/sadm/system/admin

6. Update all other nodes.

This step must be repeated on every cluster node except the current master. Update one node at a time, executing substeps a through e before moving on to the next node.

    a. Log in to the node as superuser.

    b. Take the node out of the cluster:

    # sync
    
    # uadmin 1 0
    

    c. Boot the diskfull node in single-user mode.


    Note - Skip this step on diskless nodes.



    From the OpenBoot prompt, boot the node in single-user mode:

    ok> boot -s

    Log in to the node.

    d. Modify the /etc/vfstab file.

    For MEN and dataless nodes, edit the local file /etc/vfstab. For diskless nodes, edit the file /export/root/diskless-node-name/etc/vfstab located on the master node.

    Look for the old NFS "devices":

    master-cgtp:/SUNWcgha/local/export/services/ha_v1/opt
    
    master-cgtp:/SUNWcgha/local/export/services
    

    and replace them with:

    master-cgtp:/SUNWcgha/local/export/services/ha_2.1.2/opt
    
    master-cgtp:/SUNWcgha/local/export/services/ha_2.1.2
    

    e. Reboot the node.

    On diskfull nodes, use the following commands:

    # sync
    # reboot

    On diskless nodes, waiting at the OpenBoot prompt, use this command:

    ok> boot

    Wait until the node has rebooted before continuing with another node.

7. On the master node, trigger a switch-over.

Wait until the two MENs are synchronized. Use the following command to check the synchronization status:

# /opt/SUNWcgha/sbin/nhcmmstat -c master

This command produces a lot of output; one of the lines shows the synchronization state. Wait until that state reads READY, then trigger a switch-over:

# /opt/SUNWcgha/sbin/nhcmmstat -c so

and verify that the node became vice-master:

# /opt/SUNWcgha/sbin/nhcmmrole -v
nhcmmrole: current role VICE_MASTER

8. Repeat Step 6 on the new vice-master node (the old master).

9. Log in to the new master node.

10. Remove the old packages:

# pkgrm -R /SUNWcgha/local/export/services \
SUNWjdrt SUNWnhmaj SUNWnhmal SUNWnhmad
# rm -rf /SUNWcgha/local/export/services/var
# rm -rf /SUNWcgha/local/export/services/ha_v1

11. Check every cluster node.

Log in to each cluster node as superuser and run the following command to verify that everything went well. If not, check that all previous steps were executed correctly:

# /opt/SUNWcgha/sbin/nhadm check