CHAPTER 9
Upgrading the Cluster
To successfully upgrade the cluster, the procedures described in this chapter assume the following:
You do not change the software configuration, that is, the version of the Solaris Operating System, the volume management configuration, or the boot policy.
If these assumptions are correct, follow the procedures in this chapter to upgrade your cluster.
If these assumptions cannot be fulfilled, reinstall the cluster instead. For instructions, see the installation documentation for the Foundation Services 2.1 7/05 release.
1. Log in to your installation server as superuser.
2. Check that the installation server is connected to the cluster network.
For more information, see To Connect the Installation Server to the Cluster Network.
3. Check that the mountd and nfsd daemons are running on the installation server.
For example, use the ps command:
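The output format varies by system; a check along these lines, using the standard ps and grep commands, shows whether both daemons are running:
# ps -ef | grep mountd
# ps -ef | grep nfsd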
If a process ID is not returned for the mountd and nfsd daemons, start the NFS daemons:
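The exact command depends on the Solaris release. A sketch, assuming Solaris 8 or 9, where the NFS server daemons are started by an init script:
# /etc/init.d/nfs.server start
On Solaris 10, the equivalent is svcadm enable network/nfs/server.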
4. Add the following line to the /etc/dfs/dfstab file to share the directory containing the software distributions for the Foundation Services 2.1 7/05 release and the Solaris Operating System:
share -F nfs -o ro,anon=0 software-distribution-dir
share -F nfs -o ro,anon=0 Solaris-distribution-dir
software-distribution-dir is the directory that contains the Foundation Services 2.1 7/05 packages.
Solaris-distribution-dir is the directory that contains the Solaris distribution.
5. Share the directories defined in the /etc/dfs/dfstab file:
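For example, with the shareall command, which shares every entry defined in /etc/dfs/dfstab:
# shareall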
6. Log in to every cluster node as superuser and create the mount point directory /NetraHASuite:
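For example, on each node:
# mkdir /NetraHASuite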
The upgrade is done one node at a time, so that the cluster never stops providing service.
The following steps outline the whole procedure. Each step is described in more detail in the sections that follow.
1. Upgrade the diskless nodes one by one, as described in To Upgrade a Diskless Node.
The rest of the nodes go on providing services.
2. Upgrade the dataless nodes one by one, as described in To Upgrade a Dataless Node.
The rest of the nodes go on providing services.
3. If you use a serial cable, unplug it.
The new release implements a new protocol on the serial link that is incompatible with the old one.
4. Upgrade one master-eligible node, as described in To Upgrade a Master-Eligible Node.
5. Upgrade the other master-eligible node, as described in To Upgrade a Master-Eligible Node.
6. If you use a serial cable, plug it in again.
7. Wait for the synchronization to finish.
Use the following command to see the synchronization status, and wait until it reads READY:
8. If you use the Node Management Agent, upgrade it as described in To Upgrade the Node Management Agent.
To Upgrade a Diskless Node
Perform this procedure on each diskless node, one node at a time.
1. Log in to the node as superuser.
2. List the installed packages:
Keep the list of packages under the "Local Packages" title. This is the list of packages installed on the node.
4. Log in to the master node as superuser.
5. Mount the Netra HA Suite 2.1 7/05 distribution on /NetraHASuite.
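For example, using the same mount command that appears later in the master-eligible node procedure:
# mount -F nfs installation-server-IP-address:\
/software-distribution-dir/NetraHAS2.1 /NetraHASuite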
installation-server-IP-address is the IP address of the cluster network interface that is connected to the installation server.
software-distribution-dir is the directory that contains the Foundation Services 2.1 7/05 packages.
6. Remove the old local packages.
The following command is an example that includes all possibly affected packages. Some of them might not be installed on your system. Do not try to remove packages that are not installed. To know which packages are actually installed, use the list generated in Step 2: the packages to remove are those that appear both in the example and in that list. Be careful to replace the diskless-node-name tag with the name of the current node.
# pkgrm -R /export/root/diskless-node-name \
SUNWnhadm SUNWnhtp9 SUNWnhtu9 SUNWnhcdt SUNWnhcma SUNWnhcmb \
SUNWnhcmd SUNWnhhb SUNWnhpma SUNWnhpmb SUNWnhpmm SUNWnhpmn \
SUNWnhpms SUNWnhmas
7. Install the new packages.
Install only the packages that were already installed for the previous version of the software. For this list of packages, refer to the list generated in Step 2. The following command should be considered an example.
Be careful to replace the version tag in the path with the Solaris version you are using, and the diskless-node-name tag with the name of the current node.
Do not worry about warnings concerning dependencies on Solaris packages. Those packages are already installed but are temporarily inaccessible.
8. Update the Minimal Configuration File, being sure to replace diskless-node-name with the correct value.
Add the line VERSION : 2 at the beginning of the file. The file should now look like this:
VERSION : 2
domain_id: 1    # Cluster domain_id
attributes : -  # Local nodes attributes
election : 0    # Election round number
role : NONE     # Previous role
9. Configure the Boot Process.
Note - Skip this step if you use Netra HA Suite 2.1 6/03 Patch 3, 4, or 5.
Replace NIC0 and NIC1 with the actual names of the interfaces mentioned in the file /etc/opt/SUNWcgha/nhfs.conf.
a. Empty the following files by deleting all text in them:
Be sure to replace diskless-node-name with the correct value.
b. Create the following empty files:
10. Reboot the diskless node.
a. Remove the not_configured file so that Netra HA Suite starts when the node is booted.
b. Boot the diskless node from the OpenBoot prompt:
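For example, at the ok prompt (a sketch that assumes the node's boot device is already configured for network boot; otherwise specify the network device explicitly, for example with boot net):
ok boot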
Once the node has booted, log in to the node as superuser and run the following command to verify that everything went well. If not, check that all previous steps were executed correctly:
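As a minimal, partial check (this is not the full verification), you can confirm that the new Foundation Services packages are now installed:
# pkginfo | grep SUNWnh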
To Upgrade a Dataless Node
Perform this procedure on each dataless node, one node at a time.
1. Log in to the node as superuser.
2. List the installed packages:
Output similar to the following is produced:
Keep the list of packages under the "Local Packages" title. This is the list of packages installed on the node.
3. Take the node out of the cluster.
To work around BugID 6269249, "Dataless nodes cannot boot when the not_configured file is present", edit the /etc/vfstab file and set the flag mount at boot to no for all the lines beginning with master-cgtp:
Create the not_configured file and reboot:
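A typical sequence is sketched below; it assumes that the not_configured flag file lives in /etc/opt/SUNWcgha, the directory that holds the other Foundation Services configuration files referenced in this chapter:
# touch /etc/opt/SUNWcgha/not_configured
# reboot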
4. Once the node has booted, log in as superuser.
5. Mount the Netra HA Suite 2.1 7/05 distribution on /NetraHASuite.
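For example:
# mount -F nfs installation-server-IP-address:\
/software-distribution-dir/NetraHAS2.1 /NetraHASuite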
installation-server-IP-address is the IP address of the cluster network interface that is connected to the installation server.
software-distribution-dir is the directory that contains the Foundation Services 2.1 7/05 packages.
6. Remove the old local packages.
The following command is an example that includes all possibly affected packages. Some of them might not be installed on your system. Do not try to remove packages that are not installed. To know which packages are actually installed, use the list generated in Step 2: the packages to remove are those that appear both in the example and in that list.
# pkgrm SUNWnhadm SUNWnhtp9 SUNWnhtu9 SUNWnhcdt SUNWnhcma \
SUNWnhcmb SUNWnhcmd SUNWnhhb SUNWnhpma SUNWnhpmb SUNWnhpmm \
SUNWnhpmn SUNWnhpms SUNWnhmas
7. Install the new packages.
Install only the packages that were already installed for the previous version of the software. For this list of packages, refer to the list generated in Step 2. The following command should be considered an example.
Be careful to replace the version tag in the path with the Solaris version you are using.
8. Update the Minimal Configuration File:
Add the line VERSION : 2 at the beginning of the file. The file should now look like this:
VERSION : 2
domain_id: 1    # Cluster domain_id
attributes : -  # Local nodes attributes
election : 0    # Election round number
role : NONE     # Previous role
9. Update the hostname files.
Replace NIC0 and NIC1 with the actual names of the interfaces mentioned in the file /etc/opt/SUNWcgha/nhfs.conf.
Only one line in these files contains the node name. Add the following text to that line:
10. Configure the External Addresses.
Note - If you do not have an external address configured, skip this step.
The configuration can be found in a file named /etc/hostname.NICX:Y, where X and Y are numbers. This file contains the external node name.
a. Edit the file /etc/hostname.NICX (which you might have updated in the previous steps) and add the following line:
netraDataless2-cgtp netmask + broadcast + -failover up addif extNode2 netmask + broadcast + -failover up
Undo the workaround done in Step 3. In the /etc/vfstab file, reset the flag mount at boot to yes for all the lines beginning with master-cgtp:.
Remove the not_configured file and reboot the node:
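A typical sequence (again assuming the not_configured flag file is in /etc/opt/SUNWcgha):
# rm /etc/opt/SUNWcgha/not_configured
# reboot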
Once the node has booted, log in to the node as superuser and run the following command to verify that everything went well. If not, check that all previous steps were executed correctly:
To Upgrade a Master-Eligible Node
1. Log in to the node as superuser.
2. Unplug the serial cable if this is the first MEN that you upgrade.
3. List the installed packages:
Output similar to the following is produced:
Take note of the list of packages under the "Local Packages" title. This is the list of packages installed on the node. Also take note of the list of patches installed on the node. You will need these lists of packages and patches in later steps.
4. Make sure that the node is not the master node.
To determine the role of the node, type:
If the node is master, trigger a switchover:
5. Prevent the Netra HA Suite software from starting at boot, and reboot the node.
6. Mount the Netra HA Suite 2.1 7/05 distribution on /NetraHASuite.
# mount -F nfs installation-server-IP-address:\
/software-distribution-dir/NetraHAS2.1 /NetraHASuite
installation-server-IP-address is the IP address of the cluster network interface that is connected to the installation server.
software-distribution-dir is the directory that contains the Foundation Services 2.1 7/05 packages.
7. Remove the old local packages.
The following command is an example: do not try to remove packages that are not installed. To know which packages are actually installed, use the list generated in Step 3.
8. Install the new packages.
Install only the packages that were already installed for the previous version of the software. For this list of packages, refer to the list generated in Step 3. The following command should be considered an example.
Be careful to replace the version tag in the path with the Solaris version you are using.
The new packages (only needed if you use a floating external address) are SUNWnhea and SUNWnheab.
If you used NSM only to manage the external address, NSM is no longer needed: you can skip installing the SUNWnhnsa and SUNWnhnsb packages because NSM is no longer responsible for this task.
9. Update the patches.
Note - If you use Netra HA Suite 2.1 6/03 Patch 2 or later, you can skip this step.
a. Remove the obsolete patches (check if they are installed in the list generated in Step 3):
b. Install the new patches:
# patchadd -M /NetraHASuite/Product/NetraHASuite_2.1.2/\
FoundationServices/Solaris_version/sparc/Patches/ 116710-01
Be careful to replace the version tag in the path with the Solaris version you are using. The list of patches is found in the Netra High Availability Suite Foundation Services 2.1 7/05 README.
10. Update the Minimal Configuration File:
Add the line VERSION : 2 at the beginning of the file. The file should now look like this:
VERSION : 2
domain_id: 1      # Cluster domain_id
attributes : -    # Local nodes attributes
election : 26     # Election round number
role : VICEMASTER # Previous role
11. Update the /etc/opt/SUNWcgha/cluster_nodes_table file by adding the line VERSION 2 at the very beginning.
The file should now look like this:
VERSION 2
#NodeId Domain_id Name           Attributes
30      1         netraMEN1      -
31      1         netraMEN2      -
32      1         netraDiskless1 -
33      1         netraDataless2 -
12. Update the hostname files.
Replace NIC0 and NIC1 with the actual names of the interfaces mentioned in the file /etc/opt/SUNWcgha/nhfs.conf.
Only one line in these files contains the node name. Add the following text to that line:
13. Configure the External Addresses.
Note - If you do not have an external address configured, skip this step.
The configuration can be found in a file named /etc/hostname.NICX:Y, where X and Y are numbers. This file contains the external node name.
a. Edit the file /etc/hostname.NICX (which you might have updated in the previous steps) and add the following line:
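Following the pattern shown for the dataless node, and consistent with the floating-address example in Step 14, the line takes this general form, where extMEN1 is the external node name:
netraMEN1-cgtp netmask + broadcast + -failover up addif extMEN1 netmask + broadcast + -failover up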
14. Configure the External Floating Address.
Note - If you do not have an external floating address configured, skip the rest of this step.
a. Get the physical interface and floating IP address from the file /etc/opt/SUNWcgha/nhfs.conf:
b. Edit the file /etc/hostname.NICX (which you might have already updated in the previous steps) and add the following line:
netraMEN1-cgtp netmask + broadcast + -failover up addif extMEN1 netmask + broadcast + -failover up addif extFloat netmask + broadcast + failover down
15. Update the /etc/opt/SUNWcgha/nhfs.conf file.
If you used NSM only to manage the external address, you can remove all the properties beginning with NSM. If NSM is used for other purposes, remove only the properties beginning with NSM.External.
If you use an external floating address, add the following property:
The value of the floating address is the same as in the previous step.
16. Remove the not_configured file and reboot the node:
17. Once the node has booted, log in to the node as superuser and run the following command to verify that everything went well. If not, check that all previous steps were executed correctly:
18. Plug the serial cable in again if both MENs are already upgraded.
To Upgrade the Node Management Agent
You can skip this procedure if you do not use the Node Management Agent.
1. Log in to the master node as superuser.
2. Mount the Netra HA Suite 2.1 7/05 distribution on /NetraHASuite.
# mount -F nfs installation-server-IP-address:\
/software-distribution-dir/NetraHAS2.1 /NetraHASuite
installation-server-IP-address is the IP address of the cluster network interface that is connected to the installation server.
software-distribution-dir is the directory that contains the Foundation Services 2.1 7/05 packages.
3. Configure the package installation.
To tell pkgadd how and where to install the packages, create the file /tmp/admin with the following contents:
mail=
instance=unique
partial=nocheck
runlevel=quit
idepend=nocheck
rdepend=nocheck
space=quit
setuid=nocheck
conflict=nocheck
action=nocheck
basedir=/
The text must begin in the first column.
4. Install the packages.
Use the following commands to install the needed packages. Note that from version 2.1.2, each installation of the NMA has its own JDMK package.
Be careful to replace the version tag in the path with the Solaris version you are using.
5. Create the INST_RELEASE file.
To be able to patch NMA or JDMK (if ever needed), you need to create the INST_RELEASE file.
6. Repeat the following substeps on every cluster node except the current master. Update one node at a time, executing substeps a through d before moving on to the next node.
a. Log in to the node as superuser.
b. Take the node out of the cluster:
c. Boot the Diskfull Node in Single-User Mode.
d. Modify the /etc/vfstab file.
For MEN and dataless nodes, edit the local file /etc/vfstab. For diskless nodes, edit the file /export/root/diskless-node-name/etc/vfstab located on the master node.
Look for the old NFS "devices":
master-cgtp:/SUNWcgha/local/export/services/ha_2.1.2/opt
master-cgtp:/SUNWcgha/local/export/services/ha_2.1.2
On diskless nodes, which are waiting at the OpenBoot (OBP) prompt, use this command:
Wait until the node has rebooted before continuing with another node.
7. On the master node, trigger a switch-over.
Wait until the two MENs are synchronized. Use the following command to check the synchronization status:
This command provides a lot of information; one of the lines shows the synchronization state. Wait until it reads READY. Then trigger a switch-over:
and verify that the node became vice-master:
8. Repeat Step 6 on the new vice-master node (the old master).
9. Log in to the new master node.
# pkgrm -R /SUNWcgha/local/export/services \
SUNWjdrt SUNWnhmaj SUNWnhmal SUNWnhmad
# rm -rf /SUNWcgha/local/export/services/var
# rm -rf /SUNWcgha/local/export/services/ha_v1
Log in to each cluster node as superuser and run the following command to verify that everything went well. If not, check that all previous steps were executed correctly: