 |
The following sections provide important information you need to be aware of if you remove or reinstall patches during a rolling upgrade.

Undoing a CSP or ERP When Paused at the Postinstall Stage (QAR 99184 — patch/Rolling_Patch_Removal.txt)
When applying an ERP or CSP to a TruCluster system, it is good practice to stop at the Postinstall Stage and test the patch before rolling it to all of the cluster members. This can help reduce further impact to the cluster. Installing the ERP/CSP using the Rolling Patch method is fairly straightforward; it is the removal of the patch where questions arise. There are many possible reasons for removing an ERP/CSP, and it is not the intent of this document to address making that decision. Following is a summary of the steps and commands used to perform this task. Below, in the details section of this article, is an example of performing this on a two-member TruCluster system.

NOTE: This procedure assumes that the ERP/CSP was installed using the Rolling Patch upgrade method (clu_upgrade) and -not- the NoRoll method. This procedure does not apply to ERP/CSPs that were installed using the NoRoll Patch method, or to situations where the ERP/CSP was installed on a single-member TruCluster system.
Prior to installing patches, always make sure to maintain current backups of the following file systems for a TruCluster system, along with disklabels for the associated disks and a sys_check -all:

/ = cluster_root#root (shared by all members)
/usr = cluster_usr#usr (shared by all members)
/var = cluster_var#var (shared by all members)
/cluster/members/member1/boot_partition = root1_domain#root (Member1 member-specific root file system)
/cluster/members/member2/boot_partition = root2_domain#root (Member2 member-specific root file system)
Any additional member-specific root file systems
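For illustration only, a backup pass along those lines might look like the following; the tape and disk device names are assumptions, so substitute your own:

SERVER1# vdump -0 -u -f /dev/ntape/tape0_d1 /
SERVER1# vdump -0 -u -f /dev/ntape/tape0_d1 /usr
SERVER1# vdump -0 -u -f /dev/ntape/tape0_d1 /var
SERVER1# vdump -0 -u -f /dev/ntape/tape0_d1 /cluster/members/member1/boot_partition
SERVER1# disklabel -r dsk3 > /var/adm/disklabel.dsk3    << repeat for each associated disk
SERVER1# sys_check -all > /var/adm/sys_check.html       << save along with the backups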
Summary of Steps:

While logged on to SERVER1 (Lead Member), perform the undo of the Postinstall Stage:
SERVER1# clu_upgrade undo postinstall
Shut down SERVER1 (Lead Member), then boot to single-user mode to perform the dupatch deletion. Do not shut down to single-user run level; instead, halt the system and boot to single-user mode:
SERVER1# shutdown -h now
P00>> boot -fl s
Entering Single-User Mode
SERVER1# init s       << ensure the system is at single-user run level
SERVER1# bcheckrc     << check and mount the local file systems
SERVER1# lmf reset    << reload license (LMF) data into the kernel
With SERVER1 (Lead Member) at single-user mode, perform the dupatch deletion procedure:
SERVER1# cd /usr/patch_kit    << Location of CSP Patch
SERVER1# ./dupatch
Main Menu
---------
Select #2, Patch Deletion. Ignore all the SPECIAL Instructions displayed at this time; they do not apply to the ERP/CSP but do apply to the Aggregate Patch. You should, however, reference the ERP/CSP Patch Release Notes for any special instructions specific to the ERP/CSP being removed. After the "Special Instructions" are presented, the following menu is displayed:

1) Patches for Tru64 UNIX V5.1B
2) Patches for TruCluster Server V5.1B

If the CSP/ERP is a Base OS patch, select 1; if the patch is specific to TruCluster, select 2. Enter a name and comment when prompted. Next, a long listing of all patches installed on the system relevant to the Base OS or TruCluster software, depending on the selection made in the last menu, is displayed. Identify the patch in the list and select it. The patch will then be deleted, and if necessary a new kernel will be built. When prompted to reboot, answer YES; if not prompted, reboot the system.
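If you want to confirm the result afterwards, dupatch can also report installed patch kits non-interactively. This sketch assumes the -track option as documented for dupatch on V5.1B, so verify it against dupatch(8) on your system:

SERVER1# /usr/sbin/dupatch -track -type kit    << list installed patch kits after the deletion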
With the dupatch deletion completed and the Lead Member rebooted, next perform the undo of the Install Stage as follows:
SERVER1# shutdown -h now
Halted
P00>>

On SERVER2 (Member2 or any Non-Lead Member), run "clu_upgrade undo install". It warns to use dupatch, which has already been completed, so go ahead and answer YES. This restores tagged files and may take a few minutes (about 20 minutes on an ES45 using EVA5000 SAN storage).

NOTE: Make sure you have plenty of free space on the cluster_root (/) file system: a minimum of 40% free space, though this could differ depending on total disk space and installation. You can use "addvol" to add additional storage to the cluster_root and/or cluster_usr file systems. Also, make sure that the Non-Lead Member selected has access to the Lead Member's member-specific root file system disk; it should -not- be mounted, only accessible (see if "disklabel -r dskN" works). A sketch of these checks appears after the command below.
SERVER2# clu_upgrade undo install
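As an illustration of the checks called out in the note above (dsk10 is an assumed spare volume and dsk3 an assumed Lead Member boot disk):

SERVER2# df -k /                                << free space on cluster_root
SERVER2# showfdmn cluster_root                  << AdvFS domain size and free blocks
SERVER2# addvol /dev/disk/dsk10c cluster_root   << only if more space is needed
SERVER2# disklabel -r dsk3                      << confirms access to the Lead Member's boot disk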
Boot the Lead Member into multi-user mode and perform the undo of the Preinstall Stage:
SERVER1# clu_upgrade undo preinstall
Undo the Setup Stage. To do this, you first have to disable tagged files on the other members (Member2/SERVER2, and so on), then undo the stage on the Lead Member, as sketched below.
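A minimal sketch, assuming Member2 has member ID 2; verify the tagged-files syntax against clu_upgrade(8) for your version:

SERVER1# clu_upgrade tagged disable 2    << repeat for each remaining member ID
SERVER1# clu_upgrade undo setup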
The cluster is now backed out of the clu_upgrade patch installation. Verify the status on each member:
SERVER1# clu_upgrade -v status
Retrieving cluster upgrade status.
There is currently no cluster upgrade in progress.
SERVER1#
SERVER2# clu_upgrade -v status
Retrieving cluster upgrade status.
There is currently no cluster upgrade in progress.
SERVER2#
Caution on Removing Version Switched Patches
When removing version switched patches on a cluster, do not remove version switched patches that were successfully installed in a previous rolling upgrade. This situation can occur because more than one patch subset may contain the same version switched patch. Although both the new and old patches can be removed during a roll, only the most recently installed, newer version switched patch can be properly removed. The older version switched patch can only be properly removed according to the documented procedure associated with that patch; this usually requires running some program before beginning the rolling upgrade to remove the patch. If you accidentally remove the older version switched patch, the rolling upgrade will most likely fail at the switch stage. To correct this situation, you will have to undo the upgrade by undoing all the stages up to and including the install stage. You will then need to reinstall the original version switched patch from the original patch kit that contained it.
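A minimal sketch of that recovery, following the stage order used earlier in this article; the kit location is an assumption:

SERVER1# clu_upgrade undo postinstall    << undo stages back to and including install
SERVER2# clu_upgrade undo install        << run from a non-lead member, as shown above
SERVER1# cd /usr/patch_kit               << original kit containing the version switched patch
SERVER1# ./dupatch                       << reinstall the original version switched patch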
Steps Prior to the Switch Stage

You can remove a patch kit you installed during the rolling upgrade at any time prior to issuing the clu_upgrade switch command by returning to the install stage, rerunning dupatch, and selecting the Patch Deletion item in the Main Menu. See “Removing Patches” for information about removing patches with dupatch. The procedure is as follows (a command sketch appears after this list):

1. Uninstall the patch kit as described in “Removing Patches”.
2. Run the clu_upgrade undo install command. Note that although you do not have to run the clu_upgrade install command when installing a patch kit or an NHD kit, you must run the clu_upgrade undo install command if you want to remove those kits and undo the install stage.
3. After you run clu_upgrade undo install, you can continue undoing stages as described in “Undoing a Stage”.
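A rough sketch of those steps, reusing the conventions from the worked example above (Lead Member SERVER1 at single-user mode; the kit location is an assumption):

SERVER1# cd /usr/patch_kit
SERVER1# ./dupatch                      << select Patch Deletion from the Main Menu
SERVER2# clu_upgrade undo install       << run from a non-lead member, as shown above
SERVER1# clu_upgrade undo preinstall    << continue undoing stages as needed
SERVER1# clu_upgrade undo setup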
Steps for After the Switch Stage
To remove patches after you have issued the clu_upgrade switch command, you will have to complete the current rolling upgrade procedure and then rerun the procedure from the beginning (starting with the setup stage). When you run the install stage, you must bring down your system to single-user mode as described in steps 1 through 6 of “Installing Patches from Single-User Mode”. When you rerun dupatch (step 7), select the Patch Deletion item in the Main Menu. See “Removing Patches” for information about removing patches with dupatch.

If the patch uses the version switch, you can still remove the patch, even after you have issued the clu_upgrade switch command. Do this as follows:

1. Complete the current rolling upgrade procedure.
2. Undo the patch that uses the version switch by following the instructions in the release note for that patch. Note that the last step to undo the patch will require a shutdown of the entire cluster.
3. Rerun the rolling upgrade procedure from the beginning (starting with the setup stage). When you rerun dupatch, select the Patch Deletion item in the Main Menu.
Use the grep command to learn which patches use the version switch. For example, in the C shell:

# grep -l PATCH_REQUIRES_VERSION_SWITCH=\"Y\" /usr/.smdb./*PAT*.ctrl
For information about version switches, see “Version Switch”.