 |
 |
The following sections describe each of the rolling upgrade stages.

Preparation Stage
During the preparation stage, you back up all important cluster data and verify that the cluster is ready for a roll. Before beginning a rolling upgrade, do the following:

- Choose one member of the cluster as the first member to roll. This member, known as the lead member, must have direct access to the root (/), /usr, /var, and, if used, i18n file systems.
- Make sure that the lead member can run any critical applications. You can test these applications after you update this member during the install stage, but before you roll any other members. If a problem occurs, you can try to resolve it on this member before you continue. If you cannot resolve a problem, you can undo the rolling upgrade and return the cluster to its pre-roll state. ("Undoing a Stage" describes how to undo rolling upgrade stages.)
- Back up the clusterwide root (/), /usr, and /var file systems, including all member-specific files in these file systems. If the cluster has a separate i18n file system, back up that file system. In addition, back up any other file systems that contain critical user or application data.
- If you plan to run the installupdate command in the install stage, remove any blocking layered products listed in Table 4-6 "Blocking Layered Products" that are installed on the cluster.
- Run the clu_upgrade -v check setup lead_memberid command, which verifies the following information:
  - No rolling upgrade is in progress.
  - All members are running the same versions of the base operating system and cluster software.
  - No members are running on tagged files.
  - There is adequate free disk space.
- Verify that each system's firmware will support the new software. Update firmware as needed before starting the rolling upgrade.
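The readiness check above can be run as in the following guarded sketch. The member ID 1 is a hypothetical choice of lead member, and the guard makes the sketch a harmless no-op on systems where the TruCluster tools are absent:

```shell
# Hypothetical lead member ID; substitute the ID of the member you chose.
lead_memberid=1

if command -v clu_upgrade >/dev/null 2>&1; then
    # Verify that no roll is in progress, versions match, no member is on
    # tagged files, and free disk space is adequate.
    clu_upgrade -v check setup "$lead_memberid"
    ran=yes
else
    echo "clu_upgrade not found; run this on a cluster member"
    ran=no
fi
```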
A cluster can continue to operate during a rolling upgrade because two copies exist of the operating system and cluster software files. (Only one copy exists of shared configuration files so that changes made by any member are visible to all members.) This approach makes it possible to run two different versions of the base operating system and the cluster software at the same time in the same cluster. The trade-off is that, before you start an upgrade, you must make sure that there is adequate free space in each of the clusterwide root (/), /usr, and /var file systems and, if a separate domain exists for the Worldwide Language Support (WLS) subsets, in the i18n file system. A rolling upgrade has the following disk space requirements:

- At least 50 percent free space in root (/), cluster_root#root.
- At least 50 percent free space in /usr, cluster_usr#usr.
- At least 50 percent free space in /var, cluster_var#var, plus, if updating the operating system, an additional 425 MB to hold the subsets for the new version of the base operating system.
- If a separate i18n domain exists for the WLS subsets, at least 50 percent free space in that domain.
- No tagged files are placed on member boot partitions. However, programs might need free space when moving kernels to boot partitions. We recommend that you reserve at least 50 MB of free space on each member's boot partition.

See the Patch Summary and Release Notes that came with your patch kit to find the amount of space you will need to install that kit. If installing an NHD kit, see the New Hardware Delivery Release Notes and Installation Instructions that came with your NHD kit to find the amount of space you will need to install that kit.
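The 50 percent rule can be checked mechanically before you start. A minimal sketch, using portable df -P output rather than the AdvFS-specific utilities; the helper name has_free_pct is an invention for this example:

```shell
# Succeed when the file system holding $1 has at least $2 percent free.
# Column 5 of "df -P" is "Capacity" (percent used), for example "42%".
has_free_pct() {
    fs=$1; need=$2
    used=$(df -P "$fs" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
    [ $(( 100 - used )) -ge "$need" ]
}

for fs in / /usr /var; do
    if has_free_pct "$fs" 50; then
        echo "$fs: meets the 50 percent free-space requirement"
    else
        echo "$fs: needs more free space before the roll"
    fi
done
```

On a real cluster you would check the cluster_root#root, cluster_usr#usr, and cluster_var#var domains, plus the i18n domain if one exists.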
If a file system needs more free space, use AdvFS utilities such as addvol to add volumes to domains as needed. For information on managing AdvFS domains, see the Tru64 UNIX AdvFS Administration manual. (The AdvFS utilities require a separate license.) You can also expand the clusterwide root (/) domain.

Setup Stage
The setup stage runs the clu_upgrade check setup tests, creates tagged files, and prepares the cluster for the roll. The clu_upgrade setup lead_memberid command performs the following tasks:

- Creates the rolling upgrade log file, /cluster/admin/clu_upgrade.log.
- Performs the -v check setup tests listed in "Preparation Stage".
- Prompts you to indicate whether to perform an update installation, install a patch kit, install an NHD kit, or a combination thereof. The following example shows the menu displayed by the TruCluster software Version 5.1B clu_upgrade command:

What type of rolling upgrade will be performed?
Selection Type of Upgrade
---------------------------------------------------------------
1 An upgrade using the installupdate command
2 A patch using the dupatch command
3 A new hardware delivery using the nhd_install command
4 All of the above
5 None of the above
6 Help
7 Display all options again
---------------------------------------------------------------
Enter your Choices (for example, 1 2 2-3):
- If you specify an update installation, copies the relevant kits onto disk:
  - If performing an update installation, copies the cluster kit to /var/adm/update/TruClusterKit so that the kit will be available to the installupdate command during the install stage. (The installupdate command copies the operating system kit to /var/adm/update/OSKit during the install stage.) The clu_upgrade command prompts for the absolute pathname of the TruCluster software kit location. On a TruCluster software Version 5.1B cluster, when performing a rolling upgrade that includes an update installation, remember to mount the TruCluster software kit before running the clu_upgrade setup command.
  - On a TruCluster software Version 5.1B cluster, if performing an NHD installation, uses the nhd_install command to copy the NHD kit to /var/adm/update/NHDKit.
- Creates the mandatory set of tagged files for the OSF (base), TCR (cluster), and IOS (Worldwide Language Support) products.
- Sets the sysconfigtab variable rolls_ver_lookup=1 on all members except the lead member. When rolls_ver_lookup=1, a member uses tagged files. As a result, the lead member can upgrade while the remaining members run on the .Old.. files from the current release.
- Prompts you to reboot all cluster members except the lead member. When the setup command completes, reboot these members one at a time so that the cluster can maintain quorum. This reboot is required for each member that will use tagged files in the mixed-version cluster. When the reboots complete, all members except the lead member are running on tagged files.
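To confirm how a member is configured, you can inspect its rolls_ver_lookup setting. The sketch below parses a sysconfigtab-style file; the sample content, the stanza name, and the helper name are illustrative inventions, and on a live member you would query the kernel with sysconfig -q instead:

```shell
# Print the rolls_ver_lookup value found in a sysconfigtab-style file.
get_rolls_ver_lookup() {
    awk -F= '$1 ~ /rolls_ver_lookup/ { gsub(/[[:space:]]/, "", $2); print $2; exit }' "$1"
}

# Illustrative sample file; "subsys" is a placeholder stanza name, not
# the real subsystem that owns this attribute.
sample=$(mktemp)
cat > "$sample" <<'EOF'
subsys:
        rolls_ver_lookup=1
EOF

value=$(get_rolls_ver_lookup "$sample")
echo "rolls_ver_lookup=$value"   # 1 means the member boots on tagged files
rm -f "$sample"
```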
Preinstall Stage
The purpose of the preinstall stage is to verify that the cluster is ready for the lead member to run one or more of the installupdate, dupatch, or nhd_install commands. The clu_upgrade preinstall command performs the following tasks:

- Verifies that the command is being run on the lead member, that the lead member is not running on tagged files, and that any other cluster members that are up are running on tagged files.
- (Optional) Verifies that tagged files are present, that they match their product's inventory files, and that each tagged file's AdvFS property is set correctly. (This process can take a while, though not as long as creating the tagged files in the setup stage. Table 4-2 "Time Estimates for Rolling Upgrade Stages" provides time estimates for each stage.)
- Makes on-disk backup copies of the lead member's member-specific files.
Install Stage
If your current cluster is running TruCluster software Version 5.1B or Version 5.1A, you can perform one of the tasks or combinations of tasks listed in Table 4-1 "Rolling Upgrade Tasks Supported by Version 5.1A and Version 5.1B". The install stage starts when the clu_upgrade preinstall command completes and continues until you run the clu_upgrade postinstall command. The lead member must be in single-user mode to run the installupdate or nhd_install command; single-user mode is also recommended for the dupatch command. When taking the system to single-user mode, you must halt the system and then boot it to single-user mode. Once the system is in single-user mode, run the init s, bcheckrc, and lmf reset commands before you run the installupdate, dupatch, or nhd_install commands.

Postinstall Stage
The postinstall stage verifies that the lead member has completed an update installation, a patch, or an NHD installation. If an update installation was performed, clu_upgrade postinstall verifies that the lead member has rolled to the new version of the base operating system.

Roll Stage
The lead member was upgraded in the install stage; the remaining members are upgraded in the roll stage. In many cluster configurations, you can roll multiple members in parallel and shorten the time required to upgrade the cluster. The number of members rolled in parallel is limited only by the requirement that the members not being rolled (plus the quorum disk, if one is configured) have sufficient votes to maintain quorum. Parallel rolls can be performed only after the lead member is rolled. The clu_upgrade roll command performs the following tasks:

- Verifies that the member is not the lead member, that the member has not already been rolled, and that the member is in single-user mode.
- Verifies that rolling the member will not result in a loss of quorum.
- Backs up the member's member-specific files.
- Sets up the it(8) scripts that will be run on reboot to perform the roll.
- Reboots the member. During this boot, the it scripts roll the member, build a customized kernel, and reboot with the customized kernel.
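The quorum constraint on parallel rolls can be estimated with the usual TruCluster formula, quorum votes = round_down((expected votes + 2) / 2). The helper below is a hypothetical back-of-the-envelope check, assuming one vote per member; it is not a substitute for the clu_quorum command:

```shell
# Succeed when rolling $3 members at once still leaves enough votes.
#   $1 = number of voting members (one vote each, by assumption)
#   $2 = quorum disk votes (0 if no quorum disk is configured)
#   $3 = members to roll in parallel (rolling members contribute no votes)
can_roll() {
    members=$1; qdisk_votes=$2; rolling=$3
    expected=$(( members + qdisk_votes ))
    quorum=$(( (expected + 2) / 2 ))        # round_down((V + 2) / 2)
    remaining=$(( members - rolling + qdisk_votes ))
    [ "$remaining" -ge "$quorum" ]
}

# A four-member cluster with a quorum disk (1 vote): expected=5, quorum=3.
can_roll 4 1 2 && echo "rolling 2 members keeps quorum"
can_roll 4 1 3 || echo "rolling 3 members would lose quorum"
```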
If a member goes down (and cannot be repaired and rebooted) before all members have rolled, you must delete the member to complete the roll of the cluster. However, if you have rolled all members but one, and this member goes down before it has rebooted in the roll stage, you must delete this member and then reboot any other member of the cluster. (The clu_upgrade command runs during reboot and tracks the number of members rolled versus the number of members currently in the cluster; clu_upgrade marks the roll stage as completed when the two values are equal. That is why, in the case where you have rolled all members except one, deleting the unrolled member and rebooting another member completes the roll stage and lets you continue the rolling upgrade.)

Switch Stage
The switch stage sets the active version of the software to the new version, which turns on any new features that were deliberately disabled during the rolling upgrade. (See "Version Switch" for a description of active version and new version.) The clu_upgrade switch command performs the following tasks:

- Verifies that all members have rolled, that all members are running the same versions of the base operating system and cluster software, and that no members are running on tagged files.
- Sets the new version ID in each member's sysconfigtab file and running kernel.
- Sets the active version to the new version for all cluster members.
Clean Stage
The clean stage removes the tagged (.Old..) files from the cluster and completes the upgrade. The clu_upgrade clean command performs the following tasks:

- Verifies that the switch stage has completed, that all members are running the same versions of the base operating system and cluster software, and that no members are running on tagged files.
- Removes all .Old.. files.
- Removes any on-disk backup archives that clu_upgrade created.
- If the directories exist, recursively deletes /var/adm/update/TruClusterKit, /var/adm/update/OSKit, and /var/adm/update/NHDKit.
- If an update installation was performed, gives you the option of running the Update Administration Utility (updadmin) to manage the files that were saved during the update installation.
- Creates an archive directory for this upgrade, /cluster/admin/clu_upgrade/history/release_version, and moves the clu_upgrade.log file to that directory.
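Because tagged copies carry a .Old.. prefix, the removal step amounts conceptually to a find-and-delete sweep over the clusterwide file systems. A sketch against a throwaway directory; the file names are made up for illustration:

```shell
# Stand-in for a clusterwide file system, populated with two tagged
# (.Old..) copies and one untagged file.
demo=$(mktemp -d)
touch "$demo/.Old..vmunix" "$demo/vmunix" "$demo/.Old..ls"

# Delete every tagged copy, keeping the untagged files.
find "$demo" -name '.Old..*' -type f -delete

remaining=$(ls -A "$demo")
echo "left behind: $remaining"
rm -rf "$demo"
```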