TruCluster Server Version 5.0A and higher provides the infrastructure that makes a rolling upgrade possible.
For more detailed information about using the rolling upgrade process to install a new operating system or TruCluster software version, see the Version 5.1 or higher Cluster Installation manual.
Note
If you have not yet created your cluster, HP recommends that you patch your system first. See Section 3.4 for this time-saving procedure.
This chapter provides the following information:
An overview of the rolling upgrade process. (Section 5.1)
A description of the rolling upgrade stages. (Section 5.2)
The step-by-step procedure for performing a rolling upgrade on your cluster. (Section 5.3)
How to display the status of a rolling upgrade. (Section 5.4)
How to install multiple patch kits. (Section 5.5)
How to undo a stage. (Section 5.6)
How to remove patches installed during a rolling upgrade. (Section 5.7)
Because TruCluster Server software Version 5.1A contains some
minor changes to the rolling upgrade interface, the output you see may differ
slightly from the examples presented in this chapter.
5.1 Overview
A rolling upgrade is a software upgrade of a cluster that is performed while the cluster is in operation. One member at a time is rolled and returned to operation while the cluster transparently maintains a mixed-version environment for the base operating system, cluster, and Worldwide Language Support (WLS) software. Clients accessing services are not aware that a rolling upgrade is in progress.
You use the same rolling upgrade procedure whether you are patching the cluster or upgrading to a new operating system or TruCluster version. The only difference is in the install stage: for a rolling patch you run the dupatch utility, and for a rolling upgrade you run the installupdate utility.
Note
See Chapter 2 for an overview of the dupatch utility and Chapter 4 for step-by-step instructions for using dupatch.
A roll consists of a series of stages (described in Section 5.2) that must be performed in a fixed order. When patching a cluster, the commands that control the rolling upgrade and enforce this order are clu_upgrade and dupatch.
You can perform only one rolling upgrade at a time. You cannot start another roll until the first roll is completed.
Note
A rolling upgrade updates the file systems and disks that the cluster currently uses; it does not update the disk or disks that contain the Tru64 UNIX operating system used to create the cluster (the operating system on which you ran clu_create). Although you can boot the original operating system in an emergency, remember that the differences between the current cluster and the original operating system increase with each roll.
A rolling upgrade updates the software
on one cluster member at a time so that you can test the new software without
disrupting critical services.
5.1.1 Tagged Files
To support two versions of software in the cluster during a roll, clu_upgrade creates a set of tagged files in the setup stage. These tagged files are copies of current files with .Old.. prepended to the file name. For example, the tagged file for the vdump command is /sbin/.Old..vdump. Tagged files are created in the same file system as the original files.
Each tagged file has an AdvFS property, DEC_VERSION_TAG, set on it. If a member's sysconfigtab rolls_ver_lookup attribute is set to 1, pathname resolution includes determining whether a specified file name has a .Old..filename copy and whether that copy has the DEC_VERSION_TAG property set on it. If both conditions are met, the requested file operation is transparently diverted to the .Old..filename version of the file. Note that file system operations on directories are not bound by this .Old.. restraint. For example, during a rolling upgrade you will see both versions of a file listed when you run the ls command on a directory from any cluster member.
The upgrade commands control when a member runs on tagged files by setting that member's sysconfigtab rolls_ver_lookup attribute. The commands set the value to 1 when the member must run on tagged files, and to 0 when the member must not run on tagged files. The only member that never runs on tagged files is the lead member (the first member to roll).
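To check whether a member is currently set to run on tagged files, you can query the attribute directly. The following is a hedged sketch that assumes rolls_ver_lookup belongs to the vfs kernel subsystem; a value of 1 means the member runs on tagged files:
# sysconfig -q vfs rolls_ver_lookup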
The following rules determine which files have tagged files automatically created for them in the setup stage:
Tagged files are created for the following product codes: base operating system (OSF), TruCluster software (TCR), and Worldwide Language Support (IOS). The subsets for each product use that product's three-letter product code as a prefix for each subset name. For example, TruCluster software subset names start with the TruCluster software three-letter product code: TCRBASE505, TCRMAN505, and TCRMIGRATE505. (A command for listing the installed subsets for these product codes follows the caution below.)
By default, files that are associated with other layered products do not have tagged files created for them. Tagged files are created only for layered products that have been modified to support tagged files during a rolling upgrade.
Caution
Unless a layered product's documentation specifically states that you can install a newer version of the product on the first rolled member, and that the layered product knows what actions to take in a mixed-version cluster, HP strongly recommends that you do not install either a new layered product or a new version of a currently installed layered product during a rolling upgrade.
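If you want to confirm which installed subsets fall under these product codes, you can list them by prefix. A minimal sketch using the standard setld inventory option:
# setld -i | grep '^OSF'     # base operating system subsets
# setld -i | grep '^TCR'     # TruCluster software subsets
# setld -i | grep '^IOS'     # Worldwide Language Support subsets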
The clu_upgrade command provides several command options to manipulate tagged files: check, add, remove, enable, and disable.
When dealing with tagged files, take the following into
consideration:
During a normal rolling upgrade you do not have to manually add or remove tagged files. The clu_upgrade command calls the tagged commands as needed to control the creation and removal of tagged files.
The target for a check, add, or remove tagged file operation is a product code that represents an entire product. The clu_upgrade tagged commands operate on all files in the specified product or products. For example, the following command verifies the correctness of all the tagged files created for the TCR layered product (the TruCluster software subsets):
# clu_upgrade tagged check TCR
If you inadvertently remove a .Old.. copy of a file, you must create tagged files for the entire layered product to re-create that one file. For example, the vdump command is in the OSFADVFSnnn subset, which is part of the OSF product. If you mistakenly remove /sbin/.Old..vdump, run the following command to re-create tagged files for the entire layered product:
# clu_upgrade tagged add OSF
The enable and disable commands enable or disable the use of tagged files by a cluster member. You do not have to use enable or disable during a normal rolling upgrade.
The disable command is useful if you have to undo the setup stage. Because no members can be running with tagged files when undoing the setup stage, you can use the disable command to disable tagged files on any cluster member that is currently running on tagged files. For example, to disable tagged files for a member whose ID is 3, issue the following command:
# clu_upgrade tagged disable 3
The enable command is provided in case you make a mistake with the disable command.
5.1.2 Version Switch
A version switch manages the transition from the active version of the software to the new version. The active version is the one that is currently in use. The purpose of a version switch in a cluster is to prevent the introduction of potentially incompatible new features until all members have been updated.
For example, if a new version introduces a change to a kernel structure that is incompatible with the current structure, you do not want cluster members to use that new feature until all members have updated to the version that supports the new features.
At the start of a rolling upgrade, all members' active versions are the same as their new versions. During a roll, each member's new version is updated when it rolls. After all members have rolled, the switch stage sets the active version to the new version on all members. At the completion of the upgrade, all members' active versions are once again the same as their new versions.
The clu_upgrade command uses the versw command (described in versw(8)) to manage the version switch; clu_upgrade manages all the version switch activity when rolling individual members.
In the switch stage, after all members have rolled, run the clu_upgrade switch command to complete the transition to the new software.
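For example, after all members have rolled, you can confirm that the cluster is ready for the switch stage before running it (the check option is described in Section 5.4):
# clu_upgrade check switch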
5.2 Rolling Upgrade Stages
This section takes a closer look at each of the rolling upgrade stages. Figure 5-1 provides a flow chart of the tasks and stages that are required to perform a rolling upgrade. (See Section 5.3 for the rolling upgrade procedure.)
Figure 5-1: Rolling Upgrade Flow Chart
The stages are performed in the following order:
Preparation stage (Section 5.2.1)
Setup stage (Section 5.2.2)
Preinstall stage (Section 5.2.3)
Install stage (Section 5.2.4)
Postinstallation stage (Section 5.2.5)
Roll stage (Section 5.2.6)
Switch stage (Section 5.2.7)
Clean stage (Section 5.2.8)
5.2.1 Preparation Stage
Command | Where Run | Run Level |
clu_upgrade -v check setup lead_memberid | any member | multiuser mode |
During the preparation stage, you back up all important cluster data and verify that the cluster is ready for a roll. Before beginning a rolling upgrade, do the following:
Back up the clusterwide root (/), /usr, and /var file systems. The backups should include all member-specific files in these file systems. If the cluster has a separate i18n file system, back up that file system. In addition, back up any other file systems that contain critical user or application data.
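As an illustration, a level-0 vdump of the clusterwide root might look like the following; the tape device name is only a placeholder, and you would repeat the command for /usr, /var, any separate i18n file system, and other critical file systems:
# vdump -0 -f /dev/ntape/tape0_d1 /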
Note
If you perform an incremental or full backup of the cluster during a rolling upgrade, make sure to perform the backup on a member that is not running on tagged files. If you back up from a member that is using tagged files, you will back up the contents of the .Old.. files. Because the lead member never uses tagged files, you can back up the cluster from the lead member (or any other member that has rolled) during a rolling upgrade. Most sites have automated backup procedures. If you know that an automatic backup will take place while the cluster is in the middle of a rolling upgrade, make sure that backups are done on the lead member or on a member that has rolled.
Choose one member of the cluster as the first member to roll. This member, known as the lead member, must have direct access to the root (/), /usr, /var, and, if used, i18n file systems.
Make sure that the lead member can run any critical applications. You can test these applications after you update this member during the install stage, but before you roll any other members. If there is a problem, you can try to resolve it on this member before you continue. If there is a problem that you cannot resolve, you can undo the rolling upgrade and return the cluster to its pre-roll state. (Section 5.6 describes how to undo rolling upgrade stages.)
Run the clu_upgrade -v check setup lead_memberid command, which verifies that:
No rolling upgrade is in progress.
All members are running the same versions of the base operating system and cluster software.
No members are running on tagged files.
Note
The clu_upgrade -v check setup lead_memberid command may check some but not all file systems for adequate space. Make sure that you manually check that your system meets the disk space requirements described later in this section.
A cluster can continue to operate during a rolling upgrade or a patch because there are two copies of almost every file. (There is only one copy of some configuration files so that changes made by any member are visible to all members.) This approach makes it possible to run two different versions of the base operating system and the cluster software at the same time in the same cluster. The trade-off is that, before you start an upgrade or patch, you must make sure that there is adequate free space in each of the clusterwide root (/), /usr, and /var file systems, and, if there is a separate domain for the Worldwide Language Support (WLS) subsets, in the i18n file system.
A rolling upgrade has the following disk space requirements:
At least 50 percent free space in root (/), cluster_root#root.
At least 50 percent free space in /usr, cluster_usr#usr.
At least 50 percent free space in /var, cluster_var#var, plus an additional 425 MB to hold the subsets for the new version of the base operating system.
If there is a separate i18n domain, at least 50 percent free space in that file system.
See the Patch Summary and Release Notes included with each patch kit to find out the amount of space you will need to install the patch kit for your system.
If a file system needs more free space, use AdvFS utilities such as addvol to add volumes to domains as needed. For information on managing AdvFS domains, see the AdvFS Administration manual. Note that you can expand the clusterwide root (/) domain.
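For example, to add a volume to the clusterwide root domain (the disk name is a placeholder; depending on your configuration, the command may need to be run on the member that is currently serving cluster_root):
# addvol /dev/disk/dsk5c cluster_root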
5.2.2 Setup Stage
Command | Where Run | Run Level |
clu_upgrade setup lead_memberid | any member | multiuser mode |
The clu_upgrade setup lead_memberid command performs the following tasks:
Caution
Make sure your system meets the space requirements described in Section 5.2.1 before issuing the clu_upgrade setup command.
Performs the -v check setup tests listed in Section 5.2.1.
Asks whether you are going to patch (run dupatch) or update (run installupdate) your cluster.
Makes on-disk backup copies of the lead member's member-specific files.
Creates the mandatory set of tagged files (copies of existing files, but with .Old.. prepended to the file name) for the OSF (base), TCR (cluster), and IOS (Worldwide Language Support) products.
Caution
If, for any reason, during an upgrade you need to create .Old.. files for a layered product, see Section 5.1.1.
Sets the sysconfigtab attribute rolls_ver_lookup=1 on all members except the lead member. When rolls_ver_lookup=1, a member uses the tagged files. As a result, the lead member can upgrade while the remaining members run on the .Old.. files from the current release.
Prompts you to reboot all cluster members except the lead member. When the setup command completes, reboot these members one at a time so that the cluster can maintain quorum. This reboot is required for each member that uses tagged files in the mixed-version cluster. When the reboots complete, all members except the lead member use tagged files.
5.2.3 Preinstall Stage
Command | Where Run | Run Level |
clu_upgrade preinstall | lead member | multiuser mode |
The purpose of the preinstall stage is to verify that the cluster is ready for the lead member to run the installupdate or dupatch command and, if the upgrade includes an update installation, to copy the new TruCluster software kit so that the kit will be available during the install stage. If you will perform an update installation when you follow the step-by-step upgrade procedure in Section 5.3, remember to mount the new TruCluster software kit before you run the preinstall command.
The clu_upgrade preinstall command performs the following tasks:
Verifies that the command is being run on the lead member, that the lead member is not running on tagged files, and that any other cluster members that are up are running on tagged files.
Verifies that tagged files are present, that they match their product's inventory files, and that each tagged file's AdvFS property is set correctly. (This process can take a while, but not as long as it does to create the tagged files in the setup stage.)
If you are performing a rolling upgrade, clu_upgrade preinstall prompts you for the location of the new TruCluster software kit, and then copies the kit to /var/adm/update/TruClusterKit on the lead member so that the kit will be available to the installupdate command during the install stage. (The installupdate command copies the operating system kit to /var/adm/update/OSKit during the install stage.)
Caution
The files in /var/adm/update are critical to the roll process. Do not remove or modify files in this directory; doing so can cause a rolling upgrade to fail.
5.2.4 Install Stage
Command | Where Run | Run Level |
installupdate | lead member | single-user mode |
dupatch | lead member | single-user mode or multiuser mode |
The install stage starts when the clu_upgrade preinstall command completes, and continues until you run the clu_upgrade postinstall command.
The lead member must be in single-user mode to run the installupdate command, and single-user mode is recommended for the dupatch command. When taking the system to single-user mode, you must halt the system and then boot it to single-user mode. When the system is in single-user mode, run the init s and bcheckrc commands before you run either the installupdate or dupatch command. See the Tru64 UNIX Installation Guide for information on how to use these commands.
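For example, after halting the lead member and booting it to single-user mode, a typical preparation sequence mirrors the one used later in the roll stage (see Section 5.3); kloadsrv may also be required on your system:
# /sbin/init s
# /sbin/bcheckrc
# /usr/sbin/lmf reset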
In the install stage, you can perform one of the following:
An update installation: installupdate
A patch: dupatch
Note
If you run clu_upgrade status after running installupdate, clu_upgrade prints a line indicating that the install stage is complete. However, the install stage is not complete until you run the clu_upgrade postinstall command.
5.2.5 Postinstallation Stage
Command | Where Run | Run Level |
clu_upgrade postinstall | lead member | multiuser mode |
The postinstallation stage verifies that the lead member has completed an update installation, a patch, or both. If an update installation was performed, clu_upgrade postinstall verifies that the lead member has rolled to the new version of the base operating system.
5.2.6 Roll Stage
Command | Where Run | Run Level |
clu_upgrade roll | member being rolled | single-user mode |
The lead member was upgraded in the install stage. The remaining members are upgraded one at a time in the roll stage.
The clu_upgrade roll command performs the following tasks:
Verifies that the member is not the lead member, that the member has not already been rolled, and that the member is in single-user mode.
Backs up the member's member-specific files.
Sets up the it(8) scripts that will roll the member during its next boot.
Reboots the member. During this boot, the it scripts roll the member, build a customized kernel, and reboot the member with the customized kernel.
5.2.7 Switch Stage
Command | Where Run | Run Level |
clu_upgrade switch | any member | multiuser mode |
The switch stage sets the active version of the software to the new version, which results in turning on any new features that had been deliberately disabled during the rolling upgrade.
The clu_upgrade switch command performs the following tasks:
Verifies that all members have rolled, that all members are running the same versions of the base operating system and cluster software, and that no members are running on tagged files.
Sets the new version ID in each member's sysconfigtab file and running kernel.
Sets the active version to the new version for all cluster members.
Note
After the switch stage completes, you must reboot each member of the cluster, one at a time.
5.2.8 Clean Stage
Command | Where Run | Run Level |
clu_upgrade clean | any member | multiuser mode |
The clean stage cleans up the files and directories that were used for the rolling upgrade.
The clu_upgrade clean command performs the following tasks:
Verifies that the switch stage has completed, that all members are running the same versions of the base operating system and cluster software, and that no members are running on tagged files.
Removes any on-disk backup archives that clu_upgrade created.
Deletes the /var/adm/update/TruClusterKit and /var/adm/update/OSKit directories.
If an update installation was performed, gives you the option of running the Update Administration Utility (updadmin) to manage the files that were saved during the update installation.
Creates an archive directory for this upgrade, /cluster/admin/clu_upgrade/history/base_OS_version, and moves the clu_upgrade.log file to the archive directory.
5.3 Performing a Rolling Upgrade
In the following procedure, unless otherwise stated, run commands in multiuser mode.
Note
If you have not yet created your cluster, it is recommended that you patch the operating system and TruCluster software before performing a rolling upgrade. See Section 3.4 for information.
Note
During a rolling upgrade, do not use the /usr/sbin/setld command to add or delete any of the following subsets:
Base Operating System subsets (those with the prefix OSF).
TruCluster Server subsets (those with the prefix TCR).
Worldwide Language Support (WLS) subsets (those with the prefix IOSWW).
Adding or deleting these subsets during a rolling upgrade creates inconsistencies in the tagged files.
Some stages of a rolling upgrade take longer to complete than others. Table 5-1 lists the approximate time it takes to complete each stage.
Table 5-1: Time Estimations for a Rolling Upgrade
Stage | Duration |
Preparation | Not under program control. |
Setup | 45 - 120 minutes. [Footnote 1] |
Preinstall | 15 - 30 minutes. [Footnote 1] |
Install | The same as installing a patch kit on a single system. Approximately 35 minutes, depending upon the size of the patch kit. |
Postinstall | Less than 1 minute. |
Roll (per member) | Patch: less than 5 minutes. Update installation: about the same amount of time it takes to add a member. |
Switch | Less than 1 minute. |
Clean | 30 - 90 minutes. [Footnote 1] |
Prepare the cluster (see Section 5.2.1):
Back up the cluster.
Choose a cluster member to be the lead member (the first member to roll). The examples in this procedure use the member whose memberid is 2 as the lead member. The member's host name is provolone.
Make sure that your system contains the required space in all file systems, as described in Section 5.2.1. If a file system needs more free space, use AdvFS utilities such as addvol to add volumes to domains as needed. For information on managing AdvFS domains, see the AdvFS Administration manual.
Note that the clu_upgrade -v check setup lead_memberid command may check some but not all file systems for adequate space. Make sure that you manually check that your system meets the disk space requirements described in Section 5.2.1.
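For example, a quick way to display the free space on the clusterwide file systems before running the check (compare the results against the requirements in Section 5.2.1):
# df / /usr /var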
On any member, run the clu_upgrade -v check setup lead_memberid command to determine whether the cluster is ready for an upgrade. For example:
# clu_upgrade -v check setup 2
Perform the setup stage (Section 5.2.2).
On any member, run the clu_upgrade setup lead_memberid command. For example:
# clu_upgrade setup 2
Caution
If any file system fails to meet the minimum space requirements, the program will fail and generate an error message similar to the following:
*** Error ***
The tar commands used to create tagged files in the '/' file system have
reported the following errors and warnings:
NOTE: CFS: File system full: /
tar: sbin/lsm.d/raid5/volsd : No space left on device
tar: sbin/lsm.d/raid5/volume : No space left on device
NOTE: CFS: File system full: /
...
NOTE: CFS: File system full: /
If you receive this message, run the clu_upgrade undo setup command, free up the required amount of space on the affected file systems, and then rerun the clu_upgrade setup command.
During the setup stage, clu_upgrade asks whether you are performing an update installation or a patch. However, the wording of the prompts in the Version 5.0A command is somewhat ambiguous:
Are you running the clu_upgrade command to upgrade to a new version of the base operating system and cluster software? [yes]:
Are you running the clu_upgrade command in order to apply a rolling patch? [yes]:
The clu_upgrade command does not display the second prompt until it receives an answer to the first. An administrator might be tempted to answer yes to the ...upgrade to a new version... prompt when performing a rolling upgrade to patch the cluster, because a patch is an upgrade to new software. However, if you see these prompts, answer yes to the first prompt only if you plan to run installupdate during the install stage.
Note: No WLS and Disk Space
Additional space is required in the cluster_root domain for backing up member files on clusters without Worldwide Language Support (WLS). If no space is available, the following message is displayed:
*** Error ***
There is no space available in the root (/), /usr, or /var file systems to
back up member ''???'' member-specific files. Increase the available disk
space on one of these file systems and rerun this stage of the upgrade.
The minimum required available space in the cluster_root domain must be greater than the sum of all of the member directories in the root (/), /usr, and /var file systems. To view the available space in the cluster_root domain, enter the following command:
# df /
For example:
# df /
Filesystem           512-blocks   Used     Available  Capacity  Mounted on
cluster_root#root    524288       175710   330512     35%       /
To calculate the minimum required value, enter the following command:
# ksh -c 'du -s {,/usr,/var}/cluster/members/member?*/' | \
awk '{minimum+=$1}; END{print minimum}'
For example:
# ksh -c 'du -s {,/usr,/var}/cluster/members/member?*/' | \
> awk '{minimum+=$1}; END{print minimum}'
679030
The example indicates that the cluster_root domain needs 348518 more blocks (679030 minus 330512), or approximately 175 MB of disk space. Use the addvol command to add additional volumes to the cluster_root domain.
When asked if you want to continue the cluster upgrade, accept the default of yes:
This is the cluster upgrade program.
You have indicated that you want to perform the 'setup' stage of the upgrade.
Do you want to continue to upgrade the cluster? [yes]: [Return]
Are you running the clu_upgrade command to upgrade to a new version of the base operating system and cluster software? [yes]:no
Are you running the clu_upgrade command to apply a rolling patch? [yes]:[Return]
Note that these prompts will change if you run the upgrade to its conclusion and then rerun it to remove patches. See Section 5.7 for more information (including the prompts you will see).
One at a time, reboot all cluster members except the lead member.
Perform the preinstall stage (Section 5.2.3).
Note
If you plan to run
installupdate
in the install stage, mount the device or directory that contains the new TruCluster software kit before runningclu_upgrade preinstall
. Thepreinstall
command will copy the kit to the/var/adm/update/TruClusterKit
directory.
On the lead member, run the following command:
# clu_upgrade preinstall
Manually relocate CAA services from the lead member to another cluster member before performing the install stage. For example:
# /usr/sbin/caa_relocate -s lead_member -c non_lead_member
Perform the install stage (Section 5.2.4).
Note
If while running dupatch you encounter a situation in which the lead member falls into an unrecoverable state, you will have to run the clu_upgrade undo install command. Any subsequent patch installations may need to be enabled via the dupatch baseline procedure. See Section 4.7 for information about baselining. See Section 5.6 for information about undoing a rolling upgrade stage.
You can patch a cluster or update cluster and operating system software.
You can perform a rolling upgrade to patch a cluster in either single-user mode, which is recommended, or in multiuser mode:
To patch the system in single-user mode, follow the instructions in Section 4.8.1.1.
To patch the system in multiuser mode, run the dupatch command. See Chapter 4 for information about using the dupatch utility.
Run the lmf reset command:
# lmf reset
If you are performing a roll that includes both an upgrade and a patch, do the update installation first and then the patch installation.
After the lead member performs its final reboot with its new custom kernel, perform the following manual tests before you roll any additional members:
Verify that the newly rolled lead member can serve the shared root (/) file system. Use the cfsmgr command to determine which cluster member is currently serving the root file system. For example:
# cfsmgr -v -a server /
Domain or filesystem name = /
Server Name = polishham
Server Status : OK
Relocate the root (/) file system to the lead member. For example:
# cfsmgr -h polishham -r -a SERVER=provolone /
Verify that the lead member can serve applications to clients. Make sure that the lead member can serve all important applications that the cluster makes available to its clients.
You decide how and what to test. Thoroughly exercise all critical applications and satisfy yourself that the lead member can serve these applications to clients before continuing the roll. For example, you can:
Manually relocate CAA services to the lead member. For example, to relocate an application resource named clock to lead member provolone:
# caa_relocate clock -c provolone
Temporarily modify the default cluster alias attributes for the lead member so that it handles routing for the alias and serves all client requests that are directed to the alias. For example:
# cluamgr -a alias=DEFAULTALIAS,rpri=100,selp=100
# cluamgr -r start
The lead member is now handling all traffic that is addressed to the default cluster alias. (You can use the arp -a command to verify that the lead member has the permanent published entry for the default cluster alias.) From another member or from an outside client, use services such as telnet and ftp to verify that the lead member can handle alias traffic. Test client access to all important services that the cluster provides. When you are satisfied, reset the alias attributes on the lead member to their original values.
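For instance, assuming the default cluster alias is named deli (a placeholder), you could confirm the published ARP entry from the lead member:
# arp -a | grep -i deli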
Perform the postinstallation stage (Section 5.2.5).
# clu_upgrade postinstall
Perform the roll stage (Section 5.2.6).
One at a time, on each member of the cluster that has not rolled, do the following:
Manually relocate CAA services from the member to another cluster member before performing the roll stage. For example:
# /usr/sbin/caa_relocate -s member_to_roll \
-c another_member
Take the member to single-user mode by first halting the member and then booting to single-user mode. Before halting the member, make sure that the cluster can maintain quorum without the member's vote. For information about maintaining quorum when shutting down a member, see the chapter on Managing Cluster Members in the Version 5.1A Cluster Administration manual.
# /sbin/shutdown -h now
Note
Halting and booting the system ensures that it provides the minimal set of services to the cluster and that the running cluster has a minimal reliance on the member running in single-user mode. In particular, halting the member satisfies services that require the cluster member to have a status of DOWN before completing a service failover. If you do not first halt the cluster member, there is a high probability that services will not fail over as expected.
Boot the member:
>>> boot -fl s
When the system reaches single-user mode, run the init s, bcheckrc, kloadsrv, and lmf reset commands. For example:
# /sbin/init s
# /sbin/bcheckrc
# /sbin/kloadsrv
# /usr/sbin/lmf reset
Run the clu_upgrade roll command:
# clu_upgrade roll
When the member boots its new kernel, it has completed its roll and is no longer running on tagged files. Continue to roll members until all members of the cluster have rolled.
Note: /var Disk Space
The following messages might be displayed while running the clu_upgrade roll command:
Backing up member-specific data for member: n ...
NOTE: CFS: File system full: /var
tar: /dev/tty Unavailable
*** Error ***
An error was detected while backing up member 'n' member-specific files.
Additional space in the cluster_var domain is required. To view the available space in the cluster_var domain, enter the following command:
# df /var
To calculate the required value, enter the following command:
# ksh -c 'du -s {,/usr,/var}/cluster/members/member?*/' | \
awk '{minimum+=$1}; END{print minimum}'
Use the addvol command to add additional volumes to the cluster_var domain.
Perform the switch stage (Section 5.2.7).
After all members have rolled, run the following command on any member to enable any new software features that were deliberately disabled until all members have rolled:
# clu_upgrade switch
One at a time, reboot each member of the cluster.
Perform the clean stage (Section 5.2.8).
Run the following command on any member to remove the tagged (.Old..) files from the cluster and complete the upgrade:
# clu_upgrade clean
5.4 Displaying the Status of a Rolling Upgrade
The clu_upgrade command provides the following options for displaying the status of a rolling upgrade or patch. You can run status commands at any time; a few examples appear after the list.
Note
During a roll, there might be two versions of the clu_upgrade command in the cluster: an older version used by members that have not yet rolled, and a newer version (if one is included in the update distribution or patch kit). When checking status, the information that is displayed by the status command might differ depending on whether the command is run on a member that has rolled. Therefore, if you run the status command on two members, do not be surprised if the format and content of the displayed output are not the same.
To display the overall status of a rolling upgrade or patch: clu_upgrade -v or clu_upgrade -v status.
Note
If you run clu_upgrade status after running installupdate, clu_upgrade prints a line indicating that the install stage is complete. However, the install stage is not complete until you run the clu_upgrade postinstall command.
To determine whether you can run a stage: clu_upgrade check [stage]. If you do not specify a stage, clu_upgrade tests whether the next stage can be run.
To determine whether a stage has started or completed: clu_upgrade started stage and clu_upgrade completed stage.
To determine whether a member has rolled: clu_upgrade check roll memberid.
To verify whether tagged files have been created for a layered product: clu_upgrade tagged check [prod_code [prod_code...]]. If you do not specify a product code, clu_upgrade inspects all tagged files in the cluster.
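For example, the following commands display the overall status, check whether the member whose ID is 3 has rolled, and verify the tagged files for the base operating system and TruCluster products (the member ID is illustrative):
# clu_upgrade -v status
# clu_upgrade check roll 3
# clu_upgrade tagged check OSF TCR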
5.5 Installing Multiple Patch Kits
During the install stage you can install multiple patch kits. For example, you could run dupatch to install an inaugural patch kit (such as Patch Kit 0001 for Version 5.1A) and then run dupatch again to install an Early Release Patch (ERP) kit.
The benefit of installing multiple patch kits is the time you save by not having to run the rolling upgrade procedure multiple times. You should be aware, however, that installing multiple patch kits could make troubleshooting more difficult if you subsequently experience a problem with your system.
Prior to issuing the clu_upgrade switch command, you can remove any patches from the last of the multiple patch kits you installed. See Section 4.12 for information about removing patches.
5.6 Undoing a Stage
The clu_upgrade undo command provides the ability to undo a rolling upgrade that has not completed the switch stage. You can undo any stage except the switch stage and the clean stage.
Note
See Section 5.7 for information about deleting patches installed during a rolling upgrade.
To undo a stage, use the undo command with the stage that you want to undo. The clu_upgrade command determines whether the specified stage is a valid stage to undo. Table 5-2 outlines the requirements for undoing a stage:
Note
If while running dupatch you encounter a situation in which the lead member falls into an unrecoverable state, you will have to run the clu_upgrade undo install command. Any subsequent patch installations may need to be enabled via the dupatch baseline procedure. See Section 4.7 for information about baselining.
Table 5-2: Undoing a Stage
Stage to Undo | Command | Comments |
Setup | clu_upgrade undo setup | You must run this command on the lead member. In addition, no members can be running on tagged files when you undo the setup stage. Before you undo the setup stage, use the clu_upgrade tagged disable command to disable tagged files on any member that is currently running on tagged files. When no members are running on tagged files, run the clu_upgrade undo setup command. |
Preinstall | clu_upgrade undo preinstall | You must run this command on the lead member. |
Install | clu_upgrade undo install | When patching a cluster, perform this operation only in a situation where the lead member is in an unrecoverable state. If the lead member is not in an unrecoverable state, use dupatch to remove the patches instead (see Section 4.12). If you do need to run the clu_upgrade undo install command, any subsequent patch installations may need to be enabled via the dupatch baseline procedure (see Section 4.7). |
Postinstall | clu_upgrade undo postinstall | You must run this command on the lead member. |
Roll | clu_upgrade undo roll memberid | You can run this command on any member except the member whose roll is being undone. Halt the member whose roll stage is being undone, and then run the clu_upgrade undo roll memberid command. |
Note
You might see the following error message when running the clu_upgrade undo postinstall command:
*** Error ***
The 'undo' option cannot be run at the 'postinstall' stage, either because the next stage has already been started or because the stage specified for undo has not been started.
If you see the message, remove the following file before running the clu_upgrade undo postinstall command:
# rm /cluster/admin/clu_upgrade/roll.started
5.7 Removing Patches Installed During a Rolling Upgrade
The following sections describe how to remove or reinstall patches during
a rolling upgrade.
5.7.1 Caution on Removing Version Switched Patches
When removing version switched patches on a cluster, do not remove version switched patches that were successfully installed in a previous rolling upgrade.
This situation can occur because more than one patch subset may contain the same version switched patch. Although both the new and old patches can be removed during a roll, only the most recently installed, newer version switched patch can be properly removed.
The older version switched patch can only be properly removed according to the documented procedure associated with that patch. This usually requires running some program before beginning the rolling upgrade to remove the patch.
If you accidentally remove the older version switched patch, the rolling upgrade will most likely fail in the switch stage. To correct this situation, you will have to undo the upgrade by undoing all the stages up to and including the install stage. You will then need to reinstall the original version switched patch from the original patch kit that contained it.
5.7.2 Steps Prior to the Switch Stage
At any time prior to issuing the clu_upgrade switch command, you can remove some or all of the patches you installed during the rolling upgrade by returning to the install stage, rerunning dupatch, and selecting the Patch Deletion item in the Main Menu. See Section 4.12 for information about removing patches with dupatch.
You can also reinstall some or all of the patches you removed by rerunning dupatch. After you are done running dupatch, you can proceed to the postinstall stage by running the clu_upgrade postinstall command on the lead member.
See Section 5.6 for information about undoing any of the rolling upgrade stages.
5.7.3 Steps for After the Switch Stage
To remove patches after you have issued the clu_upgrade switch command, you will have to complete the current rolling upgrade procedure and then rerun the procedure from the beginning (starting with the setup stage). When you run the install stage, you must bring your system down to single-user mode as described in steps 1 through 6 of Section 4.8.1.1.
When you rerun dupatch (step 7), select the Patch Deletion item in the Main Menu. See Section 4.12 for information about removing patches with dupatch.
If the patch uses the version switch, you can still remove the patch, even after you have issued the clu_upgrade switch command. Do this as follows:
Complete the current rolling upgrade procedure.
Undo the patch that uses the version switch by following the instructions in the release note for that patch. Note that the last step to undo the patch will require a shutdown of the entire cluster.
Rerun the rolling upgrade procedure from the beginning (starting with the setup stage). When you rerun dupatch, select the Patch Deletion item in the Main Menu.
Use the grep command to learn which patches use the version switch. For example, in the C shell:
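The original example is not reproduced here; as a hedged sketch, you could search the installed patch subset control files for references to the version switch utility. The path and pattern below are assumptions and may differ on your system:
# grep -l versw /usr/.smdb./*PAT*.scp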
For information about version switches, see Section 5.1.2.
Note
If you rerun the rolling upgrade procedure to remove patches, the prompts you receive during the setup stage will be different from those issued during the initial rolling upgrade. Those prompts will look as follows:
Do you want to continue to upgrade the cluster? [yes]:[Return]
What type of upgrade will be performed?
1) Rolling upgrade using the installupdate command
2) Rolling patch using the dupatch command
3) Both a rolling upgrade and a rolling patch
4) Exit cluster software upgrade
Enter your choice: 2
The sample installation in Section B.2 shows the prompts you will see during the initial rolling upgrade.