This chapter provides samples of the logs written by:

    clu_create (Section C.1)
    clu_add_member (Section C.2)
    clu_upgrade (Section C.3)
Each time you run clu_create, it writes log messages to
/cluster/admin/clu_create.log. Example C-1 shows a sample clu_create log file.
Example C-1: Sample clu_create Log File
Do you want to continue creating the cluster? [yes]: [Return]

Each cluster has a unique cluster name, which is a hostname used to identify
the entire cluster.

Enter a fully-qualified cluster name []: deli.zk3.dec.com
Checking cluster name: deli.zk3.dec.com

You entered 'deli.zk3.dec.com' as your cluster name. Is this correct? [yes]: [Return]

The cluster alias IP address is the IP address associated with the default
cluster alias. (192.168.168.1 is an example of an IP address.)

Enter the cluster alias IP address []: 16.140.112.209
Checking cluster alias IP address: 16.140.112.209

You entered '16.140.112.209' as the IP address for the default cluster alias.
Is this correct? [yes]: [Return]

The cluster root partition is the disk partition (for example, dsk4b) that
will hold the clusterwide root (/) file system.

Note: The default 'a' partition on most disks is not large enough to hold
the clusterwide root AdvFS domain.

Enter the device name of the cluster root partition []: dsk1b
Checking the cluster root partition: dsk1b

You entered 'dsk1b' as the device name of the cluster root partition.
Is this correct? [yes]: [Return]

The cluster usr partition is the disk partition (for example, dsk4g) that
will contain the clusterwide usr (/usr) file system.

Note: The default 'g' partition on most disks is usually large enough to
hold the clusterwide usr AdvFS domain.

Enter the device name of the cluster usr partition []: dsk2c
Checking the cluster usr partition: dsk2c

You entered 'dsk2c' as the device name of the cluster usr partition.
Is this correct? [yes]: [Return]

The cluster var device is the disk partition (for example, dsk4h) that will
hold the clusterwide var (/var) file system.

Note: The default 'h' partition on most disks is usually large enough to
hold the clusterwide var AdvFS domain.

Enter the device name of the cluster var partition []: dsk3c
Checking the cluster var partition: dsk3c

You entered 'dsk3c' as the device name of the cluster var partition.
Is this correct? [yes]: [Return]

Do you want to define a quorum disk device at this time? [yes]: [Return]

The quorum disk device is the name of the disk (for example, 'dsk5') that
will be used as this cluster quorum disk.

Enter the device name of the quorum disk []: dsk7
Checking the quorum disk device: dsk7

You entered 'dsk7' as the device name of the quorum disk device.
Is this correct? [yes]: [Return]

By default the quorum disk is assigned '1' vote(s). To use this default
value, press Return at the prompt.

The number of votes for the quorum disk is an integer usually 0 or 1.
If you select 0 votes then the quorum disk will not contribute votes to
the cluster. If you select 1 vote then the quorum disk must be accessible
to boot and run a single member cluster.

Enter the number of votes for the quorum disk [1]: [Return]
Checking number of votes for the quorum disk: 1

You entered '1' as the number votes for the quorum disk.
Is this correct? [yes]: [Return]

The default member ID for the first cluster member is '1'. To use this
default value, press Return at the prompt.

A member ID is used to identify each member in a cluster. Each member must
have a unique member ID, which is an integer in the range 1-63, inclusive.

Enter a cluster member ID [1]: [Return]
Checking cluster member ID: 1

You entered '1' as the member ID. Is this correct? [yes]: [Return]

By default the 1st member of a cluster is assigned '1' vote(s).
Checking number of votes for this member: 1

Each member has its own boot disk, which has an associated device name;
for example, 'dsk5'.

Enter the device name of the member boot disk []: dsk10
Checking the member boot disk: dsk10

You entered 'dsk10' as the device name of this member's boot disk.
Is this correct? [yes]: [Return]

Device 'ics0' is the default virtual cluster interconnect device
Checking virtual cluster interconnect device: ics0

The virtual cluster interconnect IP name 'pepicelli-ics0' was formed by
appending '-ics0' to the system's hostname. To use this default value,
press Return at the prompt.

Each virtual cluster interconnect interface has a unique IP name
(a hostname) associated with it.

Enter the IP name for the virtual cluster interconnect [pepicelli-ics0]: [Return]
Checking virtual cluster interconnect IP name: pepicelli-ics0

You entered 'pepicelli-ics0' as the IP name for the virtual cluster
interconnect. Is this name correct? [yes]: [Return]

The virtual cluster interconnect IP address '10.0.0.1' was created by
replacing the last byte of the default virtual cluster interconnect network
address '10.0.0.0' with the previously chosen member ID '1'. To use this
default value, press Return at the prompt.

The virtual cluster interconnect IP address is the IP address associated
with the virtual cluster interconnect IP name. (192.168.168.1 is an example
of an IP address.)

Enter the IP address for the virtual cluster interconnect [10.0.0.1]: [Return]
Checking virtual cluster interconnect IP address: 10.0.0.1

You entered '10.0.0.1' as the IP address for the virtual cluster
interconnect. Is this address correct? [yes]: [Return]

What type of cluster interconnect will you be using?

   Selection   Type of Interconnect
   ----------------------------------------------------------------------
       1       Memory Channel
       2       Local Area Network
       3       None of the above
       4       Help
       5       Display all options again
   ----------------------------------------------------------------------

Enter your choice [1]: 1

You selected option '1' for the cluster interconnect
Is that correct? (y/n) [y]: [Return]

Device 'mc0' is the default physical cluster interconnect interface device
Checking physical cluster interconnect interface device name(s): mc0

You entered the following information:

   Cluster name:                                             deli.zk3.dec.com
   Cluster alias IP Address:                                 16.140.112.209
   Clusterwide root partition:                               dsk1b
   Clusterwide usr partition:                                dsk2c
   Clusterwide var partition:                                dsk3c
   Clusterwide i18n partition:                               Directory-In-/usr
   Quorum disk device:                                       dsk7
   Number of votes assigned to the quorum disk:              1
   First member's member ID:                                 1
   Number of votes assigned to this member:                  1
   First member's boot disk:                                 dsk10
   First member's virtual cluster interconnect device name:  ics0
   First member's virtual cluster interconnect IP name:      pepicelli-ics0
   First member's virtual cluster interconnect IP address:   10.0.0.1
   First member's physical cluster interconnect devices      mc0
   First member's NetRAIN device name                        Not-Applicable
   First member's physical cluster interconnect IP address   Not-Applicable

If you want to change any of the above information, answer 'n' to the
following prompt. You will then be given an opportunity to change your
selections.

Do you want to continue to create the cluster? [yes]: [Return]

Creating required disk labels.
Creating disk label on member disk : dsk10
Initializing cnx partition on member disk : dsk10h
Creating disk label on quorum disk : dsk7
Initializing cnx partition on quorum disk : dsk7h

Creating AdvFS domains:
Creating AdvFS domain 'root1_domain#root' on partition '/dev/disk/dsk10a'.
Creating AdvFS domain 'cluster_root#root' on partition '/dev/disk/dsk1b'.
Creating AdvFS domain 'cluster_usr#usr' on partition '/dev/disk/dsk2c'.
Creating AdvFS domain 'cluster_var#var' on partition '/dev/disk/dsk3c'.

Populating clusterwide root, usr, and var file systems:
Copying root file system to 'cluster_root#root'. ...
Copying usr file system to 'cluster_usr#usr'. ......................
Copying var file system to 'cluster_var#var'. ..

Creating Content Dependent Symbolic Links (CDSLs) for file systems:
Creating CDSLs in root file system.
Creating CDSLs in usr file system.
Creating CDSLs in var file system.
Creating links between clusterwide file systems

Populating member's root file system.

Modifying configuration files required for cluster operation:
Creating /etc/fstab file.
Configuring cluster alias.
Updating /etc/hosts - adding IP address '16.140.112.209' \
and hostname 'deli.zk3.dec.com'
Updating member-specific /etc/inittab file with 'cms' entry.
Updating /etc/hosts - adding IP address '10.0.0.1' and hostname 'pepicelli-ics0'
Updating /etc/rc.config file.
Updating /etc/sysconfigtab file.
Retrieving cluster_root major and minor device numbers.
Creating cluster device file CDSLs.
Updating /.rhosts - adding hostname 'deli.zk3.dec.com'.
Updating /etc/hosts.equiv - adding hostname 'deli.zk3.dec.com'
Updating /.rhosts - adding hostname 'pepicelli-ics0'.
Updating /etc/hosts.equiv - adding hostname 'pepicelli-ics0'
Updating /etc/ifaccess.conf - adding deny entry for 'sl0'
Updating /etc/ifaccess.conf - adding deny entry for 'tu0'
Updating /etc/ifaccess.conf - adding deny entry for 'tun0'
Updating /etc/cfgmgr.auth - adding hostname 'pepicelli.zk3.dec.com'

Finished updating member1-specific area.

Building a kernel for this member.
Saving kernel build configuration.
The kernel will now be configured using the doconfig program.

*** KERNEL CONFIGURATION AND BUILD PROCEDURE ***

Saving /sys/conf/PEPICELLI as /sys/conf/PEPICELLI.bck

*** PERFORMING KERNEL BUILD ***
Working....Tue May 8 15:54:11 EDT 2001
The new kernel is /sys/PEPICELLI/vmunix

Finished running the doconfig program.
The kernel build was successful and the new kernel has been copied to this
member's boot disk.
Restoring kernel build configuration.

Updating console variables
Setting console variable 'bootdef_dev' to dsk10
Setting console variable 'boot_dev' to dsk10
Setting console variable 'boot_reset' to ON
Saving console variables to non-volatile storage

clu_create: Cluster created successfully.

To run this system as a single member cluster it must be rebooted. If you
answer yes to the following question clu_create will reboot the system for
you now. If you answer no, you must manually reboot the system after
clu_create exits.

Would you like clu_create to reboot this system now? [yes]: [Return]

Shutdown at 15:56 (in 0 minutes) [pid 23642]
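The log ends with clu_create shutting the system down so that it can reboot
as a single-member cluster. If you want to verify a clu_create run without
reading the whole transcript, searching the log for a few key strings is
usually enough. The following commands are only a suggested check, not part
of the clu_create procedure:

    # more /cluster/admin/clu_create.log
    # grep 'created successfully' /cluster/admin/clu_create.log
    # egrep -i 'error|fail' /cluster/admin/clu_create.log

The first command pages through the full transcript, the second confirms
that the final 'clu_create: Cluster created successfully.' message was
logged, and the third flags any lines that mention an error or a failure.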
Each time you run clu_add_member, it writes log messages to
/cluster/admin/clu_add_member.log. Example C-2 shows a sample clu_add_member
log file.
Example C-2: Sample clu_add_member Log File
Do you want to continue adding this member? [yes]: [Return]

Each cluster member has a hostname, which is assigned to the HOSTNAME
variable in /etc/rc.config.

Enter the new member's fully qualified hostname []: polishham.zk3.dec.com
Checking member's hostname: polishham.zk3.dec.com

You entered 'polishham.zk3.dec.com' as this member's hostname.
Is this name correct? [yes]: [Return]

The next available member ID for a cluster member is '2'. To use this
default value, press Return at the prompt.

A member ID is used to identify each member in a cluster. Each member must
have a unique member ID, which is an integer in the range 1-63, inclusive.

Enter a cluster member ID [2]: [Return]
Checking cluster member ID: 2

You entered '2' as the member ID. Is this correct? [yes]: [Return]

By default, when the current cluster's expected votes are greater then 1,
each added member is assigned 1 vote(s). Otherwise, each added member is
assigned 0 (zero) votes. To use this default value, press Return at the
prompt.

The number of votes for a member is an integer usually 0 or 1

Enter the number of votes for this member [1]: [Return]
Checking number of votes for this member: 1

You entered '1' as the number votes for this member.
Is this correct? [yes]: [Return]

Each member has its own boot disk, which has an associated device name;
for example, 'dsk5'.

Enter the device name of the member boot disk []: dsk12
Checking the member boot disk: dsk12

You entered 'dsk12' as the device name of this member's boot disk.
Is this correct? [yes]: [Return]

Device 'ics0' is the default virtual cluster interconnect device
Checking virtual cluster interconnect device: ics0

The virtual cluster interconnect IP name 'polishham-ics0' was formed by
appending '-ics0' to the system's hostname. To use this default value,
press Return at the prompt.

Each virtual cluster interconnect interface has a unique IP name
(a hostname) associated with it.

Enter the IP name for the virtual cluster interconnect [polishham-ics0]: [Return]
Checking virtual cluster interconnect IP name: polishham-ics0

You entered 'polishham-ics0' as the IP name for the virtual cluster
interconnect. Is this name correct? [yes]: [Return]

The virtual cluster interconnect IP address '10.0.0.2' was created by
replacing the last byte of the virtual cluster interconnect network address
'10.0.0.0' with the previously chosen member ID '2'. To use this default
value, press Return at the prompt.

The virtual cluster interconnect IP address is the IP address associated
with the virtual cluster interconnect IP name. (192.168.168.1 is an example
of an IP address.)

Enter the IP address for the virtual cluster interconnect [10.0.0.2]: [Return]
Checking virtual cluster interconnect IP address: 10.0.0.2

You entered '10.0.0.2' as the IP address for the virtual cluster
interconnect. Is this address correct? [yes]: [Return]

Device 'mc0' is the default physical cluster interconnect interface device
To use this default value, press Return at the prompt.

The physical cluster interconnect interface device is the name of the
physical device(s) which will be used for low level cluster node
communications. Examples of the physical cluster interconnect interface
device name are: tu0, ee0, and nr0.

Enter the physical cluster interconnect device name(s) [mc0]: [Return]
Checking physical cluster interconnect interface device name(s): mc0

You entered 'mc0' as your physical cluster interconnect interface device
name(s). Is this correct? [yes]: [Return]

Each cluster member must have its own registered TruCluster Server license.
The data required to register a new member is typically located on the
License PAK certificate or it may have been previously placed on your system
as a partial or complete license data file. If you are prepared to enter
this license data at this time, clu_add_member can configure the new member
to use this license data. If you do not have the license data at this time
you can enter this data on the new member when it is up and running.

Do you wish to register the TruCluster Server license for this new member
at this time? [yes]: no

You entered the following information:

   Member's hostname:                                  polishham.zk3.dec.com
   Member's ID:                                        2
   Number of votes assigned to this member:            1
   Member's boot disk:                                 dsk12
   Member's virtual cluster interconnect devices:      ics0
   Member's virtual cluster interconnect IP name:      polishham-ics0
   Member's virtual cluster interconnect IP address:   10.0.0.2
   Member's physical cluster interconnect devices:     mc0
   Member's NetRAIN device name:                       Not-Applicable
   Member's physical cluster interconnect IP address:  Not-Applicable
   Member's cluster license:                           Not Entered

If you want to change any of the above information answers 'n' to the
following prompt. You will then be given an opportunity to change your
selections.

Do you want to continue to add this member? [yes]: [Return]

Creating required disk labels.
Creating disk label on member disk : dsk12
Initializing cnx partition on member disk : dsk12h

Creating AdvFS domains:
Creating AdvFS domain 'root2_domain#root' on partition '/dev/disk/dsk12a'.

Creating cluster member-specific files:
Creating new member's root member-specific files
Creating new member's usr member-specific files
Creating new member's var member-specific files
Creating new member's boot member-specific files

Modifying configuration files required for new member operation:
Updating /etc/hosts - adding IP address '10.0.0.2' and hostname 'polishham-ics0'
Updating /etc/rc.config
Updating /etc/sysconfigtab
Updating member-specific /etc/inittab file with 'cms' entry.
Updating /etc/securettys - adding ptys entry
Updating /.rhosts - adding hostname 'polishham-ics0'
Updating /etc/hosts.equiv - adding hostname 'polishham-ics0'
Updating /etc/cfgmgr.auth - adding hostname 'polishham.zk3.dec.com'

Configuring cluster alias.

Configuring Network Time Protocol for new member
Adding interface 'pepicelli-ics0' as an NTP peer to member 'polishham.zk3.dec.com'
Adding interface 'polishham-ics0' as an NTP peer to member 'pepicelli.zk3.dec.com'

Configuring automatic subset configuration and kernel build.

clu_add_member: Initial member 2 configuration completed successfully.

From the newly added member's console, perform the following steps to
complete the newly added member's configuration:

1. Set the console variable 'boot_osflags' to 'A'.

2. Identify the console name of the newly added member's boots device.

   >>> show device

   The newly added member's boot device has the following properties:
       Manufacturer:  DEC
       Model:         HSG80
       Target:        IDENTIFIER=4
       Lun:           UNKNOWN
       Serial Number: SCSI-WWID:01000010:6000-1fe1-0006-3f10-0009-0270-0619-0005

   Note: The SCSI bus number may differ when viewed from different members.

3. Boot the newly added member using genvmunix:

   >>> boot -file genvmunix <new-member-boot-device>

   During this initial boot the newly added member will:

   o Configure each installed subset.

   o Attempt to build and install a new kernel. If the system cannot build
     a kernel, it starts a shell where you can attempt to build a kernel
     manually. If the build succeeds, copy the new kernel to /vmunix. When
     you are finished exit the shell using ^D or 'exit'.

   o The newly added member will attempt to set boot related console
     variables and continue to boot to multi-user mode.

   o After the newly added member boots you should setup your system default
     network interface using the appropriate system management command.
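Assuming the log file is written as the command runs rather than all at once
when it exits, you can also follow a member addition in progress from another
root session, and then confirm afterward that the initial configuration
succeeded. These commands are only a suggested way to monitor the log, not
part of the clu_add_member procedure:

    # tail -f /cluster/admin/clu_add_member.log
    # grep 'configuration completed successfully' /cluster/admin/clu_add_member.log

Press Ctrl/C to stop following the log. The grep command should match the
'clu_add_member: Initial member 2 configuration completed successfully.'
message shown in Example C-2.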
Each time you perform a rolling upgrade, clu_upgrade writes log messages to
/cluster/admin/clu_upgrade.log. When the rolling upgrade is complete,
clu_upgrade moves the log file to the
/cluster/admin/clu_upgrade/history/release_version directory. Example C-3
shows a sample clu_upgrade log file for a rolling upgrade of a cluster from
Version 5.1 to Version 5.1A. (The log is slightly reformatted for
readability.)
Example C-3: Sample clu_upgrade Log File
#############################################################################
clu_upgrade Command: upgrade
Stage:
On Host: pepicelli.zk3.dec.com
Started at: Thu May 17 13:34:23 EST 2001
-----------------------------------------------------------------------------

Retrieving cluster status.
Retrieving upgrade-related system configuration.

This is the cluster upgrade program.
You have indicated that you want to perform the 'setup' stage of the upgrade.

Do you want to continue to upgrade the cluster? [yes]: [Return]

Marking stage 'setup' as 'started'.

What type of upgrade will be performed?

     1) Rolling upgrade using the installupdate command
     2) Rolling patch using the dupatch command
     3) Both a rolling upgrade and a rolling patch
     4) Exit cluster software upgrade

Enter your choice:1

Backing up member-specific data for member: 1 ...

Creating tagged files.
............................................................................\
............................................................................\
............................................................................\
...............................................................
.............................................

The cluster upgrade 'setup' stage has completed successfully.

Reboot all cluster members except member: '1'

Marking stage 'setup' as 'completed'.

The 'setup' stage of the upgrade has completed successfully.
-----------------------------------------------------------------------------
clu_upgrade Command: upgrade
Stage: setup
On Host: pepicelli.zk3.dec.com
Finished at: Thu May 17 15:48:12 EST 2001
#############################################################################
clu_upgrade Command: boot
Stage: preinstall
On Host: polishham.zk3.dec.com
Started at: Thu May 17 16:01:15 EST 2001
-----------------------------------------------------------------------------

Retrieving cluster status.
Retrieving upgrade-related system configuration.

#############################################################################
clu_upgrade Command: boot
Stage: preinstall
On Host: pepicelli.zk3.dec.com
Started at: Thu May 17 16:16:16 EST 2001
-----------------------------------------------------------------------------

Retrieving cluster status.
Retrieving upgrade-related system configuration.

#############################################################################
clu_upgrade Command: upgrade
Stage:
On Host: pepicelli.zk3.dec.com
Started at: Thu May 17 16:21:18 EST 2001
-----------------------------------------------------------------------------

Retrieving cluster status.
Retrieving upgrade-related system configuration.

This is the cluster upgrade program.
You have indicated that you want to perform the 'preinstall' stage of
the upgrade.

Do you want to continue to upgrade the cluster? [yes]: [Return]

Checking tagged files.
............................................................................\
............................................................

Marking stage 'preinstall' as 'started'.

Enter the full pathname of the cluster kit mount point ['???']: /mnt/TruCluster/kit

A cluster kit has been found in the following location:
    /mnt/TruCluster/kit

This kit has the following version information:
    'Tru64 UNIX TruCluster(TM) Server Software X5.1A-8 (Rev 1139) autokit'

Is this the correct cluster kit for the update being performed? [yes]: [Return]

Saving cluster kit '/mnt/TruCluster/kit' to '/var/adm/update/TruClusterKit/'.

Marking stage 'preinstall' as 'completed'.

The cluster upgrade 'preinstall' stage has completed successfully.

On the lead member, perform the following steps before running the
installupdate command:

    # shutdown now

    [NOTE: As stated in the TruCluster Server Version 5.1 Release Notes,
     the correct method for taking a cluster member to single-user mode
     is to halt the member, and then boot it to single-user mode.]

    In single-user mode enter:

    # /sbin/bcheckrc

The 'preinstall' stage of the upgrade has completed successfully.
-----------------------------------------------------------------------------
clu_upgrade Command: upgrade
Stage: preinstall
On Host: pepicelli.zk3.dec.com
Finished at: Fri May 18 08:49:48 EST 2001
#############################################################################
clu_upgrade Command: check
Stage: install
On Host:
Started at: Fri May 18 14:18:58 EST 2001
-----------------------------------------------------------------------------

Retrieving cluster status.
Retrieving upgrade-related system configuration.

Checking install...
The 'install' stage of cluster upgrade is ready to be run.

#############################################################################
clu_upgrade Command: check
Stage: install
On Host:
Started at: Fri May 18 14:21:43 EST 2001
-----------------------------------------------------------------------------

Retrieving cluster status.
Retrieving upgrade-related system configuration.

Checking install...
The 'install' stage of cluster upgrade is ready to be run.

#############################################################################
clu_upgrade Command: clu_upgrade boot
On Host: pepicelli.zk3.dec.com
Invoked at: Fri May 18 17:48:21 EST 2001
-----------------------------------------------------------------------------
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade boot
On Host: pepicelli.zk3.dec.com
Exited at: Fri May 18 17:48:24 EST 2001
#############################################################################
clu_upgrade Command: clu_upgrade upgrade postinstall
On Host: pepicelli.zk3.dec.com
Invoked at: Sun May 20 16:44:10 EST 2001
-----------------------------------------------------------------------------

This is the cluster upgrade program.
You have indicated that you want to perform the 'postinstall' stage of
the upgrade.

Do you want to continue to upgrade the cluster? [yes]: [Return]

Marking stage 'postinstall' as 'started'.
Marking stage 'postinstall' as 'completed'.

The 'postinstall' stage of the upgrade has completed successfully.
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade upgrade postinstall
On Host: pepicelli.zk3.dec.com
Exited at: Sun May 20 16:44:19 EST 2001
#############################################################################
clu_upgrade Command: clu_upgrade upgrade roll
On Host:
Invoked at: Mon May 21 09:41:00 EST 2001
-----------------------------------------------------------------------------

This is the cluster upgrade program.
You have indicated that you want to perform the 'roll' stage of the upgrade.

Do you want to continue to upgrade the cluster? [yes]: [Return]

Backing up member-specific data for member: 2

*** START UPDATE INSTALLATION (Mon May 21 09:42:43 EST 2001) ***

FLAGS:

Checking for installed supplemental hardware support...
Completed check for installed supplemental hardware support
Checking for retired hardware...done.
Initializing new version information (OSF)...done
Initializing new version information (TCR)...done
Initializing the list of member specific files for member2...done

Update Installation has detected the following update installable products
on your system:

    Tru64 UNIX V5.1 Operating System ( Rev 732 )
    Tru64 UNIX TruCluster(TM) Server Software V5.1 (Rev 389)

These products will be updated to the following versions:

    Tru64 UNIX X5.1A-8 Operating System ( Rev 1751 )
    Tru64 UNIX TruCluster(TM) Server Software X5.1A-8 (Rev 1139) autokit

It is recommended that you update your system firmware and perform a
complete system backup before proceeding.

A log of this update installation can be found at /var/adm/smlogs/update.log.

Do you want to continue the Update Installation? (y/n) []: y

Do you want to select optional kernel components? (y/n) [n]: n

Do you want to archive obsolete files? (y/n) [n]: n

USER SETTINGS:
--------------

*** Checking for conflicting software ***

The following software may require reinstallation after the Update
Installation is completed:

    Advanced Printing Software

Do you want to continue the Update Installation? (y/n) [y]: [Return]

*** Determining installed Operating System software ***
Working....Mon May 21 09:46:40 EST 2001

*** Determining installed Tru64 UNIX TruCluster(TM) Server \
Software V5.1 (Rev 389) software ***

*** Determining kernel components ***
Working....Mon May 21 09:48:45 EST 2001

*** Checking for file type conflicts ***

*** Checking for obsolete files ***

*** Checking file system space ***

Update Installation is now ready to begin modifying the files necessary to
reboot the cluster member off of the new OS. Please check the
/var/adm/smlogs/update.log and /var/adm/smlogs/it.log files for errors after
the installation is complete.

Do you want to continue the Update Installation? (y/n) [n]: y

*** Starting configuration merges for Update Install ***

*** Merging new file ./etc/.new..sysconfigtab into existing
    ./etc/../cluster/members/member2/boot_partition/etc/sysconfigtab

Merging /etc/../cluster/members/member2/boot_partition/etc/sysconfigtab
Merge completed successfully.

The critical files needed for reboot have been moved into place. The system
will now reboot with the generic kernel for Compaq Computer Corporation
Tru64 UNIX X5.1A-8 and complete the rolling upgrade for this member (member2).

*** END UPDATE INSTALLATION (Mon May 21 09:59:18 EST 2001) ***

The 'roll' stage has completed successfully.

This member must be rebooted in order to run with the newly installed
software.

Do you want to reboot this member at this time? []: y

You indicated that you want to reboot this member at this time.
Is that correct? [yes]: [Return]

The 'roll' stage of the upgrade has completed successfully.
#############################################################################
clu_upgrade Command: clu_upgrade boot
On Host: polishham.zk3.dec.com
Invoked at: Mon May 21 11:04:05 EST 2001
-----------------------------------------------------------------------------

Marking stage 'roll' as 'completed'.
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade boot
On Host: polishham.zk3.dec.com
Exited at: Mon May 21 11:04:09 EST 2001
#############################################################################
clu_upgrade Command: clu_upgrade upgrade switch
On Host: pepicelli.zk3.dec.com
Invoked at: Mon May 21 11:19:27 EST 2001
-----------------------------------------------------------------------------

This is the cluster upgrade program.
You have indicated that you want to perform the 'switch' stage of the upgrade.

Do you want to continue to upgrade the cluster? [yes]: [Return]

Marking stage 'switch' as 'started'.

Initiating version switch on cluster members
..Successful switch of the version identifiers

Marking stage 'switch' as 'completed'.

The cluster upgrade 'switch' stage has completed successfully.

All cluster members must be rebooted before running the 'clean' command.

The 'switch' stage of the upgrade has completed successfully.
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade upgrade switch
On Host: pepicelli.zk3.dec.com
Exited at: Mon May 21 11:20:02 EST 2001
#############################################################################
clu_upgrade Command: clu_upgrade boot
On Host: polishham.zk3.dec.com
Invoked at: Mon May 21 11:28:28 EST 2001
-----------------------------------------------------------------------------

Marking stage 'switch' as 'completed'.
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade boot
On Host: polishham.zk3.dec.com
Exited at: Mon May 21 11:28:31 EST 2001
#############################################################################
clu_upgrade Command: clu_upgrade boot
On Host: pepicelli.zk3.dec.com
Invoked at: Mon May 21 11:34:20 EST 2001
-----------------------------------------------------------------------------

Marking stage 'switch' as 'completed'.
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade boot
On Host: pepicelli.zk3.dec.com
Exited at: Mon May 21 11:34:24 EST 2001
#############################################################################
clu_upgrade Command: clu_upgrade upgrade clean
On Host: pepicelli.zk3.dec.com
Invoked at: Mon May 21 12:38:54 EST 2001
-----------------------------------------------------------------------------

This is the cluster upgrade program.
You have indicated that you want to perform the 'clean' stage of the upgrade.

Do you want to continue to upgrade the cluster? [yes]: [Return]

.Marking stage 'clean' as 'started'.

Deleting tagged files.
............................................................................\
............................................................................\
...............................................

Removing back-up and kit files
......................................

The Update Administration Utility is typically run after an update
installation to manage the files that are saved during an update
installation.

Do you want to run the Update Administration Utility at this time? [yes]: [Return]

The Update Installation Cleanup utility is used to clean up backup files
created by Update Installation. Update Installation can create two types of
files: .PreUPD and .PreMRG. The .PreUPD files are copies of unprotected
customized system files as they existed prior to running Update
Installation. The .PreMRG files are copies of protected system files as
they existed prior to running Update Installation.

Please make a selection from the following menu.

        Update Installation Cleanup Main Menu
        ---------------------------------------
        c) Unprotected Customized File Administration (.PreUPD)
        p) Pre-Merge File Administration (.PreMRG)
        x) Exit This Utility

        Enter your choice: x

Exiting /usr/sbin/updadmin...

Marking stage 'clean' as 'completed'.

The 'clean' stage of the upgrade has completed successfully.
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade upgrade clean
On Host: pepicelli.zk3.dec.com
Exited at: Mon May 21 13:38:55 EST 2001
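Because clu_upgrade moves its log into the
/cluster/admin/clu_upgrade/history/release_version directory when a rolling
upgrade completes, logs from earlier upgrades are found under that directory
rather than in /cluster/admin. The following commands are only a suggested
way to review upgrade logs; the exact directory name under history/ depends
on the release you upgraded to:

    # ls /cluster/admin/clu_upgrade/history
    # grep "as 'completed'" /cluster/admin/clu_upgrade.log

The first command lists the release-specific directories that hold archived
upgrade logs, and the second prints the 'Marking stage ... as 'completed'.'
lines from the current log, one for each stage that has finished.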