This chapter tells you how to prepare for installation, where to get the NHD-6 kit, and how to install it on your system. It includes the following topics:
Preparing to install NHD (Section 3.1)
Getting the NHD kit (Section 3.2)
Installing the NHD kit (Section 3.3)
3.1 Preparing for NHD-6 Installation
Follow these steps before you install NHD-6:
If your system already is running Version 5.1A or 5.1B of the operating system, perform a full backup of your system.
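One way to take a full backup is with the vdump utility. The following is only a minimal sketch; it assumes AdvFS or UFS file systems and a local tape drive at /dev/tape/tape0_d0, so substitute your own backup device and repeat the command for each file system you need to preserve:
# vdump -0 -u -f /dev/tape/tape0_d0 /
# vdump -0 -u -f /dev/tape/tape0_d0 /usr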
Get the NHD-6 kit as described in Section 3.2.
Determine the NHD kit to install.
The installable Version 5.1A kit is located on the distribution media at /520/usr/sys/hardware/base.kit.
The installable Version 5.1B kit is located on the distribution media at /540/usr/sys/hardware/base.kit.
If necessary, create an NHD-6 kit CD image as described in Section 3.2.3.
If you are installing from a RIS server, perform the following tasks:
Set up the RIS area as described in Section 3.2.4.1.
Register your system as a RIS client as described in Section 3.2.4.2.
See the Sharing Software on a Local Area Network manual for more information about RIS.
If your system already is running a version of the operating system, shut down your system.
Upgrade your system to the latest version of firmware for your processor.
Determine the console name of your system disk and any devices you will use for software distributions, such as the NHD-6 kit, the Tru64 UNIX Operating System distribution, and the Associated Products, Volume 2, distribution for TruCluster software. This could include the following:
Any CD-ROM drives where you are mounting CD-ROMs
Any spare disk used to create a CD image
Your network interface adapter if you are installing from a RIS server
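If you are not sure of the console device names, you can list the devices that the console sees. This is a quick check using the SRM console show device command; the names reported depend on your hardware (CD-ROM drives typically appear as dq* or dk* devices, and network adapters as ew* or ei* devices):
>>> show device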
At the console prompt, set the value of the bootdef_dev variable to null:
>>> set bootdef_dev ""
At the console prompt, set the value of the auto_action variable to halt:
>>> set auto_action halt
At the console prompt, set the value of the boot_osflags variable to a:
>>> set boot_osflags a
Power down your system.
Review your hardware documentation and install your new hardware.
Note
If you add supported hardware after NHD-6 is already installed on your system, follow the instructions in Section 3.3.5 to include support for the new hardware in your custom kernel on either the single system or the cluster member where you install the new hardware.
Power up your system.
Install NHD-6 according to the instructions in Section 3.3.
Caution
Before you install NHD-6 onto a system that includes SA5300A series RAID controllers, see the release notes in Section 2.1.8. Failure to follow these instructions can cause your NHD-6 installation to fail.
3.2 Getting the NHD-6 Kit
This section tells you how to acquire the NHD-6 kit and what to do before you install it.
You can get the NHD-6 kit from two sources:
Order it on CD-ROM from your Tru64 UNIX sales or service representative (Section 3.2.1).
Download it from the World Wide Web (Section 3.2.2).
If you download the NHD-6 kit, you must create a CD image on disk (Section 3.2.3).
If you are going to install NHD-6 from a RIS server, you must prepare for RIS installation (Section 3.2.4).
3.2.1 Ordering the NHD-6 Kit on CD-ROM
Contact your Tru64 UNIX sales or service representative at 1-800-888-0220.
Order part number QA-MT4AX-H8 to get the NHD-6 kit on CD-ROM.
3.2.2 Downloading the NHD-6 Kit
You can download the NHD-6 kit from the World Wide Web at one of the following URLs:
http://ftp.support.compaq.com/public/unix/v5.1a/nhd/6.0/
http://ftp.support.compaq.com/public/unix/v5.1b/nhd/6.0/
Each of these directories includes the following files:
nhd6.CHKSUM - NHD-6 kit checksum information
nhd6.README - NHD-6 customer letter
nhd6.tar.gz - Compressed and archived NHD-6 kit
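After the download completes, confirm that the kit transferred intact by comparing a checksum of nhd6.tar.gz against the values recorded in nhd6.CHKSUM. The following is a sketch only; it assumes you downloaded the kit to /usr/tmp, and the checksum program you need (for example, sum or cksum) depends on how the values in nhd6.CHKSUM were generated:
# cd /usr/tmp
# cat nhd6.CHKSUM
# sum nhd6.tar.gz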
3.2.3 Creating an NHD-6 Kit CD Image
These instructions assume that you have downloaded the NHD-6 kit to /usr/tmp.
Before you create a CD image, you must have a spare disk with at least 750 MB of free space to use for the CD image.
Note
This procedure creates a CD image of the NHD-6 kit distribution for installation purposes. It does not allow you to burn a CD-ROM from this image.
Follow these steps to create the NHD-6 CD image on disk:
Log in as root.
Create a new UFS file system on the spare disk. For example:
# newfs /dev/rdisk/dsk2c
You see output similar to this:
Warning: /dev/rdisk/dsk2c and overlapping partition(s) are marked in use. If you continue with the operation you can possibly destroy existing data. CONTINUE? [y/n]
Enter y to continue. You see output similar to this:
/dev/rdisk/dsk2c: 8380080 sectors in 3708 cylinders of 20 tracks, \ 113 sectors 4091.8MB in 232 cyl groups (16 c/g, 17.66MB/g, 4288 i/g) super-block backups (for fsck -b #) at: 32, 36320, 72608, 108896, 145184, 181472, 217760, 252048, 290336, 326624, 362912, 399200, 435488, 471776, 508064, 544352, 580640, 616928, 653216, 689504, 725792, 762080, 798368, 834656, 870944, 907232, 943520, 979808, 1016096, 1052384, 1088672, 1124960, 1157152, 1193440, 1229728, 1266016, 1302304, 1338592, 1374880, 1411168, 1447456, 1483744, 1520032, 1556320, 1592608, 1628896, 1665184, 1701472, 1737760, 1774048, 1810336, 1846624, 1882912, 1919200, 1955488, 1991776, 2028064, 2064352, 2100640, 2136928, 2173216, 2209504, 2245792, 2282080, 2314272, 2350560, 2386848, 2423136, 2459424, 2495712, 2532000, 2568288, 2604576, 2640864, 2677152, 2713440, 2749728, 2786016, 2822304, 2858592, 2894880, 2931168, 2967456, 3003744, 3040032, 3076320, 3112608, 3148896, 3185184, 3221472, 3257760, 3294048, 3330336, 3366624, 3402912, 3439200, 3471392, 3507680, 3543968, 3580256, 3616544, 3652832, 3689120, 3725208, 3761696, 3797984, 3834272, 3870560, 3906848, 3943136, 3979424, 4015712, 4052000, 4088288, 4124576, 4160864, 4197152, 4233440, 4269728, 4306016, 4342304, 4378592, 4414880, 4451168, 4487456, 4523744, 4560032, 4596320, 4628512, 4664800, 4701088, 4737376, 4773664, 4809952, 4846240, 4882528, 4918816, 4955104, 4991392, 5027680, 5063968, 5100256, 5136544, 5172832, 5209120, 5245208, 5281696, 5317984, 5354272, 5390560, 5426848, 5463136, 5499424, 5535712, 5572000, 5608288, 5644576, 5680864, 5717152, 5753440, 5785632, 5821920, 5858208, 5894496, 5930784, 5967072, 6003360, 6039648, 6075936, 6112224, 6148512, 6184800, 6221088, 6257376, 6293664, 6329952, 6366240, 6402528, 6438816, 6475104, 6511392, 6547680, 6583968, 6620256, 6656544, 6692832, 6729120, 6765208, 6801696, 6837984, 6874272, 6910560, 6942752, 6979040, 7015328, 7051616, 7087904, 7124192, 7160480, 7196768, 7233056, 7269344, 7305632, 7341920, 7378208, 7414496, 7450784, 7487072, 7523360, 7559648, 7595936, 7632224, 7668512, 7704800, 7741088, 7777376, 7813664, 7849952, 7886240, 7922528, 7958816, 7995104, 8031392, 8067680, 8099872, 8136160, 8172448, 8208736, 8245024, 8281312, 8317600, 8353888,
Mount the spare disk where you will create the CD image. For example:
# mount /dev/disk/dsk2c /mnt
Change directory to the new file system:
# cd /mnt
Enter the following command to extract the NHD-6 kit into the CD image:
# gzcat /usr/tmp/nhd6.tar.gz | tar xvf -
You see a list of files as they are extracted.
Return to the root directory and unmount the CD image:
# cd /
# umount /mnt
You have created an NHD-6 CD image on the disk at /dev/disk/dsk2c.
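To confirm that the image is usable, you can remount it and check that the installable kit is present. This is an optional check, using the same example disk; use the /520 path for a Version 5.1A image instead of /540:
# mount /dev/disk/dsk2c /mnt
# ls /mnt/540/usr/sys/hardware/base.kit
# umount /mnt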
3.2.4 Preparing for RIS Installation
If you are installing NHD-6 from a RIS server, you first must do the following:
Set up the RIS area on the RIS server (Section 3.2.4.1).
Register the RIS client (Section 3.2.4.2).
Note
Although the examples in this section show the NHD-6 distribution on CD-ROM, you can use a CD image created from the downloaded NHD-6 kit, as described in Section 3.2.2 and Section 3.2.3.
See the Sharing Software on a Local Area Network manual for more information about RIS. The Troubleshooting RIS chapter is especially helpful if you encounter difficulties.
3.2.4.1 Setting Up the RIS Area
Follow these steps to create a RIS area for NHD-6 on your RIS server:
Use the ris utility to install Version 5.1A or 5.1B of the base operating system into a new RIS area.
Caution
Use the standard method to create the RIS area, not the bootlink method.
Extract the base operating system; do not use symbolic links.
Optionally, you may install TruCluster Server and Worldwide Language Support in the same RIS area.
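The ris utility is menu driven. As a minimal sketch, start it as root (the path /usr/sbin/ris is typical but may differ on your system) and follow the menus for installing software into a new area, as described in the Sharing Software on a Local Area Network manual:
# /usr/sbin/ris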
Load the NHD-6 CD-ROM into the RIS server's CD-ROM drive.
Mount the NHD-6 distribution. For example:
# mount /dev/disk/cdrom0a /mnt
Run the update_ris script to install the NHD-6 kit into the RIS area. For example:
# /mnt/tools/update_ris
You see messages similar to the following:
Please select one of the following products to add NHD support to 1) /usr/var/adm/ris/ris9.alpha 'Tru64 UNIX V5.1x Operating System (Rev nnnn)' 2) /usr/var/adm/ris/ris6.alpha 'Tru64 UNIX V5.1x Operating System ( Rev nnnn )' Enter your selection or press <return> to quit:
Note
The RIS areas you see depend upon your RIS server.
In this example, enter 2 and press Return.
You see messages similar to the following:
You are updating ris area /usr/var/adm/ris/ris6.alpha for: V5.1x Operating System ( Rev 1885 ) with NHD support. Is this correct? (y/n):
In this example, enter y and press Return.
You see messages similar to the following:
'Tru64 UNIX New Hardware for V5.1x' 1 'Tru64 UNIX New Hardware for V5.1x' Building new network bootable kernel /usr/var/adm/ris/ris6.alpha/kit has been updated with NHD-6 support
3.2.4.2 Registering the RIS Client
See the Sharing Software on a Local Area Network manual for instructions on how to register RIS clients for a RIS area.
Note
When you register a cluster as a RIS client, remember to register both the cluster alias and the lead cluster member. During client registration, you see the following prompt:
Is this client a cluster alias? (y/n) [n]:
When you register a cluster alias, enter y and press Return. When you register the lead cluster member, press Return. When prompted, enter the hardware address.
3.3 Installing the NHD-6 Kit
This section tells you how to install the NHD-6 kit on a system in one of the following configurations:
Single system already running Version 5.1A or 5.1B (Section 3.3.1)
Single system during Full Installation of Version 5.1A or 5.1B:
Installing from a CD-ROM or CD image (Section 3.3.2.1)
Installing from RIS (Section 3.3.2.2)
Cluster already running Version 5.1A or 5.1B (Section 3.3.3)
Cluster during Full Installation of Version 5.1A or 5.1B (Section 3.3.4)
Note
You can install NHD-6 from CD-ROM or a CD image that you create from the downloaded kit.
If you are installing NHD-6 during a Full Installation, you can install from a RIS area.
3.3.1 Installing on a Single System Running Version 5.1A or 5.1B
Before you start this procedure, you must have the NHD-6 distribution. See Section 3.2 for information about how to get the NHD-6 kit and, if necessary, how to create an NHD-6 kit CD image.
Note
You cannot use RIS to install NHD-6 with this method.
You cannot use this method to install NHD-6 on a DS25 system. See Section 3.3.2 for instructions on installing NHD-6 on a single system during a Full Installation of Version 5.1A or 5.1B.
Caution
Before you install NHD-6 onto a system that includes SA5300A series RAID controllers, see the release notes in Section 2.1.8. Failure to follow these instructions can cause your NHD-6 installation to fail.
Follow these steps to install NHD-6 on a single system that already is running Version 5.1A or 5.1B of the operating system:
Log in as root.
Mount the NHD-6 kit. For example:
# mount /dev/disk/cdrom0a /mnt
Change directory to the mounted NHD-6 kit. For example:
# cd /mnt
Run the nhd_install script:
# ./nhd_install
You see output similar to the following:
Using kit at /mnt/nnn Checking file system space required to install specified subsets: File system space checked OK. 2 subsets will be installed. Loading subset 1 of 1 ... New Hardware Base System Support V6.0 Copying from /mnt/nnn/kit (disk) Working....Thu Jun 20 13:59:55 EDT 2002 Verifying 1 of 1 subsets installed successfully. Configuring "New Hardware Base System Support V6.0" (OSHHWBASEnnn) Rebuilding the /GENERIC file to include the kernel modules for the new hardware. This may take a few minutes. Successful setting of the new version identifier Successful switch of the version identifiers
At the shell prompt, shut down the system:
# shutdown -h now
At the console prompt, boot the generic kernel. For example:
>>> boot -fi genvmunix dqb0
After the system boots, log in as root.
At the shell prompt, use the doconfig utility to rebuild the custom kernel:
# doconfig
You see messages similar to the following:
*** KERNEL CONFIGURATION AND BUILD PROCEDURE *** Enter a name for the kernel configuration file. [SYSNAME]: A configuration file with the name 'SYSNAME' already exists. Do you want to replace it? (y/n) [n]:
Enter y and press Return. You see messages similar to the following:
Saving /sys/conf/SYSNAME as /sys/conf/SYSNAME.bck *** KERNEL OPTION SELECTION *** Selection Kernel Option -------------------------------------------------------------- 1 System V Devices 2 NTP V3 Kernel Phase Lock Loop (NTP_TIME) 3 Kernel Breakpoint Debugger (KDEBUG) 4 Packetfilter driver (PACKETFILTER) 5 IP-in-IP Tunneling (IPTUNNEL) 6 IP Version 6 (IPV6) 7 Point-to-Point Protocol (PPP) 8 STREAMS pckt module (PCKT) 9 X/Open Transport Interface (XTISO, TIMOD, TIRDWR) 10 Digital Versatile Disk File System (DVDFS) 11 ISO 9660 Compact Disc File System (CDFS) 12 Audit Subsystem 13 ATM UNI 3.0/3.1 ILMI (ATMILMI3X) 14 IP Switching over ATM (ATMIFMP) 15 LAN Emulation over ATM (LANE) 16 Classical IP over ATM (ATMIP) 17 ATM UNI 3.0/3.1 Signalling for SVCs (UNI3X) 18 Asynchronous Transfer Mode (ATM) 19 All of the above 20 None of the above 21 Help 22 Display all options again -------------------------------------------------------------- Enter your choices. Choices (for example, 1 2 4-6) [20]:
Select the kernel options you want built into your new custom kernel. This should include the same options you were already running on your system. For example, if you want to select all listed kernel options, enter 19 and press Return.
You see messages similar to the following:
You selected the following kernel options: System V Devices NTP V3 Kernel Phase Lock Loop (NTP_TIME) Kernel Breakpoint Debugger (KDEBUG) Packetfilter driver (PACKETFILTER) IP-in-IP Tunneling (IPTUNNEL) IP Version 6 (IPV6) Point-to-Point Protocol (PPP) STREAMS pckt module (PCKT) X/Open Transport Interface (XTISO, TIMOD, TIRDWR) Digital Versatile Disk File System (DVDFS) ISO 9660 Compact Disc File System (CDFS) Audit Subsystem ATM UNI 3.0/3.1 ILMI (ATMILMI3X) IP Switching over ATM (ATMIFMP) LAN Emulation over ATM (LANE) Classical IP over ATM (ATMIP) ATM UNI 3.0/3.1 Signalling for SVCs (UNI3X) Asynchronous Transfer Mode (ATM) Is that correct? (y/n) [y]:
Enter y to confirm your selection and press Return.
You see the following prompt:
Do you want to edit the configuration file? (y/n) [n]:
Enter n and press Return. You see messages similar to the following:
*** PERFORMING KERNEL BUILD *** A log file listing special device files is located in /dev/MAKEDEV.log Working....Thu Jun 20 14:59:36 EDT 2002 Working....Thu Jun 20 15:01:53 EDT 2002 Working....Thu Jun 20 15:05:32 EDT 2002 The new kernel is /sys/SYSNAME/vmunix
Copy the new custom kernel to /vmunix. For example:
# cp /sys/SYSNAME/vmunix /vmunix
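If you want an easy way back to the kernel you were running, you can preserve the existing /vmunix before copying the new one. This is optional, and the name vmunix.pre-nhd6 is only an example:
# mv /vmunix /vmunix.pre-nhd6
# cp /sys/SYSNAME/vmunix /vmunix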
Shut down the system:
# shutdown -h now
At the console prompt, boot the system with the new custom kernel. For example:
>>> boot -fi "vmunix" dqb0
Caution
When you install NHD-6, you also must install the most current Version 5.1A or 5.1B patch kit before you return your system to production. It does not matter whether you install NHD-6 or the release-appropriate patch kit first.
Tru64 UNIX Versions 5.1A and 5.1B patch kits are available on the World Wide Web at the following URLs:
http://ftp.support.compaq.com/public/unix/v5.1a/
http://ftp.support.compaq.com/public/unix/v5.1b/
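Patch kits are installed with the dupatch utility. The exact layout and pathnames vary from kit to kit, so follow the documentation that ships with the patch kit; as a rough sketch, after extracting the kit you run dupatch from the kit's top-level directory (the directory name patch_kit here is only an example):
# cd /usr/tmp/patch_kit
# ./dupatch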
3.3.2 Installing on a Single System During Full Installation of Version 5.1A or 5.1B
You can install NHD-6 on a single system during a Full Installation of the operating system from either of the following sources:
NHD-6 kit on CD-ROM or on a CD image that you created from the downloaded kit (Section 3.3.2.1)
NHD-6 kit in a RIS area along with the base operating system. (Section 3.3.2.2)
See Section 3.2 for information on getting the NHD kit, creating a CD image, and setting up a RIS area.
Caution
Before you install NHD-6 onto a system that includes SA5300A series RAID controllers, see the release notes in Section 2.1.8. Failure to follow these instructions can cause your NHD-6 installation to fail.
3.3.2.1 Installing from a CD-ROM or CD Image
Before you start this procedure, see the Installation Guide for information about the Full Installation process. You must have both the NHD-6 kit and the Tru64 UNIX Operating System distribution. See Section 3.2 for information about how to get the NHD-6 kit and, if necessary, create an NHD-6 kit CD image.
Follow these steps to install NHD-6 on a single system during a Full Installation:
If your system already is running a version of the operating system, log in as root and shut down the system.
What you do next depends upon the media you are using:
If you are using a single CD-ROM drive, load the Version 5.1A or 5.1B Tru64 UNIX Operating System CD-ROM into your CD-ROM drive.
If you are using multiple CD-ROM drives, load the Version 5.1A or 5.1B Tru64 UNIX Operating System CD-ROM into one CD-ROM drive and the New Hardware Delivery CD-ROM into another CD-ROM drive.
If you are using one or more CD images, make sure that the disks containing the CD images are on line and available.
If you are using a combination of CD-ROMs and CD images, make sure that all distribution media are on line and available.
At the console prompt, boot the generic kernel. For example:
>>> boot -fl fa -fi "GENERIC" dqb0
You see messages similar to the following:
(boot dqb0.0.1.16.0 -file GENERIC -flags fa) block 0 of dqb0.0.1.16.0 is a valid boot block reading 15 blocks from dqb0.0.1.16.0 bootstrap code read in base = 200000, image_start = 0, image_bytes = 1e00 initializing HWRPB at 2000 initializing page table at 3ff48000 initializing machine state setting affinity to the primary CPU jumping to bootstrap code UNIX boot - Wednesday, August 01, 2001 Loading GENERIC ... Loading at fffffc0000250000 Enter all Foreign Hardware Kit Names. Device Names are entered as console names (e.g. dkb100). Enter Device Name, or <return> if done:
Note
The message about foreign hardware kit names marks the start of the phase where you specify kits and their locations. Do not enter a kit name at this point; as the prompt indicates, enter the console device name of the device where the NHD-6 kit is located.
Enter the console device name of the device where the NHD-6 kit is located:
If you are installing from a CD-ROM, enter the console device name of the CD-ROM drive, for example, dqb0.
If you are installing from a CD image on disk, enter the console device name of that disk, for example, dka400.
Press Return. You see a prompt similar to the following:
Enter Hardware Kit Name, or <return> if done with dqb0:
Enter the NHD-6 kit name:
/nnn/usr/sys/hardware/base.kit
You see a prompt similar to the following:
Insert media for kit 'dqb0:/nnn/usr/sys/hardware/base.kit' hit <return> when ready, or 'q' to quit this kit:
What you do next depends upon the media you are using:
If you are installing from a single CD-ROM drive, remove the Tru64 UNIX Operating System CD-ROM and load the New Hardware Delivery CD-ROM.
If you are installing from multiple CD-ROM drives, CD images, or both, do nothing.
Press Return. You see the following prompt:
Enter Hardware Kit Name, or <return> if done with dqb0:
Press Return. You see the following prompt:
Enter Device Name, or <return> if done:
Press Return, and the boot process verifies the NHD-6 kit. You see the following prompt:
Insert boot media, hit <return> when ready:
What you do next depends upon the media you are using:
If you are installing from a single CD-ROM drive, remove the New Hardware Delivery CD-ROM and load the Tru64 UNIX Operating System CD-ROM.
If you are installing from multiple CD-ROM drives, CD images, or both, do nothing.
Press Return. As the base operating system kernel modules are linked, you see messages similar to the following:
Linking nnn objects: nnn nnn nnn nnn nnn nnn nnn nnn nnn nnn nnn nnn nnn nnn ... ... 105 104 103 102 101 100 99 98 97 96 95 94 93 92 91 90 89 88 87 86 85 84 83 82 81 80 79 78 77 76 75 74 73 72 71 70 69 68 67 66 65 64 63 62 61 60 59 58 57 56 55 54 53 52 51 50 49 48 47 46 45 44 43 42 41 40 39 38 37 36 35 34 33 32 31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9
Note
You may see a different number of objects linked, as the NHD-6 kit is updated several times before the distribution is finalized.
You see a prompt similar to the following:
Insert media for kit 'dqb0:/nnn/usr/sys/hardware/base.kit' hit <return> when ready or 'q' to quit:
What you do next depends upon the media you are using:
If you are installing from a single CD-ROM drive, remove the Tru64 UNIX Operating System CD-ROM and load the New Hardware Delivery CD-ROM.
If you are installing from multiple CD-ROM drives, CD images, or both, do nothing.
Press Return. As the NHD-6 kit kernel modules are linked, you see messages similar to the following:
8 7 6 5 4 3 2 1
Note
You may see a different number of objects linked, as the NHD-6 kit is updated several times before the distribution is finalized.
You see a prompt similar to the following:
Insert boot media, hit <return> when ready:
What you do next depends upon the media you are using:
If you are installing from a single CD-ROM drive, remove the New Hardware Delivery CD-ROM and load the Tru64 UNIX Operating System CD-ROM.
If you are installing from multiple CD-ROM drives, CD images, or both, do nothing.
Press Return. You see the operating system boot and the Full Installation user interface start.
Enter host information, select subsets and target disks, and continue the Full Installation process as described in the Installation Guide.
After the software subsets are installed, the Full Installation process configures the system and reboots it. During this reboot, you see messages similar to the following:
UNIX boot - Wednesday, August 01, 2001 Loading /GENERIC ... Loading at fffffc0000250000 Enter all Foreign Hardware Kit Names. Device Names are entered as console names (e.g. dkb100). Enter Device Name, or <return> if done:
Enter the console device name for the NHD-6 kit, for example: dqb0.
You see the following prompt:
Enter Hardware Kit Name, or <return> if done with dqb0:
Enter the NHD-6 kit name:
/nnn/usr/sys/hardware/base.kit
You see a prompt similar to the following:
Insert media for kit 'dqb0:/nnn/usr/sys/hardware/base.kit' hit <return> when ready, or 'q' to quit this kit:
What you do next depends upon the media you are using:
If you are installing from a single CD-ROM drive, remove the Tru64 UNIX Operating System CD-ROM and load the New Hardware Delivery CD-ROM.
If you are installing from multiple CD-ROM drives, CD images, or both, do nothing.
Press Return. You see a prompt similar to the following:
Enter Hardware Kit Name, or <return> if done with dqb0:
Because there are no other kits included in NHD-6, press Return. You see the following prompt:
Enter Device Name, or <return> if done:
Again, because there are no other kits to install, press Return. You see the following prompt:
Insert boot media, hit <return> when ready:
Note
Although this prompt asks you to insert the boot media, do not insert the Tru64 UNIX Operating System CD-ROM. At this point in the installation process you are booting from the system disk, and no media change is necessary.
Press Return. You see a prompt similar to the following:
Linking nnn objects: nnn nnn nnn nnn nnn nnn nnn nnn nnn nnn nnn nnn nnn nnn ... ... 105 104 103 102 101 100 99 98 97 96 95 94 93 92 91 90 89 88 87 86 85 84 83 82 81 80 79 78 77 76 75 74 73 72 71 70 69 68 67 66 65 64 63 62 61 60 59 58 57 56 55 54 53 52 51 50 49 48 47 46 45 44 43 42 41 40 39 38 37 36 35 34 33 32 31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 Insert media for kit 'dka400:/nnn/usr/sys/hardware/base.kit' hit <return> when ready or 'q' to quit:
If you removed the NHD-6 kit media, replace it. When the NHD-6 kit media is in place, press Return.
You see messages similar to the following:
8 7 6 5 4 3 2 1 Insert boot media, hit <return> when ready:
Note
Although this prompt asks you to insert the boot media, do not insert the Tru64 UNIX Operating System CD-ROM. At this point in the installation process you are booting from the system disk, and no media change is necessary.
Press Return. You see the standard subset installation and configuration messages.
When the hardware kit is loaded and configured, you see messages similar to the following:
*** START LOAD HARDWARE KIT (Thu Jun 20 16:07:30 EDT 2002) *** Validating distribution media... The Hardware Support product has been successfully located. Checking file system space required to install specified subsets: File system space checked OK. 1 subsets will be installed. Loading subset 1 of 1 ... New Hardware Base System Support V6.0 Copying from /instkit1//nnn/kit (disk) Verifying 1 of 1 subsets installed successfully. *** SYSTEM CONFIGURATION *** Configuring "New Hardware Base System Support V6.0" (OSHHWBASEnnn) Rebuilding the /GENERIC file to include the kernel modules for the new hardware. This may take a few minutes. Rebuilding the /GENERIC file to include the kernel modules for the new hardware. This may take a few minutes. *** END LOAD HARDWARE KIT (Thu Jun 20 16:09:35 EDT 2002) ***
Note
If you are installing the Worldwide Language Support (WLS) subsets, you are prompted to insert the Associated Products, Volume 1 CD-ROM. See the Tru64 UNIX Installation Guide for information about installing WLS subsets.
You see messages similar to the following as the kernel is rebuilt:
The system name assigned to your machine is 'sysname'. *** KERNEL CONFIGURATION AND BUILD PROCEDURE *** The system will now automatically build a kernel with all options and then reboot. This can take up to 15 minutes, depending on the processor type. When the login prompt appears after the system has rebooted, use 'root' as the login name and the SUPERUSER password that was entered during this procedure, to log into the system. *** PERFORMING KERNEL BUILD *** Working....Thu Jun 20 16:13:24 EDT 2002 The new version ID has been successfully set on this system. The entire set of new functionality has been enabled. This message is contained in the file /var/adm/smlogs/it.log for future reference.syncing disks... done rebooting.... (transferring to monitor)
The system reboots with the custom kernel, and you see the login prompt.
Log in as root and configure your system from the System Setup Checklist. See the System Setup Checklist online help for more information.
3.3.2.2 Installing from RIS
Before you start this procedure, see the Installation Guide for information about the Full Installation process. You must have both the NHD-6 kit and the Base Operating System distribution. See Section 3.2 for information about how to get the NHD-6 kit and how to prepare for RIS installation.
If your system already is running a version of the operating system, log in as root and shut down the system.
At the console prompt, boot from the RIS server. For example:
>>> boot ewa0
You see the operating system boot and the Full Installation user interface start.
Enter host information, select subsets and target disks, and continue the Full Installation process as described in the Installation Guide.
The following list describes differences you may see when you install NHD-6 from a RIS server:
After the base operating system subsets are installed, you see the New Hardware Base System Support V6.0 subset installed from the RIS server.
During system configuration, you see messages similar to the following as NHD-6 is configured and the generic kernel is rebuilt:
Configuring "New Hardware Base System Support V6.0" (OSHHWBASEnnn) Rebuilding the /GENERIC file to include the kernel modules for the new hardware. This may take a few minutes.
You see messages similar to the following as the kernel is rebuilt before the final reboot:
The system name assigned to your machine is 'sysname'. *** KERNEL CONFIGURATION AND BUILD PROCEDURE *** The system will now automatically build a kernel with all options and then reboot. This can take up to 15 minutes, depending on the processor type. When the login prompt appears after the system has rebooted, use 'root' as the login name and the SUPERUSER password that was entered during this procedure, to log into the system. *** PERFORMING KERNEL BUILD *** Working....Thu Jun 20 14:06:34 EDT 2002 The new version ID has been successfully set on this system. The entire set of new functionality has been enabled. This message is contained in the file /var/adm/smlogs/it.log for future reference.syncing disks... done rebooting.... (transferring to monitor)
The system reboots with the custom kernel, and you see the login prompt.
Log in as root and configure your system from the System Setup Checklist. See the System Setup Checklist online help for more information.
Caution
When you install NHD-6, you also must install the most current Version 5.1A or 5.1B patch kit before you return your system to production. It does not matter whether you install NHD-6 or the release-appropriate patch kit first.
Tru64 UNIX Versions 5.1A and 5.1B patch kits are available on the World Wide Web at the following URLs:
http://ftp.support.compaq.com/public/unix/v5.1a/
http://ftp.support.compaq.com/public/unix/v5.1b/
3.3.3 Installing on a Cluster Running Version 5.1A or 5.1B
Before you install NHD-6 on an existing cluster, see the Rolling Upgrade chapter in the TruCluster Server Cluster Installation manual. You must have the NHD-6 kit distribution, the Tru64 UNIX Operating System CD-ROM, and the Associated Products, Volume 2, CD-ROM that includes the TruCluster Server software. See Section 3.2 for information about how to get the NHD-6 kit and, if necessary, create an NHD-6 kit CD image or prepare for RIS installation.
Caution
Before you install NHD-6 onto a system that includes SA5300A series RAID controllers, see the release notes in Section 2.1.8. Failure to follow these instructions can cause your NHD-6 installation to fail.
Perform a Rolling Upgrade as described in the following sections to install NHD-6 on an existing cluster. See the clu_upgrade Quick Reference Best Practice and the Rolling Upgrade chapter in the TruCluster Server Cluster Installation manual for more information.
Figure 3-1 shows a simplified flow chart of the tasks and stages that are part of an NHD Rolling Upgrade.
Figure 3-1: NHD Rolling Upgrade
3.3.3.1 Preparation Stage
Perform the tasks in the Rolling Upgrade Preparation Stage. See the Rolling Upgrade chapter in the TruCluster Server Cluster Installation manual for more information.
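Before you begin, it can also help to confirm that all cluster members are up and that the cluster has quorum. One quick check, which is not a substitute for the preparation tasks in the manual, is the clu_get_info command:
# clu_get_info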
3.3.3.2 Setup Stage
Perform the following steps in the Setup Stage:
Use the clu_upgrade command to start the Setup Stage. For example, if the lead member has member ID 1:
# clu_upgrade -v setup 1
You see the following messages:
Retrieving cluster upgrade status. This is the cluster upgrade program. You have indicated that you want to perform the 'setup' stage of the upgrade. Do you want to continue to upgrade the cluster? [yes]:
Press Return. You see the following messages:
What type of rolling upgrade will be performed? Selection Type of Upgrade ---------------------------------------------------------------------- 1 An upgrade using the installupdate command 2 A patch using the dupatch command 3 A new hardware delivery using the nhd_install command 4 All of the above 5 None of the above 6 Help 7 Display all options again ---------------------------------------------------------------------- Enter your Choices (for example, 1 2 2-3):
Enter 3 and press Return. You see the following messages:
You selected the following rolling upgrade options: 3 Is that correct? (y/n) [y]:
Enter y and press Return. You see the following messages:
Enter the full pathname of the nhd kit mount point ['???']:
Enter the NHD kit mount point, for example: /mnt, and press Return.
You see the following messages:
A nhd kit has been found in the following location: /mnt This kit has the following version information: 'Tru64 UNIX New Hardware for V5.1x' Is this the correct nhd kit for the update being performed? [yes]:
Enter yes and press Return. You see the following messages:
Checking inventory and available disk space. Marking stage 'setup' as 'started'. Copying NHD kit '/mnt' to '/var/adm/update/NHDKit/'. nhd_install -copy nnn /var/adm/update/NHDKit/ Creating tagged files. ...... The cluster upgrade 'setup' stage has completed successfully. Reboot all cluster members except member: '1' Marking stage 'setup' as 'completed'. The 'setup' stage of the upgrade has completed successfully.
Note
You may see the following message during this step:
clubase: Entry not found in /cluster/admin/tmp/stanza.stdin.530756
This is a known error and can be ignored.
Reboot all your cluster members except the lead member. See the Rolling Upgrade chapter in the TruCluster Server Cluster Installation manual for more information.
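For example, log in as root on each member other than the lead member and reboot it:
# shutdown -r now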
3.3.3.3 Preinstall Stage
Perform the following steps in the Preinstall Stage:
Use the clu_upgrade command to start the Preinstall Stage:
# clu_upgrade -v preinstall
You see the following messages:
Retrieving cluster upgrade status. This is the cluster upgrade program. You have indicated that you want to perform the 'preinstall' stage of the upgrade. Do you want to continue to upgrade the cluster? [yes]:
Enter yes and press Return. You see the following messages:
clu_upgrade has previously created the required tagged files and would normally check and repair any tagged files which may have been modified since they where created. If you feel that the tagged files have not changed since they where created you may bypass these checks and continue with the rolling upgrade. Do you wish to skip tag file checking? [no]:
If the Preinstall Stage is performed immediately after the Setup Stage, you can skip tagged file checking. If time has elapsed between the Setup Stage and Preinstall Stage, you may want to check the tagged files.
The prompt asks if you want to skip tagged file checking.
If you do want to check tagged files, enter no at the prompt and press Return. You see the following message, followed by a progress indicator:
Checking tagged files. ...................................................................
If you want to skip tagged file checking, enter yes and press Return.
In either case, you then see the following messages:
Marking stage 'preinstall' as 'started'. Backing up member-specific data for member: 1 Marking stage 'preinstall' as 'completed'. The cluster upgrade 'preinstall' stage has completed successfully. You can now run the nhd_install command on the lead member. The 'preinstall' stage of the upgrade has completed successfully.
Note
You may see the following message during this step:
. find: bad starting directory .
This is a known error and can be ignored.
3.3.3.4 Install Stage
Perform the following steps in the Install Stage:
Make sure that the NHD-6 distribution is still mounted.
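You can check with the mount command and, if necessary, remount the kit. This sketch assumes the same example CD-ROM device and mount point used earlier:
# mount | grep /mnt
# mount /dev/disk/cdrom0a /mnt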
Change directory to the mounted NHD-6 kit. For example:
# cd /mnt
Use the nhd_install script to install the NHD-6 kit on the lead member:
# ./nhd_install
At the shell prompt, shut down the system:
# shutdown -h now
At the console prompt, boot the generic kernel. For example:
>>> boot -fi genvmunix dqb0
After the system boots, log in as root.
At the shell prompt, use the doconfig utility to rebuild the custom kernel:
# doconfig
You see messages similar to the following:
*** KERNEL CONFIGURATION AND BUILD PROCEDURE *** Enter a name for the kernel configuration file. [SYSNAME]:
Press Return to accept the default. You see messages similar to the following:
A configuration file with the name 'SYSNAME' already exists. Do you want to replace it? (y/n) [n]:
Enter y and press Return. You see messages similar to the following:
Saving /sys/conf/SYSNAME as /sys/conf/SYSNAME.bck *** KERNEL OPTION SELECTION *** Selection Kernel Option -------------------------------------------------------------- 1 System V Devices 2 NTP V3 Kernel Phase Lock Loop (NTP_TIME) 3 Kernel Breakpoint Debugger (KDEBUG) 4 Packetfilter driver (PACKETFILTER) 5 IP-in-IP Tunneling (IPTUNNEL) 6 IP Version 6 (IPV6) 7 Point-to-Point Protocol (PPP) 8 STREAMS pckt module (PCKT) 9 X/Open Transport Interface (XTISO, TIMOD, TIRDWR) 10 Digital Versatile Disk File System (DVDFS) 11 ISO 9660 Compact Disc File System (CDFS) 12 Audit Subsystem 13 ATM UNI 3.0/3.1 ILMI (ATMILMI3X) 14 IP Switching over ATM (ATMIFMP) 15 LAN Emulation over ATM (LANE) 16 Classical IP over ATM (ATMIP) 17 ATM UNI 3.0/3.1 Signalling for SVCs (UNI3X) 18 Asynchronous Transfer Mode (ATM) 19 All of the above 20 None of the above 21 Help 22 Display all options again -------------------------------------------------------------- Enter your choices. Choices (for example, 1 2 4-6) [20]:
Select the kernel options you want built into your new custom kernel. This should include the same options you were already running on your system. In this example, if you want to select all listed kernel options, enter 19 and press Return.
You see messages similar to the following:
You selected the following kernel options: System V Devices NTP V3 Kernel Phase Lock Loop (NTP_TIME) Kernel Breakpoint Debugger (KDEBUG) Packetfilter driver (PACKETFILTER) IP-in-IP Tunneling (IPTUNNEL) IP Version 6 (IPV6) Point-to-Point Protocol (PPP) STREAMS pckt module (PCKT) X/Open Transport Interface (XTISO, TIMOD, TIRDWR) Digital Versatile Disk File System (DVDFS) ISO 9660 Compact Disc File System (CDFS) Audit Subsystem ATM UNI 3.0/3.1 ILMI (ATMILMI3X) IP Switching over ATM (ATMIFMP) LAN Emulation over ATM (LANE) Classical IP over ATM (ATMIP) ATM UNI 3.0/3.1 Signalling for SVCs (UNI3X) Asynchronous Transfer Mode (ATM) Is that correct? (y/n) [y]:
Enter y to confirm your selection and press Return.
You see the following prompt:
Do you want to edit the configuration file? (y/n) [n]:
Enter n and press Return. You see messages similar to the following:
*** PERFORMING KERNEL BUILD *** A log file listing special device files is located in /dev/MAKEDEV.log Working....Thu Jun 20 14:59:36 EDT 2002 Working....Thu Jun 20 15:01:53 EDT 2002 Working....Thu Jun 20 15:05:32 EDT 2002 The new kernel is /sys/SYSNAME/vmunix
Copy the new custom kernel to the member-specific directory on the lead member. For example:
# cp /sys/SYSNAME/vmunix /cluster/members/memberN/boot_partition
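The memberN directory must match the member ID of the member you are updating. For the lead member in this example, which has member ID 1, the command would be:
# cp /sys/SYSNAME/vmunix /cluster/members/member1/boot_partition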
Shut down the lead member:
# shutdown -h now
At the console prompt, boot the lead member with the new custom kernel. For example:
>>> boot -fi "vmunix" dqb0
Log in as root on the lead member.
Use the clu_upgrade command to check the installation status:
# clu_upgrade -v
You see messages similar to the following:
Retrieving cluster upgrade status. Upgrade Status Stage Status Date setup started: Thu Jun 20 16:50:43 EDT 2002 lead member: 1 nhd kit source: /mnt completed: Thu Jun 20 16:52:34 EDT 2002 preinstall started: Thu Jun 20 16:54:46 EDT 2002 completed: Thu Jun 20 16:55:16 EDT 2002 nhd started: Thu Jun 20 16:55:57 EDT 2002 completed: Thu Jun 20 16:57:42 EDT 2002 Member Status Tagged File Status ID Hostname State Rolled Running with On Next Boot 1 member01.site.place.net UP Yes No No
Note
If your system is running Version 5.1A, the output incorrectly indicates patch kit source rather than nhd kit source. The source information (in this example, /mnt) is correct; the label is in error.
3.3.3.5 Postinstall Stage
Perform the following steps in the Postinstall Stage:
On the lead member, use the clu_upgrade command to start the Postinstall Stage:
# clu_upgrade -v postinstall
You see the following messages:
Retrieving cluster upgrade status. This is the cluster upgrade program. You have indicated that you want to perform the 'postinstall' stage of the upgrade. Do you want to continue to upgrade the cluster? [yes]:
Enter yes and press Return. You see the following messages:
Marking stage 'postinstall' as 'started'. Marking stage 'postinstall' as 'completed'. The 'postinstall' stage of the upgrade has completed successfully.
Use the clu_upgrade command to check the installation status:
# clu_upgrade -v
You see messages similar to the following:
Retrieving cluster upgrade status. Upgrade Status Stage Status Date setup started: Thu Jun 20 16:50:43 EDT 2002 lead member: 1 nhd kit source: /mnt completed: Thu Jun 20 16:52:34 EDT 2002 preinstall started: Thu Jun 20 16:54:46 EDT 2002 completed: Thu Jun 20 16:55:16 EDT 2002 nhd started: Thu Jun 20 16:55:57 EDT 2002 completed: Thu Jun 20 16:57:42 EDT 2002 postinstall started: Thu Jun 20 16:58:28 EDT 2002 completed: Thu Jun 20 16:58:28 EDT 2002 roll started: Thu Jun 20 16:58:29 EDT 2002 members rolled: 1 completed: Thu Jun 20 16:58:29 EDT 2002 Member Status Tagged File Status ID Hostname State Rolled Running with On Next Boot 1 member01.site.place.net UP Yes No No 10 member10.site.place.net UP No Yes Yes
Note
If your system is running Version 5.1A, the output incorrectly indicates patch kit source rather than nhd kit source. The source information (in this example, /mnt) is correct; the label is in error.
3.3.3.6 Roll Stage
Before running the Roll Stage, see the Rolling Upgrade chapter in the TruCluster Server Cluster Installation manual.
Perform the following steps for each additional cluster member:
Log in to the cluster member as root.
Shut down the cluster member:
# shutdown -h now
At the console prompt, boot the cluster member to single-user mode:
>>> boot -fl s
Use the init s command to initialize process control:
# init s
Use the bcheckrc command to mount and check local file systems:
# bcheckrc
You see output similar to the following:
Checking device naming: Passed. CNX QDISK: Successfully claimed quorum disk, adding 1 vote. Checking local filesystems Mounting / (root) user_cfg_pt: reconfigured root_mounted_rw: reconfigured Mounting /cluster/members/member57/boot_partition (boot file system) user_cfg_pt: reconfigured root_mounted_rw: reconfigured user_cfg_pt: reconfigured dsfmgr: NOTE: updating kernel basenames for system at / scp kevm tty00 tty01 lp0 dsk3 dsk4 dsk5 dsk6 dsk7 dsk8 floppy1 cdrom1 dmapi Mounting local filesystems exec: /sbin/mount_advfs -F 0x14000 cluster_root#root / cluster_root#root on / type advfs (rw) exec: /sbin/mount_advfs -F 0x4000 cluster_usr#usr /usr cluster_usr#usr on /usr: Device busy exec: /sbin/mount_advfs -F 0x4000 cluster_var#var /var cluster_var#var on /var: Device busy /proc on /proc type procfs (rw)
Use the lmf reset command to copy license information into the kernel cache:
# lmf reset
Use the clu_upgrade command to start the Roll Stage:
# clu_upgrade -v roll
You see messages similar to the following:
This is the cluster upgrade program. You have indicated that you want to perform the 'roll' stage of the upgrade. Do you want to continue to upgrade the cluster? [yes]:
Enter yes and press Return.
Note
You may see the following message during this step:
clubase: Entry not found in /cluster/admin/tmp/stanza.stdin.530756
This is a known error and can be ignored.
You also may see messages similar to the following:
*** Warning *** The cluster upgrade command was unable to find or verify the configuration file used to build this member's kernel. clu_upgrade attempts to make a backup copy of the configuration file which it would restore as required during a clu_upgrade undo command. To use the default configuration file or to continue without backing up a configuration file hit return. Enter the name of the configuration file for this member [SYSNAME]:
Press Return to use SYSNAME as the configuration file name.
You see messages similar to the following:
Backing up member-specific data for member: 10 The 'roll' stage has completed successfully. This member must be rebooted in order to run with the newly installed software. Do you want to reboot this member at this time? []:
Enter y and press Return. You see the following message:
You indicated that you want to reboot this member at this time. Is that correct? [yes]:
Enter y and press Return. You see messages similar to the following:
The 'roll' stage of the upgrade has completed successfully. Terminated # syncing disks... done drd: Clean Shutdown rebooting.... (transferring to monitor)
The cluster member reboots and reconfigures.
Use the clu_upgrade command to check the installation status:
# clu_upgrade -v
You see messages similar to the following:
Retrieving cluster upgrade status. Upgrade Status Stage Status Date setup started: Thu Jun 20 16:40:07 EDT 2002 lead member: 1 nhd kit source: /mnt tagged files list: /cluster/admin/clu_upgrade/tag_files.list tagged files missing: /cluster/admin/clu_upgrade/tag_files.miss completed: Thu Jun 20 16:42:48 EDT 2002 preinstall started: Thu Jun 20 16:51:09 EDT 2002 completed: Thu Jun 20 16:52:32 EDT 2002 nhd started: Thu Jun 20 16:54:49 EDT 2002 completed: Thu Jun 20 16:58:08 EDT 2002 postinstall started: Thu Jun 20 17:18:12 EDT 2002 completed: Thu Jun 20 17:18:12 EDT 2002 roll started: Thu Jun 20 17:22:24 EDT 2002 members rolled: 1 10 completed: Thu Jun 20 17:32:42 EDT 2002 Member Status Tagged File Status ID Hostname State Rolled Running with On Next Boot 1 member01.site.place.net UP Yes No No 10 member10.site.place.net UP Yes No No
Note
If your system is running Version 5.1A, the output incorrectly indicates patch kit source rather than nhd kit source. The source information (in this example, /mnt) is correct; the label is in error.
Repeat this process for each remaining cluster member.
3.3.3.7 Switch Stage
Perform the following steps in the Switch Stage:
After the Roll Stage is complete, use the clu_upgrade command to start the Switch Stage on any cluster member:
# clu_upgrade -v switch
You see the following messages:
Retrieving cluster upgrade status. This is the cluster upgrade program. You have indicated that you want to perform the 'switch' stage of the upgrade. Do you want to continue to upgrade the cluster? [yes]:
Enter yes and press Return. You see the following messages:
Initiating version switch on cluster members .Marking stage 'switch' as 'started'. Switch already switched Marking stage 'switch' as 'completed'. The cluster upgrade 'switch' stage has completed successfully. All cluster members must be rebooted before running the 'clean' command. The 'switch' stage of the upgrade has completed successfully.
After you complete the Switch Stage, reboot all cluster members. After each member reboots, you see the login prompt.
Log in to the system as root.
Use the clu_upgrade command to check the installation status:
# clu_upgrade -v
You see messages similar to the following:
Retrieving cluster upgrade status. Upgrade Status Stage Status Date setup started: Thu Jun 20 16:40:07 EDT 2002 lead member: 1 nhd kit source: /mnt tagged files list: /cluster/admin/clu_upgrade/tag_files.list tagged files missing: /cluster/admin/clu_upgrade/tag_files.miss completed: Thu Jun 20 16:42:48 EDT 2002 preinstall started: Thu Jun 20 16:51:09 EDT 2002 completed: Thu Jun 20 16:52:32 EDT 2002 nhd started: Thu Jun 20 16:54:49 EDT 2002 completed: Thu Jun 20 16:58:08 EDT 2002 postinstall started: Thu Jun 20 17:18:12 EDT 2002 completed: Thu Jun 20 17:18:12 EDT 2002 roll started: Thu Jun 20 17:22:24 EDT 2002 members rolled: 1 10 completed: Thu Jun 20 17:32:42 EDT 2002 switch started: Thu Jun 20 16:37:50 EDT 2002 completed: Thu Jun 20 16:38:20 EDT 2002 Member Status Tagged File Status ID Hostname State Rolled Running with On Next Boot 1 member01.site.place.net UP Yes No No
Note
If your system is running Version 5.1A, the output incorrectly indicates patch kit source rather than nhd kit source. The source information (in this example, /mnt) is correct; the label is in error.
3.3.3.8 Clean Stage
Perform the following steps in the Clean Stage:
After the Switch Stage is complete, use the clu_upgrade command to start the Clean Stage on any cluster member:
# clu_upgrade -v clean
You see the following messages:
Retrieving cluster upgrade status. This is the cluster upgrade program. You have indicated that you want to perform the 'clean' stage of the upgrade. Do you want to continue to upgrade the cluster? [yes]:
Enter yes and press Return. You see the following messages:
.Marking stage 'clean' as 'started'. Deleting tagged files. .... Removing back-up and kit files Marking stage 'clean' as 'completed'. The 'clean' stage of the upgrade has completed successfully.
Use the clu_upgrade command to check the installation status:
# clu_upgrade -v
You see messages similar to the following:
Retrieving cluster upgrade status. There is currently no cluster upgrade in progress. The last cluster upgrade completed succesfully on: Thu Jun 20 17:05:25 EDT 2002 History for this upgrade can be found in the directory: /cluster/admin/clu_upgrade/history/Compaq.Tru64.UNIX.V5.1x.Rev.1885-1
Caution
When you install NHD-6, you also must install the most current Version 5.1A or 5.1B patch kit before you return your system to production.
You must install both the Tru64 UNIX and the TruCluster patches.
It does not matter whether you install NHD-6 or the release-appropriate patch kits first.
Tru64 UNIX Versions 5.1A and 5.1B patch kits are available on the World Wide Web at the following URLs:
http://ftp.support.compaq.com/public/unix/v5.1a/
http://ftp.support.compaq.com/public/unix/v5.1b/
3.3.4 Installing on a Cluster During Full Installation of Version 5.1A or 5.1B
Before you start this procedure, see the TruCluster Server Cluster Installation manual for information about creating a cluster. You must have the NHD-6 kit distribution, the Version 5.1A or 5.1B Tru64 UNIX Operating System CD-ROM, and the Associated Products, Volume 2, CD-ROM that includes the Version 5.1B TruCluster Server software.
Caution
Before you install NHD-6 onto a system that includes SA5300A series RAID controllers, see the release notes in Section 2.1.8. Failure to follow these instructions can cause your NHD-6 installation to fail.
Follow these steps to install NHD-6 on a new cluster during a Full Installation:
Install NHD-6 during a Full Installation on the system that will be the first cluster member, as described in Section 3.3.2.
Load the Version 5.1A or 5.1B Associated Products, Volume 2, CD-ROM into the CD-ROM drive.
Mount the Associated Products, Volume 2, CD-ROM. For example:
# mount /dev/disk/cdrom0a /mnt
Use the setld -l command to load the TruCluster Server software:
# setld -l /mnt/TruCluster/kit
You see output similar to the following:
*** Enter subset selections *** The following subsets are mandatory and will be installed automatically unless you choose to exit without installing any subsets: * TruCluster Base Components The subsets listed below are optional: There may be more optional subsets than can be presented on a single screen. If this is the case, you can choose subsets screen by screen or all at once on the last screen. All of the choices you make will be collected for your confirmation before any subsets are installed. - TruCluster(TM) Software : 1) TruCluster Migration Components 2) TruCluster Reference Pages Estimated free diskspace(MB) in root:269.2 usr:18175.4 var:18665.0 Choices (for example, 1 2 4-6): Or you may choose one of the following options: 3) ALL mandatory and all optional subsets 4) MANDATORY subsets only 5) CANCEL selections and redisplay menus 6) EXIT without installing any subsets Estimated free diskspace(MB) in root:269.2 usr:18175.4 var:18665.0 Enter your choices or press RETURN to redisplay menus. Choices (for example, 1 2 4-6):
Enter 3 to select all mandatory and optional subsets.
You see output similar to the following:
You are installing the following mandatory subsets: TruCluster Base Components You are installing the following optional subsets: - TruCluster(TM) Software : TruCluster Migration Components TruCluster Reference Pages Estimated free diskspace(MB) in root:269.2 usr:18173.6 var:18665.0 Is this correct? (y/n):
Enter y to confirm your selection.
You see output similar to the following:
Checking file system space required to install selected subsets: File system space checked OK. 3 subsets will be installed. Loading subset 1 of 3 ... TruCluster Migration Components Copying from /mnt/TruCluster/kit (disk) Verifying Loading subset 2 of 3 ... TruCluster Reference Pages Copying from /mnt/TruCluster/kit (disk) Verifying Loading subset 3 of 3 ... TruCluster Base Components Copying from /mnt/TruCluster/kit (disk) Verifying 3 of 3 subsets installed successfully. Configuring "TruCluster Migration Components" (TCRMIGRATEnnn) Configuring "TruCluster Reference Pages" (TCRMANnnn) Running : /usr/lbin/mkwhatis : in the background... Configuring "TruCluster Base Components" (TCRBASEnnn) Use /usr/sbin/clu_create to create a cluster.
Change to the root directory and unmount the Associated Products, Volume 2, CD-ROM:
# cd /
# umount /mnt
Remove the Associated Products, Volume 2, CD-ROM and load the New Hardware Delivery CD-ROM.
Mount the NHD-6 kit. For example:
# mount /dev/disk/cdrom0a /mnt
Change directory to the mounted NHD-6 kit. For example:
# cd /mnt
Enter the following command to install the NHD cluster kit:
# ./nhd_install
Note
The nhd_install script checks your system before installing the NHD cluster kit. You do not have to use the -install_cluster argument.
You see output similar to the following:
Checking file system space required to install specified subsets: File system space checked OK. 1 subsets will be installed. Loading subset 1 of 1 ... New Hardware TruCluster(TM) Support V6.0 Copying from /mnt/nnn/kit (disk) Working....Thu Jun 20 18:16:41 EDT 2002 Verifying 1 of 1 subsets installed successfully. Configuring "New Hardware TruCluster(TM) Support V6.0" (OSHTCRBASEnnn) The installation of the New Hardware TruCluster(TM) Support V6.0 (OSHTCRBASEnnn) software subset is complete.
After installing the NHD cluster kit, use the clu_create command to create a single-member cluster, as described in the TruCluster Server Cluster Installation manual.
Add additional cluster members as needed. See the TruCluster Server Cluster Installation manual for more information.
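Additional members are added with the clu_add_member command, which prompts for each new member's information. This is only a pointer; see the Cluster Installation manual for the full procedure:
# clu_add_member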
Caution
When you install NHD-6, you also must install the most current Version 5.1A or 5.1B patch kit before you return your system to production.
You must install both the Tru64 UNIX and the TruCluster patches.
You must install TruCluster Server software before you install TruCluster patches.
It does not matter whether you install NHD-6 or the release-appropriate patch kits first.
Tru64 UNIX Versions 5.1A and 5.1B patch kits are available on the World Wide Web at the following URLs:
http://ftp.support.compaq.com/public/unix/v5.1a/
http://ftp.support.compaq.com/public/unix/v5.1b/
3.3.5 Rebuilding the Kernel After Adding Supported Hardware
The preceding instructions tell you to install the supported hardware before you install the NHD-6 kit. There may be circumstances where you must add supported hardware after NHD-6 is already installed on your system. For example, you may add a Smart Array 5304 RAID controller to an existing AlphaServer DS25 system.
Follow these instructions to include support for the new hardware in your custom kernel on either the single system or the cluster member where you install the new hardware:
At the shell prompt, shut down the system:
# shutdown -h now
Make sure that the value of the auto_action console variable is set to halt:
>>> set auto_action halt
Power down the system, install the new hardware, and power up the system.
At the console prompt, boot the generic kernel:
>>> boot -fi genvmunix dqb0
After the system boots, log in as root.
At the shell prompt, use the doconfig utility to rebuild the custom kernel:
# doconfig
You see messages similar to the following:
*** KERNEL CONFIGURATION AND BUILD PROCEDURE *** Enter a name for the kernel configuration file. [SYSNAME]:
Press Return to accept the default. You see messages similar to the following:
A configuration file with the name 'SYSNAME' already exists. Do you want to replace it? (y/n) [n]:
Enter y and press Return. You see messages similar to the following:
Saving /sys/conf/SYSNAME as /sys/conf/SYSNAME.bck *** KERNEL OPTION SELECTION *** Selection Kernel Option -------------------------------------------------------------- 1 System V Devices 2 NTP V3 Kernel Phase Lock Loop (NTP_TIME) 3 Kernel Breakpoint Debugger (KDEBUG) 4 Packetfilter driver (PACKETFILTER) 5 IP-in-IP Tunneling (IPTUNNEL) 6 IP Version 6 (IPV6) 7 Point-to-Point Protocol (PPP) 8 STREAMS pckt module (PCKT) 9 X/Open Transport Interface (XTISO, TIMOD, TIRDWR) 10 Digital Versatile Disk File System (DVDFS) 11 ISO 9660 Compact Disc File System (CDFS) 12 Audit Subsystem 13 ATM UNI 3.0/3.1 ILMI (ATMILMI3X) 14 IP Switching over ATM (ATMIFMP) 15 LAN Emulation over ATM (LANE) 16 Classical IP over ATM (ATMIP) 17 ATM UNI 3.0/3.1 Signalling for SVCs (UNI3X) 18 Asynchronous Transfer Mode (ATM) 19 All of the above 20 None of the above 21 Help 22 Display all options again -------------------------------------------------------------- Enter your choices. Choices (for example, 1 2 4-6) [20]:
Select the kernel options you want built into your new custom kernel. This should include the same options you were already running on your system. In this example, if you want to select all listed kernel options, enter 19 and press Return.
You see messages similar to the following:
You selected the following kernel options: System V Devices NTP V3 Kernel Phase Lock Loop (NTP_TIME) Kernel Breakpoint Debugger (KDEBUG) Packetfilter driver (PACKETFILTER) IP-in-IP Tunneling (IPTUNNEL) IP Version 6 (IPV6) Point-to-Point Protocol (PPP) STREAMS pckt module (PCKT) X/Open Transport Interface (XTISO, TIMOD, TIRDWR) Digital Versatile Disk File System (DVDFS) ISO 9660 Compact Disc File System (CDFS) Audit Subsystem ATM UNI 3.0/3.1 ILMI (ATMILMI3X) IP Switching over ATM (ATMIFMP) LAN Emulation over ATM (LANE) Classical IP over ATM (ATMIP) ATM UNI 3.0/3.1 Signalling for SVCs (UNI3X) Asynchronous Transfer Mode (ATM) Is that correct? (y/n) [y]:
Enter y to confirm your selection and press Return.
You see the following prompt:
Do you want to edit the configuration file? (y/n) [n]:
Enter n and press Return. You see messages similar to the following:
*** PERFORMING KERNEL BUILD *** A log file listing special device files is located in /dev/MAKEDEV.log Working....Thu Jun 20 14:59:36 EDT 2002 Working....Thu Jun 20 15:01:53 EDT 2002 Working....Thu Jun 20 15:05:32 EDT 2002 The new kernel is /sys/SYSNAME/vmunix
Copy the new custom kernel.
On a single system, use the following command to copy the new custom kernel to /vmunix:
# cp /sys/SYSNAME/vmunix /vmunix
On cluster member N where you installed the hardware, use the following command to copy the new custom kernel to the member-specific directory:
# cp /sys/SYSNAME/vmunix /cluster/members/memberN/boot_partition
Shut down your system:
# shutdown -h now
At the console prompt, boot the system or cluster member with the new custom kernel. For example:
>>> boot -fi "vmunix" dqb0