After installing and configuring the Tru64 UNIX system, follow the directions in this chapter to make this system the first member of the cluster. Table 4-1 lists the tasks in order and references necessary information.
Note
If you are using Fibre Channel storagesets for the first member boot disk or the quorum disk, read the Fibre Channel chapter in the Cluster Hardware Configuration manual.
Table 4-1: Create Single-Member Cluster Tasks
Task | See
Gather the information needed to create a cluster. | Chapter 2
Install and configure Tru64 UNIX. | Chapter 3
Register the TruCluster Server license. | Section 4.1
Load the TruCluster Server subsets. | Section 4.2
Run the clu_create command. | Section 4.3
Boot the first member's cluster boot disk. | Section 4.3 and Section 4.4
Make on-disk backup copies of important configuration files. | Section 4.5
Perform a full backup of the single-member cluster. | Tru64 UNIX System Administration manual
4.1 Register the TruCluster Server Software License
The TruCluster Server kit includes a license Product Authorization Key (PAK). Use this PAK when registering a TruCluster Server license. (Section 1.5 describes the TruCluster Server license.) If you do not have a PAK, call your customer service representative.
For information on installing a license PAK, see the Tru64 UNIX Software License Management manual, lmf(8), and lmfsetup(8).
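For example, a typical registration session runs the interactive lmfsetup script to enter the PAK data and then lists the registered licenses to confirm the result (the lmf list output varies with the licenses on your system):

# lmfsetup
# lmf list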
4.2 Load the TruCluster Server Subsets
To load the TruCluster Server kit, follow these steps:
Log in as superuser (root).
Mount the device or directory containing the TruCluster Server Software Version 5.1A kit.
Enter the setld -l command, specifying the directory where the kit is located. For example, if you mount the CD-ROM on /mnt:

# setld -l /mnt/TruCluster/kit
You can choose one of the following subset installation options:
All mandatory subsets only
All mandatory and selected optional subsets
All mandatory and all optional subsets
We recommend that you choose the "All mandatory and all optional subsets" option.
After you select an option, the installation procedure verifies that there is sufficient file system space before copying the subsets onto your system.
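For example, after the load completes you can confirm the installed subsets with setld -i. The grep pattern assumes the TruCluster Server subset names begin with TCR, which is typical for this kit; verify against your distribution:

# setld -i | grep TCR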
Note
Patch kits include fixes for both the base operating system and for cluster software. If installing a patch kit, patch the system after loading the TruCluster Server subsets but before running clu_create to create a single-member cluster.
4.3 Run the clu_create Command
The /usr/sbin/clu_create command creates the first member of the cluster from the Tru64 UNIX system.
Note
The clu_create command uses vdump and vrestore to populate the clusterwide root (/), /usr, and /var file systems. If any of these file systems on the Tru64 UNIX system has a Network File System (NFS) file system mounted on it, and if that file system's NFS server is down, vdump will hang when trying to dump the file system. (If you run the automount daemon, remember that it mounts and unmounts NFS file systems, and the same potential for hangs exists.)

Before running clu_create, either unmount all NFS file systems or verify that they are accessible. Alternatively, you can reboot the Tru64 UNIX system before running clu_create to clean up any stale mounts. For systems that run the automount daemon, you can disable automounting by running the /sbin/init.d/nfsmount stop command before running clu_create.
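For example, the following commands list any mounted NFS file systems and stop the automount daemon before you run clu_create. This is a minimal check: mount with no arguments lists all mounted file systems, and the grep simply filters for NFS entries:

# mount | grep -i nfs
# /sbin/init.d/nfsmount stop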
Run the /usr/sbin/clu_create command. The command prompts for the information needed to create a single-member cluster. Answer the prompts using the information from the checklists in Appendix A. The command also provides online help for each question. To display the relevant help message, enter help or a question mark (?) at a prompt.
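For example, to start the installation dialog (the session is interactive, and the prompts vary with your configuration):

# /usr/sbin/clu_create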
The clu_create command performs the following tasks:

Sets up the clusterwide root (/), /usr, and /var file systems, and the first member's boot disk.

Configures a quorum disk (optional).

Builds a kernel with cluster components. If the kernel build succeeds, clu_create copies the new kernel to the first member's boot disk. If the kernel build fails, clu_create displays warning messages but continues creating this first member. (You can boot the cluster genvmunix from the boot disk and attempt to build a kernel on the single-member cluster.)

Sets boot-related console variables: bootdef_dev and boot_reset; creates and sets boot_dev. If clu_create can set the variables and if the kernel build was successful, clu_create offers to reboot the system for you. If you choose not to reboot the system at this time, use the information in Section 4.4 when you are ready to boot the system as a single-member cluster. If clu_create cannot set the variables, halt the system, set the variables to the values specified in Section 2.6 (see the example following this list), and use the information in Section 4.4 to boot the system as a single-member cluster.
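For example, if you must set the variables manually, halt the system and enter commands like the following at the console prompt. The device name is illustrative; use the console name of the first member's cluster boot disk as determined in Section 2.6:

>>> set bootdef_dev dka100
>>> set boot_reset ON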
Note
If the first member's boot disk is accessed through HS controllers that are connected to dual SCSI or Fibre Channel buses and configured for multiple-bus failover, or the system is an AlphaServer 8200 or 8400, halt the system and see Section 2.6 for information on setting the bootdef_dev console variable.
The clu_create command writes a log file of the installation to /cluster/admin/clu_create.log. The log file contains all installation prompts, responses, and messages. Examine this log file for errors before booting the system as a single-member cluster. (Section C.1 contains a sample clu_create log file.)
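For example, a quick scan of the log for reported problems (the search patterns are illustrative; read the full log as well):

# egrep -i 'error|warn|fail' /cluster/admin/clu_create.log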
4.4 Boot the System as a Single-Member Cluster
If you decided to boot the system yourself, examine the following items in the clu_create log file, /cluster/admin/clu_create.log, before booting the system as a single-member cluster:

Verify that the kernel for the new member built properly and was copied to the boot partition for this member. Look for the following kinds of messages in the clu_create.log file:
*** PERFORMING KERNEL BUILD ***
Working....Tue May 8 15:54:11 EDT 2001
The new kernel is /sys/PEPICELLI/vmunix
Finished running the doconfig program.
The kernel build was successful and the new kernel
has been copied to this member's boot disk.
Restoring kernel build configuration.
Verify that the boot-related console variables are set to boot the cluster boot disk for the first member, not the Tru64 UNIX boot disk. In the log, look for the following kinds of messages:
Updating console variables
Setting console variable 'bootdef_dev' to dsk10
Setting console variable 'boot_dev' to dsk10
Setting console variable 'boot_reset' to ON
Saving console variables to non-volatile storage
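If the system is still running Tru64 UNIX, you can also check the console variables with the consvar command before shutting down. This is a sketch; see consvar(8) for details:

# consvar -g bootdef_dev
# consvar -g boot_reset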
If the new kernel is in place and the console variables are set correctly, reboot the system as a single-member cluster from its newly created cluster member boot disk. For example:
# shutdown -r now
If the kernel build did not succeed or the console variables could not be set, halt the system:
# shutdown -h now
Perform the following procedure:
If clu_create could not set the console variables, set them according to the values specified in Section 2.6. (The remaining steps assume that the console variables are set correctly.)
If the kernel build succeeded, boot vmunix from the first member's cluster boot disk (make sure that bootdef_dev is set to the first member's cluster boot disk, not to the Tru64 UNIX disk):
>>> boot
If the kernel build did not succeed or you could not boot vmunix from the first member's boot disk, make sure that bootdef_dev is set to the first member's cluster boot disk (not to the Tru64 UNIX disk) and boot genvmunix:
>>> boot -file genvmunix
When the system reaches multiuser mode, log in and attempt to build a kernel. If the build succeeds, copy (cp) the new kernel from /sys/HOSTNAME/vmunix to /vmunix. (If you move (mv) the kernel to /vmunix, you will overwrite the /vmunix context-dependent symbolic link (CDSL).) Then reboot the system so that it runs on its customized kernel:
# doconfig -c HOSTNAME
# cp /sys/HOSTNAME/vmunix /vmunix
# shutdown -r now
If you assigned a reserved network IP address to the default cluster alias, see Section 2.3.1 for information on advertising that alias address.
If you plan to use LSM, see Section 2.4.2.
When you boot this node as a single-member cluster, note that some access-related files, like /etc/ftpusers, are shared by all cluster members, while other files, like /etc/securettys, are replaced by CDSLs that point to member-specific files. The reason is that files like ftpusers deal with user accounts (which are clusterwide entities), while files like securettys deal with member-specific information, in this case tty devices.
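For example, listing the two files shows the difference; a CDSL appears as a symbolic link whose target contains the {memb} placeholder, which is resolved at boot time to the member's own directory:

# ls -l /etc/ftpusers
# ls -l /etc/securettys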
When first booted as a cluster member, the system runs the clu_check_config command to examine the configuration of several important cluster subsystems. Look at the clu_check_config log files in the /cluster/admin directory to verify that these subsystems are configured properly and operating correctly. If you discover any problems, read clu_check_config(8) so you know what tests the command performs. You can then run the command in verbose mode to display more information about why a subsystem failed the initial test.
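For example, to review the results (the log file name shown is an assumption; list the directory to see what clu_check_config wrote on your system):

# ls /cluster/admin
# more /cluster/admin/clu_check_config.log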
See the Tru64 UNIX System Administration and the TruCluster Server Cluster Administration manuals for information on configuring subsystems.
The following commands are useful for taking a quick look at the initial configuration of the cluster and some of its major subsystems:
# clu_get_info -full | more
# clu_quorum
# cfsmgr | more
# drdmgr `ls /etc/fdmns/* | grep dsk | sed 's/[a-z]$//' | uniq | sort` | more
# cluamgr -s DEFAULTALIAS
# caa_stat
4.5 Make On-Disk Backup Copies of Important Configuration Files
Because cluster members rely on the information in the following files, we recommend that, after booting the first member of the cluster, you make on-disk copies of these files in case of inadvertent modification. For member-specific files, the examples assume that the member ID of the first member is 1 (memberid=1).
/etc/sysconfigtab.cluster:
# cp /etc/sysconfigtab.cluster /etc/sysconfigtab.cluster.sav
/etc/rc.config.common:
# cp /etc/rc.config.common /etc/rc.config.common.sav
/etc/sysconfigtab - This file is a CDSL whose target is:

../cluster/members/{memb}/boot_partition/etc/sysconfigtab

To make a backup copy, change directory to the first member's boot_partition/etc directory and make a copy of its sysconfigtab file. For example:
# cd /cluster/members/member1/boot_partition/etc
# cp sysconfigtab sysconfigtab.sav
/etc/rc.config - This file is a CDSL whose target is:

../cluster/members/{memb}/etc/rc.config

To make a backup copy, change directory to the first member's etc directory and make a copy of its rc.config file. For example:
# cd /cluster/members/member1/etc
# cp rc.config rc.config.sav
In addition, we recommend that you perform a full backup of the single-member cluster.
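For example, a level 0 vdump of the clusterwide root file system to a local tape drive might look like the following. The tape device name is an assumption; substitute your device, and repeat for /usr, /var, and the member boot partition as your backup policy requires:

# vdump -0 -u -f /dev/ntape/tape0_d1 /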