B    Installation Examples

This appendix provides samples of the logs written by clu_create (Section B.1) and clu_add_member (Section B.2).

B.1    clu_create Log

Each time you run clu_create, it writes log messages to /cluster/admin/clu_create.log. Example B-1 shows a sample clu_create log file.
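
While clu_create prompts for input, it also writes its progress to this log. To watch an installation from another terminal session, you can follow the file with a standard command such as tail (a minimal illustration; the file does not exist until clu_create starts writing to it):

    # tail -f /cluster/admin/clu_create.log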

Example B-1:  Sample clu_create Log File

Do you want to continue creating the cluster? [yes]:yes
 
Each cluster has a unique cluster name, which is a hostname
used to identify the entire cluster.
 
Enter a fully-qualified cluster name []:deli.zk3.dec.com
Checking cluster name: deli.zk3.dec.com
 
You entered 'deli.zk3.dec.com' as your cluster name.
Is this correct? [yes]:yes
 
The cluster alias IP address is the IP address associated with the
default cluster alias.  (192.168.168.1 is an example of an IP address.)
 
Enter the cluster alias IP address []:16.140.112.209
Checking cluster alias IP address: 16.140.112.209
 
You entered '16.140.112.209' as the IP address for the default cluster alias.
Is this correct? [yes]:yes
 
The cluster root partition is the disk partition (for example, dsk4b)
that will hold the clusterwide root (/) file system.
 
    Note: The default 'a' partition on most disks is not large
    enough to hold the clusterwide root AdvFS domain.
 
Enter the device name of the cluster root partition []:dsk7b
Checking the cluster root partition: dsk7b
 
You entered 'dsk7b' as the device name of the cluster root partition.
Is this correct? [yes]:yes
 
The cluster usr partition is the disk partition (for example, dsk4g)
that will contain the clusterwide usr (/usr) file system.
 
    Note: The default 'g' partition on most disks is usually
    large enough to hold the clusterwide usr AdvFS domain.
 
Enter the device name of the cluster usr partition [dsk7g]:dsk7g
Checking the cluster usr partition: dsk7g
 
You entered 'dsk7g' as the device name of the cluster usr partition.
Is this correct? [yes]:yes
 
To use this default value, press Return at the prompt.
 
The cluster var device is the disk partition (for example, dsk4h)
that will hold the clusterwide var (/var) file system.
 
    Note: The default 'h' partition on most disks is usually
    large enough to hold the clusterwide var AdvFS domain.
 
Enter the device name of the cluster var partition [dsk7h]:dsk7h
Checking the cluster var partition: dsk7h
 
You entered 'dsk7h' as the device name of the cluster var partition.
Is this correct? [yes]:yes
 
Do you want to define a quorum disk device at this time? [yes]:yes
The quorum disk device is the name of the disk (for example, 'dsk5')
that will be used as this cluster quorum disk.
 
Enter the device name of the quorum disk []:dsk6
Checking the quorum disk device: dsk6
 
You entered 'dsk6' as the device name of the quorum disk device.
Is this correct? [yes]:yes
 
By default the quorum disk is assigned '1' vote(s).
To use this default value, press Return at the prompt.
 
The number of votes for the quorum disk is an integer usually 0 or 1.
If you select 0 votes then the quorum disk will not contribute votes to the
cluster. If you select 1 vote then the quorum disk must be accessible to
boot and run a single member cluster.
 
Enter the number of votes for the quorum disk [1]:1
Checking number of votes for the quorum disk: 1
 
You entered '1' as the number votes for the quorum disk.
Is this correct? [yes]:yes
 
The default member ID for the first cluster member is '1'.
To use this default value, press Return at the prompt.
 
A member ID is used to identify each member in a cluster.
Each member must have a unique member ID, which is an integer in
the range 1-63, inclusive.
 
Enter a cluster member ID [1]:1
Checking cluster member ID: 1
 
You entered '1' as the member ID.
Is this correct? [yes]:yes
 
By default the 1st member of a cluster is assigned '1' vote(s).
Checking number of votes for this member: 1
 
Each member has its own boot disk, which has an associated
device name; for example, 'dsk5'.
 
Enter the device name of the member boot disk []:dsk10
Checking the member boot disk: dsk10
 
The specified disk contains the required 'a', 'b', and 'h'
partitions.  The current partition sizes are acceptable for a member's
boot disk.  You can either keep the current disk partition layout or have
the installation program relabel the disk.  If the program relabels the disk,
the new label will contain the following partitions and sizes (in blocks):
 
    Current                 New
    -------                 ---
    a: 524288               a: 524288
    b: 7849648              b: 7853744
    h: 2048                 h: 2048
 
Do you want to use the current disk partitions? [yes]:yes
 
You entered 'dsk10' as the device name of this member's boot disk.
Is this correct? [yes]:yes
 
Device 'ics0' is the default virtual cluster interconnect device
Checking virtual cluster interconnect device: ics0
 
The virtual cluster interconnect IP name 'pepicelli-ics0' was formed by
appending '-ics0' to the system's hostname.
To use this default value, press Return at the prompt.
 
Each virtual cluster interconnect interface has a unique IP name (a 
hostname) associated with it.
 
Enter the IP name for the virtual cluster interconnect [pepicelli-ics0]:pepicelli-ics0
Checking virtual cluster interconnect IP name: pepicelli-ics0
 
You entered 'pepicelli-ics0' as the IP name for the virtual cluster interconnect.
Is this name correct? [yes]:yes
 
The virtual cluster interconnect IP address '10.0.0.1' was created by
replacing the last byte of the default virtual cluster interconnect network
address '10.0.0.0' with the previously chosen member ID '1'.
To use this default value, press Return at the prompt.
 
The virtual cluster interconnect IP address is the IP address
associated with the virtual cluster interconnect IP name.  (192.168.168.1 
is an example of an IP address.)
 
Enter the IP address for the virtual cluster interconnect [10.0.0.1]:10.0.0.1
Checking virtual cluster interconnect IP address: 10.0.0.1
 
You entered '10.0.0.1' as the IP address for the virtual cluster interconnect.
Is this address correct? [yes]:yes
 
What type of cluster interconnect will you be using?
 
    Selection   Type of Interconnect
----------------------------------------------------------------------
         1      Memory Channel
         2      Local Area Network
         3      None of the above
         4      Help
         5      Display all options again
----------------------------------------------------------------------
Enter your choice [1]:2
You selected option '2' for the cluster interconnect
Is that correct? (y/n) [y]:y
 
The physical cluster interconnect interface device is the name of the
physical device(s) which will be used for low level cluster node
communications. Examples of the physical cluster interconnect interface
device name are: tu0, ee0, and nr0.
 
Enter the physical cluster interconnect device name(s) []:ee0
Would you like to place this Ethernet device into a NetRAIN set? [yes]:no
Checking physical cluster interconnect interface device name(s): ee0
 
You entered 'ee0' as your physical cluster interconnect interface
device name(s). Is this correct? [yes]:yes
 
The physical cluster interconnect IP name 'member1-icstcp0' was formed by
appending '-icstcp0' to the word 'member' and the member ID.
Checking physical cluster interconnect IP name: member1-icstcp0
 
The physical cluster interconnect IP address '10.1.0.1' was created by
replacing the last byte of the default cluster interconnect network address
'10.1.0.0' with the previously chosen member ID '1'.
To use this default value, press Return at the prompt.
 
The cluster physical interconnect IP address is the IP address
associated with the physical cluster interconnect IP name. (192.168.168.1
is an example of an IP address.)
 
Enter the IP address for the physical cluster interconnect [10.1.0.1]:10.1.0.100
Checking physical cluster interconnect IP address: 10.1.0.100
 
You entered '10.1.0.100' as the IP address for the physical cluster interconnect.
Is this address correct? [yes]:yes
 
 
You entered the following information:
 
    Cluster name:                                            deli.zk3.dec.com
    Cluster alias IP Address:                                16.140.112.209
    Clusterwide root partition:                              dsk7b
    Clusterwide usr  partition:                              dsk7g
    Clusterwide var  partition:                              dsk7h
    Clusterwide i18n partition:                              Directory-In-/usr
    Quorum disk device:                                      dsk6
    Number of votes assigned to the quorum disk:             1
    First member's member ID:                                1
    Number of votes assigned to this member:                 1
    First member's boot disk:                                dsk10
    First member's virtual cluster interconnect device name: ics0
    First member's virtual cluster interconnect IP name:     pepicelli-ics0
    First member's virtual cluster interconnect IP address:  10.0.0.1
    First member's physical cluster interconnect devices     ee0
    First member's NetRAIN device name                       Not-Applicable
    First member's physical cluster interconnect IP address  10.1.0.100
 
If you want to change any of the above information, answer 'n' to the
following prompt. You will then be given an opportunity to change your
selections.
Do you want to continue to create the cluster? [yes]:yes
 
Creating required disk labels.
  Creating disk label on member disk : dsk10
  Initializing cnx partition on member disk : dsk10h
  Creating disk label on quorum disk : dsk6
  Initializing cnx partition on quorum disk : dsk6h
 
Creating AdvFS domains:
  Creating AdvFS domain 'root1_domain#root' on partition '/dev/disk/dsk10a'.
  Creating AdvFS domain 'cluster_root#root' on partition '/dev/disk/dsk7b'.
  Creating AdvFS domain 'cluster_usr#usr' on partition '/dev/disk/dsk7g'.
  Creating AdvFS domain 'cluster_var#var' on partition '/dev/disk/dsk7h'.
 
Populating clusterwide root, usr, and var file systems:
  Copying root file system to 'cluster_root#root'.
....
  Copying usr file system to 'cluster_usr#usr'.
........................................
  Copying var file system to 'cluster_var#var'.
..
 
Creating Content Dependent Symbolic Links (CDSLs) for file systems:
  Creating CDSLs in root file system.
  Creating CDSLs in usr  file system.
  Creating CDSLs in var  file system.
  Creating links between clusterwide file systems
 
Populating member's root file system.
 
Modifying configuration files required for cluster operation:
  Creating /etc/fstab file.
  Configuring cluster alias.
  Updating /etc/hosts - adding IP address '16.140.112.209' and hostname 'deli.zk3.dec.com'
  Updating member-specific /etc/inittab file with 'cms' entry.
  Updating /etc/hosts - adding IP address '10.0.0.1' and hostname 'pepicelli-ics0'
  Updating /etc/hosts - adding IP address '10.1.0.100' and hostname 'member1-icstcp0'
  Updating /etc/rc.config file.
  Updating /etc/sysconfigtab file.
  Retrieving cluster_root major and minor device numbers.
  Creating cluster device file CDSLs.
  Updating /.rhosts - adding hostname 'deli.zk3.dec.com'.
  Updating /etc/hosts.equiv - adding hostname 'deli.zk3.dec.com'
  Updating /.rhosts - adding hostname 'pepicelli-ics0'.
  Updating /etc/hosts.equiv - adding hostname 'pepicelli-ics0'
  Updating /.rhosts - adding hostname 'member1-icstcp0'.
  Updating /etc/hosts.equiv - adding hostname 'member1-icstcp0'
  Updating /etc/ifaccess.conf - adding deny entry for 'ee0'
  Updating /etc/ifaccess.conf - adding deny entry for 'ee1'
  Updating /etc/ifaccess.conf - adding deny entry for 'sl0'
  Updating /etc/ifaccess.conf - adding deny entry for 'tu0'
  Updating /etc/ifaccess.conf - adding deny entry for 'tu1'
  Updating /etc/ifaccess.conf - adding deny entry for 'tu2'
  Updating /etc/ifaccess.conf - adding deny entry for 'tu3'
  Updating /etc/ifaccess.conf - adding deny entry for 'tun0'
  Updating /etc/ifaccess.conf - adding deny entry for 'ee0'
  Updating /etc/ifaccess.conf - adding deny entry for 'ee1'
  Updating /etc/ifaccess.conf - adding deny entry for 'sl0'
  Updating /etc/ifaccess.conf - adding deny entry for 'tu0'
  Updating /etc/ifaccess.conf - adding deny entry for 'tu1'
  Updating /etc/ifaccess.conf - adding deny entry for 'tu2'
  Updating /etc/ifaccess.conf - adding deny entry for 'tu3'
  Updating /etc/ifaccess.conf - adding deny entry for 'tun0'
  Updating /etc/cfgmgr.auth - adding hostname 'ernest.zk3.dec.com'
  Finished updating member1-specific area.
 
Building a kernel for this member.
  Saving kernel build configuration.
  The kernel will now be configured using the doconfig program.
 
 
*** KERNEL CONFIGURATION AND BUILD PROCEDURE ***
 
Saving /sys/conf/ERNEST as /sys/conf/PEPICELLI.bck
 
 
*** PERFORMING KERNEL BUILD ***
	Working....Wed Apr 18 16:39:52 EDT 2001
 
The new kernel is /sys/PEPICELLI/vmunix
  Finished running the doconfig program.
 
  The kernel build was successful and the new kernel
   has been copied to this member's boot disk.
  Restoring kernel build configuration.
 
Updating console variables
  Setting console variable 'bootdef_dev' to dsk10
  Setting console variable 'boot_dev' to dsk10
  Setting console variable 'boot_reset' to ON
  Saving console variables to non-volatile storage
 
clu_create: Cluster created successfully.
 
To run this system as a single member cluster it must be rebooted.
If you answer yes to the following question clu_create will reboot the
system for you now. If you answer no, you must manually reboot the
system after clu_create exits.
Would you like clu_create to reboot this system now? [yes]:y
Shutdown at 16:53 (in 0 minutes) [pid 4211]
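
At the end of the log, clu_create reports saving the console variables 'bootdef_dev', 'boot_dev', and 'boot_reset'. If you want to verify these settings from the console prompt (for example, before the first cluster boot), you can display them with the SRM show command (shown here without output, which varies by platform; the device name the console reports may differ from the dskN name used in the log):

    >>>show bootdef_dev
    >>>show boot_reset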
 

B.2    clu_add_member Log

Each time you run clu_add_member, it writes log messages to /cluster/admin/clu_add_member.log. Example B-2 shows a sample clu_add_member log file.
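
The log records whether a run completed; in Example B-2 the relevant line reads 'clu_add_member: Initial member 2 configuration completed successfully.' A quick way to check this on your own system is to search the log for the program's status messages (an illustrative command only, using standard grep):

    # grep 'clu_add_member:' /cluster/admin/clu_add_member.log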

Example B-2:  Sample clu_add_member Log File

Do you want to continue adding this member? [yes]:yes
 
Each cluster member has a hostname, which is assigned to the HOSTNAME
variable in /etc/rc.config.
 
Enter the new member's fully qualified hostname []:polishham.zk3.dec.com
Checking member's hostname: polishham.zk3.dec.com
 
You entered 'polishham.zk3.dec.com' as this member's hostname.
Is this name correct? [yes]:yes
 
The next available member ID for a cluster member is '2'.
To use this default value, press Return at the prompt.
 
A member ID is used to identify each member in a cluster.
Each member must have a unique member ID, which is an integer in
the range 1-63, inclusive.
 
Enter a cluster member ID [2]:2
Checking cluster member ID: 2
 
You entered '2' as the member ID.
Is this correct? [yes]:yes
 
By default, when the current cluster's expected votes are greater than 1,
each added member is assigned 1 vote(s). Otherwise, each added member is
assigned 0 (zero) votes.
To use this default value, press Return at the prompt.
 
The number of votes for a member is an integer usually 0 or 1
Enter the number of votes for this member [1]:1
Checking number of votes for this member: 1
 
You entered '1' as the number votes for this member.
Is this correct? [yes]:yes
 
Each member has its own boot disk, which has an associated
device name; for example, 'dsk5'.
 
Enter the device name of the member boot disk []:dsk11
Checking the member boot disk: dsk11
 
You entered 'dsk11' as the device name of this member's boot disk.
Is this correct? [yes]:yes
 
Device 'ics0' is the default virtual cluster interconnect device
Checking virtual cluster interconnect device: ics0
 
The virtual cluster interconnect IP name 'polishham-ics0' was formed by
appending '-ics0' to the system's hostname.
To use this default value, press Return at the prompt.
 
Each virtual cluster interconnect interface has a unique IP name (a 
hostname) associated with it.
 
Enter the IP name for the virtual cluster interconnect [polishham-ics0]:polishham-ics0
Checking virtual cluster interconnect IP name: polishham-ics0
 
You entered 'polishham-ics0' as the IP name for the virtual cluster interconnect.
Is this name correct? [yes]:yes
 
The virtual cluster interconnect IP address '10.0.0.2' was created by
replacing the last byte of the virtual cluster interconnect network address
'10.0.0.0' with the previously chosen member ID '2'.
To use this default value, press Return at the prompt.
 
The virtual cluster interconnect IP address is the IP address
associated with the virtual cluster interconnect IP name.  (192.168.168.1 
is an example of an IP address.)
 
Enter the IP address for the virtual cluster interconnect [10.0.0.2]:10.0.0.2
Checking virtual cluster interconnect IP address: 10.0.0.2
 
You entered '10.0.0.2' as the IP address for the virtual cluster interconnect.
Is this address correct? [yes]:yes
 
The physical cluster interconnect interface device is the name of the
physical device(s) which will be used for low level cluster node
communications. Examples of the physical cluster interconnect interface
device name are: tu0, ee0, and nr0.
 
Enter the physical cluster interconnect device name(s) []:ee0, ee1
Would you like to enter another Ethernet device? [yes]:no
Checking physical cluster interconnect interface device name(s): ee0,ee1
 
You entered 'ee0,ee1' as your physical cluster interconnect interface
device name(s). Is this correct? [yes]:yes
 
Enter a NetRAIN interface device name []:nr0
Checking NetRAIN interface device: nr0
 
You entered 'nr0' as your NetRAIN interface device name.
Is this correct? [yes]:yes
 
The physical cluster interconnect IP name 'member2-icstcp0' was formed by
appending '-icstcp0' to the word 'member' and the member ID.
Checking physical cluster interconnect IP name: member2-icstcp0
 
The physical cluster interconnect IP address '10.1.0.2' was created by
replacing the last byte of the physical cluster interconnect network address
'10.1.0.0' with the previously chosen member ID '2'.
To use this default value, press Return at the prompt.
 
The cluster physical interconnect IP address is the IP address
associated with the physical cluster interconnect IP name. (192.168.168.1
is an example of an IP address.)
 
Enter the IP address for the physical cluster interconnect [10.1.0.2]:10.1.0.200
Checking physical cluster interconnect IP address: 10.1.0.200
 
You entered '10.1.0.200' as the IP address for the physical cluster interconnect.
Is this address correct? [yes]:yes
 
Each cluster member must have its own registered TruCluster Server
license. The data required to register a new member is typically located on
the License PAK certificate or it may have been previously placed on your
system as a partial or complete license data file. If you are prepared to
enter this license data at this time, clu_add_member can configure the new
member to use this license data. If you do not have the license data at this
time you can enter this data on the new member when it is up and running.
Do you wish to register the TruCluster Server license for this new member at
this time? [yes]:no
 
You entered the following information:
 
    Member's hostname:                                 polishham.zk3.dec.com
    Member's ID:                                       2
    Number of votes assigned to this member:           1
    Member's boot disk:                                dsk11
    Member's virtual cluster interconnect devices:     ics0
    Member's virtual cluster interconnect IP name:     polishham-ics0
    Member's virtual cluster interconnect IP address:  10.0.0.2
    Member's physical cluster interconnect devices:    ee0,ee1
    Member's NetRAIN device name:                      nr0
    Member's physical cluster interconnect IP address: 10.1.0.200
    Member's cluster license:                          Not Entered
 
If you want to change any of the above information, answer 'n' to the
following prompt. You will then be given an opportunity to change your
selections.
Do you want to continue to add this member? [yes]:yes
 
Creating required disk labels.
  Creating disk label on member disk : dsk11
  Initializing cnx partition on member disk : dsk11h
 
Creating AdvFS domains:
  Creating AdvFS domain 'root2_domain#root' on partition '/dev/disk/dsk11a'.
 
Creating cluster member-specific files:
  Creating new member's root member-specific files
  Creating new member's usr  member-specific files
  Creating new member's var  member-specific files
  Creating new member's boot member-specific files
 
Modifying configuration files required for new member operation:
  Updating /etc/hosts - adding IP address '10.0.0.2' and hostname 'polishham-ics0'
  Updating /etc/hosts - adding IP address '10.1.0.200' and hostname 'member2-icstcp0'
  Updating /etc/rc.config
  Updating /etc/sysconfigtab
  Updating member-specific /etc/inittab file with 'cms' entry.
  Updating /etc/securettys - adding ptys entry
  Updating /.rhosts - adding hostname 'polishham-ics0'
  Updating /etc/hosts.equiv - adding hostname 'polishham-ics0'
  Updating /.rhosts - adding hostname 'member2-icstcp0'
  Updating /etc/hosts.equiv - adding hostname 'member2-icstcp0'
  Updating /etc/cfgmgr.auth - adding hostname 'polishham.zk3.dec.com'
  Configuring cluster alias.
  Configuring Network Time Protocol for new member
  Adding interface 'pepicelli-ics0' as an NTP peer to member 'polishham.zk3.dec.com' 
  Adding interface 'polishham-ics0' as an NTP peer to member 'pepicelli.zk3.dec.com' 
 
Configuring automatic subset configuration and kernel build.
 
clu_add_member: Initial member 2 configuration completed successfully.
From the newly added member's console, perform the following steps to
complete the newly added member's configuration:
 
    1. Set the console variable 'boot_osflags' to 'A'.
    2. Identify the console name of the newly added member's boot device.
 
       >>>show device
 
       The newly added member's boot device has the following properties:
 
       Manufacturer: DEC
       Model: RZ1CF-CF (C) DEC
       Target: 12
       Lun: 0
       Serial Number: SCSI-WWID:04100024:"DEC     RZ1CF-CF (C) DEC    50066084"
 
       Note: The SCSI bus number may differ when viewed from different members.
 
    3. Boot the newly added member using genvmunix:
 
        >>>boot -file genvmunix <new-member-boot-device>
 
       During this initial boot the newly added member will:
 
       o  Configure each installed subset.
 
       o  Attempt to build and install a new kernel. If the system cannot
          build a kernel, it starts a shell where you can attempt to build
          a kernel manually. If the build succeeds, copy the new kernel to
          /vmunix. When you are finished exit the shell using ^D or 'exit'.
 
       o  The newly added member will attempt to set boot related console
          variables and continue to boot to multi-user mode.
 
       o  After the newly added member boots you should setup your system 
          default network interface using the appropriate system management
          command.