The TruCluster Server Cluster Installation manual describes how to configure services during initial cluster setup. We strongly suggest that you configure services before the cluster is created; if you wait until after cluster creation to set up services, the process can be more complicated.
This chapter describes the procedures to set up network services after cluster creation. The chapter discusses the following topics:
Configuring DHCP (Section 7.1)
Configuring NIS (Section 7.2)
Configuring printing (Section 7.3)
Configuring DNS/BIND (Section 7.4)
Managing time synchronization (Section 7.5)
Managing NFS (Section 7.6)
Managing inetd configuration (Section 7.7)
Managing mail (Section 7.8)
Configuring a cluster for RIS (Section 7.9)
Displaying X Window applications remotely (Section 7.10)
A cluster can be a highly available Dynamic Host Configuration Protocol (DHCP) server. It cannot be a DHCP client. A cluster must use static addressing. On a cluster, DHCP runs as a single-instance application with cluster application availability (CAA) providing failover. At any one time, only one member of the cluster is the DHCP server. If failover occurs, the new DHCP server uses the same common database that was used by the previous server.
The DHCP server attempts to match its host name and IP address with the configuration in the DHCP database. If you configure the database with the host name and IP address of a cluster member, problems can result. If the member goes down, DHCP automatically fails over to another member, but the host name and IP address of this new DHCP server does not match the entry in the database. To avoid this and other problems, follow these steps:
Familiarize yourself with the DHCP server configuration process that is described in the chapter on DHCP in the Tru64 UNIX Network Administration: Connections manual.
On the cluster member that you want to act as the initial DHCP server,
run
/usr/bin/X11/xjoin
and configure DHCP.
Select
Server/Security
.
Under
Server/Security Parameters
,
set the
Canonical Name
entry to the default cluster alias.
From the pulldown menu that currently shows
Server/Security Parameters
,
select
IP Ranges
.
Set the
DHCP Server
entry to the IP address of the
default cluster alias.
There can be multiple entries for the DHCP Server IP address in
the DHCP database.
You might find it more convenient to use
the
jdbdump
command
to generate a text file representation of the
DHCP database.
Then use a text editor to change all the occurrences
of the original DHCP server IP address to the cluster alias
IP address.
Finally, use
jdbmod
to repopulate the
DHCP database from the file you edited.
For example:
# jdbdump > dhcp_db.txt
# vi dhcp_db.txt
Edit
dhcp_db.txt
and change the owner IP address to the
IP address of the default cluster alias.
Update the database with your changes by entering the following command:
# jdbmod -e dhcp_db.txt
When you finish with
xjoin
, make DHCP a highly
available application.
DHCP already has an action script and a resource profile, and it is
already registered with the CAA daemon.
To start DHCP
with CAA, enter the following command:
# caa_start dhcp
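To verify that the resource is registered and to see its current state and the member on which it is running, you can use the caa_stat command. For example:
# caa_stat dhcp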
For information about highly available applications and CAA,
see the TruCluster Server
Cluster Highly Available Applications
manual.
7.2 Configuring NIS
To provide high availability, the Network Information Service
(NIS) daemons
ypxfrd
and
rpc.yppasswdd
run on every cluster member.
As described in
Section 3.1, the ports that
are used by services that are
accessed through a cluster alias are defined as either
in_single
or
in_multi
.
(These definitions have nothing to do
with whether the service can or cannot run on more than one cluster
member at the same time.)
ypxfrd
runs as an
in_multi
service, which means that the cluster alias subsystem routes
connection requests and packets for that service to all eligible
members of the alias.
rpc.yppasswdd
runs as an
in_single
service, which means that only one alias member
receives connection requests or packets that are addressed to the
service.
If that member becomes unavailable, the cluster alias
subsystem selects another member of the alias as the recipient for all
requests and packets addressed to the service.
NIS parameters are stored in
/etc/rc.config.common
.
The database files
are in the
/var/yp/src
directory.
Both
rc.config.common
and the databases are
shared by all cluster members.
The cluster is a slave,
a master, or a client.
The functions of slave,
master, and client cannot be mixed among individual cluster members.
If you configured NIS at the time of cluster creation, then as far as NIS is concerned, you need do nothing when adding or removing cluster members.
To configure NIS after the cluster is running, follow these steps:
Run the
nissetup
command and configure NIS according
to the instructions in the chapter on NIS
in the Tru64 UNIX
Network Administration: Services
manual.
You have to supply the host names that NIS binds to. Include the cluster alias in your list of host names.
On each cluster member, enter the following commands:
# /sbin/init.d/nis stop
# /sbin/init.d/nis start
7.2.1 Configuring an NIS Master in a Cluster with Enhanced Security
You can configure an NIS master to provide
extended user profiles and to use
the protected password database.
For information about NIS and enhanced security features, see
the Tru64 UNIX
Security
manual.
For details on configuring
NIS with enhanced security, see the appendix on enhanced security in a
cluster in the same manual.
7.3 Configuring Printing
With a few exceptions, printer setup on a cluster is the same as printer setup on a standalone Tru64 UNIX system. See the Tru64 UNIX System Administration manual for general information about managing the printer system.
In a cluster, a member can submit a print job to any printer anywhere in the
cluster.
A printer daemon,
lpd
, runs on each
cluster member.
This parent daemon serves both local
lpr
requests and incoming remote job requests.
The parent printer
daemon that runs on each node uses
/var/spool/lpd
, which is a
context-dependent symbolic link (CDSL) to
/cluster/members/{memb}/spool/lpd
.
Do not use
/var/spool/lpd
for any other
purpose.
Each printer that is local to the cluster has its own spooling
directory, which is located by convention
under
/usr/spool
.
The spooling directory must not be a
CDSL.
A new printer characteristic,
:on
,
has been introduced to support printing in clusters.
To configure a printer, run either
printconfig
or
lprsetup
on any
cluster member.
If a printer is a local device that is connected to a member
via a COM port (/dev/tty01) or a parallel port (/dev/lp0),
then set
:on
to the name of the member where the printer is connected.
For example, :on=memberA indicates that the printer is connected to the member memberA.
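As an illustration, a printcap entry for such a locally connected printer might look similar to the following (the printer name, device, and spool directory shown here are examples only):
lp0|memberA parallel printer:\
	:lp=/dev/lp0:\
	:on=memberA:\
	:sd=/usr/spool/lp0: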
When configuring a network printer that is connected via TCP/IP, you have
two choices for values for the
:on
characteristic:
:on=localhost
Specify
localhost
when you want every member
of the cluster to serve the printer.
When a print job is submitted,
the first member that responds handles all printing until the
queue is empty.
For local jobs, the first member to respond is
the member on which the first job is submitted.
For incoming remote
jobs, the jobs are served based on the cluster alias.
:on=member1,member2,...,memberN
List specific cluster members when you want all printing to be handled
by a single cluster member.
The first member in the
:on
list handles all printing.
If that member
becomes unavailable, then the next member in the list takes over,
and so on.
Using Advanced Printing Software
For information on installing and using Advanced Printing Software in
a cluster, see the configuration notes chapter in the
Tru64 UNIX
Advanced Printing Software
Release Notes.
7.4 Configuring DNS/BIND
Configuring a cluster as a Berkeley Internet
Name Domain (BIND) server is similar to configuring
an individual Tru64 UNIX system as a BIND server.
In a cluster, the
named
daemon runs on a single cluster member, and
that system is the actual BIND server.
The cluster alias handles
queries, so that it appears the entire cluster is the server.
Failover is provided by CAA.
If the serving member becomes
unavailable, CAA starts the
named
daemon on another
member.
If the cluster is configured as a BIND client, then the entire cluster is configured as a client. No cluster member can be a BIND client if the cluster is configured as a BIND server.
Whether you configure BIND at the time of cluster creation or after the cluster is running, the process is the same.
To configure a cluster as either a BIND server or client, use the command
bindconfig
or
sysman dns
.
If you are configuring
the cluster as a client, then it does not matter on which member you run
the command.
If you
are configuring a BIND server, then you determine which member
becomes the server by running the command on that member.
Note that
the
sysman
-focus
option does
not work for configuring BIND.
You must log in
to the system you want to act as the BIND server and then run
sysman dns
or
bindconfig
.
The
/etc/resolv.conf
and
/etc/svc.conf
files are clusterwide files.
For details on configuring BIND, see the chapter on the Domain Name
System (DNS) in
the Tru64 UNIX
Network Administration: Services
manual.
7.5 Managing Time Synchronization
All cluster members need time synchronization.
The Network Time Protocol (NTP) meets
this requirement.
Because of this, the
clu_create
command configures NTP
on the initial cluster member at the time of
cluster creation, and NTP is automatically configured
on each member as it is added to the cluster.
All members are configured
as NTP peers.
If your site chooses not to use NTP, make sure that whatever time service you use meets the granularity specifications that are defined in RFC 1305, Network Time Protocol (Version 3) Specification, Implementation and Analysis.
Because the system times of cluster members should not vary by
more than a few seconds, we do not recommend using the
timed
daemon to synchronize the time.
7.5.1 Configuring NTP
The
Cluster Installation
manual recommends that you configure NTP on
the Tru64 UNIX system before you install the cluster
software that makes the system the initial cluster member.
If you did not
do this,
clu_create
and
clu_add_member
configured NTP automatically
on each cluster member.
In this configuration, the NTP server for
each member is
localhost
.
Members are set up as NTP peers of each other, and use
the IP address of their cluster interconnect interfaces.
The
localhost
entry is used only
when the member is the only node
running.
The peer entries act to keep all cluster members synchronized
so that the time offset is in microseconds across the cluster.
Do not change these initial server and peer entries
even if you later change the NTP configuration and add external servers.
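As an illustration only, the member-specific /etc/ntp.conf entries that are created at cluster creation look similar to the following (the peer addresses shown here are placeholder cluster interconnect addresses; the entries on your members may differ):
server localhost
peer 10.0.0.2    # memberB cluster interconnect address
peer 10.0.0.3    # memberC cluster interconnect address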
To change the NTP configuration after the cluster is running,
you must run either
ntpconfig
or
sysman ntp
on each cluster member.
These
commands always act on a single cluster
member.
You can either log in to each member or
you can use the
-focus
option to
sysman
in order to designate the member on which you want to configure NTP.
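For example, to configure NTP on a member named member2 (a placeholder name) from another member, you might enter the following command; see sysman(8) for the exact syntax:
# sysman -focus member2 ntp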
Starting and stopping the NTP daemon,
xntpd
, is potentially disruptive to the operation
of the cluster, and should be performed on only one member at a time.
When you use
sysman
to
learn the status of the NTP daemon, you can get the status for
either the entire cluster or a single member.
7.5.2 All Members Should Use the Same External NTP Servers
You can add an external NTP server to just one member of the cluster. However, this creates a single point of failure. To avoid this, add the same set of external servers to all cluster members.
We strongly recommend that the list of external NTP servers be the same on
all members.
If you configure differing lists of external
servers from member to member,
you must ensure that the servers are all at the same stratum
level and that the time differential between them is very small.
7.5.2.1 Time Drift
If you notice a time drift among cluster members, you need to
resynchronize members with each other.
To do this you must log on
to each member of the cluster and enter
the
ntp -s -f
command and specify the cluster
interconnect name of a member other than the one where you are logged on.
By default a cluster interconnect name is the short form of the
hostname with
-mc0
appended.
For example,
if
provolone
is a cluster member, and you are
logged on to a member other than
provolone
, enter
the following command:
# ntp -s -f provolone-mc0
You then log on to the other cluster members and repeat this
command, in each case using a cluster interconnect name other than
the one of the system where you are logged on.
7.6 Managing NFS
A cluster can provide highly available
Network File System (NFS) service.
When a cluster acts as an NFS server, client systems that are external
to the cluster see it as a single system with the cluster
alias as its name.
When a cluster acts as an NFS client,
an NFS file system that is external to the cluster that is mounted
by one cluster member is accessible to all cluster members.
File accesses
are funneled through the mounting member to the
external NFS server.
The external NFS server sees the cluster as a set of
independent nodes and is not aware that the cluster members are
sharing the file system.
7.6.1 Configuring NFS
To configure NFS, use the
nfsconfig
or
sysman nfs
command.
Note
Do not use the
nfssetup
command in a cluster. It is not cluster-aware and will incorrectly configure NFS.
One or more cluster members can
run NFS daemons and the
mount
daemons, as
well as client versions of
lockd
and
statd
.
With
nfsconfig
or
sysman nfs
,
you can:
Start, restart, or stop NFS daemons clusterwide or on an individual member.
Configure or unconfigure server daemons clusterwide or on an individual member.
Configure or unconfigure client daemons clusterwide or on an individual member.
View the configuration status of NFS clusterwide or on an individual member.
View the status of NFS daemons clusterwide or on an individual member.
To configure NFS on a specific member,
use the
-focus
option to
sysman
.
When you configure NFS without any focus, the configuration applies to the
entire cluster and is saved in
/etc/rc.config.common
.
If a focus
is specified, then the configuration applies to only the
specified cluster member and is saved in the CDSL file
/etc/rc.config
for that member.
Local NFS configurations override the clusterwide configuration.
For
example, if you configure member
mutt
as not being an
NFS server, then
mutt
is not affected when you
configure the entire cluster as a server;
mutt
continues not to be a server.
For a more interesting example, suppose you
have a three-member cluster with members
alpha
,
beta
, and
gamma
.
Suppose you configure 8 TCP server threads clusterwide.
If you then set focus on member
alpha
and
configure 10 TCP server threads,
the
ps
command will
show 10 TCP server threads on
alpha
,
but only 8 on members
beta
and
gamma
.
If you then set focus clusterwide and set
the value from 8 TCP server threads to 12,
alpha
still has 10
TCP server threads, but
beta
and
gamma
now each have
12 TCP server threads.
If a member runs
nfsd
it must also run
mountd
, and vice versa.
This is automatically
taken care of when you configure NFS with
nfsconfig
or
sysman nfs
.
If locking is enabled on a cluster member, then the
rpc.lockd
and
rpc.statd
daemons are started on the member.
If locking is configured clusterwide, then the
lockd
and
statd
run clusterwide
(rpc.lockd -c
and
rpc.statd -c
),
and the daemons are highly available and are managed by CAA.
The server uses the default cluster alias or an alias that is specified in
/etc/exports.aliases
as its address.
When a cluster acts as an NFS server, client systems that are external to the
cluster see it as a single system with the cluster alias as its name.
Client systems that mount directories with CDSLs in them see only
those paths that are on the cluster member that is running the clusterwide
statd
and
lockd
pair.
You can start and stop services
either on a specific member or on the entire cluster.
Typically, you should not need to manage the clusterwide
lockd
and
statd
pair.
However,
if you do need to stop the daemons, enter the following command:
# caa_stop cluster_lockd
To start the daemons, enter the following command:
# caa_start cluster_lockd
To relocate the server
lockd
and
statd
pair to a different member, enter the
caa_relocate
command as follows:
# caa_relocate cluster_lockd
For more information about starting and stopping highly available
applications, see
Chapter 8.
7.6.2 Considerations for Using NFS in a Cluster
This section describes the differences between using NFS in a cluster and
in a standalone system.
7.6.2.1 Clients Must Use a Cluster Alias
When a cluster acts as an NFS server, clients must use the
default cluster alias, or an alias that is listed in
/etc/exports.aliases
, to specify the host when
mounting file systems served by the cluster.
If
a node that is external to the cluster attempts to mount a
file system from the cluster and the node does
not use the default cluster alias, or an alias that is listed in
/etc/exports.aliases
, a "connection refused" error is
returned to the external node.
Other commands
that run through
mountd
, like
umount
and
export
,
receive a "Program unavailable" error when the commands
are sent from external clients and do not use the default
cluster alias or an alias listed in
/etc/exports.aliases
.
Before configuring additional aliases for use as NFS servers,
read the sections in the
Cluster Technical Overview
that discuss how NFS
and the cluster alias subsystem interact for NFS, TCP, and User Datagram
Protocol (UDP) traffic.
Also read the
exports.aliases
(4)
reference page and the
comments at the beginning of the
/etc/exports.aliases
file.
7.6.2.2 Using CDSLs to Mount NFS File Systems
When a cluster acts as an NFS client, an NFS file system that is mounted by one cluster member is accessible to all cluster members: the Cluster File System (CFS) funnels file accesses through the mounting member to the external NFS server. That is, the cluster member performing the mount becomes the CFS server for the NFS file system and is the node that communicates with the external NFS server. By maintaining cache coherency across cluster members, CFS guarantees that all members at all times have the same view of the NFS file system.
However, in the event that the mounting member becomes unavailable, there is no failover. Access to the NFS file system is lost until another cluster member mounts the NFS file system.
There are several ways to address this possible loss of file system availability. You might find that using AutoFS to provide automatic failover of NFS file systems is the most robust solution because it allows for both availability and cache coherency across cluster members. Using AutoFS in a cluster environment is described in Section 7.6.2.5.
As an alternative to using AutoFS, you can
use the
mkcdsl -a
command to convert a mount point
into a CDSL.
This will copy an existing directory to a member-specific
area on all members.
You then use the CDSL as the mount point for the NFS file
system.
In this scenario, there is still only one NFS server for the
file system, but each cluster member is an NFS client.
Cluster members are not
dependent on one cluster member functioning as the CFS server of the NFS
file system.
If one cluster member becomes unavailable, access to the
NFS file system by the other cluster members is not affected.
However, cache coherency across cluster members is not provided by
CFS: the cluster members rely on NFS to
maintain the cache coherency using the usual NFS methods, which do not
provide single-system semantics.
If relying on NFS to provide the file system integrity is acceptable in your environment, perform the following steps to use a CDSL as the mount point:
Create the mount point if one does not already exist.
# mkdir /mountpoint
Use the
mkcdsl -a
command to
convert the directory into a CDSL.
This will copy an existing directory to a
member-specific area on all members.
# mkcdsl -a /mountpoint
Mount the NFS file system on each cluster member, using the same NFS server.
# mount server:/filesystem /mountpoint
We recommend adding the mount information to the
/etc/fstab
file so that the mount is performed
automatically on each cluster member.
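For example, an /etc/fstab entry for such a mount might look similar to the following (the server name, file system, and mount options are placeholders; adjust them for your site):
server:/filesystem  /mountpoint  nfs  rw,bg,hard,intr  0  0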
7.6.2.3 Loopback Mounts Not Supported
NFS loopback mounts do not work in a cluster.
Attempts to NFS-mount a file system that is served by the cluster
onto a directory on the cluster fail and return the message,
Operation not supported
.
7.6.2.4 Do Not Mount Non-NFS File Systems on NFS-Mounted Paths
CFS does not permit non-NFS file systems to be mounted on NFS-mounted paths.
This limitation prevents problems with availability of the physical
file system in the event that the serving cluster member goes down.
7.6.2.5 Using AutoFS in a Cluster
If you want automatic mounting of NFS file systems, use
AutoFS.
AutoFS provides automatic failover of the automounting
service by means of CAA.
One member acts as the CFS server for
automounted file systems, and runs the one active copy of the AutoFS daemon,
autofsd
.
If this member fails, CAA starts
autofsd
on another member.
For instructions on configuring
AutoFS
, see
the section on automatically mounting a remote file system
in the Tru64 UNIX
Network Administration: Services
manual.
After you
have configured
AutoFS
, you must
start the daemon as follows:
# caa_start autofs
In TruCluster Server Version 5.1A, the
value of the
SCRIPT_TIMEOUT
attribute has been
increased to 3600 to reduce the possibility of the autofs resource timing out.
You can increase this value, but
we recommend that you do not decrease it.
In previous versions of TruCluster Server, depending on the number of file systems being imported, the speeds of datalinks, and the distribution of imported file systems among servers, you might see a CAA message like the following:
CAAD[564686]: RTD #0: Action Script \
  /var/cluster/caa/script/autofs.scr(start) timed out! (timeout=180)
In this situation, you need to increase the value of the
SCRIPT_TIMEOUT
attribute in the
CAA profile for
autofs
to a value greater than 180.
You can do this by editing
/var/cluster/caa/profile/autofs.cap
, or
you can use the
caa_profile -update autofs
command
to update the profile.
For example, to increase
SCRIPT_TIMEOUT
to 3600
seconds, enter the following command:
# caa_profile -update autofs -o st=3600
For more information about CAA profiles and
using the
caa_profile
command, see
caa_profile
(8).
If you use AutoFS, keep in mind the following:
On a cluster that imports a large number of file systems from a
single NFS server, or imports from a server over an especially
slow datalink, you might need to
increase the value of the
mount_timeout
kernel
attribute in the
autofs
subsystem.
The default
value for
mount_timeout
is 30 seconds.
You can
use the
sysconfig
command to change the attribute
while the member is running.
For example, to change the timeout value
to 50 seconds, use the following command:
# sysconfig -r autofs mount_timeout=50
When the
autofsd
daemon starts or when
autofsmount
runs to process maps for
automounted file systems, AutoFS
makes sure that all cluster members are running the same
version of the TruCluster Server software.
7.6.2.6 Forcibly Unmounting File Systems
If
AutoFS
on a cluster member is
stopped or becomes unavailable (for example, if the CAA
autofs
resource is stopped), intercept points
and file systems auto-mounted by
AutoFS
continue to be
available.
However, if AutoFS is stopped on a cluster member that has busy file systems, and is then started on another member, a problem is likely: the AutoFS intercept points continue to recognize the original cluster member as the server. This occurs because the AutoFS intercept points are busy while the file systems that are mounted under them are busy, and these intercept points still claim the original cluster member as the server. These intercept points do not allow new auto-mounts.
7.6.2.6.1 Determining Whether a Forced Unmount is Required
There are two situations under which you might encounter this problem:
You detect an obvious problem accessing an auto-mounted file system.
You move the CAA
autofs
resource.
In the case where you detect an obvious problem accessing an auto-mounted file system, ensure that the auto-mounted file system is being served as expected. To do this, perform the following steps:
Use the
caa_stat
autofs
command to see where CAA indicates the
autofs
resource is running.
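For example:
# caa_stat autofs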
Use the
ps
command to verify that the
autofsd
daemon is running on the member on which CAA
expects it to run:
# ps agx | grep autofsd
If it is not running, run it and see whether this fixes the problem.
Determine the auto-mount map entry that is associated with the
inaccessible file system.
One way to do this is to search the
/etc/auto.x
files for the entry.
Use the
cfsmgr -e
command to
determine whether the mount point exists and is being served by the expected
member.
If the server is not what CAA expects, the problem exists.
In the case where you move the CAA resource to another member,
use the
mount
-e
command to identify AutoFS intercept points and the
cfsmgr -e
command to show the servers for all mount
points.
Verify that all
AutoFS
intercept points and auto-mounted file systems have been unmounted
on the member on which
AutoFS
was stopped.
When you use the
mount -e
command, search the output
for
autofs
references similar to the following:
# mount -e | grep autofs
/etc/auto.direct on /mnt/mytmp type autofs (rw, nogrpid, direct)
When you use the
cfsmgr -e
command, search the output
for map file entries similar to the following.
The
Server Status
field does not indicate whether the
file system is actually being served; look in the
Server
Name
field for the name of the member on which
AutoFS
was stopped.
# cfsmgr -e
Domain or filesystem name = /etc/auto.direct
Mounted On = /mnt/mytmp
Server Name = provolone
Server Status : OK
7.6.2.6.2 Correcting the Problem
If you can wait until the busy file systems in
question become inactive, do so.
Then, run the
autofsmount
-U
command on the former
AutoFS
server node
to unmount them.
Although this approach takes more time, it is a less
intrusive solution.
If waiting
until the busy file systems in question become inactive is not
possible, use the
cfsmgr -K
directory
command on
the former
AutoFS
server node to
forcibly unmount all
AutoFS
intercept points and
auto-mounted file systems served by that node, even if they are busy.
Note
The cfsmgr -K command makes a best effort to unmount all AutoFS intercept points and auto-mounted file systems served by the node. However, the cfsmgr -K command may not succeed in all cases. For example, the cfsmgr -K command does not work if an NFS operation is stalled due to a down NFS server or an inability to communicate with the NFS server.
The cfsmgr -K command results in applications receiving I/O errors for open files in affected file systems. An application with its current working directory in an affected file system will no longer be able to navigate the file system namespace using relative names.
Perform the following steps to relocate the
autofs
CAA resource and forcibly unmount the
AutoFS
intercept points and auto-mounted file systems:
Bring the system to a quiescent state if possible to minimize disruption to users and applications.
Stop the
autofs
CAA resource by
entering the following command:
# caa_stop autofs
CAA considers the
autofs
resource to be stopped even if some auto-mounted file systems are
still busy.
Enter the following command to
verify that all
AutoFS
intercept points and
auto-mounted file systems have been unmounted.
Search the output for
autofs references.
# mount -e
In the event that they have not all been unmounted, enter the
following command to forcibly unmount the
AutoFS
intercepts and auto-mounted file systems:
# cfsmgr -K directory
Specify the directory on which an
AutoFS
intercept point or auto-mounted file system is mounted.
You need to enter only one mounted-on directory to remove all of the intercepts and auto-mounted file systems served by the same node.
Enter the following command to start the
autofs
resource:
# caa_start autofs -c cluster_member_to_be_server
7.7 Managing inetd Configuration
Configuration data for the Internet server daemon
(inetd
) is kept in the
following two files:
/etc/inetd.conf
The /etc/inetd.conf file is shared clusterwide by all members. Use it for services that should run identically on every member.
/etc/inetd.conf.local
The
/etc/inetd.conf.local
file holds configuration
data specific to each cluster member.
Use it to configure per-member network services.
To disable a clusterwide service on a local member,
edit
/etc/inetd.conf.local
for that member, and
enter
disable
in the
ServerPath
field for the service to be disabled.
For example, if
finger
is enabled clusterwide in
inetd.conf
and you want to disable it on a
member, add a line like the following to that member's
inetd.conf.local
file:
finger stream tcp nowait root disable fingerd
When
/etc/inetd.conf.local
is not present on a
member, the configuration in
/etc/inetd.conf
is used.
When
inetd.conf.local
is present,
its entries take precedence over those in
inetd.conf
.
7.8 Managing Mail
TruCluster Server supports the following mail protocols:
Simple Mail Transfer Protocol (SMTP)
DECnet Phase IV
DECnet Phase V
Message Transport System (MTS)
UNIX-to-UNIX Copy Program (UUCP)
X.25
In a cluster, all members must have the same mail configuration. If DECnet, SMTP, or any other protocol is configured on one cluster member, it must be configured on all members, and it must have the same configuration on each member. You can configure the cluster as a mail server, client, or as a standalone configuration, but the configuration must be clusterwide. For example, you cannot configure one member as a client and another member as a server.
Of the supported protocols, only SMTP is cluster-aware, so only SMTP can make use of the cluster alias. SMTP handles e-mail sent to the cluster alias, and labels outgoing mail with the cluster alias as the return address.
When configured, an instance of
sendmail
runs on each cluster
member.
Every member can handle messages waiting for processing because
the mail queue file is shared.
Every member can handle mail delivered
locally because each user's maildrop is shared among all members.
The other mail protocols, DECnet Phase IV, DECnet Phase V, Message Transport System (MTS), UUCP, and X.25, can run in a cluster environment, but they act as though each cluster member is a standalone system. Incoming e-mail using one of these protocols must be addressed to an individual cluster member, not to the cluster alias. Outgoing e-mail using one of these protocols has as its return address the cluster member where the message originated.
Configuring DECnet Phase IV, DECnet Phase V, Message Transport System (MTS), UUCP, or X.25 in a cluster is like configuring it in a standalone system. It must be configured on each cluster member, and any hardware that is required by the protocol must be installed on each cluster member.
The following sections describe managing mail in more detail.
7.8.1 Configuring Mail
Configure mail with either the
mailsetup
or
mailconfig
command.
Whichever command you choose, you have
to use it for future mail configuration on the cluster, because
each command understands only its own configuration format.
7.8.1.1 Mail Files
The following mail files are all common files shared clusterwide:
/usr/adm/sendmail/sendmail.cf
/usr/adm/sendmail/aliases
/var/spool/mqueue
/usr/spool/mail/*
The following mail files are member-specific:
Files in
/var/adm/sendmail
that have
hostname
as part of the file name
use the default cluster alias in place of
hostname.
For example, if the cluster
alias is
accounting
,
/var/adm/sendmail
contains files named
accounting.m4
and
Makefile.cf.accounting
.
Because the mail statistics file,
/usr/adm/sendmail/sendmail.st
, is member-specific,
mail statistics are unique to each cluster member.
The
mailstat
command returns statistics only for
the member on which the command executed.
When mail protocols other than SMTP are configured, the
member-specific
/var/adm/sendmail/protocols.map
file
stores member-specific information about the protocols in use.
In addition to a list of protocols,
protocols.map
lists DECnet Phase IV and DECnet Phase V aliases, when those protocols
are configured.
7.8.1.2 The Cw Macro (System Nicknames List)
Whether you configure mail with
mailsetup
or
mailconfig
,
the configuration process automatically adds the names of
all cluster members and the cluster alias to the
Cw
macro (nicknames
list) in the
sendmail.cf
file.
The nicknames list must contain these names.
If, during mail configuration, you accidentally delete the
cluster alias or a member name from the nicknames list, the
configuration program will add it back in.
During configuration you are given the opportunity to
specify additional nicknames for the cluster.
However, if
you do a quick setup in
mailsetup
,
you are not prompted to update the nicknames list.
The cluster members and the
cluster alias are still automatically added to the
Cw
macro.
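For example, in a three-member cluster whose default cluster alias is accounting (the names here are illustrative), the nicknames list in sendmail.cf might look similar to the following:
Cw accounting member1 member2 member3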
7.8.1.3 Configuring Mail at Cluster Creation
We recommend that you configure mail on your Tru64 UNIX
system before you run the
clu_create
command.
If you run only SMTP, then you do not need to perform further mail
configuration when you add new members to the cluster.
The
clu_add_member
command takes care of correctly
configuring mail on new members as they are added.
If you configure DECnet Phase IV, DECnet Phase V, MTS, UUCP,
or X.25, then each time that you add a new cluster member, you must run
mailsetup
or
mailconfig
and configure the protocol on the new
member.
7.8.1.4 Configuring Mail After the Cluster Is Running
All members must have the same mail configuration.
If you want to run only SMTP, then you need configure mail only once,
and you can run
mailsetup
or
mailconfig
from any cluster member.
If you want to run a protocol other than SMTP, you must manually run
mailsetup
or
mailconfig
on every member and configure the protocols.
Each member must also have any hardware required by the protocol.
The protocols must be configured for every cluster member,
and the configuration of each protocol must be the same on every
member.
The
mailsetup
and
mailconfig
commands cannot be focused on individual cluster members.
In the case of SMTP, the commands configure mail for the entire
cluster.
For other mail protocols,
the commands configure the protocol only for the cluster member on
which the command runs.
If you try to run
mailsetup
with the
-focus
option, you get the following error message:
Mail can only be configured for the entire cluster.
Whenever you add a new member to the cluster,
and you are running any mail protocol other than SMTP, you must run
mailconfig
or
mailsetup
and
configure the protocol on the new member.
If you run only SMTP, then no mail configuration is required when
a member is added.
Deleting members from the cluster requires no reconfiguration of mail,
regardless of the protocols that you are running.
7.8.2 Distributing Mail Load Among Cluster Members
Mail handled by SMTP can be load balanced by means of
the cluster alias selection priority (selp
) and
selection weight (selw
), which
load balance network connections among cluster members as follows:
The cluster member with the highest selection priority receives all connection requests.
The selection priority can be any integer from 1 through 100. The default value is 1.
Selection weight determines the distribution of connections among members with the same selection priority. A member receives, on average, the number of connection requests equal to the selection weight, after which requests are routed to the next member with the same selection priority.
The selection weight can be any integer from 0 through 100. A member with a selection weight of 0 receives no incoming connection requests, but can send out requests.
By default, all cluster members have the same selection
priority (selp=1
) and selection weight
(selw=1
), as determined by the
/etc/clu_alias.config
file on each member.
(The
clu_create
command uses a
default selection weight of 3, but if you create an alias the default
selection weight is 1.) When all
members share the same selection priority and the same
selection weight, then connection requests are distributed equally
among the members.
In the case of
the default system configuration, each member in turn
handles one incoming connection.
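Assuming the default layout, in which each member's /etc/clu_alias.config file contains the cluamgr commands that are run at boot, the entry for the default cluster alias looks similar to the following (only the selp and selw options discussed here are shown; on a member configured by clu_create, the selw value may be 3, as noted above):
/usr/sbin/cluamgr -a alias=DEFAULTALIAS,selp=1,selw=1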
If you want all incoming mail (and all other connections) to be handled by a subset of cluster members, set the selection priority for those cluster members to a common value that is higher than the selection priority of the remaining members.
You can also create a mail alias that includes only those cluster members that you want to handle mail, or create a mail alias with all members and use the selection priority to determine the order in which members of the alias receive new connection requests.
Set the selection weight or selection priority for a member by
running the
cluamgr
command on that member.
If your cluster members have the default values for
selp
and
selw
, and you want
all incoming mail (and
all
other connections) to be handled by a
single cluster member, log in to that member and assign it
a
selp
value greater than the default.
For example, enter the following command:
# cluamgr -a alias=DEFAULTALIAS,selp=50
Suppose you have an eight-member cluster and you want two of the members,
alpha
and
beta
,
to handle all incoming connections,
with the load split 40/60 between
alpha
and
beta
,
respectively.
Log in to
alpha
and enter the following command:
# cluamgr -a alias=DEFAULTALIAS,selp=50,selw=2
Then log in to
beta
and enter the following command:
# cluamgr -a alias=DEFAULTALIAS,selp=50,selw=3
Assuming that the other members have the default
selp
of 1,
beta
and
alpha
will handle
all connection requests.
beta
will take
three connections, then
alpha
will take two,
then
beta
will take the next three, and so on.
Note
Setting
selp
andselw
in this manner affects all connections through the cluster alias, not just the mail traffic.
For more information on balancing connection requests, see
Section 3.9
and
cluamgr
(8).
7.9 Configuring a Cluster for RIS
To create a Remote Installation Services (RIS) server in a cluster, perform the following procedure in addition to the procedure that is described in the Tru64 UNIX Sharing Software on a Local Area Network manual:
Modify
/etc/bootptab
so that the
NFS mount point is set to the default cluster alias.
Set the
tftp
server address to the
default cluster alias:
sa=default_cluster_alias
For information about
/etc/bootptab
,
see
bootptab
(4).
Note
Depending on your network configuration, you may need to supply a unique, arbitrary hardware address when registering the alias with the RIS server.
To use a cluster as an RIS client, you must do the following:
Register the cluster member from which you will be using
the
setld
command with the RIS server.
Do this by registering the member name and the
hardware address of that member.
Register the default cluster alias.
If you are registering for an operating system kit,
you will be prompted to enter a hardware address.
The
cluster alias does not have a physical interface associated
with its host name.
Instead, use any physical address that
does not already appear in either
/etc/bootptab
or
/usr/var/adm/ris/clients/risdb
.
If your cluster uses the cluster alias virtual MAC (vMAC) feature, register that virtual hardware address with the RIS server as the default cluster alias's hardware address. If your cluster does not use the vMAC feature, you can still generate a virtual address by using the algorithm that is described in the virtual MAC (vMAC) section, Section 3.11.
A virtual MAC address consists of a prefix (the default is AA:01)
followed by the IP address of the alias in hexadecimal format.
For
example, the default vMAC address for the default cluster alias
deli
whose IP address is
16.140.112.209
is
AA:01:10:8C:70:D1
.
The address is derived in the
following manner:
Default vMAC prefix:       AA:01
Cluster alias IP address:  16.140.112.209
IP address in hex format:  10:8C:70:D1
vMAC for this alias:       AA:01:10:8C:70:D1
Therefore, when registering this default cluster alias as a RIS
client, the host name is
deli
and the hardware
address is
AA:01:10:8C:70:D1
.
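To derive a vMAC address without converting the octets by hand, a command similar to the following (standard awk, shown here with the example address) produces the same result:
# echo 16.140.112.209 | awk -F. '{printf "AA:01:%02X:%02X:%02X:%02X\n", $1, $2, $3, $4}'
AA:01:10:8C:70:D1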
If you do not register both the default cluster alias and the member,
the
setld
command will return a message such
as one of the following:
# setld -l ris-server:
setld: Error contacting server ris-server: Permission denied.
setld: cannot initialize ris-server:
# setld -l ris-server:
setld: ris-server: not in server database
setld: cannot load control information
7.10 Displaying X Window Applications Remotely
You can configure the cluster so that a user on a system outside the cluster can run X applications on the cluster and display them on the user's system using the cluster alias.
The following example shows the use of
out_alias
as a way to apply single-system semantics
to X applications that are displayed from cluster members.
In
/etc/clua_services
, the
out_alias
attribute is set for the X server port (6000).
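That entry looks similar to the following (the service name, port, and any additional attributes may differ on your cluster; check the file itself):
Xserver    6000/tcp    out_alias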
A user on a system outside the cluster wants to run an X application
on a cluster member and display back to the user's system.
Because the
out_alias
attribute is set on port 6000 in the
cluster, the user must specify the name of the default cluster alias
when running the
xhost
command to allow
X clients access to the user's local system.
For example,
for a cluster named
deli
, the user runs the
following command on the local system:
# xhost +deli
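On the cluster member, the user then points the DISPLAY environment variable back at the local system (usersystem is a placeholder for that system's name) and starts the application as usual:
# DISPLAY=usersystem:0.0
# export DISPLAY
# /usr/bin/X11/xclock &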
This use of
out_alias
allows any X application from
any cluster member to be displayed on that user's system.
A cluster
administrator who wants users to allow access on a per-member
basis can either comment out the
Xserver
line in
/etc/clua_services
, or remove the
out_alias
attribute from that line (and then run
cluamgr -f
on each cluster member to make the
change take effect).
For more information on cluster aliases, see Chapter 3.