3    Managing the Cluster Alias Subsystem

As cluster administrator, you control the number of aliases, the membership of each alias, and the attributes specified by each member of an alias. For example, you can set the weighting selections that determine how client requests for in_multi services are distributed among members of an alias. You also control the alias-related attributes assigned to ports in the /etc/clua_services file.

This chapter discusses the following topics:

You can configure cluster aliases with either the cluamgr command or the SysMan Menu.

3.1    Summary of Alias Features

The chapter on cluster alias in the TruCluster Server Cluster Technical Overview manual describes cluster alias concepts. Read that chapter before modifying any alias or service attributes.

The following list summarizes important facts about the cluster alias subsystem:

3.2    Configuration Files

The following configuration files manage cluster aliases and services:

/sbin/init.d/clu_alias

The boot-time startup script for the cluster alias subsystem.

/etc/clu_alias.config

A CDSL pointing to a member-specific clu_alias.config file, which is called from the /sbin/init.d/clu_alias script. Each member's clu_alias.config file contains the cluamgr commands that are run at boot time to configure and join aliases, including the default cluster alias, for that member. (The cluamgr command does not modify or update this file; the SysMan utility edits this file.) Although you can manually edit the file, the preferred method is through the SysMan Menu.

/etc/clua_services

Defines ports, protocols, and connection attributes for Internet services that use cluster aliases. The cluamgr command reads this file at boot time and calls clua_registerservice() to register each service that has one or more service attributes assigned to it.

If you modify the file, run cluamgr -f on each cluster member. For more information, see clua_services(4) and cluamgr(8).

/etc/exports.aliases

Contains the names of cluster aliases (one alias per line) whose members will accept NFS requests. By default, the default cluster alias is the only cluster alias that will accept NFS requests. Use the /etc/exports.aliases file to specify additional aliases as NFS servers.

/etc/gated.conf.membern

Each cluster member's cluster alias daemon, aliasd, creates a /etc/gated.conf.membern file for that member. The daemon starts gated using this file as gated's configuration file rather than the member's /cluster/members/{memb}/etc/gated.conf file.

If you stop alias routing on a cluster member with cluamgr -r stop, the alias daemon restarts gated with that member's gated.conf as gated's configuration file.

3.3    Planning for Cluster Aliases

Managing aliases can be divided into three broad categories:

Consider the following things when planning the alias configuration for a cluster:

3.4    Preparing to Create Cluster Aliases

To prepare to create cluster aliases, follow these steps:

  1. For services with fixed port assignments, examine the entries in /etc/clua_services. Add entries for any additional services.

  2. For each alias, make sure that its IP address is associated with a host name in whatever hosts table your site uses; for example, /etc/hosts, Berkeley Internet Name Domain (BIND), or Network Information Service (NIS).

    Note

    If you modify a .rhosts file on a client to allow nonpassword-protected logins and remote shells from the cluster, use the default cluster alias as the host name, not the host names of individual cluster members. Login requests originating from the cluster use the default cluster alias as the source address.

  3. If any alias addresses are on virtual subnets, register the subnet with local routers. (Remember that a virtual subnet cannot have any real systems in it.)

3.5    Specifying and Joining a Cluster Alias

Before you can specify or join an alias, you must have a valid host name and IP address for the alias.
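
For example, a hypothetical /etc/hosts entry for an alias (the host name and IP address shown are illustrative only) might look like this:

16.140.112.210    clua_example.zk3.dec.com    clua_example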

The cluamgr command is the command-line interface for specifying, joining, and managing aliases. When you specify an alias on a cluster member, that member is aware of the alias and can advertise a route to the alias. The simplest command that specifies an alias using the default values for all alias attributes is:

# cluamgr -a alias=alias
 

When you specify and join an alias on a cluster member, that member can advertise a route to the alias and receive connection requests or packets addressed to that alias. The simplest command that both specifies and joins an alias using the default values for all attributes is:

# cluamgr -a alias=alias,join
 

To specify and join a cluster alias, follow these steps:

  1. Get a host name and IP address for the alias.

  2. Using the SysMan Menu, add the alias. Specify alias attributes when you do not want to use the default values for the alias; for example, to change the value of selp or selw.

    The SysMan Menu only writes the command lines to a member's clu_alias.config file. Putting the aliases in a member's clu_alias.config file means that the aliases will be started at the next boot, but it does not start them now.

    The following are sample cluamgr command lines from one cluster member's clu_alias.config file. All alias IP addresses are on a common subnet.

    /usr/sbin/cluamgr -a alias=DEFAULTALIAS,rpri=1,selw=3,selp=1,join
    /usr/sbin/cluamgr -a alias=clua_ftp,join,selw=1,selp=1,rpri=1,virtual=f
    /usr/sbin/cluamgr -a alias=printall,selw=1,selp=1,rpri=1,virtual=f
     
    

  3. To make the aliases active without rebooting, manually run the appropriate cluamgr commands on those members to specify or join the aliases, and to restart alias routing. For example:

    # cluamgr -a alias=clua_ftp,join,selw=1,selp=1,rpri=1
    # cluamgr -a alias=printall,selw=1,selp=1,rpri=1
    # cluamgr -r start
     
    

    The previous example does not explicitly specify virtual=f for the two aliases because f is the default value for the virtual attribute. As mentioned earlier, to join an alias and accept the default values for the alias attributes, the following command will suffice:

    cluamgr -a alias=alias_name,join
     
    

The following example shows how to configure an alias on a virtual network; it is not much different from configuring an alias on a common subnet.

# cluamgr -a alias=virtestalias,join,virtual,mask=255.255.255.0

The cluster member specifies and joins the alias virtestalias, and advertises both a host route to the alias and a network route to the virtual subnet. The command explicitly defines the subnet mask that is used when advertising the network route to this virtual subnet. If you do not specify a subnet mask, the alias daemon uses the network mask of the first interface through which the virtual subnet is advertised.

If you do not want a cluster member to advertise a network route for a virtual subnet, do not specify virtual or virtual=t for an alias in that subnet. For example, the cluster member on which the following command is run joins the alias but does not advertise a network route:

# cluamgr -a alias=virtestalias,join

See cluamgr(8) for detailed instructions on configuring an alias on a virtual subnet.

When configuring an alias whose address is in a virtual subnet, remember that the aliasd daemon does not keep track of the stanzas that it writes to a cluster member's gated.conf.membern configuration file for virtual subnet aliases. If more than one alias resides in the same virtual subnet, the aliasd daemon creates extra stanzas for the given subnet. This can cause gated to exit and write the following error message to the daemon.log file:

	duplicate static route
 

To avoid this problem, modify cluamgr virtual subnet commands in /etc/clu_alias.config to set the virtual flag only once for each virtual subnet. For example, assume the following two virtual aliases are in the same virtual subnet:

/usr/sbin/cluamgr -a alias=virtualalias1,rpri=1,selw=3,selp=1,join,virtual=t
/usr/sbin/cluamgr -a alias=virtualalias2,rpri=1,selw=3,selp=1,join
 

Because there is no virtual=t argument for the virtualalias2 alias, aliasd will not add a duplicate route stanza to this member's gated.conf.membern file.

3.6    Modifying Cluster Alias and Service Attributes

You can run the cluamgr command on any cluster member at any time to modify alias attributes. For example, to change the selection weight of the clua_ftp alias, enter the following command:

# cluamgr -a alias=clua_ftp,selw=2
 

To modify service attributes for a service in /etc/clua_services, follow these steps:

  1. Modify the entry in /etc/clua_services.

  2. On each cluster member, enter the following command to force cluamgr to reread the file:

    # cluamgr -f
     
    

Note

Reloading the clua_services file does not affect currently running services. After reloading the configuration file, you must stop and restart the service.

For example, the telnet service is started by inetd from /etc/inetd.conf. If you modify the service attributes for telnet in clua_services, you must run cluamgr -f and then stop and restart inetd for the changes to take effect. Otherwise, the changes do not take effect until the next reboot.
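
To illustrate, a hypothetical clua_services entry for telnet might look like the following; the fields are the service name, port/protocol, and one or more alias attributes (see clua_services(4) for the exact field syntax):

telnet    23/tcp    in_single

After editing the entry, run cluamgr -f on each member and then stop and restart inetd, as described above.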

3.7    Leaving a Cluster Alias

To make a cluster member leave a cluster alias that it has joined, enter the following command on that member:

# cluamgr -a alias=alias,leave
 

If the member is configured to advertise a route to the alias, it continues to advertise that route, but it is no longer a destination for connections or packets that are addressed to the alias.

3.8    Monitoring Cluster Aliases

Use the cluamgr -s all command to learn the status of cluster aliases. For example:

cluamgr -s all
 
Status of Cluster Alias: deli.zk3.dec.com
 
netmask: 0
aliasid: 1
flags: 7<ENABLED,DEFAULT,IP_V4>
connections rcvd from net: 72
connections forwarded: 14
connections rcvd within cluster: 52
data packets received from network: 4083
data packets forwarded within cluster: 2439
datagrams received from network: 28
datagrams forwarded within cluster: 0
datagrams received within cluster: 28
fragments received from network: 0
fragments forwarded within cluster: 0
fragments received within cluster: 0
Member Attributes:
memberid: 1, selw=3, selp=1, rpri=1 flags=11<JOINED,ENABLED>
memberid: 2, selw=2, selp=1, rpri=1 flags=11<JOINED,ENABLED>
 

Note

Running netstat -i does not display cluster aliases.

For aliases on a common subnet, you can run arp -a on each member to determine which member is routing for an alias. Look for the alias name and permanent published. For example:

# arp -a | grep permanent
deli (16.140.112.209) at 00-00-f8-24-a9-30 permanent published
 

3.9    Load Balancing

The concept of load balancing applies only to in_multi services. All packets and requests for a single-instance service go to only one member of the alias at a time.

The cluster alias subsystem does not monitor the performance of individual cluster members, nor does it perform automatic load balancing for in_multi services. You control the distribution of connection requests by assigning the selection priority and selection weight for each member of an alias. You can manually modify these values at any time.

You can use an alias's selection priority, selp=n, to create logical subsets within an alias. For example, assume that four cluster members have joined an alias with the following selection priorities:

  Member A: selp=5
  Member B: selp=5
  Member C: selp=4
  Member D: selp=4

As long as any selp=5 member can respond to requests, no requests are directed to any selp=4 member. Therefore, as long as members A and B are capable of serving requests, members C and D will not receive any packets or requests addressed to this alias. You can use selection priority to create a failover hierarchy among members of a cluster alias.
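
For example, assuming a hypothetical alias named clua_app, members A and B would each run the first of the following commands and members C and D the second:

# cluamgr -a alias=clua_app,selp=5,join
# cluamgr -a alias=clua_app,selp=4,join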

You can use an alias's selection weight, selw=n, to control the distribution of requests among members of an alias. The selection weight that a member attaches to an alias translates, on average, to the number of requests (per application) that are directed to this member before requests are directed to the next member of the alias with the same selection priority. For example, assume that four cluster members have joined a cluster alias:

Assuming that all selection priorities are the same, the round-robin algorithm walks through the list of members, distributing selw requests to each member before moving to the next. Member A gets 3 requests, then member B gets 3 requests, then member C gets 2 requests, and so on.

When assigning selection weights to members of an alias, assign higher weights to members whose resources best match those of the application that is accessed through the alias.

An administrator with shell script experience can write a script to monitor the performance of cluster members and use this information as a basis for raising or lowering alias selection weights. In this case, performance is determined by whatever is relevant to the applications in question.
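
The following is a minimal sketch of such a script, assuming a hypothetical alias named clua_ftp and using load average as the performance metric; the threshold and weights are illustrative only:

#!/bin/sh
# Minimal sketch: lower this member's selection weight for an alias
# when the 5-minute load average exceeds a threshold, and raise it
# again when the load drops. Alias name, threshold, and weights are
# hypothetical; adjust them for your site.
ALIAS=clua_ftp
THRESHOLD=4
HIGH_WEIGHT=3
LOW_WEIGHT=1

# Extract the 5-minute load average (second value after "load average:").
LOAD=`uptime | sed 's/.*load average: //' | awk -F, '{print int($2)}'`

if [ "$LOAD" -gt "$THRESHOLD" ]; then
    /usr/sbin/cluamgr -a alias=$ALIAS,selw=$LOW_WEIGHT
else
    /usr/sbin/cluamgr -a alias=$ALIAS,selw=$HIGH_WEIGHT
fi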

As an example, assume you have a four-member cluster that you want to configure as a Web site whose primary purpose is a file archive. Users will connect to the site and download large files. The cluster consists of four members that are connected to a common network. Within the cluster, members A and B share one set of disks while members C and D share another set of disks. The network interfaces for members A and B are tuned for bulk data transfer (for example, ftp transfers); the network interfaces for members C and D are tuned for short timeouts and low latency (connections from the Web).

You define two cluster aliases: clua_ftp and clua_http. All four cluster members join both aliases, but with different values.

A and B have the following lines in their /etc/clu_alias.config files:

/usr/sbin/cluamgr -a alias=clua_ftp,selw=1,selp=10,join
/usr/sbin/cluamgr -a alias=clua_http,selw=1,selp=5,join
 

C and D have the following lines in their /etc/clu_alias.config files:

/usr/sbin/cluamgr -a alias=clua_ftp,selw=1,selp=5,join
/usr/sbin/cluamgr -a alias=clua_http,selw=1,selp=10,join
 

The result is that as long as either A or B is up, they will handle all ftp requests; as long as either C or D is up, they will handle all http requests. However, because all four members belong to both aliases, if the two primary servers for either alias go down, the remaining alias members (assuming that quorum is maintained) will continue to service client requests.

3.10    Extending Clusterwide Port Space

The number of ephemeral (dynamic) ports that are available clusterwide for services is determined by the inet subsystem attributes ipport_userreserved_min (default: 1024) and ipport_userreserved (default: 5000).

Because port space is shared among all cluster members, clusters with more members might experience contention for available ports. If a cluster has more than two members, we recommend that you set the value of ipport_userreserved to its maximum allowable value (65535). (Setting ipport_userreserved = 65535 has no adverse side effects.)
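
To see the values that are currently in use on a member, query the inet subsystem with the sysconfig command; for example:

# sysconfig -q inet ipport_userreserved_min ipport_userreserved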

To set ipport_userreserved to its maximum value, follow these steps:

  1. On one member of the cluster, add the following lines to the clusterwide /etc/sysconfigtab.cluster file to configure members to set ipport_userreserved to 65535 when they next reboot:

    inet:
     ipport_userreserved=65535
     
    

  2. On each member of the cluster, run the sysconfig command to modify the current value of ipport_userreserved:

    sysconfig -r inet ipport_userreserved=65535
     
    

3.11    Enabling Cluster Alias vMAC Support

When a cluster alias IP address is configured in a common subnet, one cluster member in that subnet will, based on its router priority (rpri) value for that alias, act as the alias's proxy ARP master. This member will respond to local ARP requests addressed to the alias, and will broadcast a gratuitous ARP packet to inform other systems of the hardware (MAC) address that is associated with the alias's IP address. The other local systems then update their ARP tables to reflect this cluster-alias-to-MAC association.

However, this broadcast packet is a problem for systems that do not understand gratuitous ARP packets. They will not become aware of changes in the cluster alias-to-MAC association until the normal timeout interval for their ARP tables has elapsed. A solution is to provide a virtual hardware address (vMAC address) for each cluster alias.

A virtual MAC address is a unique hardware address that can be automatically created for each alias IP address. An alias's vMAC address follows the cluster alias proxy ARP master from node to node as needed. Regardless of which cluster member is serving as the proxy ARP master for the alias, the alias's vMAC address does not change.

When vMAC support is enabled, if a cluster member becomes the proxy ARP master for a cluster alias, it creates a virtual MAC address for use with that cluster alias. A virtual MAC address consists of a prefix (the default is AA:01) followed by the IP address of the alias in hexadecimal format. For example, the default vMAC address for an alias whose IP address is 16.140.112.209 is AA:01:10:8C:70:D1:

        Default vMAC prefix:       AA:01
        Cluster Alias IP Address:  16.140.112.209
        IP address in hex. format: 10:8C:70:D1
        vMAC for this alias:       AA:01:10:8C:70:D1
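
To compute the hexadecimal form by hand, one approach (a hypothetical example using the standard printf command) is:

# printf "AA:01:%02X:%02X:%02X:%02X\n" 16 140 112 209
AA:01:10:8C:70:D1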
 

When another cluster member becomes the proxy ARP master for this alias, the virtual MAC address moves with the alias so that a consistent MAC address is presented within the common subnet for each cluster alias.

When configuring vMAC support, configure all cluster members identically. For this reason, set vMAC configuration variables in /etc/rc.config.common.

By default, vMAC support is disabled. To enable vMAC support, use rcmgr to put the appropriate entry in /etc/rc.config.common:

rcmgr -c set VMAC_ENABLED yes
 

Conversely, to disable vMAC support, enter:

rcmgr -c set VMAC_ENABLED no
 

To change the default AA:01 vMAC prefix, enter:

rcmgr -c set VMAC_PREFIX xx:xx
 

To manually enable or disable vMAC support on an individual cluster member, specify the cluamgr vmac or novmac routing option. For example, to enable vMAC support for a cluster member, enter:

cluamgr -r vmac
 

To manually disable vMAC support for an individual cluster member, enter:

cluamgr -r novmac
 

Because all cluster members should have the same vMAC settings, the recommended sequence when enabling vMAC support is as follows:

  1. On any cluster member, enter:

    rcmgr -c set VMAC_ENABLED yes
     
    

    This ensures that vMAC support is automatically enabled at boot time. However, because setting this variable only affects a member when it reboots, the currently running cluster does not have vMAC support enabled.

  2. To manually enable vMAC support for the currently running cluster, enter the following command on each cluster member:

    cluamgr -r vmac
     
    

    You do not have to add the cluamgr -r vmac command to each cluster member's /etc/clu_alias.config file. Running the cluamgr -r vmac command manually on each member enables vMAC support now; setting VMAC_ENABLED to yes in the shared /etc/rc.config.common file automatically enables vMAC support at boot time for all cluster members.

3.12    Routing Configuration Guidelines

Cluster alias routing works without manual configuration only when every subnet that is connected to the cluster includes a functioning router. For a connected subnet with no router, some manual routing configuration is required because the cluster alias daemons on the cluster members cannot unambiguously determine and verify routes that work correctly for all possible routing topologies.

If you cannot configure a router in a subnet that is connected to the cluster (for example, one cluster member is connected to an isolated LAN containing only nonrouters), you must manually configure a network route to that subnet on each cluster member that is not connected to that subnet. On each such member, add a network route to the routerless subnet that uses the cluster interconnect address of a member that is connected to that subnet as its gateway.
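
For example, the following hypothetical route command (all addresses are illustrative) adds such a route on a member that is not connected to a routerless subnet, 16.50.0.0, using 10.0.0.1, the cluster interconnect address of a member that is connected to that subnet, as the gateway:

# route add -net 16.50.0.0 10.0.0.1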

Note

Multiple clusters on the same LAN can use the same virtual subnet.

This works because of host routes; any router on the LAN sees each cluster alias's individual host route and therefore directs packets to the correct cluster. Beyond the LAN, the advertised network routes propagate reachability for the virtual subnet, and packets addressed to cluster alias addresses in the virtual subnet find their way to a router on the LAN. In summary, you do not need a separate virtual subnet for each cluster as long as (1) host routes are being generated and (2) the clusters share the same LAN.

However, using the same virtual subnet for multiple clusters is more complicated when the clusters are multihomed. For instance, if two clusters both connect to LAN 1 but are separately connected to LAN 2 and LAN 3, using the same virtual subnet for both clusters does not work for packets that arrive through LAN 2 and LAN 3. To share a virtual subnet, the clusters must have identical LAN connections.

3.13    Cluster Alias and NFS

When a cluster is configured as an NFS server, NFS client requests must be directed either to the default cluster alias or to an alias listed in /etc/exports.aliases. NFS mount requests directed at individual cluster members are rejected.

As shipped, the default cluster alias is the only alias that NFS clients can use. However, you can create additional cluster aliases. If you put the name of a cluster alias in the /etc/exports.aliases file, members of that alias accept NFS requests. This feature is useful when some members of a cluster are not directly connected to the storage that contains exported file systems. In this case, creating an alias with only directly connected systems as alias members can reduce the number of internal hops that are required to service an NFS request.
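
For example, assuming a hypothetical alias named clua_nfs that only the directly connected members have joined, you would add a line containing just the alias name to /etc/exports.aliases:

clua_nfs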

As described in the Cluster Technical Overview, you must make sure that the members of an alias serving NFS requests are directly connected to the storage containing the exported file systems. In addition, if any other cluster members are directly connected to this storage but are not members of the alias, you must make sure that these systems do not serve these exported file systems. Only members of the alias used to access these file systems should serve these file systems. One approach is to use cfsmgr to manually relocate these file systems to members of the alias. Another option is to create boot-time scripts that automatically learn which members are serving these file systems and, if needed, relocate them to members of the alias.

Before configuring additional aliases for use as NFS servers, read the sections in the Cluster Technical Overview that discuss how NFS and the cluster alias subsystem interact for NFS, TCP, and Internet User Datagram Protocol (UDP) traffic. Also read the exports.aliases(4) reference page and the comments at the beginning of the /etc/exports.aliases file.

3.14    Cluster Alias and Cluster Application Availability

This section provides a general discussion of the differences between the cluster alias subsystem and cluster application availability (CAA).

There is no obvious interaction between the two subsystems. They are independent of each other. CAA is an application-control tool that starts applications, monitors resources, and handles failover. Cluster alias is a routing tool that handles the routing of connection requests and packets addressed to cluster aliases. They provide complementary functions: CAA decides where an application will run; cluster alias decides how to get there, as described in the following:

One potential cause for confusion is the term single-instance application. CAA uses this term to refer to an application that runs on only one cluster member at a time. However, for cluster alias, when an application is designated in_single, it means that the alias subsystem sends requests and packets to only one instance of the application, no matter how many members of the alias are listening on the port that is associated with the application. Whether the application is running on all cluster members or on one cluster member, the alias subsystem arbitrarily selects one alias member from those listening on the port and directs all requests to that member. If that member stops responding, the alias subsystem directs requests to one of the remaining members.

In the /etc/clua_services file, you can designate a service as either in_single or in_multi. In general, if a service is in /etc/clua_services and is under CAA control, designate it as an in_single service. However, even if the service is designated as in_multi, the service will operate properly for the following reasons:

All cluster members are members of the default cluster alias. However, you can create a cluster alias whose members are a subset of the entire cluster. You can also restrict which cluster members CAA uses when starting or restarting an application (favored or restricted placement policy).

If you create an alias and tell users to access a CAA-controlled application through this alias, make sure that the CAA placement policy for the application matches the members of the alias. Otherwise, the application can end up running on a cluster member that has not joined the alias, and the cluster alias subsystem will not direct packets addressed to the alias to the member that is running the application.
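
The following sketch shows one way to keep the two aligned, assuming a hypothetical alias named clua_app that is served only by cluster members membera and memberb (all names are illustrative). Join the alias on those two members only:

/usr/sbin/cluamgr -a alias=clua_app,join

Then restrict the application to the same members in its CAA profile; the attribute names shown follow caa_profile(8):

PLACEMENT=restricted
HOSTING_MEMBERS=membera memberb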

The following examples illustrate the interaction of cluster alias and service attributes with CAA.

For each alias, the cluster alias subsystem recognizes which cluster members have joined that alias. When a client request uses that alias as the target host name, the alias subsystem sends the request to one of its members based on the following criteria:

Assume the same scenario, but now the application is controlled by CAA. As an added complication, assume that someone has mistakenly designated the application as in_multi in clua_services.

In yet another scenario, the application is not under CAA control and is running on several cluster members. All instances bind and listen on the same well-known port. However, the entry in clua_services is not designated in_multi; therefore, the cluster alias subsystem treats the port as in_single:

And finally, a scenario that demonstrates how not to combine CAA and cluster alias: