The terms in this glossary are commonly used in a TruCluster software environment.
Shell scripts used by CAA to control how applications are started, stopped, and checked. Action scripts are located in the /var/cluster/caa/script directory. The file names of action scripts take the form resource_name.scr.
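An action script of this kind can be sketched as follows. This is a hedged illustration only: the application name "myapp", the echoed messages, and the function wrapper are assumptions for demonstration, not part of the CAA interface; a real script would launch, terminate, and probe the actual application.

```shell
# Minimal sketch of a CAA action script (e.g. /var/cluster/caa/script/myapp.scr).
# "myapp" and the echoed messages are illustrative assumptions.
action() {
    case "$1" in
    start) echo "starting myapp" ;;   # launch the application here
    stop)  echo "stopping myapp" ;;   # terminate the application here
    check) echo "myapp is running" ;; # report status; exit 0 if alive
    *)     return 1 ;;
    esac
}
action "${1:-check}"
```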
A device that converts the protocol and hardware interface of one bus type into that of another bus.
Electrical switches on the side or rear of some disk drives that determine the SCSI address setting for the drive.
A cluster member that makes a cluster alias address known to the network and receives incoming packets for that alias. By default, all cluster members are configured as alias routers at boot time.
The characteristic of a computing system that allows it to provide computing services (such as applications) to clients with little or no disruption.
See also highly available
Flat or twisted-wire cable or a backplane composed of individual parallel circuits. A bus connects computer system components to provide communications paths for addresses, data, and control information.
A computer system that uses resources provided by another computer, called a server.
A loosely coupled collection of servers that share storage and other resources that make applications and data highly available. A cluster consists of communications media, member systems, peripheral devices, and applications. The systems communicate over a high-performance interconnect.
An IP address used to address all or a subset of the members in a cluster. A cluster alias makes some or all of the systems in a cluster look like a single system to the outside world.
The CAA subsystem provides high availability for single-instance applications and monitoring of the state of other types of resources (such as network interfaces). A single instance of any application that can run on Tru64 UNIX can be made highly available in a cluster with CAA.
See expected votes
A cluster virtual file system that sits above the physical file systems and provides clusterwide access (with assistance from the device request dispatcher) to all mounted file systems in a cluster. CFS maintains cache coherency across all cluster members, which ensures that all members have an identical, consistent view of file systems directly connected to the cluster.
Private physical bus employed by cluster members for intracluster communications.
The basic computing resource in a cluster. A member system must be physically connected to a cluster interconnect and at least one shared SCSI bus.
In common usage, a system configured with TruCluster Server software that is capable of joining a cluster. From the point of view of the connection manager, a system that has either formed a single-member cluster or has been granted membership in an existing cluster. The connection manager dynamically determines cluster membership based on communications among the cluster members. Only an active cluster member can access the shared resources of a cluster.
A situation in which an existing cluster can divide into two or more clusters.
In the context of cluster aliases, a cluster member that makes a cluster alias IP address known to the network and receives incoming packets addressed to the alias. By default, all cluster members are cluster routers.
In the context of cluster aliases, an existing physical subnet. Cluster alias IP addresses are either in a common subnet or in a virtual subnet.
The cluster software component that coordinates participation of systems in the cluster, and maintains cluster integrity when systems join or leave the cluster.
A special form of a symbolic link whose target pathname includes an environment variable, {memb}, which is resolved at run time. In a cluster, CDSLs make it possible to maintain per-system configuration and data files within the shared CFS root (/), /usr, and /var file systems.
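The run-time resolution of {memb} can be sketched as a simple substitution. The target path below follows the member-directory convention described in the entry, and member ID 1 is an assumption for illustration:

```shell
# Sketch of CDSL resolution: {memb} is replaced at run time with the
# member-specific directory name (member1 is an assumed member ID).
cdsl_target="../cluster/members/{memb}/etc/rc.config"
memb="member1"
resolved=$(printf '%s\n' "$cdsl_target" | sed "s/{memb}/$memb/")
echo "$resolved"
```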
The number of votes contributed by current cluster members and the quorum disk as seen by this member.
See locked port
A kernel subsystem that controls all I/O access to storage devices in a cluster. The device request dispatcher supports clusterwide access to both character and block disk devices.
Note: Do not confuse the device request dispatcher with the Distributed Raw Disk (DRD) services provided in the TruCluster Production Server product. The device request dispatcher is fully integrated with the kernel, and removes the need for having a specific service to make storage accessible to cluster members.
A special cluster alias created during cluster installation. All cluster members are, by default, members of the default cluster alias.
A SCSI bus where the signal's level is determined by the potential difference between two wires.
An application that is specifically designed to run on a cluster, using different members for specific purposes. These applications use the Memory Channel, distributed lock manager (DLM), and cluster alias application programming interfaces to integrate the application with cluster resources.
The cluster software component that synchronizes access to shared resources among cooperating processes throughout the cluster.
The sum of all member votes held by cluster members, plus the vote of the quorum disk, if one is defined.
The event manager (EVM) facility lets kernel-level and user-level processes and components post events, and provides a means for processes to subscribe for notification when selected events occur. The facility provides an event viewer, an API, and command-line utilities. See EVM(5) for more information.
See event manager
A transfer of the responsibility to provide services. A failover occurs when a hardware or software failure causes a service to restart on another member system.
A Memory Channel logical rail configuration that consists of two physical rails, with one physical rail active and the other inactive. If the active physical rail fails, a failover takes place and the inactive physical rail is used.
An optional mode of SCSI-2 that allows transmission rates of up to 10 MB per second.
A bus speed that uses the fast synchronous transfer option, enabling I/O devices to attain high peak-rate transfers (10 MB per second) in synchronous mode.
Software code stored in hardware.
In the TruCluster software, the ability to survive any single hardware or software failure.
A cluster can be considered highly available if the hardware and software provides protection against any single failure, such as a system or disk failure or a SCSI cable disconnection.
A service can be considered highly available if the hardware it depends on provides protection against any single failure, and the service is configured to fail over in case of a failure.
The ability to replace a device on a shared bus while the bus is active.
When a service's port is designated as in_multi, the cluster alias subsystem routes connection requests and packets to all eligible members of the alias.
When a service's port is designated as in_noalias, the cluster alias subsystem ensures that the port will not receive inbound alias messages.
When a service's port is designated as in_nolocal, the cluster alias subsystem ensures that the port will not honor connection requests to a nonalias address.
When a service's port is designated as in_single, the cluster alias subsystem ensures that only one alias member will receive connection requests or packets for that service.
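These designations are assigned per service port. The fragment below is a hedged sketch of what such a per-service configuration might look like; the file name /etc/clua_services and the specific service lines are assumptions for illustration, not verified entries:

```
# service   port/protocol   cluster alias port attributes (illustrative)
telnet      23/tcp          in_single
ntp         123/udp         in_multi,static
```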
See private SCSI bus
A file that indicates that operations on one or more other files are restricted or prohibited. The presence of the lock file can be used as the indication, or the lock file can contain information describing the nature of the restrictions.
A port in the clusterwide port space that is dedicated for use by a single node in the cluster.
One or more Memory Channel physical rails. Logical rails are configured as a single-rail or as a failover pair.
The Logical Storage Manager (LSM) is a disk storage management tool that protects against data loss, improves disk I/O performance, and customizes the disk configuration.
System administrators use LSM to perform disk management functions without disrupting users or applications accessing data on those disks.
A physical or virtual peripheral device addressable through a target. LUNs use their target's bus connection to communicate on a SCSI bus.
A group of Logical Storage Manager (LSM) disks that share a common configuration. The configuration information for an LSM disk group consists of a set of records describing objects including LSM disks, LSM volumes, LSM plexes, and LSM subdisks that are associated with the LSM disk group. Each LSM disk group has an administrator-assigned name that can be used to reference that LSM disk group.
A Logical Storage Manager (LSM) volume is a special device that contains data used by a UNIX file system, a database, or other applications. LSM transparently places an LSM volume between applications and a physical disk. Applications then operate on the LSM volume rather than on the physical disk. For example, a file system is created on an LSM volume rather than on a physical disk.
An LSM volume presents block and raw interfaces that are compatible in their use with disk partition special devices. Because an LSM volume is a virtual device, it can be mirrored, spanned across disk drives, moved to use different storage, and striped using administrative commands. The configuration of an LSM volume can be changed using LSM utilities without disrupting applications or file systems that are using the LSM volume.
A Logical Storage Manager (LSM) plex is a copy of an LSM volume's logical data address space, sometimes known as a mirror. An LSM volume can have up to eight LSM plexes associated with it. A read can be satisfied from any LSM plex, while a write is directed to all LSM plexes.
See cluster member
An integer, in the range 1-63, used to identify a cluster member system. Each member has a unique member ID, which is assigned during the installation procedure.
The number of quorum votes assigned to a cluster member.
A peripheral component interconnect (PCI) cluster interconnect that provides fast and reliable communications between cluster members. Physically, the interconnect consists of a Memory Channel adapter installed in a PCI slot in each member system, one or more Memory Channel link cables to connect the adapters, and an optional Memory Channel hub.
A directory file that serves as the name of a mounted file system.
An application that can run on multiple cluster members at the same time. A multi-instance application is, by definition, highly available because the failure of one cluster member does not affect the instances of the application running on other members.
Two or more computing systems that are linked for the purpose of exchanging information and sharing resources.
The network adapter and the software that allows a system to communicate over a network.
A cluster member with 0 (zero) votes is considered to be a nonvoting member.
See also voting member
When a service's port is designated as out_alias, the cluster alias subsystem ensures that the default cluster alias is used as the source address whenever the port is used as a destination.
An abnormal condition in which nodes in an existing cluster divide into two independent clusters.
A peripheral component interconnect (PCI) bus is an industry-standard expansion I/O bus that is a synchronous, asymmetrical I/O channel.
The module on a storage shelf that provides the interface between a differential SCSI bus and the storage shelf single-ended SCSI bus. Switches on the module enable SCSI bus termination and control SCSI bus IDs for the storage shelf.
A Memory Channel hub with its cables and Memory Channel adapters and the Memory Channel driver for the adapters on each node.
See also logical rail
A placement policy determines where an application under CAA control is run. Supported policies are: balanced, favored, and restricted.
A SCSI bus that connects private storage to the local system.
A storage device on a private SCSI bus. Storage devices include hard disks, floppy disks, compact disk drives, tape drives, and other devices.
The Address Resolution Protocol (ARP) is used to map dynamically between IP addresses and Ethernet addresses. An ARP request contains the IP address of an interface on the target host. The host that recognizes this IP address should respond with its Ethernet address. All other hosts should ignore the ARP request.
Proxy ARP is, in essence, when a system or router lies about being the system with an interface that matches the IP address in the ARP request. The proxy ARP system responds to the ARP request by returning its own Ethernet address. The system then routes the packets to the real target system. Proxy ARP is useful for subnetting and also when adding routers to a topology where some hosts are not yet configured to use the routers.
In a cluster, proxy ARP is the mechanism used by the physical cluster members to handle requests addressed to cluster aliases whose addresses reside on common subnets.
A cluster state in which members are allowed to access clusterwide shared resources and thus perform useful work. The cluster has quorum when the connection manager determines that the member and quorum disk votes in the cluster equal or exceed the required number of quorum votes.
See also quorum votes
A mathematical method that the connection manager uses to determine the circumstances under which a given member can participate in a cluster, safely accessing clusterwide resources and performing useful work.
A disk whose h partition contains cluster status and quorum information. Each cluster can have a maximum of one quorum disk. The quorum disk is assigned votes that are used when calculating quorum.
The number of votes that a quorum disk contributes towards quorum.
A cluster state in which no member is allowed to access clusterwide shared resources. A cluster enters a quorum loss state when the connection manager determines that the member and quorum disk votes in the cluster are less than the required number of quorum votes.
See also quorum votes
The number of votes required to form or maintain a cluster. The formula for calculating quorum votes is:
quorum votes = round_down((expected votes + 2)/2)
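Worked through with assumed numbers: a cluster of three voting members plus a quorum disk has four expected votes, so quorum is round_down((4 + 2)/2) = 3. A minimal sketch of the arithmetic (the vote counts are assumptions):

```shell
# Quorum calculation sketch: three voting members plus one quorum disk vote.
expected_votes=4
quorum_votes=$(( (expected_votes + 2) / 2 ))   # integer division rounds down
echo "quorum votes: $quorum_votes"
```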
A SCSI bus adapter between a differential SCSI bus and the single-ended RAID array storage shelves. It responds to host commands to access the RAID array disk or tape devices.
A technique that organizes disk data to improve performance and reliability. RAID has three attributes:
It is a set of physical disks viewed by the user as a single logical device or multiple logical devices.
Disk data is distributed across the physical set of drives in a defined manner.
Redundant disk capacity is added so data can be recovered if a drive fails.
Describes duplicate hardware that provides spare capacity that can be used when a component fails.
A cluster hardware or software component that provides a service to end users or to other software components. Examples of resources are disks, tapes, file systems, network interfaces, and application software.
The resource manager consists of all the CAA daemons running on cluster members. These daemons are independent but they communicate with each other, sharing information about the status of the resources.
The resource manager communicates with all the components of the CAA subsystem, as well as the connection manager and the event manager (EVM). The resource manager also uses the resource monitors that monitor the status of a particular type of resource.
There is one resource monitor for each type of resource (application, network, tape, and media changer). Resource monitors are loaded by the resource manager at boot time.
Each application under CAA control has a resource profile, which contains that application's resource requirements. The file contains keyword/value pairs used by CAA to monitor resources and control application failover. Resource profiles are located in the /var/cluster/caa/profile directory. The file names of resource profiles take the form resource_name.cap.
Router priority controls the proxy ARP router selection for a cluster alias on a common subnet. For each alias in a common subnet, the cluster member with the highest router priority for that alias will route for that alias.
A program that is interpreted and executed by the shell.
An extension to the original SCSI standard featuring multiple systems on the same bus and hot swap. Hot swap is the ability to replace a device on a shared bus while the bus is active. The SCSI-2 standard is ANSI standard X3.T9.2/86-109.
A storage adapter, commonly referred to as a host bus adapter (HBA), that provides a connection between an I/O bus and a SCSI bus.
A bus that supports the transmission and signaling requirements of a SCSI protocol.
The data transfer speed for a SCSI bus. SCSI bus speed can be either slow, up to 5 MB/s; fast, up to 10 MB/s; fast and wide, up to 20 MB/s; or UltraSCSI, up to 40 MB/s.
See SCSI adapter
A SCSI controller, peripheral controller, or intelligent peripheral that can be attached to a SCSI bus.
A unique address, from 0 to 15, that identifies a device on a SCSI bus.
Selection priority determines the order in which members of a cluster alias receive new connection requests. The selection priority establishes a hierarchy within the members of an alias. Connection requests are distributed among those members sharing the highest selection priority value.
Selection weight indicates the number of connections (on average) this member is given before connections are given to the next alias member with the same selection priority value.
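The interaction of selection priority and selection weight behaves like a weighted round-robin among the members at the highest priority. The member names and 3:1 weights in this sketch are assumptions for illustration:

```shell
# Weighted round-robin sketch: memberA (weight 3) and memberB (weight 1)
# share the highest selection priority, so memberA receives three
# connection requests for every one that memberB receives.
distribute() {
    i=0
    while [ "$i" -lt "$1" ]; do
        if [ $(( i % 4 )) -lt 3 ]; then echo memberA; else echo memberB; fi
        i=$(( i + 1 ))
    done
}
distribute 8 | sort | uniq -c
```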
A computing system that provides a specific set of applications or data to clients.
A SCSI bus that is connected to more than one member system and, optionally, one or more storage devices.
Disks that are connected to a shared SCSI bus.
Converts signals between a single-ended SCSI bus and a differential SCSI bus.
A signal path in which one data lead and one ground lead are utilized to make a device connection. This transmission method is economical, but is more susceptible to noise than a differential SCSI bus.
An application that is run on only one cluster member at a time. The cluster application availability (CAA) subsystem can provide high availability for single-instance applications by controlling the initial startup and failover characteristics for a single-instance application.
A Memory Channel logical rail configuration where there is a one-to-one relationship between physical rails and logical rails. This configuration has no failover properties; if the physical rail fails, the logical rail fails.
An American National Standards Institute (ANSI) standard interface for connecting disks and other peripheral devices to a computer system. SCSI-based devices can be configured in a series, with multiple devices on the same bus.
External interface to console firmware for operating systems that expect firmware compliance with the Alpha System Reference Manual (SRM).
A Memory Channel interconnect configuration that uses a Memory Channel hub to connect Memory Channel adapters. To set up a Memory Channel interconnect in standard mode, use a link cable to connect each Memory Channel adapter to a linecard installed in a Memory Channel hub.
When a service's port is designated as static, the cluster alias subsystem ensures that the port will not be assigned as a dynamic port.
The modular storage subsystem (MSS), which consists of a family of mass storage products that can be configured to meet current and future storage needs.
A software module that can be installed with the Tru64 UNIX setld software installation utility.
The private (nonshared) interconnect used on the CPU subsystem. This bus connects the processor module, the memory module, and the I/O module.
A device that can be addressed by a SCSI ID on a SCSI bus.
Resistor array device used for terminating a SCSI bus. A SCSI bus must be terminated at its two physical ends.
A connector that joins two cables to a single device, or allows terminating a shared SCSI bus external to the adapter or RAID controller.
In the context of cluster aliases, moving an mbuf chain between cluster members after receipt.
A differential SCSI bus standard that uses smaller diameter cables with smaller connectors and allows bus speeds up to 40 MB/s at 25 meters.
A specialized signal converter with multiple connectors. An UltraSCSI hub converts differential input SCSI signals from a host bus adapter to single-ended, then converts the single-ended signals back to differential for the output connection to a RAID array controller. An UltraSCSI hub allows radial connection of UltraSCSI devices and increases the separation between host and storage.
A Memory Channel interconnect configuration that does not use a Memory Channel hub to connect Memory Channel adapters. Virtual hub mode is supported only for clusters that have two member systems. To set up a Memory Channel interconnect in virtual hub mode, use a Memory Channel link cable to connect the Memory Channel adapter in one member system to the corresponding Memory Channel adapter in the other member system.
In the context of cluster aliases, a vMAC (virtual Media Access Control) address is a unique hardware address that can be automatically created for each alias IP address. An alias vMAC address follows the cluster alias proxy ARP master from node to node as needed. Regardless of which cluster member is serving as the proxy ARP master for an alias, the alias's vMAC address does not change.
In the context of cluster aliases, a subnet with no physical connections. Cluster alias IP addresses are either in a common subnet or in a virtual subnet.
Votes (either 0 or 1) are contributed to the cluster by cluster members and by the quorum disk if one is configured. The connection manager uses votes to calculate quorum.
Each member with a vote is considered to be a voting member of the cluster.
See also nonvoting member
A worldwide ID (WWID) is a unique identifier assigned to a disk by its manufacturer.
See worldwide ID
A cable that joins two cables to a single device, or allows terminating a shared SCSI bus external to the adapter or RAID controller.