Glossary


A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
adapter
See cluster transport adapter.

administrative console
A workstation that is used to run cluster administrative software.

amnesia
A condition where a cluster restarts, after a shutdown, with stale configuration data in the Cluster Configuration Repository (CCR). For example, on a two-node cluster with only node 1 operational, if a cluster configuration change occurs on node 1, node 2's CCR becomes stale. If the cluster is shut down and then restarted on node 2, an amnesia condition would result because of node 2's stale CCR.
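The stale-configuration condition can be sketched with generation numbers: a hypothetical model in which each node's CCR copy carries a monotonically increasing generation count. The function names and the generation mechanism shown here are illustrative only, not the actual CCR implementation.

```python
# Hypothetical model of stale-CCR detection using a generation number
# per configuration copy. Names and mechanism are illustrative only.

def latest_holders(ccr_copies):
    """ccr_copies: {node: generation}. Return the nodes whose CCR copy
    carries the newest configuration generation."""
    newest = max(ccr_copies.values())
    return {node for node, gen in ccr_copies.items() if gen == newest}

def is_amnesiac(booting_nodes, ccr_copies):
    """A restarted cluster suffers amnesia if no booting node holds
    the newest CCR generation."""
    return not (set(booting_nodes) & latest_holders(ccr_copies))

# Node 1 applied a configuration change (generation 5) while node 2
# was down (generation 4), as in the two-node example above.
copies = {"node1": 5, "node2": 4}
print(is_amnesiac(["node2"], copies))  # True: node 2's copy is stale
print(is_amnesiac(["node1"], copies))  # False
```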

application
See data service.

automatic failback
The process of returning a resource group or device group to its primary node after the primary node has failed and has been restarted as a cluster member.


backup group
See IP Network Multipathing group.


cable
See cluster transport cable.

checkpoint
The notification sent by a primary node to a secondary node to keep the software state synchronized between them.

See also primary and secondary.

cluster
Two or more interconnected nodes or domains that share a cluster file system and are configured together to run failover, parallel, or scalable resources.

Cluster Configuration Repository (CCR)
A highly available, replicated data store that the Sun Cluster software uses to persistently store cluster configuration information.

cluster file system
A cluster service that provides cluster-wide, highly available access to existing local file systems.

cluster interconnect
The hardware networking infrastructure that includes cluster transport cables, cluster transport junctions, and cluster transport adapters. The Sun Cluster and data service software use this infrastructure for intra-cluster communication.

See also cluster transport junction, cluster transport adapter, cluster transport cable, and endpoint.

cluster member
An active member of the current cluster incarnation. A cluster member can share resources with other cluster members and provide services both to other cluster members and to clients of the cluster.

See also cluster node.

Cluster Membership Monitor (CMM)
The software that maintains a consistent cluster membership roster. All the rest of the clustering software uses this membership information to decide where to locate highly available services. The CMM ensures that non-cluster members cannot corrupt data or transmit corrupt or inconsistent data to clients.

cluster node
A node that is configured to be a cluster member. A cluster node might or might not be a current member.

See also cluster member.

cluster transport adapter
The network adapter that is located on a node and connects the node to the cluster interconnect.

See also cluster interconnect.

cluster transport cable
The network connection between two endpoints: either between a cluster transport adapter and a cluster transport junction, or between two cluster transport adapters.

See also cluster interconnect.

cluster transport junction
A hardware device, such as a switch, that is used as part of the cluster interconnect.

See also cluster interconnect.

collocation
The property of being on the same node or nearby nodes. This concept is used during cluster configuration to improve performance.


data service
An application that has been instrumented to run as a highly available resource under control of the Resource Group Manager (RGM).

default master
The default cluster member on which a failover resource type is brought online.

See also potential primary.

device group
A user-defined group of device resources, such as disks, that can be mastered from different nodes in a cluster high-availability configuration. A device group can include individual disks, Solaris Volume Manager disk sets, and, if your configuration includes VERITAS Volume Manager, VERITAS Volume Manager disk groups.

device ID
A means of identifying devices that are made available through Solaris. Device IDs are described in the devid_get(3DEVID) man page.

The Sun Cluster DID driver uses device IDs to determine the correlation between the Solaris logical names on different cluster nodes. The DID driver probes each device for its device ID. If that device ID matches another device somewhere else in the cluster, both devices are given the same DID name. If the device ID has not been seen in the cluster before, a new DID name is assigned.

See also Solaris logical name and DID driver.
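The matching rule described above can be sketched as follows. The function, the device IDs, and the logical names are hypothetical stand-ins for what the DID driver does internally; only the behavior (same device ID, same DID name) follows the description.

```python
# Illustrative sketch of DID name assignment: devices that report the
# same device ID receive the same cluster-wide DID name. This mimics
# the described behavior only; it is not the Sun Cluster DID driver.

def assign_did_names(probed_devices):
    """probed_devices: list of (node, solaris_logical_name, device_id).
    Returns {(node, solaris_logical_name): did_name}."""
    did_by_device_id = {}   # device ID -> shared DID name
    names = {}
    counter = 0
    for node, logical, devid in probed_devices:
        if devid not in did_by_device_id:
            counter += 1
            did_by_device_id[devid] = "d%d" % counter
        names[(node, logical)] = did_by_device_id[devid]
    return names

# The same multihost disk appears under different logical names on two
# nodes, but its device ID matches, so both get the same DID name.
probe = [
    ("node1", "/dev/rdsk/c1t2d0", "WWN-600A0B80"),
    ("node2", "/dev/rdsk/c2t2d0", "WWN-600A0B80"),
    ("node1", "/dev/rdsk/c0t0d0", "WWN-11111111"),  # local boot disk
]
names = assign_did_names(probe)
print(names[("node1", "/dev/rdsk/c1t2d0")])  # d1
print(names[("node2", "/dev/rdsk/c2t2d0")])  # d1 (same device ID)
print(names[("node1", "/dev/rdsk/c0t0d0")])  # d2
```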

DID driver
A driver implemented by Sun Cluster software that is used to provide a consistent device namespace across the cluster.

See also DID name.

DID name
Used to identify global devices in a SunPlex system. It is a clustering identifier with a one-to-one or a one-to-many relationship with Solaris logical names. It takes the form dXsY, where X is an integer and Y is the slice number.

See also Solaris logical name.

disk device group
See device group.

disk group
See device group.

disk set
See device group.

DiskSuite
See Solaris Volume Manager.

Distributed Lock Manager (DLM)
The locking software used in a shared disk Oracle Parallel Server (OPS) environment. The DLM enables Oracle processes running on different nodes to synchronize database access. The DLM is designed for high availability. If a process or node crashes, the remaining nodes do not have to be shut down and restarted. A quick reconfiguration of the DLM is performed to recover from such a failure.


endpoint
A physical port on a cluster transport adapter or cluster transport junction.

event
A change in the state, ownership, severity, or description of a managed object.


failback
See automatic failback.

failfast
The shutdown and removal from the cluster of a faulty node before its potentially incorrect operation can prove damaging.

failover
The automatic relocation of a resource group or a device group from a current primary node to a new primary node after a failure has occurred.

See also logical host name resource.

failover resource group
A failover resource group contains failover resources: the logical host name and one or more data services. The failover resource group can be online on only one node at a time.

See also logical host name resource and scalable resource group.

failover resource type
A resource type that has resources that can be mastered by only one node at a time.

See also scalable resource type.


generic resource
An application daemon and its child processes that are placed under the control of the Resource Group Manager as part of a generic resource type.

generic resource type
A template for a data service. A generic resource type can be used to make a simple application into a failover data service (stop on one node, start on another) without programming with the Sun Cluster API.

global device
A device that is accessible from all cluster members, such as a disk device group, CD-ROM, or tape.

global device namespace
A namespace that contains the logical cluster-wide names for global devices. Local devices in the Solaris environment are defined in the /dev/dsk, /dev/rdsk, and /dev/rmt directories. The global device namespace defines global devices in the /dev/global/dsk, /dev/global/rdsk, and /dev/global/rmt directories.

global interface (GIF)
A network interface that physically hosts shared addresses.

See also shared address resource.

global interface node
A node that hosts a shared address interface.

global resource
A highly available resource that is provided at the kernel level of the Sun Cluster software. Global resources include disks (HA device groups), the cluster file system, and global networking.


HA data service
See data service.

heartbeat
A periodic message that is sent across all available cluster interconnect transport paths. Lack of a heartbeat after a specified interval and number of retries might trigger an internal failover of transport communication to another path. Failure of all paths to a cluster member results in the Cluster Membership Monitor (CMM) reevaluating the cluster quorum.
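The timeout-and-retry rule can be modeled in a few lines. The interval, retry count, and function names below are illustrative values for the sketch, not Sun Cluster defaults.

```python
# A minimal sketch of the heartbeat rule described above: a transport
# path is declared failed after a missed-heartbeat interval and retry
# count, and losing every path to a member triggers a quorum
# reevaluation. Interval and retry values are made up for illustration.

HEARTBEAT_INTERVAL = 1.0   # seconds between heartbeats (illustrative)
RETRIES = 3                # missed beats tolerated before failover

def path_failed(last_heartbeat, now):
    """A path fails when no heartbeat arrived within interval * retries."""
    return (now - last_heartbeat) > HEARTBEAT_INTERVAL * RETRIES

def member_action(paths, now):
    """paths: {path_name: last_heartbeat_time} for one cluster member.
    Returns the action the monitoring layer would take."""
    alive = [p for p, t in paths.items() if not path_failed(t, now)]
    if alive:
        return "use path %s" % sorted(alive)[0]
    return "reevaluate cluster quorum"   # the CMM takes over

now = 10.0
print(member_action({"path0": 9.5, "path1": 5.0}, now))  # path0 alive
print(member_action({"path0": 2.0, "path1": 5.0}, now))  # all failed
```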

Highly Available data service
See data service.


instance
See resource invocation.

interconnect
See cluster interconnect.

IPMP
See IP Network Multipathing (IPMP).

IP Network Multipathing (IPMP)
A feature that groups network interfaces together to provide failover, recovery detection, and outbound load spreading for IP.

If a failure occurs in a network adapter, and an alternate adapter is connected to the same IP link, the system automatically switches all network accesses from the failed adapter to the alternate adapter. This process ensures uninterrupted access to the network. Also, when multiple network adapters are connected to the same IP link, spreading the traffic across them increases throughput.

See also Public Network Management (PNM).

IP Network Multipathing group
A set of one or more network adapters on the same node that are configured to be on the same subnet. Failover of IP addresses takes place between adapters in an IP Network Multipathing group. Outbound traffic is load balanced between adapters in an IP Network Multipathing group.

See also IP Network Multipathing (IPMP).
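The failover behavior described for multipathing groups can be modeled as moving a failed adapter's addresses to a healthy adapter in the same group. The adapter names, addresses, and function below are hypothetical; this is a sketch of the described behavior, not the IPMP implementation.

```python
# An illustrative model of IPMP failover: when an adapter in a
# multipathing group fails, its IP addresses move to a healthy adapter
# in the same group. All names here are hypothetical.

def fail_over(group, failed_adapter):
    """group: {adapter: [ip_address, ...]} for one IPMP group on one
    node. Moves the failed adapter's addresses to a healthy adapter."""
    healthy = [a for a in group if a != failed_adapter]
    if not healthy:
        raise RuntimeError("no alternate adapter on this IP link")
    target = sorted(healthy)[0]
    group[target] = group[target] + group.pop(failed_adapter)
    return group

group = {"qfe0": ["192.168.1.10"], "qfe1": ["192.168.1.11"]}
print(fail_over(group, "qfe0"))
# {'qfe1': ['192.168.1.11', '192.168.1.10']}
```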


junction
See cluster transport junction.


load balancing
Applies only to scalable services. The process of distributing the application load across nodes in the cluster so that client requests are serviced in a timely manner.

See also scalable service.

load-balancing policy
Applies only to scalable services. The preferred way that application request load is distributed across nodes.

See also scalable service.
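One possible policy can be sketched as weighted selection, in which each node receives a share of client requests proportional to a configured weight. The weights and node names are illustrative, and this is only one example of a policy, not a statement of Sun Cluster's default behavior.

```python
# Sketch of a weighted load-balancing policy for a scalable service:
# over a full cycle, each node is chosen a number of times equal to
# its configured weight. Weights and names are illustrative.

import itertools

def weighted_round_robin(weights):
    """weights: {node: integer weight}. Yields nodes so that, over one
    full cycle, each node appears 'weight' times."""
    cycle = [n for n, w in sorted(weights.items()) for _ in range(w)]
    return itertools.cycle(cycle)

picker = weighted_round_robin({"node1": 2, "node2": 1})
print([next(picker) for _ in range(6)])
# ['node1', 'node1', 'node2', 'node1', 'node1', 'node2']
```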

local disk
A disk that is physically private to a given cluster node.

logical host name resource
A resource that contains a collection of logical host names that represent network addresses. Logical host name resources can only be mastered by one node at a time.

See also shared address resource and failover resource group.

logical network interface
In the Internet architecture, a host can have one or more IP addresses. Sun Cluster configures additional logical network interfaces to establish a mapping between several logical network interfaces and a single physical network interface. Thus, each logical network interface has a single IP address. This mapping enables a single physical network interface to respond to multiple IP addresses. This mapping also enables the IP address to move from one cluster member to another in a failover or switchover, without requiring additional hardware interfaces.


master
See primary.

multihomed host
A host that is on more than one public network.

multihost disk
A disk that is physically connected to multiple nodes.


network resource
A resource that contains one or more logical host names or shared addresses.

See also logical host name resource and shared address resource.

NFS
A distributed computing file system that can be made highly available in a Sun Cluster environment. HA-NFS provides highly available remote mount service, status monitor service, and network locking service.

node
A physical machine or domain (in the Sun Enterprise E10000 server) that can be part of a Sun cluster. Also a synonym for "host."

non-cluster mode
The resulting state achieved by booting a cluster member with the -x boot option. In this state, the node is no longer a cluster member, but is still a cluster node.

See also cluster member and cluster node.


parallel resource type
A resource type, such as a parallel database, that has been instrumented to run in a cluster environment so that it can be mastered by two or more nodes simultaneously.

parallel service instance
An instance of a parallel resource type running on an individual node.

potential master
See potential primary.

potential primary
A cluster member that is able to master a failover resource type if the primary node fails.

See also default master.

primary
A node on which a resource group or device group is currently online. That is, a primary is a node that is currently hosting or implementing the service associated with the resource.

See also secondary.

primary host name
The name of a node on the primary public network. The primary host name is always the node name that is specified in /etc/nodename.

See also secondary host name.

private host name
The host name alias used to communicate with a node over the cluster interconnect.

Public Network Management (PNM)
Software that uses fault monitoring and failover to prevent loss of node availability that is caused by a single network adapter or cable failure. PNM failover uses sets of network adapters called Network Adapter Failover groups to provide redundant connections between a cluster node and the public network. The fault monitoring and failover capabilities work together to ensure availability of resources.

See also IP Network Multipathing (IPMP) and IP Network Multipathing group.


quorum device
A disk, shared by two or more nodes, that contributes "votes" that are used to establish a quorum for the cluster to run. The cluster can operate only when a quorum of votes is available. When a cluster becomes partitioned into separate sets of equal numbers of nodes, the quorum device establishes which set of nodes constitutes the new cluster.
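The vote arithmetic can be sketched as a majority test. The counts below follow the common convention of one vote per node and one per quorum device; real configurations may assign votes differently.

```python
# A sketch of the quorum arithmetic described above: a partition may
# operate only while it holds a strict majority of the total
# configured votes. One vote per node and per quorum device is an
# assumption for this illustration.

def has_quorum(partition_votes, total_votes):
    """True if the partition holds more than half of all votes."""
    return partition_votes * 2 > total_votes

# Two nodes (1 vote each) plus one quorum device (1 vote): total = 3.
total = 3
# In a split brain, each node is alone; only the node that reserves
# the quorum device reaches 2 of 3 votes and continues as the cluster.
print(has_quorum(1 + 1, total))  # node + quorum device -> True
print(has_quorum(1, total))      # node alone -> False
```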


resource
An instance of a resource type. Many resources of the same type might coexist, each resource with its own name and set of property values, so that many instances of the underlying application might run on the cluster.

resource group
A collection of resources that the RGM manages as a unit. Each resource that the RGM manages must be configured in a resource group. Typically, related and interdependent resources are grouped.

See also scalable resource group and failover resource group.

Resource Group Manager (RGM)
A software facility that follows preconfigured policies to make cluster resources highly available and scalable by automatically starting and stopping these resources on selected cluster nodes. This facility is activated after hardware or software failures or reboots.

resource group state
The state of the resource group on any given node.

resource invocation
An instance of a resource type that runs on a node. An abstract concept that represents a resource that was started on the node.

Resource Management API (RMAPI)
The application programming interface within Sun Cluster that makes an application highly available in a cluster environment.

resource monitor
An optional part of a resource type implementation that runs periodic fault probes on resources to determine if they are running correctly and how they are performing.

resource state
The state of a Resource Group Manager's resources on a given node.

resource status
The condition of the resources as reported by the fault monitor.

resource type
The unique name given to a data service, LogicalHostname cluster object, or SharedAddress cluster object. Data service resource types can either be failover types or scalable types.

See also data service, failover resource type, and scalable resource type.

resource type property
A key-value pair, stored by the Resource Group Manager (RGM) as part of the resource type, that is used to describe and manage resources of the given type.


Scalable Coherent Interface (SCI)
High-speed interconnect hardware that is used as the cluster interconnect.

scalable resource group
A scalable resource group contains scalable resources: the shared address and one or more scalable data services. The scalable resource group can be online on multiple nodes simultaneously.

See also scalable resource type and shared address resource.

scalable resource type
A resource type that runs on multiple nodes (an instance on each node) and uses the cluster interconnect to give the appearance of a single service to remote clients of the service.

See also failover resource type.

scalable service
A data service that is implemented by using a scalable resource type.

secondary
A cluster member that is available to master disk device groups and resource groups in the event that the primary fails.

See also primary.

secondary host name
The name that is used to access a node on a secondary public network.

See also primary host name.

service
See scalable service.

shared address resource
A network address that can be bound by all scalable services that are running on nodes within the cluster to make the services scale on those nodes. A cluster can have multiple shared addresses, and a service can be bound to multiple shared addresses.

See also logical host name resource.

Solaris logical name
The names typically used to manage Solaris devices. For disks, these usually look something like /dev/rdsk/c0t2d0s2. For each one of these Solaris logical device names, there is an underlying Solaris physical device name.

See also Solaris physical name.

Solaris physical name
The name that is given to a device by its device driver in Solaris. This shows up on a Solaris machine as a path under the /devices tree. For example, a typical SCSI disk has a Solaris physical name similar to /devices/sbus@1f,0/SUNW,fas@e,8800000/sd@6,0:c,raw.

See also Solaris logical name.
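The logical-to-physical correspondence can be demonstrated with a tiny fake device tree in which the logical name is a symbolic link to the physical name, which is how /dev entries point into /devices on Solaris. The directory layout below is a simplified, illustrative stand-in, not a real device tree.

```python
# Demonstration of the logical-name-to-physical-name mapping using a
# throwaway directory tree and a symbolic link. Paths are shortened,
# illustrative versions of the examples in this glossary.

import os
import tempfile

root = tempfile.mkdtemp()

# Fake "physical" name under a /devices-like tree.
physical = os.path.join(root, "devices", "sbus@1f,0", "sd@6,0:c,raw")
os.makedirs(os.path.dirname(physical))
open(physical, "w").close()

# Fake "logical" name under a /dev/rdsk-like directory, created as a
# symbolic link to the physical name.
os.makedirs(os.path.join(root, "dev", "rdsk"))
logical = os.path.join(root, "dev", "rdsk", "c0t2d0s2")
os.symlink(physical, logical)

# Resolving the logical name yields the underlying physical name.
print(os.path.realpath(logical) == os.path.realpath(physical))  # True
```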

Solaris Volume Manager
A volume manager, formerly known as Solstice DiskSuite, that is used by Sun Cluster.

See also volume manager.

Solstice DiskSuite
See Solaris Volume Manager.

split brain
A condition in which a cluster breaks into multiple partitions, with each partition forming without knowledge of the existence of any other partition.

switchback
See automatic failback.

switchover
The orderly transfer of a resource group or device group from one owner (node) in a cluster to another owner, or multiple owners if resource groups are configured for multiple primaries. A switchover is initiated by an administrator by using the scswitch(1M) command.

See also failover.

System Service Processor (SSP)
In Sun Enterprise 10000 configurations, a device, external to the cluster, that is used specifically to communicate with cluster members.


terminal concentrator
In configurations other than the Sun Enterprise 10000, a device that is external to the cluster and is used specifically to communicate with cluster members.


UFS
An abbreviation for the UNIX file system.


VERITAS Volume Manager
A volume manager that is used by Sun Cluster.

Note: VERITAS Volume Manager (VxVM) is currently available for use on SPARC based clusters only. VERITAS Volume Manager is not currently available for use on x86 based clusters.

See also volume manager.

volume manager
A software product that provides data reliability through disk striping, concatenation, mirroring, and dynamic growth of volumes and file systems.

volume state database replica (replica)
A database, stored on disk, that records the configuration and state of all volumes and error conditions. This information is important for correct operation of Solaris Volume Manager disk sets. This information is also replicated.