NAME
cluster.conf - SMCT configuration file describing the cluster in terms of nodes, node groups, domains, and services
SYNOPSIS
SMCT_CONFIG_DIR/models/cluster.conf
DESCRIPTION
Do not use the SMCT tool with the current patch
level of the Foundation Services product.
The cluster.conf configuration file describes the
target cluster in terms of nodes, node groups, domains, and the services to
be run on each node group. This file uses the configuration elements shelf, board, disk, ip, and network, which are described in machine.conf and network.conf.
A pre-configured cluster.conf template file for each
example hardware configuration is available in the /opt/SUNWcgha/nhsmct/etc/models/ directory.
The cluster.conf configuration file contains the
following sections:
-
Cluster composition
The cluster INVOLVE block contains
high-level elements that define the cluster:
-
The configuration element shelf describes
the shelves that contain the cluster node hardware.
-
The configuration element domain describes
the domain associated with the cluster.
-
The configuration element nodeGroup contains
a definition of each node group. Each node group is defined by its type (master-eligible,
dataless, or diskless) and its supported operating environment.
-
Cluster domain definition
The domain INVOLVE block defines
the networking parameters associated with the cluster.
The domain USE block defines the
access point to the current master node. This consists of the floating address
triplet of the master node.
-
Cluster node group definitions
The nodeGroup INCLUDE block defines
the set of nodes that belong to the node group.
The nodeGroup RUN block defines
the Foundation Services run by the nodes in the node group.
-
Configuration element definitions
The configuration elements define the characteristics for each of the
cluster nodes.
This section describes the parameters in the cluster.conf file:
ELEMENT cluster name
    INVOLVE {shelf name}+ {nodeGroup name}+ domain name
ELEMENT domain name [ id domainid ]
    [ USE {ip name}+ ]
    [ INVOLVE {router name}* {network name}* ]
{ ELEMENT nodeGroup name type node-group-type os node-group-os arch arch-type
    [ INCLUDE {nodeGroup name}+ ]
    RUN {service service-name}+
}+
{ ELEMENT node name [ id nodeid ]
    USE board name {disk name}*
}+
Note: Parameters specified within square brackets ([]) in the above
syntax can be defined either in stage 1 or in stage 3 of the SMCT installation
process. For more information, see the Netra High Availability Suite Foundation Services 2.1 6/03 SMCT Installation Guide.
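The following minimal sketch shows how these syntax elements fit together in a single file; every name in it (small_cluster, shelf_1, small_domain, me_group, peerNode1, peerNode2, board1, disk1, and the ip names) is a hypothetical placeholder, not a value taken from a shipped template:
# Minimal two-node sketch (hypothetical names, for illustration only)
ELEMENT cluster small_cluster
    INVOLVE shelf shelf_1
            domain small_domain
            nodeGroup me_group
ELEMENT domain small_domain
    USE ip master-cgtp
        ip master-nic0
        ip master-nic1
ELEMENT nodeGroup me_group type MASTER_ELIGIBLE os SOLARIS arch SPARC
    INCLUDE node peerNode1
            node peerNode2
    RUN service NHAS_MASTER_ELIGIBLE
ELEMENT node peerNode1
    USE board board1@peerNode1
        disk disk1@peerNode1
ELEMENT node peerNode2
    USE board board1@peerNode2
        disk disk1@peerNode2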
-
name
ASCII string.
-
nodeid
The CMM node ID. The nodeid must be a decimal
representation of the host part of the IP address specified in the network.conf file. If you do not specify a value for the nodeid, the SMCT calculates the value based on the IP address
specified in the network.conf file. For more information,
see the network.conf(4)
man page.
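For example, if the IP address declared for a node in the network.conf file were 10.250.1.10 (a hypothetical value), the host part would yield a nodeid of 10. The same value could also be set explicitly (node and board names are likewise hypothetical):
ELEMENT node peerNode1 id 10
    USE board board1@peerNode1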
-
node-group-type
Type of node group. The type can be one of the following:
-
MASTER_ELIGIBLE
A group of master-eligible nodes.
-
DISKLESS
A group of diskless nodes.
-
DATALESS
A group of dataless nodes.
-
node-group-os
The operating system used by the node group. This value must be set
to SOLARIS.
-
arch-type
The architecture type of the node group hardware. By default, this value
is SPARC.
-
domainid
The CMM domain ID. Define this parameter in stage 3 of the configuration
process. For information on the range and format of domainid, see the nhfs.conf(4) man page.
-
service-name
Name of the service list that determines which services are run on the
node group. The following service lists are available for each type of node
group:
-
Master-eligible node group
To assign services for a master-eligible node group, use the following
service list:
NHAS_MASTER_ELIGIBLE [NSM] [RBS] [WDT_MASTER_ELIGIBLE]
For the master-eligible node groups, you can assign the following services:
-
NHAS_MASTER_ELIGIBLE to install the mandatory Foundation Services.
-
(Optional) NSM to install the Node State
Manager service.
-
(Optional) RBS to install the Reliable
Boot Service. The RBS option can only be assigned to master-eligible
node groups that contain diskless nodes. To enable a diskless node to boot,
you must assign the RBS option.
-
(Optional) WDT_MASTER_ELIGIBLE to install
the Watchdog Timer. Use the Watchdog Timer only for Netra servers with hardware
watchdogs at the LOM level. Netra servers with hardware watchdogs at the OBP
level do not require this service. These hardware watchdogs are monitored
by the server's software.
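As a sketch, a master-eligible node group that selects all of the optional services would declare a RUN block such as the following (the group name me_group is a hypothetical placeholder; compare Example 3 below, which assigns only NHAS_MASTER_ELIGIBLE and RBS):
ELEMENT nodeGroup me_group type MASTER_ELIGIBLE os SOLARIS arch SPARC
    RUN service NHAS_MASTER_ELIGIBLE
        service NSM
        service RBS
        service WDT_MASTER_ELIGIBLE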
-
Dataless node group
-
To assign services for a dataless node group in a cluster
running the Foundation Services, use the following service list:
NHAS_DATALESS [WDT_DATALESS]
-
NHAS_DATALESS to install the mandatory Foundation Services.
-
(Optional) WDT_DATALESS is the Watchdog
Timer for the dataless node group. Use the Watchdog Timer only for Netra servers
with hardware watchdogs at the LOM level. Netra servers with hardware watchdogs
at the OBP level do not require this service. These hardware watchdogs are
monitored by the server's software.
-
To assign services for a dataless node group that runs only
the CGTP standalone service, use the following service list:
CGTP_STANDALONE PATCH_DATALESS
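For instance, a dataless node group that also installs the Watchdog Timer might be declared as follows (the group name dataless_group is a hypothetical placeholder; Example 5 below assigns NHAS_DATALESS alone):
ELEMENT nodeGroup dataless_group type DATALESS os SOLARIS arch SPARC
    RUN service NHAS_DATALESS
        service WDT_DATALESS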
-
Diskless node group
-
To assign services for a diskless node group in a cluster
running the Foundation Services, use the following service list:
NHAS_DISKLESS boot-policy [WDT_DISKLESS]
-
NHAS_DISKLESS to install the mandatory Foundation Services.
-
boot-policy is one of the following:
MAC_ADDR_POLICY - DHCP static boot policy based
on the Ethernet address of the diskless nodes.
STATIC_CLIENT_ID_POLICY - DHCP client ID boot
policy.
-
(Optional) WDT_DISKLESS is the Watchdog
Timer for the diskless node group. Use the Watchdog Timer only for Netra servers
with hardware watchdogs at the LOM level. Netra servers with hardware watchdogs
at the OBP level do not require this service. These hardware watchdogs are
monitored by the server's software.
-
To assign services for a diskless node group that runs only
the CGTP standalone service, use the following service list:
CGTP_STANDALONE boot-policy PATCH_DISKLESS
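For instance, a diskless node group running the Foundation Services with the client ID boot policy and the Watchdog Timer might be declared as follows (the group name diskless_group is a hypothetical placeholder; Example 4 below shows a similar service list with MAC_ADDR_POLICY and no Watchdog Timer):
ELEMENT nodeGroup diskless_group type DISKLESS os SOLARIS arch SPARC
    RUN service NHAS_DISKLESS
        service STATIC_CLIENT_ID_POLICY
        service WDT_DISKLESS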
EXAMPLES
The following are examples of components of the cluster.conf file.
Example 1. Defining the Cluster Composition
Example of the cluster composition section of a twelve-node cluster.
# Cluster composition
#
ELEMENT cluster 12N_cluster
    INVOLVE shelf shelf_1
            shelf shelf_2
            domain cluster_domain
            nodeGroup master_el
            nodeGroup dataless_T1200
            nodeGroup dataless_T1105
Example 2. Defining the Cluster Domain
Example of the cluster domain section.
# Cluster domain definition
#
# id -> CMM domainId
# ip -> master-cgtp master-nic0 master-nic1 floating addresses
#
ELEMENT domain cluster_domain id 100
    INVOLVE network phys-A
            network phys-B
            network cgtp
            network external
            router default-router
    USE ip master-cgtp
        ip master-nic0
        ip master-nic1
Example 3. Defining the Master-Eligible Node Group
Example node group and node definition for a master-eligible node group
in a four-node cluster.
# Master-eligible node group and related nodes definitions
ELEMENT nodeGroup master_el type MASTER_ELIGIBLE os SOLARIS arch SPARC
    INCLUDE nodeGroup diskless
            node peerNode1-4N
            node peerNode2-4N
    RUN service NHAS_MASTER_ELIGIBLE
        service RBS
#
# Master-eligible node definitions
ELEMENT node peerNode1-4N
    USE board T1105@peerNode1
        disk disk1@peerNode1
#
ELEMENT node peerNode2-4N
    USE board T1105@peerNode2
        disk disk1@peerNode2
Example 4. Defining a Diskless Node Group
Example of a node group and node definition for a diskless node group
in a four-node cluster.
# diskless group and related nodes definitions
ELEMENT nodeGroup diskless type DISKLESS os SOLARIS arch SPARC
    INCLUDE node peerNode3-4N
            node peerNode4-4N
    RUN service NHAS_DISKLESS
        service MAC_ADDR_POLICY
#
# diskless nodes definitions
ELEMENT node peerNode3-4N
    USE board T1105@peerNode3
#
ELEMENT node peerNode4-4N
    USE board T1105@peerNode4
Example 5. Defining a Dataless Node Group
Example of a node group and node definition for a dataless node group
in a twelve-node cluster.
# Node Groups definitions
#
# Dataless group and related nodes definitions
ELEMENT nodeGroup dataless_T1200 type DATALESS os SOLARIS arch SPARC
    INCLUDE node peerNode3-12N
            node peerNode4-12N
            node peerNode5-12N
            node peerNode6-12N
    RUN service NHAS_DATALESS
#
# Dataless nodes definitions
ELEMENT node peerNode3-12N
    USE board T1200@peerNode3
        disk disk1@peerNode3
#
ELEMENT node peerNode4-12N
    USE board T1200@peerNode4
        disk disk1@peerNode4
#
ELEMENT node peerNode5-12N
    USE board T1200@peerNode5
        disk disk1@peerNode5
#
ELEMENT node peerNode6-12N
    USE board T1200@peerNode6
        disk disk1@peerNode6
Example 6. Defining CGTP Standalone in Diskless Node Group
Example of a diskless node group with CGTP standalone.
ELEMENT nodeGroup standalone_diskless type DISKLESS os SOLARIS arch SPARC
    INCLUDE node peerNode3
    INCLUDE node peerNode4
    RUN service CGTP_STANDALONE
        service MAC_ADDR_POLICY
        service PATCH_DISKLESS
Example 7. Defining CGTP Standalone for Dataless Nodes
Example of a dataless node group with CGTP standalone.
ELEMENT nodeGroup standalone_dataless type DATALESS os SOLARIS arch SPARC
    INCLUDE node peerNode3
    INCLUDE node peerNode4
    RUN service CGTP_STANDALONE
        service PATCH_DATALESS
ATTRIBUTES
See attributes(5) for descriptions of the following attributes:
ATTRIBUTE TYPE          ATTRIBUTE VALUE
Architecture            SPARC
Availability            SUNWnhsmc
Interface Stability     Evolving
SEE ALSO
cluster_nodes_table(4), nhfs.conf(4), slconfig(1M), slcreate(1M), sldelete(1M), sldeploy(1M)
Netra High Availability Suite Foundation Services 2.1 6/03 SMCT Installation Guide