This chapter describes Memory Channel configuration restrictions, and describes how to set up the Memory Channel cluster interconnect, including setting up a Memory Channel hub and Memory Channel optical converter (MC2 only), and connecting link cables.
Two versions of the Memory Channel peripheral component interconnect (PCI) adapter are available: CCMAA and CCMAB (MC2).
Two variations of the CCMAA PCI adapter are in use: CCMAA-AA (MC1) and CCMAA-AB (MC1.5). Because the hardware used with these two PCI adapters is the same, this manual often refers to MC1 when referring to either of these variations.
See the TruCluster Server Software Product Description (SPD) for a list of the supported Memory Channel hardware. See the Memory Channel User's Guide for illustrations and more detailed information about installing jumpers, Memory Channel adapters, and hubs.
See Section 2.2 for a discussion on Memory Channel restrictions.
You can have two Memory Channel adapters with TruCluster Server, but only one rail is active at a time. This is referred to as a failover pair. If the active rail fails, cluster communication fails over to the formerly inactive rail.
If you use multiple Memory Channel adapters with the Memory Channel application programming interface (API) for high-performance data delivery over Memory Channel, setting the rm_rail_style configuration variable to zero (rm_rail_style = 0) enables single-rail style with multiple active rails. The default is 1, which selects the failover pair style.
For more information on the Memory Channel failover pair model, see the Cluster Highly Available Applications manual.
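For example, the following is a minimal sketch of selecting single-rail style. It assumes the usual Tru64 UNIX convention of placing subsystem attributes in an rm stanza in /etc/sysconfigtab on each member; the change takes effect when the member is rebooted:

rm:
    rm_rail_style = 0

Setting the attribute the same way on every cluster member keeps the rail style consistent across the cluster.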
To set up the Memory Channel interconnects, follow these steps, referring to the appropriate section and the Memory Channel User's Guide as necessary:
Set the Memory Channel jumpers (Section 5.1).
Install the Memory Channel adapter into a PCI slot on each system (Section 5.2).
If you are using fiber optics with MC2, install the CCMFB fiber-optic module (Section 5.3).
If you have more than two systems in the cluster, install a Memory Channel hub (Section 5.4).
Connect the Memory Channel cables (Section 5.5).
After you complete steps 1 through 5 for all systems in the cluster, apply power to the systems and run Memory Channel diagnostics (Section 5.6).
Note
If you are installing SCSI or network adapters, you may want to complete all hardware installation before powering up the systems to run Memory Channel diagnostics.
Section 5.7.2 provides procedures for upgrading from redundant MC1 interconnects to MC2 interconnects.
5.1 Setting the Memory Channel Adapter Jumpers
The meaning of the Memory Channel adapter module jumpers depends upon the version of the Memory Channel module.
5.1.1 MC1 and MC1.5 Hub Mode Jumper
The MC1 and MC1.5 modules (CCMAA-AA and CCMAA-AB, respectively) have an adapter jumper (J4) that designates whether the configuration is using standard or virtual hub mode. If virtual hub mode is being used, there can be only two systems. One system must be virtual hub 0 (VH0) and the other must be virtual hub 1 (VH1).
The Memory Channel adapter should arrive with the J4 jumper set for standard hub mode (pins 1 to 2 jumpered). Confirm that the jumper is set properly for your configuration. The jumper configurations in Table 5-1 are shown as if you are holding the module with the J4 jumper facing you, with the module end plate in your left hand. The jumper is next to the factory/maintenance cable connector.
Table 5-1: MC1 and MC1.5 J4 Jumper Configuration
If hub mode is: | Jumper:
Standard | J4 Pins 1 to 2
Virtual: VH0 | J4 Pins 2 to 3
Virtual: VH1 | None needed; store the jumper on J4 pin 1 or 3
If you are upgrading from virtual hub mode to standard hub mode (or from standard hub mode to virtual hub mode), be sure to change the J4 jumper on all Memory Channel adapters on the rail.
5.1.2 MC2 Jumpers
The MC2 module (CCMAB) has multiple jumpers. They are numbered right to left, starting with J1 in the upper right corner (as you view the jumper side of the module with the endplate in your left hand). The leftmost jumpers are J11 and J10. J11 is above J10.
Most of the jumper settings are straightforward, but the window size jumper, J3, needs some explanation.
If a CCMAA (MC1 or MC1.5) PCI adapter is installed, 128 MB of address space is allocated for Memory Channel use. If a CCMAB (MC2) PCI adapter is installed, the memory space allocated for Memory Channel depends on the J3 jumper and can be 128 MB or 512 MB.
If two Memory Channel adapters are used as a failover pair to provide redundancy, the address space allocated for the logical rail is the smaller of the two physical adapters' window sizes.
During a rolling upgrade (see Section 5.7.2) from an MC1 failover pair to an MC2 failover pair, the MC2 modules can be jumpered for 128 MB or 512 MB. If jumpered for 512 MB, the increased address space is not available until all Memory Channel PCI adapters have been upgraded and the use of 512 MB is enabled. On one member system, use the sysconfig command to reconfigure the Memory Channel kernel subsystem to initiate the use of the 512 MB address space. The configuration change is propagated to the other cluster member systems by entering the following command:
# /sbin/sysconfig -r rm rm_use_512=1
See the Cluster Administration manual for more information on failover pairs.
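To confirm the current setting on a member, you can query the rm subsystem with sysconfig. This is a sketch; the rm_use_512 attribute name is taken from the command above, and the output shown is illustrative:

# sysconfig -q rm rm_use_512
rm:
rm_use_512 = 1

A value of 1 indicates that the 512 MB address space has been enabled; 0 indicates the default 128 MB address space.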
The MC2 jumpers are described in Table 5-2.
Table 5-2: MC2 Jumper Configuration
Jumper: | Description:
J1: Hub Mode | Standard: Pins 1 to 2
 | VH0: Pins 2 to 3
 | VH1: None needed; store the jumper on pin 1 or pin 3
J3: Window Size | 512 MB: Pins 2 to 3
 | 128 MB: Pins 1 to 2
J4: Page Size | 8-KB page size (UNIX): Pins 1 to 2
 | 4-KB page size (not used): Pins 2 to 3
J5: AlphaServer 8x00 Mode | 8x00 mode selected: Pins 1 to 2 [Footnote 21]
 | 8x00 mode not selected: Pins 2 to 3
J10 and J11: Fiber-Optic Mode Enable | Fiber Off: Pins 1 to 2
 | Fiber On: Pins 2 to 3
The MC2 linecard (CCMLB) has two jumpers, J2 and J3, that are used to enable fiber-optic mode. The jumpers are located near the middle of the module (as you view the jumper side of the module with the endplate in your left hand). Jumper J2 is on the right. The MC2 linecard jumpers are described in Table 5-3.
Table 5-3: MC2 Linecard Jumper Configurations
Jumper: | Description:
J2 and J3: Fiber Mode | Fiber Off: Pins 2 to 3
 | Fiber On: Pins 1 to 2
5.2 Installing the Memory Channel Adapter
Install the Memory Channel adapter in an appropriate peripheral component interconnect (PCI) slot. (See Section 2.2.) Secure the module at the backplane. Ensure that the screw is tight to maintain proper grounding.
The Memory Channel adapter comes with a straight extension plate. This plate fits most systems; however, you may have to replace it with an angled extender (for an AlphaServer 2100A, for instance) or, for an AlphaServer 8200/8400, GS60, GS60E, or GS140, remove the extender completely.
If you are setting up a redundant Memory Channel configuration, install the second Memory Channel adapter immediately after installing the first Memory Channel adapter. Ensure that the jumpers are correct and are the same on both modules.
After you install the Memory Channel adapters, replace the system panels, unless you have more hardware to install.
5.3 Installing the MC2 Optical Converter in the Member System
If you plan to use a CCMFB optical converter along with the MC2 PCI adapter, install it at the same time that you install the MC2 CCMAB. To install an MC2 CCMFB optical converter in the member system, follow these steps. See Section 5.5.2.4 if you are installing an optical converter in an MC2 hub.
Remove the bulkhead blanking plate for the desired PCI slot.
Thread one end of the fiber-optic cable (BN34R) through the PCI bulkhead slot.
Thread the cable through the slot in the optical converter module (CCMFB) endplate (at the top of the endplate).
Remove the cable tip protectors and attach the keyed plug to the connector on the optical converter module. Tie-wrap the cable to the module.
Seat the optical converter module firmly into the PCI backplane and secure the module with the PCI card cage mounting screw.
Attach the 1-meter (3.3-foot) BN39B-01 cable from the CCMAB MC2 PCI adapter to the CCMFB optical converter.
Route the fiber-optic cable to the remote system or hub.
Repeat steps 1 through 7 for the optical converter on the second system. See Section 5.5.2.4 if you are installing an optical converter in an MC2 hub.
5.4 Installing the Memory Channel Hub
You may use a hub in a two-node TruCluster Server cluster, but the hub is not required. When there are more than two systems in a cluster, you must use a Memory Channel hub as follows:
For use with the MC1 or MC1.5 CCMAA adapter, you must install the hub within 3 meters (9.8 feet) of each of the systems.
For use with the MC2 CCMAB adapter, the hub must be placed within 4 meters (13.1 feet) or 10 meters (32.8 feet) (the length of the BN39B link cables) of each system. If fiber optics is used in conjunction with the MC2 adapter, the hub may be placed up to 3000 meters (9842.5 feet) from the systems.
Ensure that the voltage selection switch on the back of the hub is set to select the correct voltage for your location (115V or 230V).
Ensure that the hub contains a linecard for each system in the cluster (the hub comes with four linecards) as follows:
CCMLA linecards for the CCMHA MC1 hub
CCMLB linecards for the CCMHB MC2 hub.
The linecards cannot be installed in the opto only slot.
If you have a four-node cluster, you may want to install an extra linecard for troubleshooting use.
If you have an eight-node cluster, all linecards must be installed in the same hub.
For MC2, if fiber-optic converters are used, they can be installed only in hub slots opto only, 0/opto, 1/opto, 2/opto, and 3/opto.
If you have a five-node or larger MC2 cluster using fiber optics, you will need two or three CCMHB hubs, depending on the number of fiber-optic connections. You will need one hub for the CCMLB linecards (and possibly optical converters) and up to two hubs for the CCMFB optical converter modules. The CCMHB-BA hub has no linecards.
5.5 Installing the Memory Channel Cables
Memory Channel cable installation depends on the Memory Channel module revision and whether or not you are using fiber optics. The following sections describe how to install the Memory Channel cables for MC1 and MC2.
5.5.1 Installing the MC1 or MC1.5 Cables
To set up an MC1 or MC1.5 interconnect, use the BC12N-10 3-meter (9.8-foot) link cables to connect Memory Channel adapters and, optionally, Memory Channel hubs.
Note
Do not connect an MC1 or MC1.5 link cable to an MC2 module.
5.5.1.1 Connecting MC1 or MC1.5 Link Cables in Virtual Hub Mode
For an MC1 virtual hub configuration (two nodes in the cluster), connect the BC12N-10 link cables between the Memory Channel adapters that are installed in each of the systems.
Caution
Be very careful when installing the link cables. Insert the cables straight in.
Gently push the cable's connector into the receptacle, and then use the screws to pull the connector in tight. The connector must be tight to ensure a good ground contact.
If you are setting up redundant interconnects, all Memory Channel adapters in a system must have the same jumper setting, either VH0 or VH1.
Note
With the TruCluster Server Version 5.1A product and virtual hub mode, there is no longer a restriction requiring that mca0 in one system be connected to mca0 in the other system.
5.5.1.2 Connecting MC1 Link Cables in Standard Hub Mode
If there are more than two systems in a cluster, use a standard hub configuration. Connect a BC12N-10 link cable between the Memory Channel adapter and a linecard in the CCMHA hub, starting at the lowest numbered slot in the hub.
If you are setting up redundant interconnects, the following restrictions apply:
Each adapter installed in a system must be connected to a different hub.
Each Memory Channel adapter in a system must be connected to linecards that are installed in the same slot position in each hub. For example, if you connect one adapter to a linecard installed in slot 1 in one hub, you must connect the other adapter in that system to a linecard installed in slot 1 of the second hub.
Figure 5-1 shows Memory Channel adapters connected to linecards that are in the same slot position in the Memory Channel hubs.
Figure 5-1: Connecting Memory Channel Adapters to Hubs
5.5.2 Installing the MC2 Cables
To set up an MC2 interconnect, use the BN39B-04 (4-meter; 13.1-foot) or BN39B-10 (10-meter; 32.8-foot) link cables for virtual hub or standard hub configurations without optical converters.
If optical converters are used, use the BN39B-01 (1-meter; 3.3-foot) link cable and the BN34R-10 (10-meter; 32.8-foot) or BN34R-31 (31-meter; 101.7-foot) fiber-optic cable.
5.5.2.1 Installing the MC2 Cables for Virtual Hub Mode Without Optical Converters
To set up an MC2 configuration for virtual hub mode, use BN39B-04 (4-meter; 13.1-foot) or BN39B-10 (10-meter; 32.8-foot) Memory Channel link cables to connect Memory Channel adapters to each other.
Notes
MC2 link cables (BN39B) are black cables.
Do not connect an MC2 cable to an MC1 or MC1.5 CCMAA module.
Gently push the cable's connector into the receptacle, and then use the screws to pull the connector in tight. The connector must be tight to ensure a good ground contact.
If you are setting up redundant interconnects, all Memory Channel adapters in a system must have the same jumper setting, either VH0 or VH1.
5.5.2.2 Installing MC2 Cables in Virtual Hub Mode Using Optical Converters
If you are using optical converters in an MC2 configuration, install an optical converter module (CCMFB) when you install the CCMAB Memory Channel PCI adapter in each system in the virtual hub configuration. Also connect the CCMAB Memory Channel adapter to the optical converter with a BN39B-01 cable. When you install the CCMFB optical converter module in the second system, you connect the two systems with the BN34R fiber-optic cable. Customer-supplied cables may be up to 2 kilometers (1.24 miles) in length. (See Section 5.3.)
5.5.2.3 Connecting MC2 Link Cables in Standard Hub Mode (No Fiber Optics)
If there are more than two systems in a cluster, use a Memory Channel standard hub configuration. Connect a BN39B-04 (4-meter; 13.1-foot) or BN39B-10 (10-meter; 32.8-foot) link cable between the Memory Channel adapter and a linecard in the CCMHB hub, starting at the lowest numbered slot in the hub.
If you are setting up redundant interconnects, the following restrictions apply:
Each adapter installed in a system must be connected to a different hub.
Each Memory Channel adapter in a system must be connected to linecards that are installed in the same slot position in each hub. For example, if you connect one adapter to a linecard installed in slot 0/opto in one hub, you must connect the other adapter in that system to a linecard installed in slot 0/opto of the second hub.
Note
You cannot install a CCMLB linecard in slot opto only.
5.5.2.4 Connecting MC2 Cables in Standard Hub Mode Using Optical Converters
If you are using optical converters in an MC2 configuration, install an optical converter module (CCMFB), with attached BN34R fiber-optic cable, when you install the CCMAB Memory Channel PCI adapter in each system in the standard hub configuration. Also connect the CCMAB Memory Channel adapter to the optical converter with a BN39B-01 cable.
Note
See Section 2.2 for restrictions on the lengths of Memory Channel fiber-optic cables.
Now you need to:
Set the CCMLB linecard jumpers to support fiber optics
Connect the fiber-optic cable to a CCMFB fiber-optic converter module
Install the CCMFB fiber-optic converter module for each fiber-optic link
Note
If you have more than four fiber-optic links, you need two or more hubs. The CCMHB-BA hub has no linecards.
To set the CCMLB jumpers and install CCMFB fiber-optic converter modules in an MC2 hub, follow these steps:
Remove the appropriate CCMLB linecard and set the linecard jumpers to Fiber On (jumper pins 1 to 2) to support fiber optics. See Table 5-3.
Remove the CCMLB endplate and install the alternate endplate (with the slot at the bottom).
Remove the hub bulkhead blanking plate from the appropriate hub slot. Ensure that you observe the slot restrictions for the optical converter modules. Also keep in mind that all linecards for one Memory Channel interconnect must be in the same hub. (See Section 5.4.)
Thread the BN34R fiber-optic cable through the hub bulkhead slot. Make sure that the other end is attached to a CCMFB optics converter in the member system.
Thread the BN34R fiber-optic cable through the slot near the bottom of the endplate. Remove the cable tip protectors and insert the connectors into the transceiver until they click into place. Secure the cable to the module using the tie-wrap.
Install the CCMFB fiber-optic converter in slot opto only, 0/opto, 1/opto, 2/opto, or 3/opto, as appropriate.
Install a BN39B-01 1-meter (3.3-foot) link cable between the CCMFB optical converter and the CCMLB linecard.
Repeat steps 1 through 7 for each CCMFB module to be installed.
5.6 Running Memory Channel Diagnostics
After the Memory Channel adapters, hubs, link cables, fiber-optic converters, and fiber-optic cables have been installed, power up the systems and run the Memory Channel diagnostics.
There are two console-level Memory Channel diagnostics, mc_diag and mc_cable.
The mc_diag diagnostic:
Tests the Memory Channel adapters on the system running the diagnostic.
Runs as part of the initialization sequence when the system is powered up.
Runs on a standalone system or while connected to another system or a hub with the link cable.
The mc_cable diagnostic:
Must be run on all systems in the cluster simultaneously (therefore, all systems must be at the console prompt).
Caution
If you attempt to run mc_cable on one cluster member while other members of the cluster are up, you may crash the cluster.
Is designed to isolate problems to the Memory Channel adapter, BC12N or BN39B link cables, hub linecards, fiber-optic converters, BN34R fiber-optic cable, and, to some extent, the hub.
Indicates data flow through the Memory Channel by response messages.
Runs continuously until terminated with Ctrl/C.
Reports differences in connection state, not errors.
Can be run in standard or virtual hub mode.
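For reference, mc_diag is listed in this section as a console-level diagnostic; the sketch below assumes it is entered by name at the console prompt of an individual system, in the same way that mc_cable is entered in Example 5-1. Only the invocation is shown, because the report format depends on the console firmware and the adapters installed:

>>> mc_diag

If the diagnostic reports a failure, recheck the adapter seating, jumper settings, and cables before proceeding.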
When the console indicates a successful response from all other systems being tested, the data flow through the Memory Channel hardware has been completed and the test may be terminated by pressing Ctrl/C on each system being tested.
Example 5-1 shows a sample output from node 1 of a standard hub configuration. In this example, the test is started on node 1, then on node 0. The test must be terminated on each system.
Example 5-1: Running the mc_cable Test
>>> mc_cable  [1]
To exit MC_CABLE, type <Ctrl/C>
mca0 node id 1 is online  [2]
No response from node 0 on mca0  [2]
mcb0 node id 1 is online  [3]
No response from node 0 on mcb0  [3]
Response from node 0 on mca0  [4]
Response from node 0 on mcb0  [5]
mcb0 is offline  [6]
mca0 is offline  [6]
[Ctrl/C]  [7]
>>>
[1] The mc_cable diagnostic is initiated on node 1.
[2] Node 1 reports that mca0 is online but has not communicated with the Memory Channel adapter on node 0.
[3] Node 1 reports that mcb0 is online but has not communicated with the Memory Channel adapter on node 0.
[4] Memory Channel adapter mca0 has communicated with the adapter on the other node.
[5] Memory Channel adapter mcb0 has communicated with the adapter on the other node.
[6] Typing Ctrl/C on node 0 terminates the test on that node, and the Memory Channel adapters on node 1 report offline.
[7] Ctrl/C on node 1 terminates the test.
5.7 Maintaining the Memory Channel Interconnects
The following sections contain information about maintaining Memory Channel interconnects. See other sections in this chapter or the Memory Channel User's Guide for detailed information about maintaining the Memory Channel hardware. Topics in this section include:
Adding a Memory Channel interconnect (Section 5.7.1)
Upgrading Memory Channel adapters (Section 5.7.2)
Upgrading a virtual hub configuration to a standard hub configuration (Section 5.7.3)
5.7.1 Adding a Memory Channel Interconnect
If you want to change from a single Memory Channel interconnect to redundant Memory Channel interconnects without shutting down the cluster, follow the steps in Table 5-4, which covers both adding a Memory Channel interconnect and rolling from a dual MC1 interconnect to a dual MC2 interconnect. Most of the steps are the same.
5.7.2 Upgrading Memory Channel Adapters
If you have a TruCluster Server configuration with redundant MC1 interconnects and want to upgrade to MC2 interconnects, you can do so without shutting down the entire cluster.
When performing an upgrade from MC1 interconnects, which use 128 MB of Memory Channel address space, to MC2 interconnects, which use either 128 MB or 512 MB, all Memory Channel adapters must operate with 128 MB of address space (the default) until the last adapter has been changed. At that time, the address space can be increased to 512 MB if all MC2 adapters are jumpered for 512 MB.
This section covers adding a Memory Channel interconnect and the following rolling upgrade situations:
Dual, redundant MC1 interconnects in virtual hub mode (Table 5-4 and Figure 5-2)
Dual, redundant MC1 interconnects in standard hub mode (Table 5-4 and Figure 5-3 through Figure 5-8)
The figures following Table 5-4 provide two sequences that you can follow while carrying out the steps of Table 5-4. Figure 5-2 shows a dual, redundant virtual hub configuration using MC1 hardware being upgraded to MC2. Figure 5-3 through Figure 5-8 show a three-node standard hub configuration being upgraded from MC1 to MC2.
Note
When you upgrade from dual, redundant MC1 hardware to dual, redundant MC2 hardware, you must replace all the MC1 hardware on one interconnect before you start on the second interconnect (except as described in step 4 of Table 5-4).
Memory Channel adapters jumpered for 512 MB may require a minimum of 512 MB physical RAM memory. Ensure that your system has enough physical memory to support the upgrade. For two MC2 Memory Channel adapters, you will need more than 1 GB of physical memory.
Table 5-4: Adding a Memory Channel Interconnect or Upgrading from a Dual, Redundant MC1 Interconnect to MC2 Interconnects
Step | Action | Refer to:
1 | If desired, use the cluster application availability (CAA) caa_relocate command to manually relocate all applications from the cluster member that will be shut down. | TruCluster Server Cluster Administration
2 | On the system having a Memory Channel adapter installed or replaced, log in as the root user and execute the shutdown -h command to halt the system. | Tru64 UNIX System Administration
3 | Turn off the system. | --
4 | Set the jumpers on the new Memory Channel module to be installed: | Section 5.1 and Memory Channel User's Guide
 | MC1: Hub mode -- Standard hub mode or virtual hub mode (VH0 or VH1). |
 | MC2: Hub mode -- Standard hub mode or virtual hub mode (VH0 or VH1). |
 | MC2: J3 -- Memory Channel address space: Select 128 MB (jumper pins 1 to 2) or 512 MB (jumper pins 2 to 3) as required for your configuration. |
 | MC2: J4 -- Page size: Jumper pins 1 to 2 to select the 8-KB page size. |
 | MC2: J5 -- AlphaServer 8x00 mode: Jumper pins 1 to 2 for AlphaServer 8200, 8400, GS60, GS60E, and GS140 systems; jumper pins 2 to 3 for all other AlphaServer systems. |
 | MC2: J10 and J11 -- Fiber optics mode enable: Jumper pins 2 to 3 to enable the use of the fiber-optic modules; jumper pins 1 to 2 to disable the use of fiber optics. |
5 | If adding a Memory Channel interconnect: Install the Memory Channel adapter module. | Section 5.2 and Memory Channel User's Guide
 | If this is the second system in a virtual hub configuration, connect an MC1 or MC2 link cable between the MC1 or MC2 modules. |
 | For a standard hub configuration, use a link cable to connect the adapter to the Memory Channel hub linecard in the hub slot that corresponds to the existing Memory Channel linecard in the other hub. |
 | If upgrading from a dual, redundant MC1 interconnect to MC2 interconnects: Remove the MC1 adapter and install the MC2 adapter. |
 | Virtual hub: If this is the first system in a virtual hub configuration, replace the MC1 adapter with an MC2 adapter. | Figure 5-2 (B)
 | Virtual hub: If this is the second system in a virtual hub configuration, replace both MC1 adapters with MC2 adapters. Use a BN39B-10 link cable to connect Memory Channel adapters between systems to form the first MC2 interconnect. | Figure 5-2 (C)
 | Virtual hub: If this is the second adapter on the first system in a virtual hub configuration, replace the MC1 adapter with an MC2 adapter. Connect the second set of MC2 adapters with a BN39B-10 link cable to form the second Memory Channel interconnect. | Figure 5-2 (D)
 | Standard hub: Remove the MC1 adapter and install the MC2 adapter in one system, on one rail at a time. Use a BN39B-10 link cable to connect the new MC2 adapter to the linecard in the MC2 hub that corresponds to the linecard that the MC1 module was connected to in the MC1 hub. | Figure 5-4 and Figure 5-5
 | Standard hub: If this is the last system on this rail to receive an MC2 adapter (that is, all other member systems on this rail have one MC2 adapter), you can replace both MC1 adapters at the same time. Use BN39B-10 link cables to connect the new MC2 adapters to the linecards in their respective MC2 hubs that correspond to the linecards that the MC1 modules were connected to in the MC1 hubs. | Figure 5-6
6 | Turn on the system and run the mc_diag Memory Channel diagnostic. Note that you cannot run mc_cable because this is the only system in the cluster that is shut down. | Section 5.6
7 | Boot the system. | --
8 | Repeat steps 1 - 7 for all other systems in the cluster. When you have replaced both MC1 adapters in the last system, repeat steps 1 - 7 and replace the MC1 adapters on the other interconnect. | Figure 5-7 and Figure 5-8
9 | If desired, enable the use of the 512 MB address space after all MC1 adapters have been replaced with MC2 adapters jumpered for 512 MB. On one member system, use the sysconfig command to reconfigure the Memory Channel kernel subsystem to initiate the use of the 512 MB address space. The configuration change is propagated to the other cluster member systems: /sbin/sysconfig -r rm rm_use_512=1 | sysconfig reference pages
If you have used the sysconfig command to promote the Memory Channel address space to 512 MB, you may need to know the actual address space being used by a logical rail. Use the dbx debugger utility as follows to determine:
Logical size (in 8-KB pages) of a rail
Physical size (J3 jumper setting) for physical rails
# dbx -k /vmunix
(dbx) p rm_log_rail_to_ctx[0]->mgmt_page_va->size  [1]
16384  [2]
(dbx) p rm_adapters[0]->rmp_prail_va->rmc_size  [3]
{
    [0] 65536  [4]
    [1] 0
    [2] 65536  [4]
    [3] 0
    [4] 65536  [4]
    [5] 0
    [6] 0
    [7] 0
}
(dbx) p rm_adapters[1]->rmp_prail_va->rmc_size  [5]
{
    [0] 16384  [6]
    [1] 0
    [2] 16384  [6]
    [3] 0
    [4] 16384  [6]
    [5] 0
    [6] 0
    [7] 0
}
[1] Find the size of a logical rail.
[2] The logical rail is operating at 128 MB (16384 eight-KB pages).
[3] Verify the jumper settings for the member systems on the first physical rail.
[4] The J3 jumper is set at 512 MB for nodes 0, 2, and 4 on the first physical rail (65536 eight-KB pages).
[5] Verify the jumper settings for the member systems on the second physical rail.
[6] The J3 jumper is set at 128 MB for nodes 0, 2, and 4 on the second physical rail (16384 eight-KB pages).
Figure 5-2 shows a dual, redundant virtual hub configuration using MC1 hardware being upgraded to MC2.
Figure 5-2: MC1-to-MC2 Virtual Hub Upgrade
Figure 5-3 through Figure 5-8 show a three-node standard hub configuration being upgraded from MC1 to MC2.
Figure 5-3: MC1-to-MC2 Standard Hub Upgrade: Initial Configuration
Figure 5-4: MC1-to-MC2 Standard Hub Upgrade: First MC1 Module Replaced
Figure 5-5: MC1-to-MC2 Standard Hub Upgrade: Replace First MC1 Adapter in Second System
Figure 5-6: MC1-to-MC2 Standard Hub Upgrade: Replace Third System Memory Channel Adapters
Figure 5-7: MC1-to-MC2 Standard Hub Upgrade: Replace Second MC1 in Second System
Figure 5-8: MC1-to-MC2 Standard Hub Upgrade: Final Configuration
5.7.3 Upgrading a Virtual Hub Configuration to a Standard Hub Configuration
If your cluster is configured in virtual hub mode (two member systems with no Memory Channel hub), you must convert to standard hub mode in order to:
Add another member system to the cluster.
Add fiber optics to MC2 to provide more distance between the cluster systems.
Note
You need an additional PCI slot for each optical converter module to be installed in the system. The optical converter does not use PCI bandwidth, but it does take up a PCI slot.
You also need an available slot in the Memory Channel hub for an optical converter module for each member system.
There will be some cluster down time. During the procedure, you can maintain cluster operations except for the time it takes to shut down the second system and boot the first system as a single-node cluster.
Note
If you are not using a quorum disk, the first member you shut down must have zero votes for the cluster to survive its shutdown. Use the clu_quorum command to adjust quorum votes. See the clu_quorum(8) reference page and the Cluster Administration manual for more information.
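As a quick sketch, you can display the current vote assignments before shutting down the first member by running clu_quorum with no arguments; the vote-adjustment invocation on the second line is hypothetical and should be checked against clu_quorum(8):

# clu_quorum
(displays the current member votes, quorum disk votes, and expected votes)
# clu_quorum -m 1 0
(hypothetical example of setting member 1's votes to 0; verify the exact options in clu_quorum(8))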
To upgrade from a virtual hub configuration to a standard hub configuration, follow the steps in Table 5-5. In this procedure, system1 is the member system that will be shut down first. Member system system2 will be shut down last. The procedure is written with the assumption that you have dual-rail failover-pair Memory Channel adapter modules.
Table 5-5: Upgrading from a Virtual Hub Configuration to a Standard Hub Configuration
Step | Action | Refer to:
1 | Install the Memory Channel hubs at an appropriate distance from the member systems. | Section 5.4
 | If you are adding fiber optics, for each system you will have in the cluster you need to: | --
 | Set the hub linecard J2 and J3 jumpers to enable fiber optics. | Section 5.1.2
 | Install the optical converters in the hub, ensuring that you connect the optical cable to the optical converter when it is installed. | Section 5.5.2.4
 | Connect the fiber-optic module in the hub to the linecard with a 1-meter (3.3-foot) BN39B-01 link cable. | Section 5.5.2.4
2 | Manually relocate all applications from system1 to system2. Use the cluster application availability (CAA) caa_relocate command. | caa_relocate(8) reference page and Cluster Administration
3 | On system1, log in as the root user and execute the shutdown -h command to halt the system. | Tru64 UNIX System Administration
4 | Turn off system1 power. | --
5 | Disconnect the Memory Channel cables from system1. | --
6 | Wearing an antistatic wrist strap, remove the Memory Channel adapter modules and place them on a grounded work surface. | --
7 | On each Memory Channel adapter module, move the hub mode jumper (J4 for MC1 or MC1.5 and J1 for MC2) to pins 1 and 2 to select standard hub mode. | Section 5.1 and Memory Channel User's Guide
8 | Reinstall the Memory Channel modules. | Section 5.2
9 | If you are adding fiber optics, install the optical converters in the member system. | Section 5.3
10 | Connect the Memory Channel cables between the Memory Channel adapter module and the Memory Channel hub and turn on hub power. If you have multiple adapters, each adapter must be connected to a different hub and must be in the same linecard slot position in each hub. | Section 5.5
11 | Turn on system1 power and run the mc_diag Memory Channel diagnostic. (You cannot run mc_cable because this is the only system in the cluster that is at the console prompt and no other systems are connected to the hub.) | Section 5.6
12 | Use the shutdown -h or shutdown -c command to shut down cluster member system2. | --
13 | When system2 is at the console prompt, boot system1, the system that is connected to the Memory Channel hub. | --
14 | Repeat steps 4 - 9 for system2. | --
15 | Connect the Memory Channel cables between the Memory Channel adapter module and the Memory Channel hub. If you have multiple adapters, each adapter must be connected to a different hub and must be in the same linecard slot position in each hub. | Section 5.5
16 | Turn on system2 power and run the mc_diag Memory Channel diagnostic. (You cannot run mc_cable because the other system is at multi-user mode.) | Section 5.6
17 | Boot system2. | --
You can now connect a new system to the Memory Channel hub. After configuring the hardware, use the clu_add_member command to add each new system to the cluster. (See the clu_add_member(8) reference page and the Cluster Installation manual for more information.)
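As a rough sketch of that final software step, assuming the usual TruCluster procedure (the interactive prompts are not reproduced here):

# clu_add_member
(run on an existing cluster member; answer the prompts describing the new member, such as its member ID and boot disk)
(then boot the new system so that it joins the cluster)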