5    Setting Up the Memory Channel Cluster Interconnect

This chapter describes Memory Channel configuration restrictions, and describes how to set up the Memory Channel cluster interconnect, including setting up a Memory Channel hub and Memory Channel optical converter (MC2 only), and connecting link cables.

Two versions of the Memory Channel peripheral component interconnect (PCI) adapter are available: CCMAA and CCMAB (MC2).

Two variations of the CCMAA PCI adapter are in use: CCMAA-AA (MC1) and CCMAA-AB (MC1.5). Because the hardware used with these two PCI adapters is the same, this manual often refers to MC1 when referring to either of these variations.

See the TruCluster Server Software Product Description (SPD) for a list of the supported Memory Channel hardware. See the Memory Channel User's Guide for illustrations and more detailed information about installing jumpers, Memory Channel adapters, and hubs.

See Section 2.2 for a discussion on Memory Channel restrictions.

You can have two Memory Channel adapters with TruCluster Server, but only one rail is active at a time. This configuration is referred to as a failover pair. If the active rail fails, cluster communication fails over to the formerly inactive rail.

If you use multiple Memory Channel adapters with the Memory Channel application programming interface (API) for high-performance data delivery over Memory Channel, setting the rm_rail_style configuration variable to zero (rm_rail_style = 0) enables the single-rail style with multiple active rails. The default is 1, which selects the failover pair style.
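
For example, a minimal sketch of making multiple active rails the default at boot time by adding an rm subsystem stanza to the /etc/sysconfigtab file on each member system. The stanza format follows standard sysconfigtab(4) usage, and a reboot is assumed to be needed for the change to take effect; verify the attribute and its legal values against your system's reference pages before relying on this sketch:

rm:
        rm_rail_style = 0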

For more information on the Memory Channel failover pair model, see the Cluster Highly Available Applications manual.

To set up the Memory Channel interconnects, follow these steps, referring to the appropriate section and the Memory Channel User's Guide as necessary:

  1. Set the Memory Channel jumpers (Section 5.1).

  2. Install the Memory Channel adapter into a PCI slot on each system (Section 5.2).

  3. If you are using fiber optics with MC2, install the CCMFB fiber-optic module (Section 5.3).

  4. If you have more than two systems in the cluster, install a Memory Channel hub (Section 5.4).

  5. Connect the Memory Channel cables (Section 5.5).

  6. After you complete steps 1 through 5 for all systems in the cluster, apply power to the systems and run Memory Channel diagnostics (Section 5.6).

    Note

    If you are installing SCSI or network adapters, you may want to complete all hardware installation before powering up the systems to run Memory Channel diagnostics.

Section 5.7.2 provides procedures for upgrading from redundant MC1 interconnects to MC2 interconnects.

5.1    Setting the Memory Channel Adapter Jumpers

The meaning of the Memory Channel adapter module jumpers depends upon the version of the Memory Channel module.

5.1.1    MC1 and MC1.5 Hub Mode Jumper

The MC1 and MC1.5 modules (CCMAA-AA and CCMAA-AB, respectively) have an adapter jumper (J4) that designates whether the configuration is using standard or virtual hub mode. If virtual hub mode is being used, there can be only two systems. One system must be virtual hub 0 (VH0) and the other must be virtual hub 1 (VH1).

The Memory Channel adapter should arrive with the J4 jumper set for standard hub mode (pins 1 to 2 jumpered). Confirm that the jumper is set properly for your configuration. The jumper configurations in Table 5-1 are shown as if you are holding the module with the J4 jumper facing you, with the module end plate in your left hand. The jumper is next to the factory/maintenance cable connector.

Table 5-1:  MC1 and MC1.5 J4 Jumper Configuration

If hub mode is:    Jumper:
Standard           J4 Pins 1 to 2
Virtual: VH0       J4 Pins 2 to 3
Virtual: VH1       None needed; store the jumper on J4 pin 1 or 3

If you are upgrading from virtual hub mode to standard hub mode (or from standard hub mode to virtual hub mode), be sure to change the J4 jumper on all Memory Channel adapters on the rail.

5.1.2    MC2 Jumpers

The MC2 module (CCMAB) has multiple jumpers. They are numbered right to left, starting with J1 in the upper right corner (as you view the jumper side of the module with the endplate in your left hand). The leftmost jumpers are J11 and J10. J11 is above J10.

Most of the jumper settings are straightforward, but the window size jumper, J3, needs some explanation.

If a CCMAA adapter (MC1 or MC1.5) is installed, 128 MB of address space is allocated for Memory Channel use. If a CCMAB (MC2) PCI adapter is installed, the address space allocated for Memory Channel depends on the J3 jumper and can be 128 MB or 512 MB.

If two Memory Channel adapters are used as a failover pair to provide redundancy, the address space allocated for the logical rail is the smaller of the two physical adapters' window sizes.

During a rolling upgrade (see Section 5.7.2) from an MC1 failover pair to an MC2 failover pair, the MC2 modules can be jumpered for 128 MB or 512 MB. If they are jumpered for 512 MB, the increased address space is not available until all Memory Channel PCI adapters have been upgraded and the use of 512 MB is enabled. To enable it, use the sysconfig command on one member system to reconfigure the Memory Channel kernel subsystem to use the 512 MB address space; the configuration change is propagated to the other cluster member systems when you enter the following command:

# /sbin/sysconfig -r rm rm_use_512=1
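
To confirm that the change has taken effect on a member system, you can query the rm subsystem with the sysconfig command. A sketch follows; the query form is standard sysconfig usage, and the output shown is illustrative only:

# /sbin/sysconfig -q rm rm_use_512
rm:
rm_use_512 = 1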
 

See the Cluster Administration manual for more information on failover pairs.

The MC2 jumpers are described in Table 5-2.

Table 5-2:  MC2 Jumper Configuration

Jumper:        Description:               Setting:
J1             Hub Mode                   Standard: Pins 1 to 2
                                          VH0: Pins 2 to 3
                                          VH1: None needed; store the jumper on pin 1 or pin 3
J3             Window Size                512 MB: Pins 2 to 3
                                          128 MB: Pins 1 to 2
J4             Page Size                  8-KB page size (UNIX): Pins 1 to 2
                                          4-KB page size (not used): Pins 2 to 3
J5             AlphaServer 8x00 Mode      8x00 mode selected (AlphaServer 8200, 8400, GS60, GS60E, and GS140 systems): Pins 1 to 2
                                          8x00 mode not selected: Pins 2 to 3
J10 and J11    Fiber-Optic Mode Enable    Fiber Off: Pins 1 to 2
                                          Fiber On: Pins 2 to 3

The MC2 linecard (CCMLB) has two jumpers, J2 and J3, that are used to enable fiber-optic mode. The jumpers are located near the middle of the module (as you view the jumper side of the module with the endplate in your left hand). Jumper J2 is on the right. The MC2 linecard jumpers are described in Table 5-3.

Table 5-3:  MC2 Linecard Jumper Configurations

Jumper:        Description:   Setting:
J2 and J3      Fiber Mode     Fiber Off: Pins 2 to 3
                              Fiber On: Pins 1 to 2

5.2    Installing the Memory Channel Adapter

Install the Memory Channel adapter in an appropriate peripheral component interconnect (PCI) slot. (See Section 2.2.) Secure the module at the backplane. Ensure that the screw is tight to maintain proper grounding.

The Memory Channel adapter comes with a straight extension plate. This fits most systems; however, you may have to replace the extender with an angled extender (AlphaServer 2100A, for instance), or for an AlphaServer 8200/8400, GS60, GS60E, or GS140, remove the extender completely.

If you are setting up a redundant Memory Channel configuration, install the second Memory Channel adapter immediately after installing the first Memory Channel adapter. Ensure that the jumpers are correct and are the same on both modules.

After you install the Memory Channel adapters, replace the system panels, unless you have more hardware to install.

5.3    Installing the MC2 Optical Converter in the Member System

If you plan to use a CCMFB optical converter along with the MC2 PCI adapter, install it at the same time that you install the MC2 CCMAB. To install an MC2 CCMFB optical converter in the member system, follow these steps. See Section 5.5.2.4 if you are installing an optical converter in an MC2 hub.

  1. Remove the bulkhead blanking plate for the desired PCI slot.

  2. Thread one end of the fiber-optic cable (BN34R) through the PCI bulkhead slot.

  3. Thread the cable through the slot in the optical converter module (CCMFB) endplate (at the top of the endplate).

  4. Remove the cable tip protectors and attach the keyed plug to the connector on the optical converter module. Tie-wrap the cable to the module.

  5. Seat the optical converter module firmly into the PCI backplane and secure the module with the PCI card cage mounting screw.

  6. Attach the 1-meter (3.3-foot) BN39B-01 cable from the CCMAB MC2 PCI adapter to the CCMFB optical converter.

  7. Route the fiber-optic cable to the remote system or hub.

  8. Repeat steps 1 through 7 for the optical converter on the second system. See Section 5.5.2.4 if you are installing an optical converter in an MC2 hub.

5.4    Installing the Memory Channel Hub

You may use a hub in a two-node TruCluster Server cluster, but the hub is not required. When there are more than two systems in a cluster, you must use a Memory Channel hub (a CCMHA hub for MC1 or MC1.5, or a CCMHB hub for MC2). All linecards for one Memory Channel interconnect must be installed in the same hub.

5.5    Installing the Memory Channel Cables

Memory Channel cable installation depends on the Memory Channel module revision, and whether or not you are using fiber optics. The following sections describe how to install the Memory Channel cables for MC1 and MC2.

5.5.1    Installing the MC1 or MC1.5 Cables

To set up an MC1 or MC1.5 interconnect, use the BC12N-10 3-meter (9.8-foot) link cables to connect Memory Channel adapters and, optionally, Memory Channel hubs.

Note

Do not connect an MC1 or MC1.5 link cable to an MC2 module.

5.5.1.1    Connecting MC1 or MC1.5 Link Cables in Virtual Hub Mode

For an MC1 virtual hub configuration (two nodes in the cluster), connect the BC12N-10 link cables between the Memory Channel adapters that are installed in each of the systems.

Caution

Be very careful when installing the link cables. Insert the cables straight in.

Gently push the cable's connector into the receptacle, and then use the screws to pull the connector in tight. The connector must be tight to ensure a good ground contact.

If you are setting up redundant interconnects, all Memory Channel adapters in a system must have the same jumper setting, either VH0 or VH1.

Note

With the TruCluster Server Version 5.1A product and virtual hub mode, there is no longer a restriction requiring that mca0 in one system be connected to mca0 in the other system.

5.5.1.2    Connecting MC1 Link Cables in Standard Hub Mode

If there are more than two systems in a cluster, use a standard hub configuration. Connect a BC12N-10 link cable between the Memory Channel adapter and a linecard in the CCMHA hub, starting at the lowest numbered slot in the hub.

If you are setting up redundant interconnects, the following restrictions apply:

  • Each Memory Channel adapter in a system must be connected to a different hub.

  • The adapters in each system must be connected to linecards that are in the same slot position in each hub.

Figure 5-1 shows Memory Channel adapters connected to linecards that are in the same slot position in the Memory Channel hubs.

Figure 5-1:  Connecting Memory Channel Adapters to Hubs

5.5.2    Installing the MC2 Cables

To set up an MC2 interconnect, use the BN39B-04 (4-meter; 13.1-foot) or BN39B-10 (10-meter; 32.8-foot) link cables for virtual hub or standard hub configurations without optical converters.

If optical converters are used, use the BN39B-01 (1-meter; 3.3-foot) link cable and the BN34R-10 (10-meter; 32.8-foot) or BN34R-31 (31-meter; 101.7-foot) fiber-optic cable.

5.5.2.1    Installing the MC2 Cables for Virtual Hub Mode Without Optical Converters

To set up an MC2 configuration for virtual hub mode, use BN39B-04 (4-meter; 13.1-foot) or BN39B-10 (10-meter; 32.8-foot) Memory Channel link cables to connect Memory Channel adapters to each other.

Notes

MC2 link cables (BN39B) are black cables.

Do not connect an MC2 cable to an MC1 or MC1.5 CCMAA module.

Gently push the cable's connector into the receptacle, and then use the screws to pull the connector in tight. The connector must be tight to ensure a good ground contact.

If you are setting up redundant interconnects, all Memory Channel adapters in a system must have the same jumper setting, either VH0 or VH1.

5.5.2.2    Installing MC2 Cables in Virtual Hub Mode Using Optical Converters

If you are using optical converters in an MC2 configuration, install an optical converter module (CCMFB) when you install the CCMAB Memory Channel PCI adapter in each system in the virtual hub configuration. Also connect the CCMAB Memory Channel adapter to the optical converter with a BN39B-01 cable. When you install the CCMFB optical converter module in the second system, you connect the two systems with the BN34R fiber-optic cable. Customer-supplied cables may be up to 2 kilometers (1.24 miles) in length. (See Section 5.3.)

5.5.2.3    Connecting MC2 Link Cables in Standard Hub Mode (No Fiber Optics)

If there are more than two systems in a cluster, use a Memory Channel standard hub configuration. Connect a BN39B-04 (4-meter; 13.1-foot) or BN39B-10 (10-meter; 32.8-foot) link cable between the Memory Channel adapter and a linecard in the CCMHB hub, starting at the lowest numbered slot in the hub.

If you are setting up redundant interconnects, the following restrictions apply:

  • Each Memory Channel adapter in a system must be connected to a different hub.

  • The adapters in each system must be connected to linecards that are in the same slot position in each hub.

5.5.2.4    Connecting MC2 Cables in Standard Hub Mode Using Optical Converters

If you are using optical converters in an MC2 configuration, install an optical converter module (CCMFB), with attached BN34R fiber-optic cable, when you install the CCMAB Memory Channel PCI adapter in each system in the standard hub configuration. Also connect the CCMAB Memory Channel adapter to the optical converter with a BN39B-01 cable.

Note

See Section 2.2 for restrictions on the lengths of Memory Channel fiber-optic cables.

You now need to set the CCMLB linecard jumpers to enable fiber-optic mode and install the CCMFB fiber-optic converter modules in the MC2 hub.

Note

If you have more than four fiber-optic links, you need two or more hubs. The CCMHB-BA hub has no linecards.

To set the CCMLB jumpers and install CCMFB fiber-optic converter modules in an MC2 hub, follow these steps:

  1. Remove the appropriate CCMLB linecard and set the linecard jumpers to Fiber On (jumper pins 1 to 2) to support fiber optics. See Table 5-3.

  2. Remove the CCMLB endplate and install the alternate endplate (with the slot at the bottom).

  3. Remove the hub bulkhead blanking plate from the appropriate hub slot. Ensure that you observe the slot restrictions for the optical converter modules. Also keep in mind that all linecards for one Memory Channel interconnect must be in the same hub. (See Section 5.4.)

  4. Thread the BN34R fiber-optic cable through the hub bulkhead slot. Make sure that the other end is attached to a CCMFB optical converter in the member system.

  5. Thread the BN34R fiber-optic cable through the slot near the bottom of the endplate. Remove the cable tip protectors and insert the connectors into the transceiver until they click into place. Secure the cable to the module using the tie-wrap.

  6. Install the CCMFB fiber-optic converter in slot opto only, 0/opto, 1/opto, 2/opto, or 3/opto, as appropriate.

  7. Install a BN39B-01 1-meter (3.3-foot) link cable between the CCMFB optical converter and the CCMLB linecard.

  8. Repeat steps 1 through 7 for each CCMFB module to be installed.

5.6    Running Memory Channel Diagnostics

After the Memory Channel adapters, hubs, link cables, fiber-optic converters, and fiber-optic cables have been installed, power up the systems and run the Memory Channel diagnostics.

There are two console-level Memory Channel diagnostics, mc_diag and mc_cable:

  • mc_diag tests the Memory Channel adapters in the local system. You can run it on a single halted system; the other cluster members do not have to be at the console prompt.

  • mc_cable tests end-to-end data flow through the Memory Channel hardware (adapters, link cables, hubs, and optical converters). Run it at the console prompt of every system being tested; it runs continuously, reporting status changes, until you terminate it by pressing Ctrl/C on each system.
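
Both diagnostics are started from the console prompt of a halted system. For example, a minimal invocation of mc_diag (output not shown):

>>> mc_diag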

When the console indicates a successful response from all other systems being tested, data is flowing through the Memory Channel hardware, and you can terminate the test by pressing Ctrl/C on each system being tested.

Example 5-1 shows a sample output from node 1 of a standard hub configuration. In this example, the test is started on node 1, then on node 0. The test must be terminated on each system.

Example 5-1:  Running the mc_cable Test

>>> mc_cable  [1]
To exit MC_CABLE, type <Ctrl/C>
mca0 node id 1 is online  [2]
No response from node 0 on mca0  [2]
mcb0 node id 1 is online  [3]
No response from node 0 on mcb0  [3]
Response from node 0 on mca0  [4]
Response from node 0 on mcb0  [5]
mcb0 is offline  [6]
mca0 is offline  [6]
[Ctrl/C]  [7]
>>>

  1. The mc_cable diagnostic is initiated on node 1. [Return to example]

  2. Node 1 reports that mca0 is on line but has not communicated with the Memory Channel adapter on node 0. [Return to example]

  3. Node 1 reports that mcb0 is on line but has not communicated with the Memory Channel adapter on node 0. [Return to example]

  4. Memory Channel adapter mca0 has communicated with the adapter on the other node. [Return to example]

  5. Memory Channel adapter mcb0 has communicated with the adapter on the other node. [Return to example]

  6. Typing a Ctrl/C on node 0 terminates the test on that node and the Memory Channel adapters on node 1 report off line. [Return to example]

  7. Ctrl/C on node 1 terminates the test. [Return to example]

5.7    Maintaining Memory Channel Interconnects

The following sections contain information about maintaining Memory Channel interconnects. See other sections in this chapter or the Memory Channel User's Guide for detailed information about maintaining the Memory Channel hardware. Topics in this section include:

  • Adding a Memory Channel interconnect (Section 5.7.1)

  • Upgrading Memory Channel adapters (Section 5.7.2)

  • Upgrading a virtual hub configuration to a standard hub configuration (Section 5.7.3)

5.7.1    Adding a Memory Channel Interconnect

If you want to change from a single Memory Channel interconnect to redundant Memory Channel interconnects without shutting down the cluster, follow the steps in Table 5-4, which covers adding a Memory Channel interconnect and rolling from a dual MC1 interconnect to a dual MC2 interconnect. Most of the steps are the same.

5.7.2    Upgrading Memory Channel Adapters

If you have a TruCluster Server configuration with redundant MC1 interconnects and want to upgrade to MC2 interconnects, you can do so without shutting down the entire cluster.

When performing an upgrade from MC1 interconnects, which use 128 MB Memory Channel address space, to MC2, which uses either 128 or 512 MB Memory Channel address space, all Memory Channel adapters must be operating at 128 MB Memory Channel address space (the default) until the last adapter has been changed. At that time the address space can be increased to 512 MB if all MC2 adapters are jumpered for 512 MB.

This section covers adding a Memory Channel interconnect and the following rolling upgrade situations:

  • Upgrading a dual, redundant MC1 virtual hub configuration to MC2 interconnects

  • Upgrading a dual, redundant MC1 standard hub configuration to MC2 interconnects

The figures following Table 5-4 provide two sequences that you can follow while carrying out the steps of Table 5-4. Figure 5-2 shows a dual, redundant virtual hub configuration using MC1 hardware being upgraded to MC2. Figure 5-3 through Figure 5-8 show a three-node standard hub configuration being upgraded from MC1 to MC2.

Note

When you upgrade from dual, redundant MC1 hardware to dual, redundant MC2 hardware, you must replace all the MC1 hardware on one interconnect before you start on the second interconnect (except as described in step 4 of Table 5-4).

A Memory Channel adapter jumpered for 512 MB may require a minimum of 512 MB of physical memory. Ensure that your system has enough physical memory to support the upgrade. For two MC2 Memory Channel adapters, you will need more than 1 GB of physical memory.

Table 5-4:  Adding a Memory Channel Interconnect or Upgrading from a Dual, Redundant MC1 Interconnect to MC2 Interconnects

Step Action Refer to:
1 If desired, manually relocate all applications from the cluster member that will be shut down, using the cluster application availability (CAA) caa_relocate command. TruCluster Server Cluster Administration
2 On the system having an MC1 adapter installed or replaced, log in as the root user and execute the shutdown -h utility to halt the system. Tru64 UNIX System Administration

Note

After the system is at the console prompt, use the console set command to set the auto_action console environment variable to halt. This halts the system at the console prompt when the system is turned on, ensuring that you are able to run the Memory Channel diagnostics.

>>> set auto_action halt
 

3 Turn off the system. --
4 Set the jumpers on the new Memory Channel module to be installed: Section 5.1 and Memory Channel User's Guide
  MC1:  
  Hub mode -- Standard hub mode or virtual hub mode (VH0 or VH1)  
 

  • Virtual hub mode, VH0: Jumper pins 2 to 3

  • Virtual hub mode, VH1: No jumper

  • Standard hub mode: Jumper pins 1 to 2

 
  MC2:  
  Hub mode -- Standard hub mode or virtual hub mode (VH0 or VH1)  
 

  • Virtual hub mode, VH0: Jumper pins 2 to 3

  • Virtual hub mode, VH1: No jumper

  • Standard hub mode: Jumper pins 1 to 2

 
  J3 -- Memory Channel address space: Select 128 MB (jumper pins 1 to 2) or 512 MB (jumper pins 2 to 3) as required for your configuration  

Note

If you set the J3 jumpers for 128 MB because the other interconnect is MC1, and then later on decide to upgrade to dual, redundant MC2 hardware using 512 MB address space, you will have to reset the jumpers. If you set the jumpers to 512 MB now, the software will only allow the use of 128 MB address space for a mixed rail cluster (MC1 on one rail, MC2 on the other rail).

  J4 -- Page size: Jumper pins 1 to 2 to select 8 KB  
  J5 -- AlphaServer 8x00 Mode: Jumper pins 1 to 2 for AlphaServer 8200, 8400, GS60, GS60E, and GS140 systems and jumper pins 2 to 3 for all other AlphaServer systems  
  J10 and J11 -- Fiber-Optic Mode Enable: Jumper pins 2 to 3 to enable the use of the fiber-optic modules. Jumper pins 1 to 2 to disable the use of fiber optics  
5 If adding a Memory Channel interconnect: Install the Memory Channel adapter module. Section 5.2 and Memory Channel User's Guide
  If this is the second system in a virtual hub configuration, connect an MC1 or MC2 link cable between the MC1 or MC2 modules.  
  For a standard hub configuration, use a link cable to connect the adapter to the Memory Channel hub linecard in the hub slot that corresponds to the existing Memory Channel linecard in the other hub.  
  If upgrading from a dual, redundant MC1 interconnect to MC2 interconnects: Remove the MC1 adapter and install the MC2 adapter.  
     
  Virtual Hub:  
  If this is the first system in a virtual hub configuration, replace the MC1 adapter with an MC2 adapter. Figure 5-2 (B)
  If this is the second system in a virtual hub configuration, replace both MC1 adapters with MC2 adapters. Use a BN39B-10 link cable to connect Memory Channel adapters between systems to form the first MC2 interconnect. Figure 5-2 (C)
  If this is the second adapter on the first system in a virtual hub configuration, replace the MC1 adapter with an MC2 adapter. Connect the second set of MC2 adapters with a BN39B-10 link cable to form the second Memory Channel interconnect. Figure 5-2 (D)
     
  Standard Hub Configuration:  
  Remove the MC1 adapter and install the MC2 adapter in one system, and on one rail at a time. Use a BN39B-10 link cable to connect the new MC2 adapter to the linecard in the MC2 hub that corresponds to the same linecard that the MC1 module was connected to in the MC1 hub. Figure 5-4 and Figure 5-5
  If this is the last system on this rail to receive an MC2 adapter (that is, all other member systems on this rail have one MC2 adapter) you can replace both MC1 adapters at the same time. Use a BN39B-10 link cable to connect the new MC2 adapters to the linecard in their respective MC2 hub that corresponds to the same linecard that the MC1 modules were connected to in the MC1 hubs. Figure 5-6
6 Turn on the system and run the mc_diag Memory Channel diagnostic. Note that you cannot run mc_cable because this is the only system in the cluster that is shut down. Section 5.6
7 Boot the system.  
8 Repeat steps 1 - 7 for all other systems in the cluster. When you have replaced both MC1 adapters in the last system, repeat steps 1 - 7 and replace the MC1 adapters on the other interconnect. Figure 5-7 and Figure 5-8
9 If desired, enable the use of the 512 MB address space after the following conditions have been met: sysconfig reference pages
 

  • The last member system has had its second MC1 adapter replaced with an MC2 adapter.

  • The cluster is operational.

  • All MC2 adapters are jumpered for 512 MB (and you need to utilize 512 MB address space).

 
  On one member system, use the sysconfig command to reconfigure the Memory Channel kernel subsystem to initiate the use of 512 MB address space. The configuration change is propagated to the other cluster member systems: /sbin/sysconfig -r rm rm_use_512=1  

Note

After the configuration change is propagated to the other member systems, you can reboot any member system and the 512 MB address space is still in effect.

If you use the sysconfig command to promote the address space to 512 MB but inadvertently leave an MC2 adapter jumpered for 128 MB, that system will not rejoin the cluster when it is rebooted. While the system with the adapter jumpered for 128 MB is shut down, the TruCluster software running on the remaining cluster member systems discovers that all operational Memory Channel adapters are jumpered for 512 MB and, because the address space has been promoted to 512 MB, the active rail uses the 512 MB address space. A system jumpered for 128 MB can then no longer join the cluster. The startup error message on the system jumpered for 128 MB follows:

panic: MC2 adapter has too little memory

If you have used the sysconfig command to promote Memory Channel address space to 512 MB, you may need to know the actual address space being used by a logical rail. Use the dbx debugger utility as follows to determine it:

# dbx -k /vmunix
(dbx) p rm_log_rail_to_ctx[0]->mgmt_page_va->size  [1]
16384  [2]
(dbx)  p rm_adapters[0]->rmp_prail_va->rmc_size  [3]
{
        [0] 65536  [4]
        [1] 0
        [2] 65536  [4]
        [3] 0
        [4] 65536  [4]
        [5] 0
        [6] 0
        [7] 0
}
(dbx)  p rm_adapters[1]->rmp_prail_va->rmc_size  [5]
{
        [0] 16384  [6]
        [1] 0
        [2] 16384  [6]
        [3] 0
        [4] 16384  [6]
        [5] 0
        [6] 0
        [7] 0
} 
 

  1. Find the size of a logical rail. [Return to example]

  2. The logical rail is operating at 128 MB (16384 eight-KB pages). [Return to example]

  3. Verify the jumper settings for the member systems on the first physical rail. [Return to example]

  4. The J3 jumper is set at 512 MB for nodes 0, 2, and 4 on the first physical rail (65536 eight-KB pages). [Return to example]

  5. Verify the jumper settings for the member systems on the second physical rail. [Return to example]

  6. The J3 jumper is set at 128 MB for nodes 0, 2, and 4 on the second physical rail (16384 eight-KB pages). [Return to example]

Figure 5-2 shows a dual, redundant virtual hub configuration using MC1 hardware being upgraded to MC2.

Figure 5-2:  MC1-to-MC2 Virtual Hub Upgrade

Figure 5-3 through Figure 5-8 show a three-node standard hub configuration being upgraded from MC1 to MC2.

Figure 5-3:  MC1-to-MC2 Standard Hub Upgrade: Initial Configuration

Figure 5-4:  MC1-to-MC2 Standard Hub Upgrade: First MC1 Module Replaced

Figure 5-5:  MC1-to-MC2 Standard Hub Upgrade: Replace First MC1 Adapter in Second System

Figure 5-6:  MC1-to-MC2 Standard Hub Upgrade: Replace Third System Memory Channel Adapters

Figure 5-7:  MC1-to-MC2 Standard Hub Upgrade: Replace Second MC1 in Second System

Figure 5-8:  MC1-to-MC2 Standard Hub Upgrade: Final Configuration

5.7.3    Upgrading a Virtual Hub Configuration to a Standard Hub Configuration

If your cluster is configured in virtual hub mode (two member systems with no Memory Channel hub), you must convert to standard hub mode in order to add more member systems to the cluster.

There will be some cluster down time. During the procedure, you can maintain cluster operations except for the time it takes to shut down the second system and boot the first system as a single-node cluster.

Note

If you are not using a quorum disk, the first member you shut down must have zero votes for the cluster to survive its shutdown. Use the clu_quorum command to adjust quorum votes. See the clu_quorum(8) reference page and the Cluster Administration manual for more information.
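
For example, a minimal sketch that displays the current vote assignments and then removes the vote from the member that will be shut down first. The member ID and the -m option form shown here are assumptions; verify the exact syntax in the clu_quorum(8) reference page:

# clu_quorum                  (display current member and quorum disk votes)
# clu_quorum -m 1 0           (assumed syntax: set member 1's votes to 0)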

To upgrade from a virtual hub configuration to a standard hub configuration, follow the steps in Table 5-5. In this procedure, system1 is the member system that will be shut down first. Member system system2 will be shut down last. The procedure is written with the assumption that you have dual-rail failover-pair Memory Channel adapter modules.

Table 5-5:  Upgrading from a Virtual Hub Configuration to a Standard Hub Configuration

Step Action Refer to:
1 Install the Memory Channel hubs at an appropriate distance from the member systems. Section 5.4
  If you are adding fiber optics, for each system that will be in the cluster, you need to: --
  Set the hub linecard J2 and J3 jumpers to enable fiber optics. Section 5.1.2
  Install the optical converters in the hub, ensuring that you connect the optical cable to the optical converter when it is installed. Section 5.5.2.4
 

Connect the fiber-optic module in the hub to the linecard with a 1-meter (3.3-foot) BN39B-01 link cable.

Section 5.5.2.4
2 Manually relocate all applications from system1 to system2. Use the cluster application availability (CAA) caa_relocate command. caa_relocate(8) reference page and Cluster Administration
3 On system1 log in as the root user and execute the shutdown -h command to halt the system. Tru64 UNIX System Administration

Note

When system1 is at the console prompt, note the setting of the auto_action console environment variable, then use the console set command to set the auto_action variable to halt. This halts the system at the console prompt when the system is turned on, ensuring that you are able to run the Memory Channel diagnostics.

P00>>> show auto_action
       
.
.
.
P00>>> set auto_action halt  

4 Turn off system1 power. --
5 Disconnect the Memory Channel cables from system1. --
6 Wearing an antistatic wrist strap, remove the Memory Channel adapter modules and place them on a grounded work surface. --
7 On each Memory Channel adapter module, move the hub mode jumper (J4 for MC1 or MC1.5 and J1 for MC2) to pins 1 and 2 to select standard hub mode. Section 5.1 and Memory Channel User's Guide

Note

If you are also adding Memory Channel fiber optics capabilities, ensure that Memory Channel adapter module J10 and J11 jumpers are set to enable fiber optics.

8 Reinstall the Memory Channel modules. Section 5.2
9 If you are adding fiber optics, install the optical converters in the member system. Section 5.3

Note

Install the fiber-optic cable in cable runs between the hub and member system. Connect the fiber-optic cable to the optical converter when you install the converter in the system.

Connect the fiber-optic module to the Memory Channel adapter module with a 1-meter (3.3-foot) BN39B-01 link cable.

10 Connect the Memory Channel cables between the Memory Channel adapter module and the Memory Channel hub and turn on hub power. If you have multiple adapters, each adapter must be connected to a different hub, and be in the same linecard slot position in each hub. Section 5.5

Note

If you are using fiber optics with Memory Channel, you have already installed the fiber-optic cable. Turn on hub power.

11 Turn on system1 system power and run the mc_diag Memory Channel diagnostic. (You cannot run mc_cable because this is the only system in the cluster that is at the console prompt and no other systems are connected to the hub.) Section 5.6

Note

Set the auto_action console environment variable to its previous value, restart or boot, for instance:

>>> set auto_action restart
 

12 Use the shutdown -h or shutdown -c command to shut down cluster member system2. --
13 When system2 is at the console prompt, boot system1, the system that is connected to the Memory Channel hub. --
14 Repeat steps 4 - 9 for system2. --
15 Connect the Memory Channel cables between the Memory Channel adapter module and the Memory Channel hub. If you have multiple adapters, each adapter must be connected to a different hub, and must be in the same linecard slot position in each hub. Section 5.5
16 Turn on system2 power and run the mc_diag Memory Channel diagnostic. (You cannot run mc_cable because the other system is at multi-user mode.) Section 5.6

Note

Reset the auto_action console environment variable to its previous value, restart or boot, for instance:

>>> set auto_action restart
 

17 Boot system2. --

You can now connect a new system to the Memory Channel hub. After configuring the hardware, use the clu_add_member command to add each new system to the cluster. (See the clu_add_member(8) reference page and the Cluster Installation manual for more information.)
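
For example, a minimal sketch of starting the procedure on an existing cluster member; clu_add_member is run without options here and is assumed to prompt interactively for the new member's information (see clu_add_member(8) for the actual prompts and options):

# clu_add_member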