TITLE: HOST RAID RAIDE08024 for RAID Software for OpenVMS V2.4 ECO Summary
 
Modification Date:  06-NOV-2000 
Modification Type:  DOCUMENTATION:  Corrected kit size.

NOTE:  An OpenVMS saveset or PCSI installation file is stored
       on the Internet in a self-expanding compressed file.
 
       For OpenVMS savesets, the name of the compressed saveset
       file will be kit_name.a-dcx_vaxexe for OpenVMS VAX or
       kit_name.a-dcx_axpexe for OpenVMS Alpha. Once the OpenVMS
       saveset is copied to your system, expand the compressed
       saveset by typing RUN kit_name.a-dcx_vaxexe or
       RUN kit_name.a-dcx_axpexe.
 
       For PCSI files, once the PCSI file is copied to your system,
       rename the PCSI file to kit_name.pcsi-dcx_axpexe, then it can
       be expanded by typing RUN kit_name.pcsi-dcx_axpexe.  The
       resultant file will be the PCSI installation file, which can
       be used to install the ECO.
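 
       For example, a minimal DCL sequence to expand the OpenVMS VAX
       A saveset of this kit (the working directory below is an
       assumption; the self-expanding image may prompt for further
       information):
 
            $ SET DEFAULT DKA100:[KITS]     ! assumed download directory
            $ RUN RAIDE08024.A-DCX_VAXEXE   ! expands to saveset RAIDE08024.A
            $ DIRECTORY RAIDE08024.A        ! verify the expanded saveset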
 
Copyright (c) Compaq Computer Corporation 1999, 2000.  All rights reserved.

PRODUCT:    Compaq RAID Software for OpenVMS

OP/SYS:     OpenVMS VAX
            OpenVMS Alpha

SOURCE:     Compaq Computer Corporation

ECO INFORMATION:

     ECO Kit Name:  RAIDE08024
     ECO Kits Superseded by This ECO Kit:  RAIDE07024 (Never
                                             Officially Released)
                                           RAIDE06024
                                           RAIDE05024
     ECO Kit Approximate Size: 13932 Blocks
                    Saveset A -  450 Blocks
                    Saveset B -  900 Blocks
                    Saveset C - 1458 Blocks
                    Saveset D - 1458 Blocks
                    Saveset E - 2736 Blocks
                    Saveset F - 2736 Blocks
                    Saveset G - 2736 Blocks
                    Saveset H - 1458 Blocks

     Kit Applies To:  Compaq RAID Software V2.4
                      OpenVMS VAX V5.5-2, V6.2, V7.1, V7.2
                      OpenVMS Alpha V6.2, V7.1, V7.2, V7.2-1

       NOTE:  This product will be supported on OpenVMS V7.2 and
              V7.2-1 with the following restriction:

              Bit 3 of the SYSGEN parameter MSCP_SERVE_ALL must be 
              set to prevent the serving of Host Based Raidsets to 
              other members of the cluster. Bit 3 enables the 
              pre-V7.2 behavior of not serving Raidsets to other 
              cluster members.       
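
               As a sketch, bit 3 (decimal value 8) can be set with
               SYSGEN as shown below; the value 9 is only an example
               (a previous value of 1 with bit 3 added).  Add the same
               value to MODPARAMS.DAT so AUTOGEN preserves it:

                 $ RUN SYS$SYSTEM:SYSGEN
                 SYSGEN> USE CURRENT
                 SYSGEN> SHOW MSCP_SERVE_ALL    ! note the current value
                 SYSGEN> SET MSCP_SERVE_ALL 9   ! example: old value 1 + bit 3 (8)
                 SYSGEN> WRITE CURRENT          ! takes effect at next reboot
                 SYSGEN> EXIT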

     System/Cluster Reboot Necessary:  Yes
     Rolling Re-boot Supported:  Information Not Available
     Installation Rating:  INSTALL_UNKNOWN

     Kit Dependencies:

       The following remedial kit(s) must be installed BEFORE
       installation of this kit:

         None

       In order to receive all the corrections listed in this
       kit, the following remedial kits should also be installed:

         None

       Please see the Release Notes for this kit regarding which
       remedial ECO kits need to be installed before RAID software
       can be installed on the system.


ECO KIT SUMMARY:

An ECO kit exists for Compaq RAID Software V2.4 on OpenVMS VAX V5.5-2
through V7.2 and OpenVMS Alpha V6.2 through V7.2-1.

Kit Description:  

This kit is a full kit update for all prior releases of StorageWorks 
RAID Software for OpenVMS Version 2.4x. This kit is cumulative of all 
kits from V2.4-1 through V2.4-7.  Please note the slight change in 
product name to align with COMPAQ standards, from StorageWorks RAID 
Software for OpenVMS to COMPAQ RAID Software for OpenVMS.

V2.4-7 and V2.4A are the same kit.  To address the full scope of RAID 
software customers, engineering chose to update the product through the 
Layered Products CDROM distributions as V2.4A.  V2.4-7 was not released 
into TIMA nor through the DEC STD 204 route because of the availability 
of V2.4A.

V2.4-8 is an optional kit to install; this is why it is numbered V2.4-8 
and not V2.4B.  Please review the problems remedied in V2.4-8 to see 
whether your environment can benefit from this kit.

This kit may be applied to systems running the following version(s) of 
OpenVMS:

        OpenVMS VAX V5.5-2, V6.2, V7.1, V7.2
        OpenVMS Alpha V6.2, V7.1, V7.2 and V7.2-1

This layered product can only support a specific version of the 
Operating System to the extent that OpenVMS engineering is willing to 
support that version.  See your services contract for current 
information.

This kit automatically installs the correct images for your type of system
(VAX or Alpha).

Please refer to the release notes for specific fixes from ECO V2.4-1 
through ECO V2.4-7/V2.4A.

Problems Addressed in V2.4-8:

  o  Running multiple RAID$STARTUPs in a cluster (for example, after a 
     VMScluster reboot) takes a long time to complete.  This was caused 
     by a single-operation mode in the RAID$SERVER process.  This has 
     been changed to allow better overlapping of activities between 
     starting RAID$SERVER processes, thus improving RAID startup time.  
     However, this change has no effect on executing a single 
     RAID$STARTUP (for example, a node reboot).

  o  Under rare circumstances the RAID driver could cause a system crash 
     in module DRV_IC_INITIATE.

  o  A RAID BIND command fails on large VMSclusters (> 30 nodes). The 
     RAID$DIAGNOSTICS_*.LOG logfile contains:

       -RAID-I-FAILMOUNT, device _DKA100: could not be mounted
       -MOUNT-F-DEVBUSYVOL, mount or dismount in progress on device,
        VOL$ lock failure

     This is caused by a change in the OpenVMS V7.2 MOUNT code.  The 
     RAID BIND command issues a MOUNT for all RAID set members in 
     parallel on all nodes in a VMScluster.  This causes high contention 
     on the mount synchronization lock, and eventually OpenVMS times out 
     the mount with a DEVBUSYVOL error.  The RAID software now retries 
     the mount operation 10 times.

Problems Addressed in V2.4-6:

 o  A MOUNT command done shortly after executing RAID$STARTUP.COM fails
    with: %MOUNT-F-MEDOFL.

    However, mount commands can still fail with either %MOUNT-F-DEVOFFLINE
    or %MOUNT-F-DEVACT.  This is a day-one problem and will be fixed in a
    future release.
    The following workaround makes sure the mount command for a DPA
    device succeeds.

                 $! RAID_MOUNT_DPA.COM - MOUNT DPA Device
                 $!
                 $!  P1 = DPA device name, e.g. "DPA160:"
                 $!  P2 = Volume label
                 $!
                 $ Wait_loops = 20
                 $!
                 $ Wait_for_it:
                 $!
                 $! Wait until the DPA device exists, then try to mount it.
                 $ IF F$GETDVI(P1, "EXISTS") THEN GOTO Mount_it
                 $ WAIT 0:0:1
                 $ GOTO Wait_for_it
                 $!
                 $ Mount_it:
                 $!
                 $ MOUNT/SYSTEM/NOASSIST 'P1' 'P2'
                 $ Status = $STATUS
                 $ IF Status THEN EXIT Status       ! odd status means success
                 $ Wait_loops = Wait_loops - 1
                 $ IF Wait_loops .EQ. 0 THEN EXIT Status
                 $ WAIT 0:0:1
                 $ GOTO Mount_it
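
    For example, such a procedure could be invoked after RAID$STARTUP
    with the DPA device name and volume label as parameters (both
    values below are placeholders):

                 $ @RAID_MOUNT_DPA DPA160: USERDISK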

 o  RAID ANALYZE/UNITS exits with:

          %RAID-F-UNEXPSIG, Unexpected signal
          -SYSTEM-F-INTDIV, arithmetic trap, integer divide by zero...


Problems Addressed in V2.4-5:

 o   With V2.4-5, once a RAID member device (e.g. $1$DUA12, DSA6001)
     reaches mount verify timeout, the DPA device for which this I/O
     failed enters mount verification.  If the problem with the member
     device(s) is not fixed, the DPA device eventually reaches mount
     verify timeout.  At that point the DPA device can be dismounted
     and the array can be unbound.  If a RAID member device is not
     dismounted automatically after an UNBIND, it has to be dismounted
     manually with the DCL DISMOUNT command.  Once all RAID member
     devices of an array have been dismounted, the array can be rebound
     and the DPA devices can be mounted again (a sketch of this
     recovery sequence follows below).

     As long as the array is bound, the RAID driver tries to restart
     mount verification every MVTIMEOUT seconds.  This allows the
     problem with the member device(s) to be fixed without unbinding
     the array.  In some cases (e.g. a zero-member shadow set) mount
     verification cannot be restarted.  If any member device stays in
     MntVerifyTimeout state, the array has to be unbound and rebound
     once the problem with the member device(s) has been fixed.
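
     As a minimal sketch of the recovery sequence described above (the
     DPA device, member device, and array names are placeholders, and
     the exact RAID UNBIND/BIND syntax and qualifiers are assumptions;
     consult the RAID Software documentation):

                 $ DISMOUNT DPA6001:      ! dismount the timed-out DPA device
                 $ RAID UNBIND MY_ARRAY   ! unbind the array
                 $ DISMOUNT $1$DUA12:     ! manually dismount a member, if needed
                 $ ! ... fix the member device problem, then:
                 $ RAID BIND MY_ARRAY     ! rebind (member list/qualifiers omitted)
                 $ MOUNT/SYSTEM DPA6001: label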


Problems Addressed in V2.4-4:

  o  When a member of a RAID 0 or 0+1 array reached mount verify
     timeout, the DPA device did not enter MountVerifyTimeout.  Now the
     DPA device times out, but the member device restarts mount verify
     state.  At this point the DCL command SHOW DEVICE/FILES can be
     used to find out which files are open on the DPA device (see the
     example below).  Once all files have been closed, the DPA device
     can be dismounted.  Now the array can be unbound or, if the
     problem with the member device has been corrected, the DPA device
     can be re-mounted.
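
     For example (the DPA device name is a placeholder):

                 $ SHOW DEVICE/FILES DPA6001:   ! list open files on the device
                 $ DISMOUNT DPA6001:            ! once all files are closed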

Problems Addressed in V2.4-3:

  o  After a cluster state transition, DPA devices will not leave mount
     verify state and finally time out mount verification.  Access to
     the RAID set's member devices (e.g. DCL DUMP/BLOCK=COUNT:1
     Member_Disk:) returns disk block data without a hang.

Problems Addressed in V2.4-2:

  o  RAID CLONE command fails with RAID-I-FAILMOUNT and the related
     message in the RAID$DIAGNOSTICS_*.LOG file contains
     MOUNT-F-INCVOLLABEL.

  o  RAID DPDRIVER crashes with SS$_IVLOCKID on top of stack. This can
     happen during a "RAID SHUTDOWN" command run in parallel on more than
     one node in a VMScluster.

Problems Addressed in V2.4-1:

  o  RAID ANALYZE/ARRAY or /UNITS for a RAID 0 or RAID 0+1 array with 32
     members fails with an ACCVIO message.

  o  A starting or restarting server fails to bind certain arrays or
     crashes with an ACCVIO message logged in the diagnostics file.

  o  RAID ANALYZE/ARRAY or /UNITS does not list segment descriptors even
     though the array is in normal state.

  o  RAID ANALYZE/ARRAY and /UNITS report forced error flags set at the
     unused high end of a container file.

  o  A RAID ANALYZE/ARRAY/REPAIR reports that errors should be checked,
     but no block number on a specific DPA device is listed.

  o  System bugcheck with reason: SHADDETINCON, SHADOWING detects
     inconsistent state.  Further crashdump analysis shows that the I/O
     request size is larger than the maximum byte count for this shadow
     virtual unit.  Any other SHADDETINCON bugcheck reason points to an
     inconsistency in the shadowing software.

  o  RAID 0+1 (striping plus shadowing) arrays go into mount verify
     state and RAID ANALYZE/ERROR_LOG shows TIMEOUT errors for these
     DPA devices.  This I/O timeout feature was intended for RAID 5
     arrays only.  I/Os to member devices of RAID 0 or RAID 0+1 arrays
     are no longer timed out after 30 seconds.  This is done implicitly
     during a BIND and can also be set permanently using the RAID
     MODIFY/NOTIMEOUT command (see the example below).
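
     A minimal example of making the no-timeout setting permanent for
     an existing array (the array name is a placeholder):

                 $ RAID MODIFY/NOTIMEOUT MY_ARRAY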


INSTALLATION NOTES:

Install this kit with the VMSINSTAL utility by logging into the
SYSTEM account and entering the following command at the DCL prompt:

     $ @SYS$UPDATE:VMSINSTAL RAIDE08024

After installing this kit, it is necessary to reboot your system.
For a mixed-architecture configuration or other heterogeneous
VMScluster configurations, the kit must be installed on each node.

For more details, consult the StorageWorks RAID Software for OpenVMS
Installation Guide.  To obtain a copy of the Installation Guide, use
the following command:

         $ BACKUP RAIDE08024.B/SAVE/SELECT=RAID_INST_GUIDE.* *



All trademarks are the property of their respective owners.


Files on this server are as follows:
  raide08024.README
  .CHKSUM
  raide08024.a-dcx_axpexe
  raide08024.a-dcx_vaxexe
  raide08024.b-dcx_axpexe
  raide08024.b-dcx_vaxexe
  raide08024.c-dcx_axpexe
  raide08024.c-dcx_vaxexe
  raide08024.d-dcx_axpexe
  raide08024.d-dcx_vaxexe
  raide08024.e-dcx_axpexe
  raide08024.e-dcx_vaxexe
  raide08024.f-dcx_axpexe
  raide08024.f-dcx_vaxexe
  raide08024.g-dcx_axpexe
  raide08024.g-dcx_vaxexe
  raide08024.h-dcx_axpexe
  raide08024.h-dcx_vaxexe
  raide08024.CVRLET_TXT