HOST RAID RAIDE05024 for OpenVMS V2.4 ECO Summary

TITLE: HOST RAID RAIDE05024 for OpenVMS V2.4 ECO Summary

Modification Date:  20-JAN-99
Modification Type:  RAIDE05024 is put on Engineering Hold

**********************************************************************
*                                                                    *
*  Engineering is currently researching a problem with this ECO and  *
*  has requested that it be placed on Engineering Hold.              *
*                                                                    *
**********************************************************************

NOTE: An OpenVMS saveset or PCSI installation file is stored on the
Internet in a self-expanding compressed file. The name of the
compressed file will be kit_name-dcx_vaxexe for OpenVMS VAX or
kit_name-dcx_axpexe for OpenVMS Alpha. Once the file is copied to
your system, it can be expanded by typing RUN compressed_file. The
resultant file will be the OpenVMS saveset or PCSI installation file,
which can be used to install the ECO.

Copyright (c) Compaq Computer Corporation 1998. All rights reserved.

PRODUCT:  StorageWorks RAID Software for OpenVMS

OP/SYS:   OpenVMS VAX
          OpenVMS Alpha

SOURCE:   Compaq Computer Corporation

ECO INFORMATION:

     ECO Kit Name:  RAIDE05024
     ECO Kits Superseded by This ECO Kit:  RAIDE04024
     ECO Kit Approximate Size:  13860 Blocks
          Saveset A -  378 Blocks
          Saveset B -  900 Blocks
          Saveset C - 1458 Blocks
          Saveset D - 1458 Blocks
          Saveset E - 2736 Blocks
          Saveset F - 2736 Blocks
          Saveset G - 2736 Blocks
          Saveset H - 1458 Blocks
     Kit Applies To:  StorageWorks RAID V2.4
          OpenVMS VAX V5.5-2, V5.5-2HF, V5.5-2HW, V5.5-2H4,
               V6.1, V6.2, V6.2-0HF, V7.0, V7.1
          OpenVMS Alpha V6.1, V6.1-1H1, V6.1-1H2, V6.2, V6.2-1H1,
               V6.2-1H2, V6.2-1H3, V7.0, V7.1, V7.1-1H1, V7.1-1H2
     System/Cluster Reboot Necessary:  Yes
     Rolling Re-boot Supported:  Information Not Available
     Installation Rating:  INSTALL_UNKNOWN

     Kit Dependencies:

       The following remedial kit(s) must be installed BEFORE
       installation of this kit:

          None

       In order to receive all the corrections listed in this kit,
       the following remedial kits should also be installed:

          None

ECO KIT SUMMARY:

An ECO kit exists for StorageWorks RAID V2.4 on OpenVMS VAX V5.5-2
through V7.1 and OpenVMS Alpha V6.1 through V7.1-1H2. This kit
addresses the following problems:

Problems Remedied in V2.4-5:

  o  With V2.4-5, once a RAID member device (e.g. $1$DUA12, DSA6001)
     reaches mount verify timeout, the DPA device for which this I/O
     failed enters mount verification. If the problem with the member
     device(s) is not fixed, the DPA device eventually reaches mount
     verify timeout. At that point the DPA device can be dismounted
     and the array can be unbound.

     If a RAID member device is not dismounted automatically after an
     UNBIND, it has to be dismounted manually with the DCL DISMOUNT
     command. Once all RAID member devices of an array have been
     dismounted, the array can be rebound and the DPA devices can be
     mounted again.

     As long as the array is bound, the RAID driver tries to restart
     mount verification every MVTIMEOUT. This allows the problem with
     the member device(s) to be fixed without unbinding the array. In
     some cases (e.g. a zero-member shadow set) mount verification
     cannot be restarted. If any member device stays in the
     MntVerifyTimeout state, the array has to be unbound and rebound
     once the problem with the member device(s) has been fixed.

Problems Remedied in V2.4-4 (RAIDE04024):

  o  When a member of a RAID 0 or 0+1 array reached mount verify
     timeout, the DPA device did not enter MountVerifyTimeout. Now
     the DPA device times out while the member device restarts mount
     verify state. At this point the DCL command "SHOW DEVICE/FILES"
     can be used to find out which files are open on the DPA device.
     Once all files have been closed, the DPA device can be
     dismounted. The array can then be unbound or, if the problem
     with the member device has been corrected, the DPA device can be
     re-mounted.

Problems Remedied in V2.4-3:

  o  After a cluster state transition, DPA devices would not leave
     mount verify state and would finally time out mount
     verification. Access to the RAID set's member devices, e.g. the
     DCL command DUMP/BLOCK=COUNT:1 Member_Disk:, now returns disk
     block data without a hang.

Problems Remedied in V2.4-2:

  o  The RAID CLONE command fails with RAID-I-FAILMOUNT, and the
     related message in the RAID$DIAGNOSTICS_*.LOG file contains
     MOUNT-F-INCVOLLABEL.

  o  RAID DPDRIVER crashes with SS$_IVLOCKID on top of the stack.
     This can happen during a "RAID SHUTDOWN" command run in parallel
     on more than one node in a VMScluster.

Problems Remedied in V2.4-1:

  o  RAID ANALYZE/ARRAY or /UNITS for a RAID 0 or RAID 0+1 array
     with 32 members fails with an ACCVIO message.

  o  A starting or restarting server fails to bind certain arrays or
     crashes with an ACCVIO message logged in the diagnostics file.

  o  RAID ANALYZE/ARRAY or /UNITS does not list segment descriptors
     even though the array is in normal state.

  o  RAID ANALYZE/ARRAY and /UNITS report forced error flags set at
     the unused high end of a container file.

  o  RAID ANALYZE/ARRAY/REPAIR reports that errors should be
     checked, but no block number on a specific DPA device is
     listed.

  o  System bugcheck with reason SHADDETINCON (SHADOWING detects
     inconsistent state). Further crash dump analysis shows that the
     I/O request size is larger than the maximum byte count for this
     shadow virtual unit. Any other SHADDETINCON bugcheck reason
     points to an inconsistency in the shadowing software.

  o  RAID 0+1 (striping plus shadowing) arrays go into mount verify
     state, and RAID ANALYZE/ERROR_LOG shows TIMEOUT errors for
     these DPA devices. This I/O timeout feature was intended for
     RAID 5 arrays only. I/Os to member devices of RAID 0 or RAID
     0+1 arrays are no longer timed out after 30 seconds. This is
     done implicitly during a BIND and can also be set permanently
     using the RAID MODIFY/NOTIMEOUT command.

INSTALLATION NOTES:

After installing this kit, it is necessary to reboot your system. For
a mixed-architecture configuration or other heterogeneous VMScluster
configurations, the kit must be installed on each node.

For more details, consult the StorageWorks RAID Software OpenVMS
Installation Guide. To obtain a copy of the Installation Guide, use
the following command:

   $ BACKUP RAIDE04024.B/SAVE/SELECT=RAID_INST_GUIDE.* *

* All trademarks are the property of their respective owners.
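
The V2.4-5 recovery sequence described above can be sketched in DCL.
The device names ($1$DUA12, DPA6001), the array name OFFICE, and the
volume label RAIDVOL below are hypothetical examples, and the exact
RAID BIND qualifiers depend on how the array was originally created;
consult the Installation Guide for the real syntax on your system:

```
$ ! A member device is stuck in MntVerifyTimeout and the DPA device
$ ! has reached mount verify timeout. All names here are examples.
$ DISMOUNT DPA6001:              ! dismount the timed-out DPA device
$ RAID UNBIND OFFICE             ! unbind the array
$ DISMOUNT $1$DUA12:             ! dismount any member UNBIND did not release
$ ! ...correct the problem with the member device(s) here...
$ RAID BIND OFFICE               ! rebind the array (qualifiers as needed)
$ MOUNT/SYSTEM DPA6001: RAIDVOL  ! mount the DPA device again
```

If files are still open on the DPA device, SHOW DEVICE/FILES (as
noted under V2.4-4) identifies them so they can be closed before the
DISMOUNT.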



This patch can be found at any of these sites:

Colorado Site
Georgia Site



Files on this server are as follows:

raide05024.README
.CHKSUM
raide05024.a-dcx_axpexe
raide05024.a-dcx_vaxexe
raide05024.b-dcx_axpexe
raide05024.b-dcx_vaxexe
raide05024.c-dcx_axpexe
raide05024.c-dcx_vaxexe
raide05024.d-dcx_axpexe
raide05024.d-dcx_vaxexe
raide05024.e-dcx_axpexe
raide05024.e-dcx_vaxexe
raide05024.f-dcx_axpexe
raide05024.f-dcx_vaxexe
raide05024.g-dcx_axpexe
raide05024.g-dcx_vaxexe
raide05024.h-dcx_axpexe
raide05024.h-dcx_vaxexe
raide05024.CVRLET_
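
Once the pieces for your architecture have been copied to an OpenVMS
system, each self-expanding file is unpacked as described in the NOTE
above. A sketch for the VAX variant follows; the target device and
directory DKA100:[KITS] are illustrative only, and note that this kit
is currently on Engineering Hold:

```
$ ! Expand one self-extracting piece into its saveset (VAX shown).
$ RUN RAIDE05024.A-DCX_VAXEXE
$ ! Repeat for pieces B through H, then install from the kit
$ ! directory with VMSINSTAL:
$ @SYS$UPDATE:VMSINSTAL RAIDE05024 DKA100:[KITS]
```

A reboot is required after installation, on every node of a
mixed-architecture or other heterogeneous VMScluster.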
