This section provides a list of guidelines for working with RAID 0 concatenations, stripes, RAID 1 mirrors, RAID 5 volumes, state database replicas, and file systems constructed on volumes.
A concatenated volume uses less CPU time than a striped volume.
Concatenation works well for small random I/O.
Avoid using physical disks with different disk geometries.
Disk geometry refers to how sectors and tracks are organized for each cylinder in a disk drive. UFS organizes itself to use disk geometry efficiently. If the slices in a concatenated metadevice have different disk geometries, Enhanced Storage uses the geometry of the first slice, which can reduce the efficiency of the UFS file system.
When constructing a concatenation, distribute slices across different controllers and buses. Cross-controller and cross-bus slice distribution can help balance the overall I/O load.
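The first-to-last layout of a concatenation can be sketched as follows. This is a simplified model, not the volume manager's actual implementation, and the slice sizes are hypothetical:

```python
# Simplified sketch: map a logical block offset on a concatenated volume
# to (slice index, offset within slice). Blocks fill the first slice
# completely before any I/O touches the second, which is why concatenation
# adds no striping overhead but also does not spread load by itself.

def concat_map(logical_block, slice_sizes):
    """Return (slice_index, block_within_slice) for a concatenation."""
    for i, size in enumerate(slice_sizes):
        if logical_block < size:
            return i, logical_block
        logical_block -= size
    raise ValueError("offset beyond end of volume")

# Example: three slices of 100, 200, and 150 blocks (hypothetical sizes).
sizes = [100, 200, 150]
print(concat_map(50, sizes))   # (0, 50)  -- still on the first slice
print(concat_map(150, sizes))  # (1, 50)  -- spilled onto the second slice
```

Because sequential offsets stay on one slice until it fills, distributing slices across controllers (as recommended above) only helps once I/O actually reaches the later slices.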
Set the stripe's interlace value correctly.
The more physical disks in a striped metadevice, the greater the I/O performance. (The mean time between failures (MTBF), however, will be reduced, so consider mirroring striped volumes.)
Do not mix differently sized slices in the striped volume. A striped volume's size is limited by its smallest slice.
Avoid using physical disks with different disk geometries.
Distribute the striped volume across different controllers and buses.
Striping cannot be used to encapsulate existing file systems.
Striping performs well for large sequential I/O and for random I/O distributions.
Striping uses more CPU cycles than concatenation, but the performance gain is usually worth the cost.
Striping does not provide any redundancy of data.
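The interlace guideline above turns on how a stripe maps logical blocks to disks: consecutive interlace-sized chunks rotate round-robin across the slices. A minimal sketch of that mapping, with hypothetical sizes:

```python
def stripe_map(logical_block, interlace_blocks, num_slices):
    """Return (slice_index, block_within_slice) for a simple stripe."""
    chunk = logical_block // interlace_blocks   # which interlace chunk
    slice_index = chunk % num_slices            # round-robin across slices
    stripe_row = chunk // num_slices            # full rounds completed
    offset = stripe_row * interlace_blocks + logical_block % interlace_blocks
    return slice_index, offset

# With a 32-block interlace across 4 slices, blocks 0-31 land on slice 0,
# blocks 32-63 on slice 1, and so on; a large sequential read engages
# every disk, which is why striping shines for sequential I/O.
print(stripe_map(0, 32, 4))    # (0, 0)
print(stripe_map(32, 32, 4))   # (1, 0)
print(stripe_map(130, 32, 4))  # (0, 34)
```

An interlace that is too large relative to typical request sizes keeps each request on a single disk; one that is too small splits every request across several disks. Either extreme wastes the parallelism the stripe was built for.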
Mirroring may improve read performance; write performance is always degraded.
Mirroring improves read performance only in threaded or asynchronous I/O situations; if there is just a single thread reading from the volume, performance will not improve.
Mirroring degrades write performance by about 15-50 percent, because two copies of the data must be written to disk to complete a single logical write. If an application is write intensive, mirroring will degrade overall performance. However, the write degradation with mirroring is substantially less than the typical RAID 5 write penalty (which can be as much as 70 percent).
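The relative write penalties can be illustrated with simple per-write I/O counts. These are the standard textbook figures for a small logical write, not Enhanced Storage measurements:

```python
# Physical I/O operations needed to complete one small logical write.
# Textbook counts, not measured values for any particular product.

def write_ios(layout, mirrors=2):
    if layout == "stripe":
        return 1          # one data write, no redundancy
    if layout == "mirror":
        return mirrors    # the same data goes to every submirror
    if layout == "raid5":
        # read old data + read old parity + write new data + write new parity
        return 4
    raise ValueError("unknown layout: " + layout)

for layout in ("stripe", "mirror", "raid5"):
    print(layout, write_ios(layout))
```

The two writes per mirror write versus four I/Os per RAID 5 small write is the arithmetic behind the statement that mirroring's write degradation is substantially less than the RAID 5 write penalty.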
Note that the UNIX operating system implements a file system cache. Since read requests frequently can be satisfied from this cache, the read/write ratio for physical I/O through the file system can be significantly biased toward writing.
For example, an application I/O mix might be 80 percent reads and 20 percent writes. However, if many read requests can be satisfied from the file system cache, the physical I/O mix might be quite different: perhaps only 60 percent reads and 40 percent writes. In fact, if there is a large amount of memory to be used as a buffer cache, the physical I/O mix can even go the other direction: 80 percent reads and 20 percent writes might turn out to be 40 percent reads and 60 percent writes.
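The shift from the application mix to the physical mix can be worked through numerically. The cache hit rates below are hypothetical, chosen only to show the effect:

```python
def physical_read_fraction(read_frac, read_hit_rate, write_hit_rate=0.0):
    """Given the application I/O mix and cache hit rates, return the
    fraction of *physical* I/Os that are reads."""
    phys_reads = read_frac * (1 - read_hit_rate)
    phys_writes = (1 - read_frac) * (1 - write_hit_rate)
    return phys_reads / (phys_reads + phys_writes)

# 80% application reads: if the cache absorbs 75% of them, physical I/O
# becomes 0.8 * 0.25 = 0.2 reads against 0.2 writes -- a 50/50 mix.
print(round(physical_read_fraction(0.8, 0.75), 2))  # 0.5
```

This is why a nominally read-heavy workload can still be write-dominated at the disk, making the mirroring and RAID 5 write penalties matter more than the application mix suggests.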
RAID 5 volumes can withstand only a single device failure.
A mirrored volume can withstand multiple device failures in some cases (for example, if the multiple failed devices are all on the same submirror). A RAID 5 volume can only withstand a single device failure. Striped and concatenated volumes cannot withstand any device failures.
RAID 5 volumes provide good read performance in the absence of error conditions, and poor read performance under error conditions.
When a device fails in a RAID 5 volume, read performance suffers because multiple I/O operations are required to regenerate the data from the data and parity on the surviving drives. Mirrored volumes do not suffer the same degradation in performance when a device fails.
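The extra I/O under a failure comes from parity regeneration: the lost block is the XOR of every surviving block in the stripe, data and parity alike. A minimal sketch with toy 4-bit values:

```python
from functools import reduce
from operator import xor

def make_parity(blocks):
    """Parity block for a RAID 5 stripe: the XOR of all data blocks."""
    return reduce(xor, blocks)

def regenerate(surviving_blocks, parity):
    """Recover a lost block by XOR-ing everything that remains."""
    return reduce(xor, surviving_blocks, parity)

data = [0b1010, 0b0110, 0b1111]   # three data blocks (toy values)
parity = make_parity(data)        # parity stored on a fourth device

# Device holding data[0] fails: every other device must be read.
recovered = regenerate(data[1:], parity)
assert recovered == data[0]
print("recovered block:", bin(recovered))
```

Note that regenerating one block required reading every surviving device in the stripe, which is exactly why degraded-mode reads are so much slower than normal reads.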
RAID 5 volumes can cause poor write performance.
In a RAID 5 volume, parity must be calculated and both data and parity must be stored for each write operation. Because of the multiple I/O operations required to do this, RAID 5 write performance is generally reduced. In mirrored volumes, the data must be written to multiple mirrors, but mirrored performance in write-intensive applications is still much better than in RAID 5 volumes.
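The multiple operations follow from the parity update rule for a small write: read the old data and old parity, then write the new data and a new parity computed from all three. A sketch of that read-modify-write sequence:

```python
def small_write(old_data, new_data, old_parity):
    """RAID 5 small write: two reads (old data, old parity) and two
    writes (new data, new parity) per logical write."""
    new_parity = old_parity ^ old_data ^ new_data
    return new_data, new_parity

# Verify against a full stripe (toy 4-bit values): the shortcut parity
# must equal the parity recomputed from scratch over the updated stripe.
stripe = [0b1010, 0b0110, 0b1111]
parity = stripe[0] ^ stripe[1] ^ stripe[2]

new_value = 0b0001
_, new_parity = small_write(stripe[0], new_value, parity)
stripe[0] = new_value
assert new_parity == stripe[0] ^ stripe[1] ^ stripe[2]
print("parity updated consistently:", bin(new_parity))
```

The parity shortcut avoids reading the whole stripe, but it still costs four physical I/Os per logical write, versus two for a two-way mirror.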
RAID 5 volumes involve a lower hardware cost than mirroring.
RAID 5 volumes have a lower hardware cost than mirroring. Mirroring requires twice the disk storage (for a two-way mirror). In a RAID 5 volume, the fraction of the total capacity required to store the parity is 1/(number of disks).
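The 1/(number of disks) parity overhead can be compared directly with the 50 percent overhead of a two-way mirror. The disk counts and sizes below are arbitrary examples:

```python
def usable_fraction(layout, disks):
    """Fraction of raw capacity available for data."""
    if layout == "mirror2":
        return 0.5                 # two-way mirror: every block stored twice
    if layout == "raid5":
        return (disks - 1) / disks # one disk's worth of capacity holds parity
    raise ValueError("unknown layout: " + layout)

# Six 100 GB disks (hypothetical): a two-way mirror yields 300 GB usable,
# while RAID 5 yields 500 GB from the same hardware.
raw_gb = 6 * 100
for layout in ("mirror2", "raid5"):
    print(layout, usable_fraction(layout, 6) * raw_gb, "GB usable")
```

The parity overhead shrinks as disks are added (1/6 with six disks, 1/10 with ten), which is the source of RAID 5's hardware-cost advantage.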
RAID 5 volumes cannot be used for existing file systems.
You cannot encapsulate an existing file system in a RAID 5 volume (you must back up and restore).
All replicas are written when the configuration changes.
Only two replicas (per mirror) are updated for mirror dirty region bitmaps.
A good average is two replicas per three mirrors.
Use two replicas per mirror for write-intensive applications.
Use two replicas per 10 mirrors for read-intensive applications.