LSM supports a variety of configurations, including concatenated disks, mirrored volumes, striped volumes, and configurations that span multiple disks. This chapter describes these configurations and presents some options you should consider when planning your LSM configuration.
Before setting up LSM volumes, plexes, and subdisks, you should consider the needs of your site, the hardware available to you, and the rationale for creating volumes and disk groups.
Table 2-1 presents some configuration options and describes the planning considerations that apply to LSM configurations.
| Configuration | Description |
| --- | --- |
| Concatenated Volumes | You concatenate multiple LSM disks to form a large volume. You can use a concatenated volume to store a large file or a file system that spans more than one disk. Disk concatenation frees you from being limited by the physical sizes of individual disks, so you can combine the storage potential of several devices. Use the default disk group, rootdg, to create a concatenated volume out of the available public regions. You can also add more LSM disks and create volumes out of the new disks. |
| Mirrored Volumes | You associate multiple plexes with the same volume to create a mirrored volume. If you are concerned about the availability of your data, plan to mirror data on your system. Map plexes that are associated with the same volume to different physical disks. On systems with multiple disk controllers, it is best to map a volume's plexes to different controllers. Note that the volassist command fails if the device you specify for the mirrored plex is already in the volume, whereas the bottom-up commands do not fail. |
| Striped Volumes | For faster read/write throughput, use a volume with a striped plex. A physical disk drive performs only one I/O operation at a time; on an LSM volume whose data is striped across multiple physical disks, multiple I/O operations (one for each physical disk) can be performed simultaneously. The basic parameters of a striped plex are the stripe width, the number of stripes, and the size of the plex, which is a multiple of the stripe width. Stripe blocks of the stripe-width size are interleaved among the subdisks, resulting in an even distribution of accesses between the subdisks. The stripe width defaults to 128 sectors, but you can tune it to specific application needs. The volassist command automatically rounds the volume length up to a multiple of the stripe width. |
| Mirrored and Striped Volumes | Use mirrored and striped volumes when both speed and availability are important. LSM supports mirroring of striped plexes. This configuration offers the improved I/O performance of striping while also providing data availability. The striped plexes in a mirrored volume do not have to be symmetrical; for instance, a three-way striped plex can be mirrored with a two-way striped plex, as long as the plex sizes are the same. Reads can be serviced by any plex in a mirrored volume, so a mirrored volume provides increased read performance. Writes, however, are issued to all plexes in the volume; because the writes are issued in parallel, write I/O to a mirrored volume incurs only a small amount of additional overhead. |
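The rounding rule mentioned for volassist can be illustrated with a short sketch. This is a model of the arithmetic only, not LSM code; the function name is invented for illustration:

```python
def round_up_to_stripe(length_sectors: int, stripe_width: int = 128) -> int:
    """Round a requested volume length up to the next multiple of the
    stripe width (the default stripe width is 128 sectors)."""
    remainder = length_sectors % stripe_width
    if remainder == 0:
        return length_sectors
    return length_sectors + (stripe_width - remainder)

# A 1000-sector request is rounded up to 1024 sectors with the
# default 128-sector stripe width; an exact multiple is unchanged.
print(round_up_to_stripe(1000))  # 1024
print(round_up_to_stripe(1024))  # 1024
```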
Disk concatenation involves arranging subdisks both sequentially and contiguously in the address space of a plex. With concatenation, subdisks are linked together into the logical address space. Data is then accessed from each of the subdisks in sequence.
Figure 2-1 gives an example of a concatenated disk.
Concatenated volumes consist of subdisks from one or more disks. Concatenated volumes with subdisks from more than one disk are also referred to as spanned volumes, because the volume spans multiple physical disks.
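The sequential, contiguous arrangement of subdisks in a concatenated plex amounts to a simple offset translation: a plex-relative address falls into whichever subdisk covers that part of the logical address space. The following sketch models this; the subdisk names and sizes are invented for illustration, and this is not LSM source code:

```python
def concat_map(subdisks, plex_offset):
    """Map a plex-relative offset to (subdisk_name, subdisk_offset) for a
    concatenated plex. `subdisks` is an ordered list of (name, length)
    pairs linked end to end into one logical address space."""
    base = 0
    for name, length in subdisks:
        if plex_offset < base + length:
            return name, plex_offset - base
        base += length
    raise ValueError("offset beyond end of plex")

# Two subdisks linked into one logical address space:
subdisks = [("rz4-01", 1000), ("rz6-01", 2000)]
print(concat_map(subdisks, 500))   # ('rz4-01', 500)  -- first subdisk
print(concat_map(subdisks, 1500))  # ('rz6-01', 500)  -- spills into the second
```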
The manner in which storage space is allocated to file systems and databases has a direct impact on disk head movements and on the distribution of the I/O load between disk drives. An optimal allocation minimizes head movements and distributes the I/O load evenly between disk drives.
Striping involves spreading data across several physical disks. By supporting striping in addition to concatenation as a storage-allocation scheme for plexes, LSM makes it possible to evenly distribute the I/O load for a plex across a number of disk drives.
Stripes are relatively small, equally sized fragments that are allocated alternately and evenly to the subdisks of a single plex. A striped plex consists of a number of equally sized subdisks, each located on a separate disk drive. A striped plex should contain at least two subdisks, each of which should exist on a different disk.
Data is stored on the subdisks in stripe blocks of a fixed size (referred to as the stripe width). Stripe blocks are interleaved between the subdisks as shown in Figure 2-2, resulting in an even distribution of accesses between the subdisks.
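The interleaving described above can be modeled as round-robin placement of fixed-size stripe blocks. The sketch below is illustrative only (not LSM source); the subdisk names are invented:

```python
def stripe_map(plex_offset, stripe_width, subdisks):
    """Map a plex-relative sector offset to (subdisk_name, subdisk_offset)
    for a striped plex: stripe blocks of `stripe_width` sectors are
    interleaved round-robin across the subdisks."""
    block = plex_offset // stripe_width       # which stripe block overall
    within = plex_offset % stripe_width       # offset inside that block
    column = block % len(subdisks)            # which subdisk holds the block
    row = block // len(subdisks)              # block index within that subdisk
    return subdisks[column], row * stripe_width + within

# Three-way stripe with the default 128-sector stripe width:
disks = ["rz4-01", "rz5-01", "rz6-01"]
print(stripe_map(0, 128, disks))    # ('rz4-01', 0)   -- first block
print(stripe_map(128, 128, disks))  # ('rz5-01', 0)   -- next disk
print(stripe_map(400, 128, disks))  # ('rz4-01', 144) -- wraps to row 1
```

Consecutive blocks land on different drives, which is why sequential I/O across the plex engages all the spindles at once.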
By allocating storage evenly across multiple disks, striping helps to balance I/O load in cases where high traffic areas exist on certain subdisks. Throughput increases with the number of disks across which a plex is striped. The increase in throughput depends on the applications and file systems being used, and on the number of users using them at the same time.
The effect of striping on performance depends on the choice of the stripe width and on application characteristics. LSM uses a default stripe width of 128 sectors, which works well in most environments.
In a system without LSM, the failure of a physical disk results in the loss of the data on that disk. To recover from such an event, the data must be restored from a backup, and all changes made since that backup must be reapplied. This is a time-consuming process, during which applications have no access to the data.
LSM makes it possible to protect critical data against disk failures by maintaining multiple copies (called mirrors) of the data in a volume. The LSM object that corresponds to a mirror is a plex. In the event of a physical disk failure, the plex on the failed disk becomes temporarily unavailable, but the system continues to operate using the unaffected plexes. Note the following rules when using LSM plexes to mirror disks:

- All plexes are kept up to date as updates are made to the contents of the volume. If a read from one plex fails, the other plexes are used to correct or mask the error. Users of a volume are shielded from failures unless all plexes fail.
- If your applications perform an equal proportion of read and write operations, or perform more writes than reads, mirroring is not likely to improve performance (in fact, it might reduce it). If your files or applications perform significantly more read operations, however, mirroring can improve performance.
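The read/write behavior of a mirrored volume can be sketched as a toy model: every write goes to every plex, any surviving plex can satisfy a read, and a failed plex is masked. This is an illustrative model only, not how LSM is implemented (real mirrored writes are issued in parallel, as noted above):

```python
class MirroredVolume:
    """Toy model of a mirrored volume: writes go to every plex, reads are
    satisfied by any available plex, and failed plexes are masked."""

    def __init__(self, n_plexes, size):
        self.plexes = [bytearray(size) for _ in range(n_plexes)]
        self.failed = set()  # indices of plexes on failed disks

    def write(self, offset, data):
        # Every surviving plex receives the write (parallel in real LSM).
        for i, plex in enumerate(self.plexes):
            if i not in self.failed:
                plex[offset:offset + len(data)] = data

    def read(self, offset, length):
        # Any surviving plex can service the read.
        for i, plex in enumerate(self.plexes):
            if i not in self.failed:
                return bytes(plex[offset:offset + length])
        raise IOError("all plexes failed")

vol = MirroredVolume(n_plexes=2, size=64)
vol.write(0, b"critical")
vol.failed.add(0)           # simulate losing one physical disk
print(vol.read(0, 8))       # b'critical' -- served by the surviving plex
```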
Figure 2-3 shows a mirrored LSM configuration.
In LSM, each subdisk maps to a physical disk offset and length. This means that different LSM volumes can have subdisks that map to different areas of the same physical disk. For example, as shown in Figure 2-4, the mirrored volume V1 can use disks rz4 and rz6, and the concatenated volume V2 can also use rz6 plus rz8.
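Because each subdisk is just a (disk, offset, length) region, two volumes can carve subdisks out of different areas of the same physical disk. The sketch below models the Figure 2-4 example; the offsets and lengths are invented for illustration:

```python
from collections import namedtuple

# A subdisk is a contiguous region of a physical disk.
Subdisk = namedtuple("Subdisk", ["disk", "offset", "length"])

# Mirrored volume V1 uses rz4 and rz6; concatenated volume V2 uses a
# different area of rz6 plus rz8 (sizes are hypothetical).
v1 = {"plex1": [Subdisk("rz4", 0, 1000)],
      "plex2": [Subdisk("rz6", 0, 1000)]}
v2 = {"plex1": [Subdisk("rz6", 1000, 500), Subdisk("rz8", 0, 500)]}

def disks_used(volume):
    """Return the set of physical disks a volume's subdisks occupy."""
    return {sd.disk for plex in volume.values() for sd in plex}

print(disks_used(v1) & disks_used(v2))  # {'rz6'} -- both volumes share rz6
```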
LSM provides a high degree of flexibility in the way volumes can be mapped to disk and partition devices. For example, you can use LSM to build combinations of plexes with subdisks, as shown in Figure 2-5. This flexibility allows you to optimize performance, change volume size, add plexes, and perform backups or other administrative tasks without interrupting system applications and users.
LSM permits dynamic reconfiguration of the volumes, making it easy to adapt to changes in I/O load and application needs, and to maximize system availability. See Section 13.3 for more information about implementing configuration changes.