Glossary

The following are LSM terms and definitions.

C

concatenated plex

A plex that uses subdisks on one or more disks to create a virtual contiguous region of storage space that is accessed linearly. If LSM reaches the end of a subdisk while writing data, it continues to write data to the next subdisk, which can physically exist on the same disk or a different disk. This layout allows you to use space on several regions of the same disk, or regions of several disks, to create a single big pool of storage.
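The linear mapping described above can be sketched as follows. The subdisk names and lengths here are hypothetical; this illustrates the concept rather than LSM's actual implementation.

```python
# Sketch of how a concatenated plex maps a linear plex address to a
# subdisk. Subdisk names and lengths (in blocks) are hypothetical.

def map_concat(offset, subdisks):
    """Map a linear plex offset to a (subdisk name, offset) pair."""
    for name, length in subdisks:
        if offset < length:
            return name, offset
        offset -= length          # continue into the next subdisk
    raise ValueError("offset is beyond the end of the plex")

# Two subdisks that can physically exist on the same or different disks:
subdisks = [("disk01-01", 1000), ("disk02-01", 2000)]

map_concat(500, subdisks)    # falls within the first subdisk
map_concat(1500, subdisks)   # continues linearly into the second
```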

See also RAID 5 plex, striped plex

configuration database

A small database that contains all volume, plex, subdisk, and disk media records. These databases are replicated onto some or all disks in the disk group, often with two copies on each disk. Because these databases pertain to disk groups, record associations cannot span disk groups. Thus, you cannot define a subdisk on a disk in one disk group and associate it with a volume in another disk group.

D

description set

A set of files that are saved by using the volsave command and can be used to restore an LSM configuration. By default, an LSM description set is saved in a time-stamped directory under the /usr/var/lsm/db directory.

Dirty Region Log (DRL)

See log plex

disk

Disks exist as two entities: the physical disk and the LSM disk that represents it. The difference is that a physical disk presents the image of a device with a definable geometry (a definable number of cylinders, heads, and so on), while an LSM disk is simply a unit of allocation with a name and a size.

Disks used by LSM usually contain two special regions: a private region and a public region. Typically, each region is formed from a complete partition of the disk, resulting in a sliced disk; however, the private and public regions can be allocated from the same partition, resulting in a simple disk. A disk used by LSM can also be a nopriv disk, which has only a public region and no private region. Nopriv disks are created as the result of encapsulating a disk or disk partition.

See also disk group, nopriv disk, simple disk, sliced disk, subdisk, volume

disk access record

A configuration record that defines the path to a disk. Disk access records most often include a unit number. LSM uses the disk access records stored in a system to find all disks attached to the system. Disk access records do not identify particular physical disks.

Through the use of disk IDs, LSM allows you to move disks between controllers or to different locations on a controller. When you move a disk, a different disk access record is used to access the disk, although the disk media record continues to track the actual physical disk.

On some systems, LSM builds a list of disk access records automatically, based on the list of devices attached to the system. On these systems, it is not necessary to define disk access records explicitly. On other systems, you must define disk access records with the /sbin/voldisk define command. Specialty disks, such as RAM disks or floppy disks, are likely to require explicit /sbin/voldisk define commands.

Disk access records are identified by their disk access names (also known as DA names).

See also disk ID, disk media record, volboot file

disk group

A group of disks that share a common configuration database. A configuration database consists of a set of records describing objects including disks, volumes, plexes, and subdisks that are associated with one particular disk group. Each disk group has an administrator-assigned name that you use to reference that disk group. Each disk group has an internally defined unique disk group ID, which differentiates two disk groups with the same administrator-assigned name.

Disk groups provide a method to partition the configuration database, so that the database size is not too large and so that database modifications do not affect too many drives. They also allow LSM to operate with groups of physical disk media that can be moved between systems.

Disks and disk groups have a circular relationship: disk groups are formed from disks, and disk group configuration databases are stored on disks. All disks in a disk group are stamped with a disk group ID, which is a unique identifier for naming disk groups. Some or all disks in a disk group also store copies of the configuration database for the disk group.

See also disk group ID, root disk group (rootdg)

disk group ID

A 64-byte universally unique identifier that is assigned to a disk group when the disk group is created with the /sbin/voldg init command. This identifier is in addition to the disk group name that you assign. The disk group ID differentiates disk groups that have the same administrator-assigned name.

disk header

A block stored in the private region of a disk that defines several properties of the disk, such as its disk ID and the disk group to which the disk belongs.

disk ID

A 64-byte universally unique identifier that is assigned to a physical disk when its private region is initialized with the /sbin/voldisk init command. The disk ID is stored in the disk media record so that the physical disk can be related to the disk media record at system startup.

See also disk media record

disk media record

A reference to a physical disk or possibly a disk partition. This record can be thought of as a physical disk identifier for the disk or partition. Disk media records are configuration records that provide a name (known as the disk media name or DM name) that you use to reference a particular disk independent of its location on the system's various disk controllers. Disk media records reference particular physical disks through a disk ID, which is a unique identifier that is assigned to a disk when it is initialized for use with the LSM software.

Operations are provided to set or remove the disk ID stored in a disk media record. Such operations have the effect of removing or replacing disks, with any associated subdisks being removed or replaced along with the disk.

See also disk access record

H

host ID

A name, usually assigned by you, that identifies a particular host. Host IDs are used to assign ownership to particular physical disks. When a disk is part of a disk group that is in active use by a particular host, the disk is stamped with that host's host ID. If another host attempts to access the disk, it detects that the disk has a nonmatching host ID and disallows access until the host with ownership discontinues use of the disk. Use the /sbin/voldisk clearimport command to clear the host ID stored on a disk.

If a disk is a member of a disk group and has a host ID that matches a particular host, then that host will import the disk group as part of system startup.

hot-spare disk, hot-sparing

A hot-spare disk is an LSM disk that you designate to automatically replace a disk that fails while in use by a volume. You enable and disable the hot-sparing feature with the volwatch command. You can designate a hot-spare disk when you initialize it for LSM use or later, as long as the disk is not being used by other LSM objects (volumes or subdisks).

K

kernel log

A log kept in the private region on the disk that is written by the LSM kernel. The log contains records describing the state of volumes in the disk group. This log provides a mechanism for the kernel to persistently register state changes, so that the vold daemon can detect the state changes even in the event of a system failure.

L

log plex

A plex that keeps track of write activity for a mirrored or RAID 5 volume. The log plex for a mirrored volume is called a Dirty Region Log (DRL). A DRL maintains a bitmapped representation of the regions of the volume, and marks as dirty all regions being written to. A DRL reduces the time required to restore synchronization of data for all the plexes in the volume when the system restarts after a failure. A volume can have more than one DRL for redundancy. A DRL does not provide any benefit in the event of a disk failure affecting the volume.
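The bitmap idea can be sketched as follows, assuming a hypothetical region size; LSM's actual region size and on-disk DRL format are not specified here.

```python
# Illustrative Dirty Region Log bitmap. The region size is an assumed
# value for illustration, not LSM's on-disk format.

REGION_SIZE = 1024   # blocks covered by one bit of the bitmap (assumed)

def mark_dirty(dirty, offset, length):
    """Mark every region touched by a write [offset, offset + length) dirty."""
    first = offset // REGION_SIZE
    last = (offset + length - 1) // REGION_SIZE
    for region in range(first, last + 1):
        dirty.add(region)

dirty = set()
mark_dirty(dirty, 1000, 100)   # this write spans regions 0 and 1
# After a system failure, only the regions recorded as dirty need to be
# resynchronized, rather than the entire volume.
```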

The log plex for a RAID 5 volume is called a RAID 5 log plex. A RAID 5 log plex maintains a bitmapped representation of the regions of the volume and marks as dirty all regions being written to. In addition, the RAID 5 log plex stores a copy of the data and parity for a predefined number of writes. When the system restarts after a failure, the RAID 5 log reduces the time required to restore synchronization of data for all the plexes in the volume, and the write operations that did not complete before the failure are restarted.

M

mirror

See plex

mirrored volume

A volume that has more than one concatenated or striped data plex and, typically, at least one log plex.

See also RAID 5 volume, simple volume

N

nopriv disk

A disk that is configured for use by LSM and has only a public region and no private region. The public region represents the space that LSM can use to create subdisks for data storage. A nopriv disk is typically created as a result of encapsulating existing data in a disk or disk partition.

See also simple disk, sliced disk

P

plex

A copy of a volume's logical data address space; also known as a mirror. A volume can have up to 32 plexes associated with it. Each plex is, at least conceptually, a copy of the volume that is maintained consistently in the presence of volume I/O and changes to the LSM configuration. Plexes represent the primary means of configuring storage for a volume. Plexes can have a concatenated, striped, or RAID 5 organization (layout).

See also concatenated plex, RAID 5 plex, striped plex

plex consistency

If the plexes of a volume contain different data, the plexes are said to be inconsistent. This is a problem only if LSM is unaware of the inconsistencies, because the volume can then return differing results for consecutive reads.

Plex inconsistency is a serious compromise of data integrity. Inconsistency can be caused by a write operation that is in progress at the time of a system failure, if parts of the write complete on one plex but not on another. Plexes are also inconsistent immediately after the creation of a mirrored volume, unless they are first synchronized to contain the same data. An important role of LSM is to ensure that consistent data is returned to any application that reads a volume. This might require that the plex consistency of a volume be recovered by copying data between plexes, so that they have the same contents. Alternatively, you can put a volume into a state in which data read from one plex is automatically written back to the other plexes, making the data consistent for that volume offset.

private region

The private region of a disk contains on-disk structures that are used by LSM for internal purposes. Each private region is typically 4096 blocks long and begins with a disk header that identifies the disk and its disk group. Private regions can also contain copies of a disk group's configuration database and copies of the disk group's kernel log.

See also disk header, kernel log, public region

public region

The public region of a disk is the space reserved for allocating subdisks. Subdisks are defined with offsets that are relative to the beginning of the public region of a disk. Only one contiguous region of a disk can form the public region for a disk.

See also private region

R

RAID 5 plex

A plex that places data and parity evenly across each of its associated subdisks. A plex has a characteristic number of stripe columns (represented by the number of associated subdisks) and a characteristic stripe width. The stripe width defines how much data with a particular address is allocated to one of the associated subdisks. The parity data is the result of an XOR operation on the data in each stripe unit. The parity data is written to a different column (presumed to be a different disk) for each stripe, left-shifted by one column, so that no one column contains all the parity for the volume. Therefore, if a disk in a RAID 5 plex fails, the volume is still recoverable by recreating the missing data or parity for each stripe.
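The two mechanisms in this definition, XOR parity and rotating parity placement, can be sketched as follows. The left-shift formula is one plausible rotation and is illustrative, not LSM's exact on-disk layout.

```python
# Illustrative RAID 5 parity calculation and parity-column rotation.
# The rotation formula is a plausible sketch, not LSM's exact layout.

def stripe_parity(data_units):
    """Parity is the XOR of the data in each stripe unit of a stripe."""
    parity = 0
    for unit in data_units:
        parity ^= unit
    return parity

def parity_column(stripe, ncols):
    """Parity placement left-shifted by one column per stripe, so that
    no one column contains all the parity for the volume."""
    return (ncols - 1 - stripe) % ncols

# A lost data unit can be reconstructed by XOR-ing the parity with the
# surviving units:
data = [0b1010, 0b0110, 0b0011]
parity = stripe_parity(data)          # XOR of all data units
rebuilt = parity ^ data[1] ^ data[2]  # recovers data[0]

# Parity rotates across the columns of a 4-column plex:
[parity_column(s, 4) for s in range(4)]   # -> [3, 2, 1, 0]
```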

See also concatenated plex, striped plex

RAID 5 volume

A volume that uses a RAID 5 plex and, typically, at least one RAID 5 log plex. A RAID 5 volume has only one RAID 5 plex.

See also mirrored volume, simple volume

read policy

A configurable policy for switching between plexes for volume reads. When a volume has more than one enabled associated plex, LSM distributes reads between the plexes to spread the I/O load and thus increase the total possible read bandwidth through the volume. You set the read policy. Read policy choices typically include round-robin reads across all enabled plexes, reads directed at a designated preferred plex, and a default policy that selects a suitable behavior based on the plex configuration.

root disk group (rootdg)

LSM creates and requires one special disk group called rootdg. This group is generally the default disk group for most utilities. In addition to defining the regular disk group information, the configuration database for the root disk group contains information that is local to the system.

S

simple disk

A disk that is configured for use by LSM and has a public region and a private region that occupy the same disk partition. The public region represents the space that LSM can use to create subdisks for data storage. The private region is used by LSM to store a copy of the configuration database and kernel log for the disk group to which the disk belongs. A simple disk is created by initializing a disk partition, instead of the entire disk, for LSM use.

See also nopriv disk, sliced disk

simple volume

A volume that uses only one concatenated plex. This type of volume provides no data redundancy and no protection from system failure but does permit you to create a volume using space on multiple disks (enabling creation of storage that is not bounded by disks or disk partitions), move data to other LSM disks, and perform online volume management. Without an LSM license, you can create only simple volumes.

See also mirrored volume, RAID 5 volume

sliced disk

A disk that is configured for use by LSM and has a separate public region (typically the g partition of the disk) and a private region (typically the h partition). The public region represents the space that LSM can use to create subdisks for data storage. The private region is used by LSM to store a copy of the configuration database and kernel log for the disk group to which the disk belongs.

See also nopriv disk, simple disk

striped plex

A plex that places data evenly across each of its associated subdisks. A plex has a characteristic number of stripe columns (represented by the number of associated subdisks) and a characteristic stripe width. The stripe width defines how much data with a particular address is allocated to one of the associated subdisks. Given a stripe width of 128 blocks and two stripe columns, the first group of 128 blocks is allocated to the first subdisk, the second group of 128 blocks is allocated to the second subdisk, the third group to the first subdisk, and so on.
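The 128-block example above can be expressed as a mapping from a plex block address to a stripe column and an offset within that column. This sketch is illustrative, not LSM's implementation.

```python
# Map a striped-plex block address to (stripe column, offset in column).
# Illustrative sketch; the parameters match the example in the text.

def stripe_map(block, stripe_width, ncols):
    unit = block // stripe_width      # which stripe-width group of blocks
    column = unit % ncols             # groups rotate round-robin over columns
    offset = (unit // ncols) * stripe_width + block % stripe_width
    return column, offset

# Stripe width of 128 blocks and two stripe columns:
stripe_map(0, 128, 2)     # -> (0, 0)    first group, first subdisk
stripe_map(128, 128, 2)   # -> (1, 0)    second group, second subdisk
stripe_map(256, 128, 2)   # -> (0, 128)  third group, first subdisk again
```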

See also concatenated plex, RAID 5 plex

subdisk

A region of storage allocated on a disk for use by a volume. Subdisks are associated with volumes through plexes. You organize one or more subdisks to form plexes based on a plex layout: concatenated, striped, or RAID 5. Subdisks are defined relative to disk media records.

V

volboot file

The volboot file is a special file (usually stored in /etc/vol/volboot) that is used to bootstrap the root disk group and to define a system's host ID. In addition to a host ID, the volboot file contains a list of disk access records. On system startup, this list of disks is scanned to find a disk that is a member of the rootdg disk group and that is stamped with this system's host ID. When such a disk is found, its configuration database is read and is used to get a more complete list of disk access records that are used as a second-stage bootstrap of the root disk group and to locate all other disk groups.

See also disk access record

volume

A virtual disk device that looks to applications and file systems like a physical disk partition device. Volumes present block and raw device interfaces that are compatible in their use with those of physical disk partitions. A volume can use mirrors, span several disk drives, and be moved to different disks. You can change the configuration of a volume without disrupting applications or file systems that are using the volume.