1    Overview

The Logical Storage Manager (LSM) software is an optional, integrated, host-based disk storage management application. LSM uses Redundant Arrays of Independent Disks (RAID) technology to let you configure storage devices into a virtual pool of storage. With this pool, you can protect against data loss, maximize disk use, improve performance, provide high data availability, and manage storage without disrupting the users or applications that access data on those disks.

This chapter introduces LSM features, concepts, and terminology. The volintro(8) reference page also provides information on LSM terms and commands.

LSM allows you to manage all of your storage devices, such as disks, partitions, or RAID sets, as a flexible pool of storage from which you create LSM volumes. You configure new file systems, databases, and applications, or encapsulate existing ones, to use an LSM volume instead of a disk partition. The benefits of using an LSM volume instead of a disk partition include:

-  Protection against data loss through mirroring
-  Improved performance through striping
-  High availability of data
-  More flexible use of disk space
-  Storage management without disrupting the users or applications that access the data

1.1    LSM Object Hierarchy

LSM uses the following hierarchy of objects to organize storage:

-  LSM disks
-  Disk groups
-  Subdisks
-  Plexes
-  Volumes

The following sections describe LSM objects in more detail.

1.1.1    LSM Disk

An LSM disk is a storage device supported by Tru64 UNIX, such as a disk, disk partition, or RAID set, that you configure exclusively for use by LSM. LSM views the storage in the same way as the Tru64 UNIX operating system software views it. For example, if the Tru64 UNIX operating system software treats a RAID set as a single storage device, so does LSM. See the Tru64 UNIX Software Product Description (SPD) for a list of supported storage devices.

Figure 1-1 shows a typical hardware configuration that LSM supports.

Figure 1-1:  Typical LSM Hardware Configuration

A storage device becomes an LSM disk when you initialize it for use by LSM. There are three types of LSM disks:

-  Sliced, in which the public region (for data) and the private region (for LSM metadata) are on separate disk partitions
-  Simple, in which the public and private regions are in the same disk partition
-  Nopriv, which has only a public region and no private region
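For example, you might initialize a disk for use by LSM with the voldisksetup command. This is a sketch; the disk name dsk2 is hypothetical, and you should see voldisksetup(8) for the complete option list:

# voldisksetup -i dsk2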

1.1.2    Disk Group

A disk group is an object that represents a grouping of LSM disks. LSM disks in a disk group share a common configuration database that identifies all the LSM objects in the disk group. LSM automatically creates and maintains copies of the configuration database in the private region of multiple LSM sliced or simple disks in each disk group.

LSM distributes these copies across all controllers for redundancy. If all of the LSM disks in a disk group are located on the same controller, LSM distributes the copies across several disks on that controller. LSM automatically records changes to the LSM configuration and, if necessary, changes the number and location of the copies of the configuration database for a disk group.

You cannot have a disk group of only LSM nopriv disks, because an LSM nopriv disk does not have a private region to store copies of the configuration database.

You must create an LSM volume within a disk group, and a volume cannot use disks from more than one disk group. The LSM software automatically creates a default disk group called rootdg. You can create all of your volumes in the rootdg disk group, or you can create other disk groups. For example, if you dedicate disks to storing financial data, you can create a disk group called finance and assign those disks to it.
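Continuing the finance example, you might create the disk group with the voldg command. This is a sketch; the disk names are hypothetical and the exact initialization syntax can differ on your system (see voldg(8)):

# voldg init finance dsk3 dsk4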

When you add an LSM disk to a disk group, LSM assigns it a disk media name. By default, the disk media name is the same as the disk access name, which the operating system software assigns to a storage device. For example, the disk media name and disk access name might both be dsk1.

You do not have to use the default disk media name. You can assign a disk media name of up to 31 alphanumeric characters that cannot include spaces or the forward slash ( / ). For example, you could assign a disk media name of finance_data_disk.

LSM associates the disk media name with the operating system's disk access name. The disk media name provides insulation from operating system naming conventions. This allows LSM to find the device should you move it to a new location (for example, connect a disk to a different controller).
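For example, you might add a disk to the finance disk group under a custom disk media name with the voldg command. This is a sketch; the disk name is hypothetical and the medianame=accessname syntax is an assumption (see voldg(8)):

# voldg -g finance adddisk finance_data_disk=dsk5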

1.1.3    Subdisk

A subdisk is an object that represents a contiguous set of blocks in an LSM disk's public region that LSM uses to store data.

By default, LSM assigns a subdisk name using the LSM disk media name followed by a dash (-) and an ascending two-digit number beginning with 01. For example, dsk1-01 is the subdisk name on an LSM disk with a disk media name of dsk1.

You do not have to use the default subdisk name. You can assign a subdisk name of up to 31 alphanumeric characters that cannot include spaces or the forward slash ( / ). For example, you could assign a subdisk name of finance_disk01.
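You can also create a subdisk manually with the volmake command. The following sketch creates a subdisk covering 102400 blocks at the start of the public region of dsk1; the disk,offset,length attribute format is an assumption (see volmake(8)):

# volmake sd dsk1-01 dsk1,0,102400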

A subdisk can be:

1.1.4    Plex

A plex is an object that represents a subdisk or a collection of subdisks in the same disk group to which LSM writes a copy of volume data or log information. There are three types of plexes:

-  Data plexes, which store volume data
-  Dirty region log (DRL) plexes, which track the regions of a mirrored volume that change due to writes
-  RAID 5 log plexes, which track data and parity changes in a RAID 5 volume

By default, LSM assigns a plex name using the volume name followed by a dash (-) and an ascending two-digit number beginning with 01. For example, volume1-01 is the name of a plex for a volume called volume1.

You do not have to use the default plex name. You can assign a plex name of up to 31 alphanumeric characters that cannot include spaces or the forward slash ( / ). For example, you could assign a plex name of finance_plex01.

1.1.4.1    Concatenated Data Plex

In a concatenated data plex, LSM creates a contiguous address space on the subdisks and sequentially writes volume data in a linear manner. If LSM reaches the end of a subdisk while writing data, it continues to write data to the next subdisk as shown in Figure 1-5.

Figure 1-5:  Concatenated Data Plex

A single subdisk failure in a concatenated data plex will result in LSM volume failure. To prevent this type of failure, you can create multiple mirror (duplicate) plexes on different subdisks. LSM continuously maintains the data in the mirrors. If a plex becomes unavailable because of a subdisk failure, the volume continues operating using a mirror plex.

Using subdisks on different SCSI buses for mirror plexes speeds read requests, because data can be simultaneously read from multiple plexes.

LSM creates a DRL plex when you mirror plexes. A DRL plex divides the data plexes into a set of consecutive regions and tracks regions that change due to I/O writes. When the system restarts after a failure, only the changed regions are recovered.

If you do not use a DRL plex and the system restarts after a failure, LSM must copy and resynchronize all of the data to each plex to restore the plex consistency. Although this process occurs in the background and the volume is still available, it can be a lengthy procedure and can result in unnecessarily recovering data.

You can create up to 32 plexes for a volume, in any combination of data and DRL plexes. A mirrored volume consumes more disk space than a nonmirrored volume, because LSM maintains a DRL plex and writes a full copy of the volume data to each data plex.

Figure 1-6 shows a concatenated plex with one mirror.

Figure 1-6:  Concatenated Data Plex with One Mirror
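For example, you might create a mirrored volume, letting LSM build the plexes and the DRL plex automatically, with the volassist command. This is a sketch; the volume name, size, and the nmirror attribute are assumptions (see volassist(8)):

# volassist make datavol 1g nmirror=2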

1.1.4.2    Striped Data Plex

In a striped data plex, LSM separates the data into units of equal size (64 KB by default) and writes the data units on two or more columns of subdisks, creating a stripe of data on the columns. LSM can simultaneously write the data units if there are two or more units and the subdisks are on different SCSI buses.

Figure 1-7 shows how a write request of 384 KB of data is separated into six 64 KB units and written to three columns as two complete stripes.

Figure 1-7:  Writing Data to a Striped Plex

If a write request does not complete a stripe, then the first data unit of the next write request starts in the next column. For example, Figure 1-8 shows how 320 KB of data is separated into five 64 KB units and written to three columns. The first data unit of the next write request will start in the third column.

Figure 1-8:  Incomplete Striped Data Plex
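For example, you might create a striped volume with the volassist command. This is a sketch; the layout and nstripe attributes are assumptions (see volassist(8)):

# volassist make stripevol 1g layout=stripe nstripe=3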

As in a concatenated data plex, a single subdisk failure in a striped data plex will result in volume failure. To prevent this type of failure, you can create multiple mirror (duplicate) plexes on different subdisks. LSM continuously maintains the data in the mirrors. If a plex becomes unavailable because of a subdisk failure, the volume continues operating using a mirror plex.

Using subdisks on different SCSI buses for mirror plexes speeds read requests, because data can be simultaneously read from multiple plexes.

LSM creates a DRL plex when you mirror plexes. A DRL plex divides the data plexes into a set of consecutive regions and tracks regions that change due to I/O writes. When the system restarts after a failure, only the changed regions are recovered.

If you do not use a DRL plex and the system restarts after a failure, LSM must copy and resynchronize all of the data to each plex. Although this process occurs in the background and the volume is still available, it can be a lengthy procedure and can result in unnecessarily recovering data.

You can create up to 32 plexes for a volume, in any combination of data and DRL plexes. A mirrored volume consumes more disk space than a nonmirrored volume, because LSM maintains a DRL plex and writes a full copy of the volume data to each data plex.

Figure 1-9 shows a striped data plex with one mirror.

Figure 1-9:  Striped Data Plex with One Mirror

1.1.4.3    RAID 5 Data Plex

In a RAID 5 data plex, LSM calculates a parity value for each stripe of data. It then divides the stripe of data and parity into units of equal size (16 KB by default) and writes the data and parity units across three or more columns of subdisks, creating a stripe across the columns. LSM can write the data units simultaneously if there are three or more units and the subdisks are on different SCSI buses. If a subdisk in a column fails, LSM continues operating, using the data and parity information in the remaining columns to reconstruct the missing data.

In a RAID 5 data plex, LSM writes both data and parity across the columns, placing the parity in a different column for each stripe of data. The first parity unit is located in the last column. Each successive parity unit is shifted one column to the left of the previous parity unit's location. If there are more stripes than columns, the parity placement begins again in the last column.

Figure 1-10 shows how data and parity information are written in a RAID 5 data plex with three columns.

Figure 1-10:  Data and Parity Placement in a Three-Column RAID 5 Data Plex

In Figure 1-10, the first stripe of data contains data units 1 and 2 and parity unit P0. The second stripe contains data units 3 and 4 and parity unit P1. The third stripe contains data units 5 and 6 and parity unit P2.

By default, creating a RAID 5 data plex creates a RAID 5 log plex. A RAID 5 log plex keeps track of data and parity blocks being changed due to I/O writes. When the system restarts after a failure, the write operations that did not complete before the failure are restarted.

Note

You cannot mirror a RAID 5 data plex.

The TruCluster software does not support RAID 5 data plexes.
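For example, you might create a RAID 5 volume, including its RAID 5 log plex, with the volassist command. This is a sketch; the layout=raid5 attribute value is an assumption (see volassist(8)):

# volassist make r5vol 2g layout=raid5 nstripe=3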

1.1.5    LSM Volumes

A volume is an object that represents a hierarchy of plexes, subdisks, and LSM disks. Applications and file systems make read and write requests to the LSM volume. The LSM volume depends on the underlying LSM objects to satisfy the request.

An LSM volume can use storage from only one disk group.

As with all storage devices, an LSM volume has a block device interface and a character device interface.

Because these interfaces support the standard UNIX open, close, read, write, and ioctl calls, databases, file systems, applications, and secondary swap space can use an LSM volume in the same manner as a disk partition, as shown in Figure 1-11.

Figure 1-11:  Using LSM Volumes Like Disk Partitions
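For example, for a volume named datavol in the rootdg disk group, the block and character device special files are typically /dev/vol/rootdg/datavol and /dev/rvol/rootdg/datavol, respectively. Assuming those paths, you could create and mount a UFS file system on the volume just as you would on a disk partition:

# newfs /dev/rvol/rootdg/datavol
# mount /dev/vol/rootdg/datavol /mnt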

1.2    LSM Interfaces

You create, display, and manage LSM objects using any of the following interfaces:

-  The LSM command interface
-  The voldiskadm interactive menu interface
-  The LSM Storage Administrator graphical user interface (lsmsa)

You can use the LSM interfaces interchangeably; LSM objects created with one interface can be displayed and managed with any of the other interfaces.

1.2.1    LSM Command Interface

LSM provides a range of commands that allow you to display and manage LSM objects.

Table 1-1 lists the LSM commands and their functions.

Table 1-1:  LSM Commands

Command       Function
volsetup      Initialize the LSM software
volencap      Encapsulate disks or disk partitions
volreconfig   Create LSM volumes from the encapsulated disks
volrootmir    Mirror the root and swap volumes
voldiskadd    Interactively create LSM disks
voldisksetup  Add one or more disks for use with LSM (with -i option)
volassist     Create, mirror, back up, and move volumes automatically
voldisk       Administer LSM disks
voldg         Administer disk groups
volplex       Administer plexes
volume        Administer volumes
volsd         Administer subdisks
volmake       Create LSM objects manually
volmirror     Mirror a plex
voledit       Create, modify, and remove LSM records
volprint      Display LSM configuration information
volsave       Back up the LSM configuration database
volrestore    Restore the LSM configuration database
volmend       Mend simple problems in configuration records
volnotify     Display LSM configuration events
volwatch      Monitor LSM for failure events and perform hot-sparing if enabled
volstat       Display LSM statistics
voldctl       Control daemon operations
voltrace      Trace I/O operations on volumes
volevac       Evacuate all volume data from a disk
volrecover    Synchronize plexes after a crash or disk failure
volinstall    Customize the LSM environment
voldiskadm    Start the interactive menu interface
lsmsa         Start the LSM Storage Administrator GUI

For more information on a command, see the reference page corresponding to its name. For example, for more information on the volassist command, enter:

# man volassist
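Similarly, to display the complete LSM configuration in hierarchical form, you can use the volprint command; the -ht options are a common choice, but see volprint(8) for the full list:

# volprint -ht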