The Logical Storage Manager (LSM) software is an optional, integrated, host-based disk storage management application. LSM uses Redundant Arrays of Independent Disks (RAID) technology to let you configure storage devices into a virtual pool of storage. From that pool you can protect against data loss, maximize disk use, improve performance, provide high data availability, and manage storage without disrupting the users or applications that access data on those disks.
This chapter introduces LSM features, concepts, and terminology.
The volintro(8) reference page also provides information on LSM terms and commands.
LSM allows you to manage all of your storage devices, such as disks, partitions, or RAID sets, as a flexible pool of storage from which you create LSM volumes. You configure new file systems, databases, and applications, or encapsulate existing ones, to use an LSM volume instead of a disk partition. The benefits of using an LSM volume instead of a disk partition include:
Data loss protection
LSM can automatically store and maintain multiple copies (mirrors) of data or data and parity information. If a storage device fails, LSM:
Continues operating using either the mirrors or the remaining data and parity information, without disrupting users or applications, shutting down the system, or backing up and restoring data
Can automatically transfer the data from the failed storage device to a designated spare disk, or to free disk space, and send you mail about the relocation
You can also use LSM to encapsulate the boot disk partitions into LSM volumes, then create mirrors of those volumes. By doing so, you create copies of the boot disk partitions from which the system can boot if the original boot disk fails (see the example after this list).
Maximized disk usage
You can configure LSM to join storage devices so that they appear to users and applications as a single storage device.
Performance improvements
You can configure LSM to separate data into units of equal size, then write the data units to two or more storage devices. LSM simultaneously writes the data units if the storage devices are on different SCSI buses.
Data availability
You can configure LSM in a TruCluster environment. TruCluster software makes AlphaServer systems appear as a single system on the network. The AlphaServer systems running the TruCluster software become members of the cluster and share resources and data storage. This sharing allows an application, such as LSM, to continue uninterrupted if the cluster member on which it is running fails.
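For example, the boot disk protection described under data loss protection uses the volencap, volreconfig, and volrootmir commands (see Section 1.2.1). The following is a minimal sketch; the disk names dsk0 and dsk1 are hypothetical, and the exact arguments depend on your configuration:

# volencap dsk0
# volreconfig
# volrootmir dsk1

The volencap command prepares the boot disk partitions for encapsulation, volreconfig converts them into LSM volumes (for the boot disk, this typically requires a reboot), and volrootmir mirrors the resulting root and swap volumes onto a second disk.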
LSM uses the following hierarchy of objects to organize storage:
LSM disk--An object that represents a storage device that is initialized exclusively for use by LSM
Subdisk--An object that represents a contiguous set of blocks on an LSM disk that LSM uses to write volume data
Disk group--An object that represents a collection of LSM disks and subdisks for use by an LSM volume
Plex--An object that represents a subdisk or collection of subdisks to which LSM writes a copy of the volume data or log information
Volume--An object that represents a hierarchy of LSM objects, including LSM disks, subdisks, and plexes in a disk group. Applications and file systems make read and write requests to the LSM volume.
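On a system where LSM is configured, you can display this hierarchy of objects with the volprint command (see Section 1.2.1). The following is a typical way to list the volume, plex, subdisk, and disk records; the -ht options shown here are an assumption based on the VxVM-derived syntax:

# volprint -ht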
The following sections describe LSM objects in more detail.
1.1.1 LSM Disk
An LSM disk is a Tru64 UNIX supported storage device, including disks, disk partitions, and RAID sets, that you configure exclusively for use by LSM. LSM views the storage in the same way as the Tru64 UNIX operating system software views it. For example, if the Tru64 UNIX operating system software considers a RAID set as a single storage device, so does LSM. See the Tru64 UNIX Software Product Description (SPD) for a list of supported storage devices.
Figure 1-1 shows a typical hardware configuration that LSM supports.
Figure 1-1: Typical LSM Hardware Configuration
A storage device becomes an LSM disk when you initialize it for use by LSM. There are three types of LSM disks:
A sliced disk, created by initializing an entire disk for LSM use. This type of initialization organizes the storage into two regions on separate partitions--a large public region for storing data and a private region for storing LSM internal metadata, such as LSM configuration information. The default size of the private region is 4096 blocks.
Figure 1-2 shows a sliced disk:
Figure 1-2: LSM Sliced Disk
A simple disk, created by initializing a disk partition. This type of initialization organizes the storage into two regions on the same partition--a large public region for storing data and a private region for storing LSM internal metadata, such as LSM configuration information. The default size of the private region is 4096 blocks.
Figure 1-3 shows a simple disk:
Figure 1-3: LSM Simple Disk
Whenever possible, initialize the entire disk as a sliced disk instead of configuring individual disk partitions as simple disks. This ensures that the disk's storage is used efficiently and avoids using space for multiple private regions on the same disk.
A nopriv disk, created by initializing a disk partition that contains data you want to encapsulate. This type of initialization creates only a public region for the data and no private region.
Figure 1-4 shows a nopriv disk:
Figure 1-4: LSM Nopriv Disk
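For example, you can initialize an entire disk as a sliced disk with the voldisksetup command and its -i option (see Section 1.2.1), then display the result with the voldisk command. This is a minimal sketch: the disk name dsk3 is hypothetical, and the privlen attribute (which sets the private region size in blocks) and the list keyword follow the VxVM-derived syntax and are assumptions:

# voldisksetup -i dsk3 privlen=4096
# voldisk list dsk3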
1.1.2 Disk Group
A disk group is an object that represents a grouping of LSM disks. LSM disks in a disk group share a common configuration database that identifies all the LSM objects in the disk group. LSM automatically creates and maintains copies of the configuration database in the private regions of multiple LSM sliced or simple disks in each disk group.
LSM distributes these copies across controllers for redundancy. If all the LSM disks in a disk group are on the same controller, LSM distributes the copies across several disks. LSM automatically records changes to the LSM configuration and, if necessary, changes the number and location of the copies of the configuration database for a disk group.
You cannot have a disk group of only LSM nopriv disks, because an LSM nopriv disk does not have a private region to store copies of the configuration database.
You must create an LSM volume within a disk group, and a volume cannot use disks from more than one disk group. The LSM software creates a default disk group called rootdg when you initialize LSM. You can create all of your volumes in the rootdg disk group, or you can create other disk groups. For example, if you dedicate disks to storing financial data, you can create a disk group called finance and assign those disks to it.
When you add an LSM disk to a disk group, LSM assigns it a disk media name. By default, the disk media name is the same as the disk access name, which the operating system software assigns to a storage device. For example, the disk media name and disk access name might both be dsk1.
You do not have to use the default disk media name. You can assign a disk media name of up to 31 alphanumeric characters that cannot include spaces or the forward slash (/). For example, you could assign a disk media name of finance_data_disk.
LSM associates the disk media name with the operating system's disk access name. The disk media name provides insulation from operating system naming conventions. This allows LSM to find the device if you move it to a new location (for example, if you connect the disk to a different controller).
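For example, you might create the finance disk group mentioned earlier and add disks to it using custom disk media names. This is a sketch that assumes the VxVM-derived medianame=accessname argument form; the disk names are hypothetical:

# voldg init finance finance_data_disk=dsk3
# voldg -g finance adddisk finance_data_disk2=dsk4

LSM records the disk media names in the finance disk group's configuration database and associates them with the disk access names dsk3 and dsk4.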
1.1.3 Subdisk
A subdisk is an object that represents a contiguous set of blocks in an LSM disk's public region that LSM uses to store data.
By default, LSM assigns a subdisk name using the LSM disk media name followed by a dash (-) and an ascending two-digit number beginning with 01. For example, dsk1-01 is the subdisk name on an LSM disk with a disk media name of dsk1.
You do not have to use the default subdisk name. You can assign a subdisk name of up to 31 alphanumeric characters that cannot include spaces or the forward slash (/). For example, you could assign a subdisk name of finance_disk01.
A subdisk can be:
The entire public region. The following figure shows that the entire public region of an LSM disk was configured as a subdisk called dsk1-01:
A portion of the public region. The following figure shows that the public region of an LSM disk was configured as two subdisks called dsk2-01 and dsk2-02:
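If you build LSM objects from the bottom up, you can define subdisks explicitly with the volmake command (see Section 1.2.1). The following sketch assumes the VxVM-derived disk,offset,length argument form, with the offset and length expressed in blocks; the names and sizes are hypothetical:

# volmake sd dsk2-01 dsk2,0,204800
# volmake sd dsk2-02 dsk2,204800,204800

Higher-level commands such as volassist create subdisks like these automatically.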
1.1.4 Plex
A plex is an object that represents a subdisk or collection of subdisks in the same disk group to which LSM writes a copy of volume data or log information. There are three types of plexes:
Data plex
A data plex contains volume data. The type of data plex that you choose depends on how you want LSM to store volume data on subdisks. There are three types of data plexes:
In a concatenated data plex, LSM writes volume data in a linear manner.
In a striped data plex, LSM separates and writes volume data in a striped manner.
In a RAID 5 data plex, LSM separates and writes volume data in a striped manner with parity.
Log plex
A log plex contains information about activity in a volume. In the event of a failure, LSM recovers only those areas of the volume identified in the log plex. There are two types of log plexes:
In a dirty region log (DRL) plex, LSM logs the regions that change in a mirrored concatenated or striped data plex.
In a RAID 5 log plex, LSM logs blocks being changed in a RAID 5 data plex and stores a temporary copy of the data and parity being written.
Data and log plexes (for compatibility with LSM Version 4.0)
By default, LSM assigns a plex name using the volume name followed by a dash (-) and an ascending two-digit number beginning with 01. For example, volume1-01 is the name of a plex for a volume called volume1.
You do not have to use the default plex name. You can assign a plex name of up to 31 alphanumeric characters that cannot include spaces or the forward slash (/). For example, you could assign a plex name of finance_plex01.
1.1.4.1 Concatenated Data Plex
In a concatenated data plex, LSM creates a contiguous address space on the subdisks and sequentially writes volume data in a linear manner. If LSM reaches the end of a subdisk while writing data, it continues to write data to the next subdisk, as shown in Figure 1-5.
Figure 1-5: Concatenated Data Plex
A single subdisk failure in a concatenated data plex will result in LSM volume failure. To prevent this type of failure, you can create multiple mirror (duplicate) plexes on different subdisks. LSM continuously maintains the data in the mirrors. If a plex becomes unavailable because of a subdisk failure, the volume continues operating using a mirror plex.
Using subdisks on different SCSI buses for mirror plexes speeds read requests, because data can be simultaneously read from multiple plexes.
LSM creates a DRL plex when you mirror plexes. A DRL plex divides the data plexes into a set of consecutive regions and tracks regions that change due to I/O writes. When the system restarts after a failure, only the changed regions are recovered.
If you do not use a DRL plex and the system restarts after a failure, LSM must copy and resynchronize all of the data to each plex to restore the plex consistency. Although this process occurs in the background and the volume is still available, it can be a lengthy procedure and can result in unnecessarily recovering data.
You can create up to 32 plexes, which can be any combination of data or DRL plexes. Mirror plexes consume more disk space than other types of plexes, because there is a DRL plex and because volume data is written to each plex.
Figure 1-6 shows a concatenated data plex with one mirror.
Figure 1-6: Concatenated Data Plex with One Mirror
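For example, you can use the volassist command (see Section 1.2.1) to create a concatenated volume, add a mirror plex, and add a DRL plex. The make, mirror, and addlog keywords and the size argument follow the VxVM-derived volassist syntax and are assumptions; the volume name datavol is hypothetical:

# volassist make datavol 500m
# volassist mirror datavol
# volassist addlog datavol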
1.1.4.2 Striped Data Plex
In a striped data plex, LSM separates the data into units of equal size (64 KB by default) and writes the data units to two or more columns of subdisks, creating a stripe of data across the columns. LSM can simultaneously write the data units if there are two or more units and the subdisks are on different SCSI buses.
Figure 1-7 shows how a write request of 384 KB of data is separated into six 64 KB units and written to three columns as two complete stripes.
Figure 1-7: Writing Data to a Striped Plex
If a write request does not complete a stripe, then the first data unit of the next write request starts in the next column. For example, Figure 1-8 shows how 320 KB of data is separated into five 64 KB units and written to three columns. The first data unit of the next write request will start in the third column.
Figure 1-8: Incomplete Striped Data Plex
As in a concatenated data plex, a single subdisk failure in a striped data plex will result in volume failure. To prevent this type of failure, you can create multiple mirror (duplicate) plexes on different subdisks. LSM continuously maintains the data in the mirrors. If a plex becomes unavailable because of a subdisk failure, the volume continues operating using a mirror plex.
Using subdisks on different SCSI buses for mirror plexes speeds read requests, because data can be simultaneously read from multiple plexes.
LSM creates a DRL plex when you mirror plexes. A DRL plex divides the data plexes into a set of consecutive regions and tracks regions that change due to I/O writes. When the system restarts after a failure, only the changed regions are recovered.
If you do not use a DRL plex and the system restarts after a failure, LSM must copy and resynchronize all of the data to each plex. Although this process occurs in the background and the volume is still available, it can be a lengthy procedure and can result in unnecessarily recovering data.
You can create up to 32 plexes, which can be any combination of data or DRL plexes. Mirror plexes consume more disk space than other types of plexes, because there is a DRL plex and because volume data is written to each plex.
Figure 1-9 shows a striped data plex with one mirror.
Figure 1-9: Striped Data Plex with One Mirror
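For example, a striped data plex with one mirror, like the one in Figure 1-9, might be created with the volassist command. The layout and nstripe attributes follow the VxVM-derived syntax and are assumptions; the volume name and size are hypothetical:

# volassist make stripevol 1g layout=stripe nstripe=3
# volassist mirror stripevol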
1.1.4.3 RAID 5 Data Plex
In a RAID 5 data plex, LSM calculates a parity value for each stripe of data, separates the stripe of data and parity into units of equal size (16 KB by default), and writes the data and parity units to three or more columns of subdisks, creating a stripe across the columns. LSM can simultaneously write the units if there are three or more units and the subdisks are on different SCSI buses. If a subdisk in a column fails, LSM continues operating, using the data and parity information in the remaining columns to reconstruct the missing data.
In a RAID 5 data plex, LSM writes both data and parity across columns, writing the parity in a different column for each stripe of data. The first parity unit is located in the last column. Each successive parity unit is located in the next column, left-shifted one column from the previous parity unit location. If there are more stripes than columns, the parity unit placement begins again in the last column.
Figure 1-10 shows how data and parity information are written in a RAID 5 data plex with three columns.
Figure 1-10: Data and Parity Placement in a Three-Column RAID 5 Data Plex
In Figure 1-10, the first stripe of data contains data units 1 and 2 and parity unit P0. The second stripe contains data units 3 and 4 and parity unit P1. The third stripe contains data units 5 and 6 and parity unit P2.
By default, creating a RAID 5 data plex creates a RAID 5 log plex. A RAID 5 log plex keeps track of data and parity blocks being changed due to I/O writes. When the system restarts after a failure, the write operations that did not complete before the failure are restarted.
Note
You cannot mirror a RAID 5 data plex.
The TruCluster software does not support RAID 5 data plexes.
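For example, a volume with a RAID 5 data plex might be created with the volassist command; by default, this also creates the RAID 5 log plex described above. The layout attribute follows the VxVM-derived syntax and is an assumption; the volume name and size are hypothetical:

# volassist make raidvol 2g layout=raid5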
1.1.5 Volume
A volume is an object that represents a hierarchy of plexes, subdisks, and LSM disks. Applications and file systems make read and write requests to the LSM volume, which depends on the underlying LSM objects to satisfy those requests.
An LSM volume can use storage from only one disk group.
As with all storage devices, an LSM volume has a block device interface and a character device interface.
A volume's block device interface is located in the /dev/vol/diskgroup directory, and its character device interface is located in the /dev/rvol/diskgroup directory.
Because these interfaces support the standard UNIX open, close, read, write, and ioctl calls, databases, file systems, applications, and secondary swap use an LSM volume in the same manner as a disk partition, as shown in Figure 1-11.
Figure 1-11: Using LSM Volumes Like Disk Partitions
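For example, you can create and mount a UFS file system on an LSM volume exactly as you would on a disk partition, using the volume's character and block device special files. This sketch assumes a volume named vol01 in the rootdg disk group; the volume name and mount point are hypothetical:

# newfs /dev/rvol/rootdg/vol01
# mount /dev/vol/rootdg/vol01 /mnt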
1.2 LSM Interfaces
You create, display, and manage LSM objects using any of the following interfaces:
A Java-based graphical user interface (GUI) called LSM Storage Administrator that displays a hierarchical view of LSM objects and their relationships.
The Storage Administrator provides dialog boxes in which you enter information to create or manage LSM objects. Completing a dialog box can be the equivalent of entering several command-line commands. The Storage Administrator allows you to manage local or remote systems on which LSM is running. You need an LSM license to use the Storage Administrator. See Appendix A for more information on using the Storage Administrator.
A menu-based, interactive interface called voldiskadm. To perform a procedure, you choose an operation from the main menu and the voldiskadm interface prompts you for information. The voldiskadm interface provides default values when possible. You can press Return to accept the default value or enter a new value, and you can enter ? at any time to view online help. See Appendix C and the voldiskadm(8) reference page for more information.
A bit-mapped GUI called Visual Administrator.
The Visual Administrator allows you to view and manage disks and volumes and perform limited file system administration. The Visual Administrator displays windows in which LSM objects are represented as icons. This GUI requires a bit-mapped display, the Basic X Environment software subset, and an LSM license. See Appendix D for more information on the Visual Administrator.
LSM commands that you enter at the system prompt. The examples in this guide use LSM commands.
You can use the LSM interfaces interchangeably. That is, LSM objects created with one interface are compatible with, and can be managed through, any of the other LSM interfaces.
1.2.1 LSM Command Interface
LSM provides a range of commands that allow you to display and manage LSM objects.
Table 1-1 lists the LSM commands and their functions.
| Command | Function |
| volsetup | Initialize the LSM software |
| volencap | Encapsulate disks or disk partitions |
| volreconfig | Create LSM volumes from the encapsulated disks |
| volrootmir | Mirror the root and swap volumes |
| voldiskadd | Interactively create LSM disks |
| voldisksetup | Add one or more disks for use with LSM (with -i option) |
| volassist | Create, mirror, back up, and move volumes automatically |
| voldisk | Administer LSM disks |
| voldg | Administer disk groups |
| volplex | Administer plexes |
| volume | Administer volumes |
| volsd | Administer subdisks |
| volmake | Create LSM objects manually |
| volmirror | Mirror a plex |
| voledit | Create, modify, and remove LSM records |
| volprint | Display LSM configuration information |
| volsave | Back up the LSM configuration database |
| volrestore | Restore the LSM configuration database |
| volmend | Mend simple problems in configuration records |
| volnotify | Display LSM configuration events |
| volwatch | Monitor LSM for failure events and perform hot-sparing if enabled |
| volstat | Display LSM statistics |
| voldctl | Control daemon operations |
| voltrace | Trace I/O operations on volumes |
| volevac | Evacuate all volume data from a disk |
| volrecover | Synchronize plexes after a crash or disk failure |
| volinstall | Customize the LSM environment |
| voldiskadm | Start the interactive menu interface |
| lsmsa | Start the LSM Storage Administrator GUI |
For more information on a command, see the reference page corresponding to its name. For example, for more information on the volassist command, enter:
# man volassist