This chapter describes how to manage LSM objects using LSM commands. You can also accomplish the tasks described in this chapter using:
The Storage Administrator (Appendix A)
The voldiskadm menu interface (Appendix C)
The Visual Administrator (Appendix D)
For more information on an LSM command, see the reference page corresponding
to its name.
For example, for more information on the volassist command, enter:
#
man volassist
5.1 Managing LSM Disks
The following sections describe how to use LSM commands to manage LSM disks.
5.1.1 Creating an LSM Disk
You create an LSM disk when you initialize a disk or partition for LSM use. When you initialize a disk or partition for LSM use, LSM:
Destroys data on the disk
Formats the disk as an LSM disk
Assigns it a disk media name
Writes a new disk label
You can configure an LSM disk in a disk group or as a spare disk. If you configure the LSM disk in a disk group, LSM uses it to store data. If you configure an LSM disk as a spare, LSM uses it as a replacement for a failed LSM disk that contains a mirror or RAID 5 plex.
If the disk is new to the system, enter the voldctl enable command after entering the hwmgr -scan scsi command to make LSM recognize the disk.
To initialize a disk or partition as an LSM disk, you can use:
The voldiskadd script, as described in Section 4.1.2.
The voldisksetup command:
#
voldisksetup -i {disk|partition}
Note
By default, LSM initializes each disk with one copy of the configuration database. If a disk group will have fewer than four disks, you should initialize each disk to have two copies of the disk group's configuration database to ensure that the disk group has multiple copies in case one or more disks fail. You must use the voldisksetup command to enable multiple copies of the configuration database on a disk.
Specifying a disk access name initializes the entire disk as an LSM sliced disk. Specifying a partition name initializes the partition as an LSM simple disk.
To initialize one or more disks, optionally setting the number of configuration copies to 2:
#
voldisksetup -i disk ... [nconfig=2]
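For example, to initialize two disks with two copies of the configuration database on each, you might enter the following (dsk14 and dsk15 are hypothetical disk names used here for illustration):
# voldisksetup -i dsk14 dsk15 nconfig=2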
After you initialize a disk or disk partition as an LSM disk, you can add it to a disk group. See Section 5.2.2 for information on creating a disk group or Section 5.2.3 for information on adding an LSM disk to an existing disk group.
5.1.2 Displaying LSM Disk Information
To display detailed information for an LSM disk, enter:
#
voldisk list disk
The following example contains information for an LSM disk called dsk5:
Device:    dsk5
devicetag: dsk5
type:      sliced
hostid:    servername
disk:      name=dsk5 id=942260116.1188.servername
group:     name=dg1 id=951155418.1233.servername
flags:     online ready autoimport imported
pubpaths:  block=/dev/disk/dsk5g char=/dev/rdisk/dsk5g
privpaths: block=/dev/disk/dsk5h char=/dev/rdisk/dsk5h
version:   n.n
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=6 offset=16 len=2046748
private:   slice=7 offset=0 len=4096
update:    time=952956192 seqno=0.11
headers:   0 248
configs:   count=1 len=2993
logs:      count=1 len=453
Defined regions:
 config   priv     17-   247[   231]: copy=01 offset=000000 enabled
 config   priv    249-  3010[  2762]: copy=01 offset=000231 enabled
 log      priv   3011-  3463[   453]: copy=01 offset=000000 enabled
5.1.3 Renaming an LSM Disk
When you initialize an LSM disk, you can assign it a disk media name or use the default disk media name, which is the same as the disk access name assigned by the operating system software.
Caution
Each disk in a disk group must have a unique name. To avoid confusion, you might want to ensure that no two disk groups contain disks with the same name. For example, both the rootdg disk group and another disk group could contain disks with a disk media name of disk03. Because most LSM commands operate on the rootdg disk group unless you specify otherwise, you might perform operations on the wrong disk if multiple disk groups contain identically named disks.
The voldisk list command displays a list of all the LSM disks in all disk groups on the system.
To rename an LSM disk, enter:
#
voledit rename old_disk_media_name new_disk_media_name
For example, to rename an LSM disk called disk03 to disk01, enter:
#
voledit rename disk03 disk01
5.1.4 Placing an LSM Disk Off Line
You can place an LSM disk off line to:
Prevent LSM from accessing it
Enable you to move the disk to a different physical location and have the disk retain its LSM identity
Placing a disk off line closes its device file. You cannot place an LSM disk off line if it is in use.
To place an LSM disk off line:
Remove the LSM disk from its disk group:
#
voldg -g disk_group rmdisk disk
Place the LSM disk off line:
#
voldisk offline disk
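For example, to take an LSM disk called dsk9 off line (the disk and the dg1 disk group are hypothetical names for this sketch):
# voldg -g dg1 rmdisk dsk9
# voldisk offline dsk9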
5.1.5 Placing an LSM Disk On Line
To restore access to an LSM disk that you placed off line, you must place it on line. The LSM disk is placed in the free disk pool and is accessible to LSM again. After placing an LSM disk on line, you must add it to a disk group before an LSM volume can use it. If the disk belonged to a disk group previously, you can add it to the same disk group.
To place an LSM disk on line, enter:
#
voldisk online disk
See Section 5.2.3 for information on adding an LSM disk to a disk group.
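For example, to place the hypothetical disk dsk9 back on line and return it to the dg1 disk group it previously belonged to:
# voldisk online dsk9
# voldg -g dg1 adddisk dsk9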
5.1.6 Moving Data from an LSM Disk
You can move (evacuate) LSM volume data to other LSM disks in the same disk group if there is sufficient free space. If you do not specify a target LSM disk, LSM uses any available LSM disk in the disk group that has sufficient free space. Moving data off an LSM disk is useful in the event of disk failure or to move a volume to simple or sliced disks after encapsulating a disk or partition.
Note
Do not move the contents of an LSM disk to another LSM disk that contains data from the same volume. If the volume is redundant (uses mirror plexes or a RAID 5 plex), the resulting layout might not preserve the volume's redundancy.
To move data off an LSM disk, enter:
#
volevac [-g disk_group] source_disk target_disk
For example, to move data in the rootdg disk group from LSM disk dsk8 to dsk9, enter:
#
volevac dsk8 dsk9
5.1.7 Removing an LSM Disk from LSM Control
You can remove a disk from LSM control if you removed the disk from its disk group or deported its disk group.
See Section 5.2.8 for information on removing an LSM disk from a disk group. See Section 5.2.5 for information on deporting a disk group.
To remove an LSM disk, enter:
#
voldisk rm disk
For example, to remove an LSM disk called dsk8, enter:
#
voldisk rm dsk8
If you want to use the disk after it is removed from LSM control, you must initialize it using the disklabel command. See the disklabel(8) reference page for more information.
5.2 Managing Disk Groups
The following sections describe how to use LSM commands to manage disk
groups.
5.2.1 Displaying Disk Group Information
There are three common ways to display disk group information. You can display:
A list of all disks on the system. See Section 5.2.1.1.
A list of disks in all disk groups and the free space on each. See Section 5.2.1.2.
The maximum size volume you can create in a disk group. See Section 5.2.1.3.
5.2.1.1 Displaying LSM Disks in All Disk Groups
To display a list of all LSM disks and the disk group to which each belongs, enter:
#
voldisk list
Information similar to the following is displayed:
DEVICE       TYPE      DISK         GROUP        STATUS
dsk0         sliced    -            -            unknown
dsk1         sliced    -            -            unknown
dsk2         sliced    dsk2         rootdg       online
dsk3         sliced    dsk3         rootdg       online
dsk4         sliced    dsk4         rootdg       online
dsk5         sliced    dsk5         rootdg       online
dsk6         sliced    dsk6         dg1          online
dsk7         sliced    dsk7         dg1          online
dsk8         sliced    dsk8         dg1          online
dsk9         sliced    dsk9         dg2          online
dsk10        sliced    dsk10        dg2          online
dsk11        sliced    dsk11        dg2          online
dsk12        sliced    -            -            unknown
dsk13        sliced    -            -            unknown
The following list describes the preceding information categories:
DEVICE   The disk access name assigned by the operating system software.
TYPE     The LSM disk type: sliced, simple, or nopriv.
DISK     The LSM disk media name. An LSM disk media name is displayed only if the disk is in a disk group.
GROUP    The disk group to which the disk belongs. A disk group name is displayed only if the disk is in a disk group.
STATUS   The status of the LSM disk; for example, online or unknown.
5.2.1.2 Displaying Free Space in Disk Groups
To display the free space in one or all disk groups, enter:
#
voldg [-g disk_group] free
Information similar to the following is displayed:
GROUP        DISK         DEVICE       TAG          OFFSET    LENGTH    FLAGS
rootdg       dsk2         dsk2         dsk2         2097217   2009151   -
rootdg       dsk3         dsk3         dsk3         2097152   2009216   -
rootdg       dsk4         dsk4         dsk4         0         4106368   -
rootdg       dsk5         dsk5         dsk5         0         4106368   -
dg1          dsk6         dsk6         dsk6         0         2046748   -
dg1          dsk8         dsk8         dsk8         0         2046748   -
The value in the LENGTH column indicates the amount of free disk space in 512-byte blocks. (2048 blocks equal 1 MB.)
5.2.1.3 Displaying the Maximum Size for an LSM Volume in a Disk Group
To display the maximum size for an LSM volume that you can create in a disk group, enter:
#
volassist [-g disk_group] maxsize
The following example displays the maximum size for an LSM volume that you can create in a disk group called dg1:
#
volassist -g dg1 maxsize
Maximum volume size: 6139904 (2998Mb)
5.2.2 Creating a Disk Group
The default rootdg disk group is created when you install LSM and always exists on a system running LSM. You can create additional disk groups to organize your disks into logical sets. Each disk group that you create must have a unique name and contain at least one simple or sliced LSM disk. An LSM disk can belong to only one disk group. An LSM volume can use disks from only one disk group.
If you want to initialize LSM disks and create a new disk group at the same time, you can use the voldiskadd script. (See Section 4.1.2 for more information.)
Note
By default, LSM initializes each disk with one copy of the configuration database. If a disk group will have fewer than four disks, you should initialize each disk to have two copies of the disk group's configuration database to ensure that the disk group has multiple copies in case one or more disks fail. You must use the voldisksetup command to enable more than one copy of the configuration database (Section 5.1.1).
To create a new disk group using LSM disks, enter:
#
voldg init new_disk_group disk ...
For example, to create a disk group called newdg using LSM disks called dsk100, dsk101, and dsk102, enter:
#
voldg init newdg dsk100 dsk101 dsk102
5.2.3 Adding a Disk to a Disk Group
To add a disk to an existing disk group, enter:
#
voldg [-g disk_group] adddisk disk
For example, to add the disk called dsk10 to a disk group called dg1, enter:
#
voldg -g dg1 adddisk dsk10
5.2.4 Renaming a Disk Group
Renaming a disk group involves deporting and then importing the disk group. You cannot rename a disk group while it is in use. All activity on all volumes in the disk group must stop, and the volumes in the disk group are inaccessible while the disk group is deported.
Because renaming a disk group involves an interruption of service to the volumes, you might want to perform this task during a planned shutdown or maintenance period. Choose the new disk group name carefully, and ensure that the new name is easy to remember and use. Renaming a disk group updates the /etc/fstab file.
Note
You cannot rename the rootdg disk group.
To rename a disk group:
Deport the disk group, assigning it a new name. See Section 5.2.5.
Import the disk group using its new name. See Section 5.2.6.
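The following sketch shows the full sequence for a hypothetical disk group called dg_old being renamed to dg_new; it combines the deport and import commands described in Section 5.2.5 and Section 5.2.6:
# volume -g dg_old stopall
# voldg -n dg_new deport dg_old
# voldg import dg_new
# volume -g dg_new startall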
5.2.5 Deporting a Disk Group
Deporting a disk group makes its volumes inaccessible. You can deport a disk group to:
Rename the disk group.
Reuse the disks for other purposes.
Move the disk group to another system (Section 5.2.7).
You cannot deport the rootdg disk group.
Caution
The voldisk list command displays the disks in a deported disk group as available (with a status of online). However, removing or reusing the disks in a deported disk group can result in data loss.
To deport a disk group:
If volumes in the disk group are in use, stop the volumes:
#
volume [-g disk_group] stopall
Deport the disk group:
To deport the disk group with no changes, enter:
#
voldg deport disk_group
To deport the disk group and assign it a new name, enter:
#
voldg [-n newname] deport disk_group
See the voldg(8) reference page for more information on assigning a new name to a disk group.
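For example, to deport a hypothetical disk group called dg1 after stopping its volumes:
# volume -g dg1 stopall
# voldg deport dg1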
You must import a disk group before you can use it. See Section 5.2.6 for information on importing a disk group.
If you no longer need the disk group, you can:
Add the disks to different disk groups (Section 5.2.3).
Use the disks to create new disk groups (Section 5.2.2).
Remove the disks from LSM control (Section 5.1.7).
5.2.6 Importing a Disk Group
Importing a disk group makes the disk group and its volumes accessible. You cannot import a disk group if you used any of its associated disks while it was deported.
To import a disk group and restart its volumes:
Import the disk group:
#
voldg import disk_group
Start all volumes within the disk group:
#
volume [-g disk_group] startall
5.2.7 Moving a Disk Group to Another System
You can move a set of disks from one system to another and retain the LSM objects and data on those disks. You can move any disk group to another system; however, to move the rootdg disk group, the following must be true:
You have stopped LSM running on the original system. (You stopped the LSM daemons or shut down the system before disconnecting the rootdg disk group to move it.)
The new system must be running LSM, which means it has a rootdg disk group; therefore, you must import the former rootdg disk group with a different name on the new system.
Moving a disk group between systems results in the new host system assigning new disk access names to the disks. For LSM nopriv disks (created when you encapsulate disks or partitions), the association between the original disk access name and its disk media name might be lost, or might be reassociated incorrectly. To prevent this, you must manually reassociate the disk media names with the new disk access names. For LSM simple and sliced disks, LSM manages this reassociation.
If possible, before moving the disk group, migrate the data from nopriv disks to simple or sliced disks, which have a private region and will be reassociated automatically. See Section 5.1.6 for more information on moving data to a different disk.
If you cannot move the data to simple or sliced disks, follow these steps to help ensure you can correctly reassociate the nopriv disks on the new host system:
On the original system, identify all the nopriv disks in the disk group by their current disk access name, disk media name, and a unique identifier (such as the disk's SCSI world-wide identifier) that will not change or can be tracked when the disk is connected to the new system.
If there is only one nopriv disk in the disk group, there is only one device to reassociate. As long as you are not connecting other devices to the new host at the same time, you might not need this information. For two or more nopriv disks, precise identification beforehand is crucial.
Keep track of the before-and-after bus locations of each nopriv disk as you move it between systems. Then when you scan for the disks on the new host, you will know which new disk access name associated with the new bus location belongs to which disk media name. You can move each disk individually and have the new host scan for it each time to be sure.
You can change the disk group's name or host ID when you move it to the new host:
If the disk group's name is similar to a disk group on the system receiving it, you can change its name to reduce the chance for confusion.
Note
If the disk group has the same name as a disk group on the system receiving it, you must change its name.
If you want the system receiving the disk group to import it automatically the first time the system starts up, you can change the disk group's host ID to that of the receiving system as you deport it from the original system.
If you will import the disk group to a system that is already running, you do not need to change the disk group's host ID; it is changed as the disk group is imported.
To move a disk group to another system:
Stop all activity on the volumes in the disk group and unmount any file systems.
Deport the disk group from the originating system:
To deport the disk group with no changes, enter:
#
voldg deport disk_group
To deport the disk group and assign it a new host ID or a new name, enter:
#
voldg [-n newname] [-h newhostID] deport disk_group
Physically move the disks to the new host system.
Enter the following command on the new host system to scan for the disks:
#
hwmgr -scan scsi
The hwmgr command returns the prompt before it completes the scan. You need to know that the system has discovered the disks before continuing. See the hwmgr(8) reference page for more information on how to trap the end of a scan.
Make the vold daemon scan for the newly added disks:
#
voldctl enable
Import the disk group to the new host. If the disk group contains nopriv disks whose disk media names no longer correspond to their original disk access names, you might need to use the force (-f) option.
#
voldg [-f] import disk_group
If applicable, associate the disk media names for the nopriv disks to their new disk access names:
#
voldg -g disk_group -k adddisk disk_media_name=disk_access_name ...
Recover and start all startable volumes in the imported disk group. The following command performs any necessary recovery operations as a background task after starting the volumes.
#
volrecover -g disk_group -sb
Optionally, check for any detached plexes.
#
volinfo -p
If the output lists any volumes as Unstartable, see Section 6.4.3 for information on how to proceed.
If necessary, start the remaining Startable volumes:
#
volume -g disk_group start volume1 volume2 ...
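As a condensed illustration of the whole sequence, assume a hypothetical disk group called dgdata, containing only simple and sliced disks, is being moved to a host called hostB. On the original system, after unmounting any file systems, enter:
# volume -g dgdata stopall
# voldg -h hostB deport dgdata
Then, after physically moving the disks, enter the following on hostB:
# hwmgr -scan scsi
# voldctl enable
# voldg import dgdata
# volrecover -g dgdata -sb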
5.2.8 Removing an LSM Disk from a Disk Group
You can remove an LSM disk from a disk group; however, you cannot remove:
The last disk in a disk group unless the disk group is deported. See Section 5.2.5 for information on deporting a disk group.
Any disk that is in use (for example, disks that contain active LSM volume data). If you attempt to remove a disk that is in use, LSM displays an error message and does not remove the disk.
See Section 5.1.6 for information on moving data from a disk. See Section 5.4.6 for information on removing LSM volumes.
To remove an LSM disk from a disk group:
Verify that the LSM disk is not in use by listing all subdisks:
#
volprint -st
Information similar to the following is displayed:
Disk group: rootdg

SD NAME        PLEX        DISK     DISKOFFS  LENGTH  [COL/]OFF  DEVICE  MODE
sd dsk1-01     klavol-01   dsk1     0         1408    0/0        dsk1    ENA
sd dsk2-02     klavol-03   dsk2     0         65      LOG        dsk2    ENA
sd dsk2-01     klavol-01   dsk2     65        1408    1/0        dsk2    ENA
sd dsk3-01     klavol-01   dsk3     0         1408    2/0        dsk3    ENA
sd dsk4-01     klavol-02   dsk4     0         1408    0/0        dsk4    ENA
sd dsk5-01     klavol-02   dsk5     0         1408    1/0        dsk5    ENA
sd dsk6-01     klavol-02   dsk6     0         1408    2/0        dsk6    ENA
The disks in the DISK column are currently in use by LSM volumes, and therefore you cannot remove those disks from a disk group.
Remove the LSM disk from the disk group:
#
voldg [-g disk_group] rmdisk disk
For example, to remove an LSM disk called dsk8 from the rootdg disk group, enter:
#
voldg rmdisk dsk8
The disk remains under LSM control. You can:
Add the disk to a different disk group. See Section 5.2.3.
Use the disk to create a new disk group. See Section 5.2.2.
Remove the disk from LSM control. See Section 5.1.7.
5.3 Managing the LSM Configuration Database
This section describes how to manage the LSM configuration database, including:
Backing up the configuration database
Restoring the configuration database from backup
Modifying the configuration database properties
5.3.1 Backing Up the LSM Configuration Database
One important responsibility in managing a system with LSM is to periodically make a backup copy of the LSM configuration database. This helps you:
Restore volumes from backup after a major system failure
Recreate LSM volumes after disk failures, if the failures resulted in the loss of all configuration database copies
The saved configuration database (also called a description set) is a record of the objects in the LSM configuration (the LSM disks, subdisks, plexes and volumes) and the disk group to which each object belongs.
Whenever you make a change to the LSM configuration, the backup copy becomes obsolete. As with any backup, the content is useful only as long as it accurately represents the current information. Any time the number, nature, or name of LSM objects change, consider making a backup of the LSM configuration database. The following list describes some of the changes that will invalidate a configuration database backup:
Creating disk groups
Adding or removing disks from disk groups or from LSM control
Creating or removing volumes
Changing the properties of volumes, such as the plex layout or number of logs
Note
Backing up the configuration database does not save the data in the volumes and does not save the configuration data for any volumes associated with the boot disk, if you encapsulated the boot disk.
Depending on the nature of a boot disk failure, you might need to restore the system partitions from backups or installation media to return to a state where the system partitions are not under LSM control. From there, you can redo the procedures to encapsulate the boot disk partitions into LSM volumes and add mirror plexes to those volumes.
See Section 6.5.6 for more information about recovering from a boot disk failure under LSM control.
See Section 5.4.2 for information on backing up volume data.
By default, LSM saves the entire configuration database to a time-stamped directory called /usr/var/lsm/db/LSM.date.hostname. You can specify a different location for the backup, but the directory must not exist.
In the directory, the backup procedure creates:
A file called header, which contains host ID and checksum information and a list of the other files in this directory.
A copy of the volboot file.
A file called voldisk.list, which contains a list of all LSM disks, their type (sliced, simple, or nopriv), the size of their private and public regions, their disk group, and other information.
A subdirectory called rootdg.d, which contains the allvol.DF file. The allvol.DF file contains detailed descriptions of every LSM subdisk, plex, and volume, describing all their properties and attributes.
To back up the LSM configuration database:
Enter the following command, optionally specifying a directory location other than the default to store the LSM configuration database:
#
volsave [-d directory]
Save the backup to tape or other removable media.
You can save multiple versions of the configuration database; each new backup is saved in the /usr/var/lsm/db directory with its own date and time stamp, as shown in the following example:
dr-xr-x---   3 root   system   8192 May  5 09:36 LSM.20000505093612.hostname
dr-xr-x---   3 root   system   8192 May 10 10:53 LSM.20000510105256.hostname
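For example, to back up the configuration database to the default location and then archive the newest backup directory to removable media (the tape device path here is a hypothetical example), you might enter:
# volsave
# cd /usr/var/lsm/db
# tar -cvf /dev/tape/tape0 LSM.20000510105256.hostname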
5.3.2 Restoring the LSM Configuration Database from Backup
You use the volrestore command to restore an LSM configuration database that you saved with the volsave command. You can restore the configuration database of a specific disk group or volume, or the entire configuration (all disk groups and volumes except those associated with the boot disk). If you have saved multiple versions of the configuration, you can choose a specific one to restore. If you do not choose one, LSM restores the most recent version.
Notes
Restoring the configuration database does not restore data in the LSM volumes. See Section 5.4.3 for information on restoring volume data.
The volrestore command does not restore volumes associated with the root (/), /usr, and /var file systems or the primary swap area. If the volumes for these partitions are corrupted or destroyed, you must reencapsulate those partitions to use LSM volumes.
See the Cluster Administration manual for information about using the volrestore command in a cluster.
To restore a backed-up LSM configuration database:
Optionally, display a list of all available database backups:
#
ls /usr/var/lsm/db
If you saved the configuration database to a different directory, specify that directory.
Restore the chosen configuration database:
To restore the entire configuration database, enter:
#
volrestore [-d directory]
To restore a specific disk group configuration database, enter:
#
volrestore [-d directory] -g disk_group
To restore a specific volume configuration database, enter:
#
volrestore [-d directory] -v volume
To restore a configuration database interactively, enabling you to select or skip specific objects, enter:
#
volrestore [-d directory] -i
Start the restored LSM volumes:
#
volume -g disk_group startall
If the volumes will not start, you might need to manually edit the plex state. See Section 6.4.3.
If necessary, restore the volume contents (data) from backup. See Section 5.4.3 for more information.
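For example, to restore the most recent saved configuration for a hypothetical disk group called dg1 and restart its volumes:
# volrestore -g dg1
# volume -g dg1 startall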
5.3.3 Changing the Size and Number of Configuration Database Copies
LSM maintains copies of the configuration database on separate physical disks within each disk group. When the disk group runs out of space in the configuration database, LSM displays a message similar to the following:
volmake: No more space in disk group configuration
This might happen because:
You imported an LSM configuration from a system running Tru64 UNIX Version 4.0, which used a smaller default private region size.
One or more disks in the disk group contain two copies of the configuration database. Whenever a configuration change occurs, all active copies are updated. If the copies on one disk cannot be updated because the configuration has grown too large for the space available on that disk, then none of the copies for the disk group can be updated.
If the configuration database runs out of disk space and you determine that one or more disks have two copies of the configuration database, you can remove one copy from each disk that has two. However, make sure that there are sufficient copies of the configuration database available for redundancy. For example, if the disk group has a total of four copies and two are on the same disk, you should remove one copy from that disk and enable a copy on another disk that does not have any.
If all copies of the configuration database are the same size and no disk has more than one copy, this could indicate that the private regions of the disks are too small (for example, the disks were initialized on a system running an earlier version of LSM, with a smaller default private region). To resolve this problem, you must add new disks to LSM, which will have the larger default private region size, add the new disks to the disk group, and delete the copies of the configuration database on the other disks.
To reduce the number of configuration database copies:
Display information about the disk group's configuration database:
#
voldg list disk_group
Example 5-1 shows output from the voldg command for a disk group with multiple copies of the configuration database on one disk. Example 5-2 shows output for a disk group from a previous version of the operating system, in which some disks have smaller private regions than the current default.
Example 5-1: Multiple Configuration Database Copies on a Disk
Group:     rootdg
dgid:      783105689.1025.lsm
import-id: 0.1
flags:
config:    seqno=0.1112 permlen=173 free=166 templen=6 loglen=26
config disk dsk13 copy 1 len=173 state=clean online
config disk dsk13 copy 2 len=173 state=clean online
config disk dsk11g copy 1 len=347 state=clean online
config disk dsk10g copy 1 len=347 state=clean online
log disk dsk11g copy 1 len=52
log disk dsk13 copy 1 len=26
log disk dsk13 copy 2 len=26
log disk dsk10g copy 1 len=52
In Example 5-1:
The len= value in the lines beginning with config disk and log disk is the size, in blocks, of the space available on that disk for copies of the configuration database or the log.
The smallest length for a config disk or log disk limits the entire disk group by limiting the length of the configuration or log in memory.
Disk dsk13 has two copies of the configuration database. This halves the total configuration space available in memory for the disk group and is therefore the limiting factor.
Example 5-2: Configuration Database Copy on a Disk with a Private Region Smaller Than the Current Default
Group:     rootdg
dgid:      921610896.1026.abc.xyz.com
import-id: 0.1
flags:
copies:    nconfig=default nlog=default
config:    seqno=0.1081 permlen=347 free=341 templen=3 loglen=52
config disk dsk7 copy 1 len=347 state=clean online
config disk dsk8 copy 1 len=2993 state=clean online
config disk dsk9 copy 1 len=2993 state=clean online
config disk dsk10 copy 1 len=2993 state=clean online
log disk dsk7 copy 1 len=52
log disk dsk8 copy 1 len=453
log disk dsk9 copy 1 len=453
log disk dsk10 copy 1 len=453
In Example 5-2:
The len= value in the lines beginning with config disk and log disk is the size, in blocks, of the space available on that disk for copies of the configuration database or the log.
The smallest length for a config disk or log disk limits the entire disk group by limiting the length of the configuration or log in memory.
Disk dsk7 has a smaller private region than the other disks, which means there is less space to store copies of the configuration database and log (the config disk dsk7 line shows 347 blocks available versus 2993 blocks on the other disks; the log disk dsk7 line shows 52 blocks available versus 453 blocks on the other disks). This restricts the disk group's ability to store additional records, because the smallest private region sets the limit for the group.
Modify the number of configuration copies in the disk group:
To reduce the number of copies on a disk, enter the following command where n is the number of copies you want the disk to retain:
#
voldisk moddb disk nconfig=n
For example, to reduce the number of configuration copies on dsk13 from two to one, enter:
#
voldisk moddb dsk13 nconfig=1
To remove all copies from a disk, enter:
#
voldisk moddb disk nconfig=0
Optionally, display the new configuration by entering the following command:
#
voldg list rootdg
Information similar to the following is displayed. In this example, the output shows the change to the configuration in Example 5-2:
Group:     rootdg
dgid:      921610896.1026.abc.xyz.com
import-id: 0.1
flags:
copies:    nconfig=default nlog=default
config:    seqno=0.1091 permlen=2993 free=2987 templen=3 loglen=453
config disk dsk7 copy 1 len=2993 state=clean online
config disk dsk8 copy 1 len=2993 state=clean online
config disk dsk9 copy 1 len=2993 state=clean online
config disk dsk10 copy 1 len=2993 state=clean online
log disk dsk7 copy 1 len=453
log disk dsk8 copy 1 len=453
log disk dsk9 copy 1 len=453
log disk dsk10 copy 1 len=453
To add a copy to another disk to maintain the appropriate number of copies for the disk group:
Display a list of all disks in the disk group:
#
voldisk [-g disk_group] list
Compare the disks listed in the output of the voldisk list command to those listed in the output of the voldg list command to identify a disk in the disk group that does not have a copy of the configuration database.
Enable a copy on a disk that does not have one, using the disk access name:
#
voldisk moddb disk_access_name nconfig=1
5.4 Managing LSM Volumes
The following sections describe how to use LSM commands to manage LSM volumes. See Chapter 4 for information on creating LSM volumes.
5.4.1 Displaying LSM Volume Information
The volprint command displays information about the LSM objects that make up an LSM volume. The following table lists the abbreviations used in volprint output:
Abbreviation   Specifies
dg             Disk group name
dm             Disk media name
pl             Plex name
sd             Subdisk name
v              LSM volume name
To display LSM object information for an LSM volume, enter:
#
volprint [-g disk_group] -ht volume
Information similar to the following is displayed:
Disk group: rootdg                                                         [1]

V  NAME       USETYPE   KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME       VOLUME    KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID  MODE
SD NAME       PLEX      DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE    MODE

v  klavol     fsgen     ENABLED  ACTIVE   4096     SELECT    -             [2]
pl klavol-01  klavol    ENABLED  ACTIVE   4224     STRIPE    3/128     RW  [3]
sd dsk1-01    klavol-01 dsk1     0        1408     0/0       dsk1      ENA [4]
sd dsk2-01    klavol-01 dsk2     65       1408     1/0       dsk2      ENA
sd dsk3-01    klavol-01 dsk3     0        1408     2/0       dsk3      ENA
pl klavol-02  klavol    ENABLED  ACTIVE   4224     STRIPE    3/128     RW
sd dsk4-01    klavol-02 dsk4     0        1408     0/0       dsk4      ENA
sd dsk5-01    klavol-02 dsk5     0        1408     1/0       dsk5      ENA
sd dsk6-01    klavol-02 dsk6     0        1408     2/0       dsk6      ENA
pl klavol-03  klavol    ENABLED  ACTIVE   LOGONLY  CONCAT    -         RW
sd dsk2-02    klavol-03 dsk2     0        65       LOG       dsk2      ENA
This example shows output for a volume that uses three-column striped plexes: two mirrored data plexes and a DRL log plex.
[1] Disk group name.
[2] Volume name (klavol), usage type (fsgen), state (ENABLED ACTIVE), and size (4096).
[3] Plex information. This volume has two data plexes, klavol-01 and klavol-02, and a DRL plex, klavol-03.
[4] Subdisk information for the plex klavol-01.
5.4.2 Backing Up an LSM Volume
One of the more common tasks of a system administrator is helping users recover lost or corrupted files. To perform that task effectively, you must set up procedures for backing up LSM volumes and the LSM configuration database at frequent and regular intervals. You will need the saved configuration database as well as the backed-up data if you need to restore a volume after a major failure (for example, if multiple disks in the same volume failed and those disks contained the active configuration records for the disk group).
See Section 5.3.1 for information on backing up the LSM configuration database.
Note
If the volume is part of an Advanced File System (AdvFS) domain, use the vdump command to back up the volume. See AdvFS Administration for more information.
The way you back up an LSM volume depends on the number and type of plexes in the volume:
If the volume has only one concatenated or striped plex, see Section 5.4.2.1.
If the volume has mirror plexes, see Section 5.4.2.2.
If the volume has a RAID 5 plex, see Section 5.4.2.3.
5.4.2.1 Backing Up a Volume with a Single Concatenated or Striped Plex
To back up an LSM volume that has a single plex:
If necessary, select a convenient time and inform users to save files and refrain from using the volume (the application or file system that uses the volume) while you back it up.
Determine the size of the LSM volume and which disks it uses:
#
volprint -v [-g disk_group] volume
Ensure there is enough free space in the disk group to create a temporary copy of the LSM volume. The free space must be on disks that are not used in the volume you want to back up:
#
voldg [-g disk_group] free
If the volume contains a UNIX File System, unmount it.
Create a temporary mirror plex for the LSM volume, running this operation in the background:
#
volassist snapstart volume &
Create a new volume from the temporary plex. (The snapshot keyword automatically uses the temporary plex to create the new volume.)
#
volassist snapshot volume temp_volume
The following example creates a temporary LSM volume called vol1_backup for an LSM volume called vol1:
#
volassist snapshot vol1 vol1_backup
Remount and resume use of the original LSM volume.
Start the temporary LSM volume:
#
volume start temp_volume
Back up the temporary LSM volume to your default backup device:
#
dump 0 /dev/rvol/disk_group/temp_volume
The following example backs up an LSM volume called vol1_backup in the rootdg disk group:
#
dump 0 /dev/rvol/rootdg/vol1_backup
Stop and remove the temporary LSM volume:
# volume stop temp_volume
# voledit -r rm temp_volume
See the dump(8) reference page for more information about the dump command.
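The following sketch shows the complete sequence for a hypothetical volume called vol1 in the rootdg disk group, using a temporary volume called vol1_backup. Start the temporary mirror in the background:
# volassist snapstart vol1 &
When the snapstart operation completes, create and back up the temporary volume, then remove it:
# volassist snapshot vol1 vol1_backup
# volume start vol1_backup
# dump 0 /dev/rvol/rootdg/vol1_backup
# volume stop vol1_backup
# voledit -r rm vol1_backup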
5.4.2.2 Backing Up a Volume with Mirror Plexes
Volumes with mirror plexes can remain in use while you back up their data, but any writes to the volume during the backup might result in inconsistency between the volume's data and the data that was backed up.
Caution
If the LSM volume has only two data plexes, redundancy is not available during the backup.
To back up an LSM volume that has mirror plexes:
Dissociate one of the volume's plexes, which leaves the plex as an image of the LSM volume at the time of dissociation:
#
volplex dis plex
The following example dissociates a plex called vol01-02:
#
volplex dis vol01-02
Create a temporary LSM volume using the dissociated plex:
#
volmake -U fsgen vol temp_volume plex=plex
The following example creates an LSM volume called vol01-temp using a plex called vol01-02:
#
volmake -U fsgen vol vol01-temp plex=vol01-02
Start the temporary volume:
#
volume start temp_volume
Back up the temporary LSM volume to your default backup device:
#
dump 0 /dev/rvol/disk_group/temp_volume
The following example backs up an LSM volume called vol1_backup in the rootdg disk group:
#
dump 0 /dev/rvol/rootdg/vol1_backup
Stop and remove the temporary LSM volume:
# volume stop temp_volume
# voledit -r rm temp_volume
Reattach the dissociated plex to the original volume. If the volume is very large, you can run this operation in the background:
#
volplex att volume plex &
LSM automatically resynchronizes the plexes when you reattach the dissociated plex. This operation might take a long time, depending on the size of the volume. Running this process in the background returns control of the system to you immediately instead of after the resynchronization is complete.
See the dump(8) reference page for more information about the dump command.
5.4.2.3 Backing Up a Volume with a RAID 5 Plex
You can back up a volume that uses a RAID 5 plex. For a consistent backup, stop all applications from using the volume while the backup is in progress; otherwise, you can perform the backup while the volume remains in use.
If the volume remains in use during the backup, the volume data might change before the backup completes, and therefore the backup will not be an exact copy of the volume's contents.
To back up a volume with a RAID 5 plex, enter:
#
dump 0 /dev/rvol/disk_group/volume
5.4.3 Restoring an LSM Volume from Backup
The way you restore an LSM volume depends on what the volume is used for and whether the volume is still configured and active.
Note
If the volume is part of an AdvFS domain, use the vrestore command to restore it. See AdvFS Administration for more information.
If the volume is used for an application such as a database, see that application's documentation for the recommended method for restoring backed-up data.
To restore a backed-up volume:
If the volume contains a UNIX File System and the volume still exists (for example, you replaced a failed disk), enter the following command to restore the data from the backup media:
#
restore -Yf backup_volume
If the volume does not exist:
Recreate the volume:
#
volrestore [-g disk_group] -v volume
Recreate the file system:
#
newfs /dev/rvol/disk_group/volume
Mount the file system:
#
mount /dev/vol/disk_group/volume /mount_point
Restore the volume data:
#
restore -Yrf backup_volume
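As an illustration, assume a hypothetical volume called vol1 in the dg1 disk group that no longer exists, a UFS file system normally mounted on /data, and a backup on the default tape device (the tape device path and all object names here are hypothetical). Because restore -r extracts into the current working directory, change to the mount point before restoring:
# volrestore -g dg1 -v vol1
# newfs /dev/rvol/dg1/vol1
# mount /dev/vol/dg1/vol1 /data
# cd /data
# restore -Yrf /dev/tape/tape0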
5.4.4 Starting an LSM Volume
LSM automatically starts all startable volumes when the system boots. You can manually start an LSM volume that:
You manually stopped
Belongs to a disk group that you manually imported
Stopped because of a disk failure or other problem that you have since resolved
To start an LSM volume, enter:
#
volume start [-g disk_group] volume
To start all volumes in a disk group (for example, after importing the disk group), enter:
#
volume [-g disk_group] startall
5.4.5 Stopping an LSM Volume
LSM automatically stops LSM volumes when the system shuts down. When you no longer need an LSM volume, you can stop it and then remove it. You cannot stop an LSM volume if a file system is using it.
To stop an LSM volume:
If applicable, stop a file system from using the LSM volume.
For AdvFS, dissociate the volume from the domain:
#
rmvol LSM_volume domain
Data on the volume is automatically migrated to other volumes in the domain, if available. See the AdvFS Administration manual for more information on the rmvol command.
For UFS, unmount the file system:
#
umount /dev/rvol/volume
Stop the LSM volume:
#
volume [-g disk_group] stop volume
For example, to stop an LSM volume called vol1 in the dg1 disk group, enter:
#
volume -g dg1 stop vol1
To stop all volumes, enter:
#
volume stopall
5.4.6 Removing an LSM Volume
Removing an LSM volume destroys the data in that volume. Remove an LSM volume only if you are sure that you do not need the data in the LSM volume or the data is backed up elsewhere. When an LSM volume is removed, the space it occupied is returned to the free space pool.
The following procedure also unencapsulates UNIX File Systems.
Note for AdvFS Domains
To remove a volume that was created by encapsulating an AdvFS domain, see Section 5.4.6.1.
To remove an LSM volume:
If applicable, stop a file system from using the LSM volume.
For AdvFS, dissociate the volume from the domain:
#
rmvol LSM_volume domain
Data on the volume is automatically migrated to other volumes in the domain. See the AdvFS Administration manual for more information on the rmvol command.
For UFS, unmount the file system:
#
umount /dev/rvol/volume
Edit the necessary system files as follows:
If the volume was configured as secondary swap, remove references to the LSM volume from the vm:swapdevice entry in the sysconfigtab file.
If the swap space was configured using the /etc/fstab file, update this file to change the swap entries back to disk partitions instead of LSM volumes.
These changes take effect the next time the system restarts. See the System Administration manual and the swapon(8) reference page for more information.
Stop the LSM volume:
#
volume [-g disk_group] stop volume
Remove the LSM volume:
#
voledit -r rm volume
This step removes the plexes and subdisks and the volume itself.
If the volume contained an encapsulated UNIX file system, edit the /etc/fstab file to change the volume name to the disk name. For example, change /dev/vol/rootdg/vol-dsk4g to /dev/disk/dsk4g.
5.4.6.1 Unencapsulating AdvFS Domains
When you encapsulate AdvFS domains into LSM volumes, LSM creates a script
that you can run to unencapsulate the domain.
The script contains some LSM
commands and some general commands and performs all the steps necessary to
remove the LSM volume, remove the disk from LSM control, and update the links
in the
/etc/fdmns
directory.
The script is created at the following location, where dsknp is the disk access name of the disk on which the domain resides:
/etc/vol/reconfig.d/disk.d/dsknp.encapdone/recover.sh
If you have encapsulated more than one AdvFS domain, the /etc/vol/reconfig.d/disk.d directory contains a subdirectory for each disk. Make sure you run the correct script to unencapsulate the domain.
Note
Unencapsulating an AdvFS domain requires that you unmount the filesets.
To unencapsulate an AdvFS domain:
Display and unmount all the filesets in the domain.
For example, to unmount the filesets in the new_dom domain, enter:
# mount
root_domain#root on / type advfs (rw)
/proc on /proc type procfs (rw)
usr_domain#usr on /usr type advfs (rw)
var_domain#var on /var type advfs (rw)
mhs:/work on /work type nfs (v3, rw, udp, hard, intr)
new_dom#junk on /junk type advfs (rw)
new_dom#stuff on /stuff type advfs (rw)

# umount /junk /stuff
Identify the name of the LSM volume for the domain and the name of the disk the domain is using:
#
showfdmn domain
Information similar to the following is displayed:
               Id                 Date Created  LogPgs  Version  Domain Name
3a65b2a9.0004cb3f  Wed Jan 17 09:56:41 2001        512        4  new_dom

  Vol    512-Blks      Free  % Used  Cmode  Rblks  Wblks  Vol Name
   1L     8380080   8371248      0%     on    512    512  /dev/vol/rootdg/vol-dsk10c
Typically, the volume name is derived from the disk containing the encapsulated domain.
Stop the LSM volume:
#
volume stop volume
Run the unencapsulation script. For example, to run the script for the new_dom domain on disk dsk10c, enter:
#
sh /etc/vol/reconfig.d/disk.d/dsk10c.encapdone/recover.sh
If the script is not available, do the following:
Change directory to the domain directory:
#
cd /etc/fdmns/domain
Remove the link to the volume:
#
rm disk_group.volume
Replace the link to the disk device file:
#
ln -s /dev/disk/dsknp
Remount the filesets to the domain:
# mount new_dom#junk /junk
# mount new_dom#stuff /stuff
The domain is available for use. I/O to the domain goes through the disk device path instead of the LSM volume. You can confirm this by running the showfdmn command again:
#
showfdmn new_dom
               Id                 Date Created  LogPgs  Version  Domain Name
3a65b2a9.0004cb3f  Wed Jan 17 09:56:41 2001        512        4  new_dom

  Vol    512-Blks      Free  % Used  Cmode  Rblks  Wblks  Vol Name
   1L     8380080   8371248      0%     on    256    256  /dev/disk/dsk10c
5.4.6.2 Unencapsulating System Partitions
You can remove the LSM volumes for the system partitions and return to using physical disk partitions. This process is called unencapsulation and involves restarting the system.
Note for TruCluster Environments
To unencapsulate the cluster root domain, use the volunmigrate command. See volunmigrate(8) and Cluster Administration for more information.
The unencapsulation process changes the following files:
If the root file system is UFS, the /etc/fstab file is changed to use disk partitions instead of LSM volumes.
If the root file system is AdvFS, the domain directories under /etc/fdmns that are associated with the boot disk are updated to reference disk partitions instead of LSM volumes.
The /etc/sysconfigtab file is changed to update the swapdevice entry so that it no longer uses LSM volumes and to set the lsm_rootdev_is_volume entry to 0.
If the system volumes are mirrored, remove all but one plex. Leave only the plex that is using the disk you want the system partitions to use after the unencapsulation completes.
For example, you can remove plexes named rootvol-01 and rootvol-02, leaving the rootvol volume with only plex rootvol-03, if that plex is on the disk you want to unencapsulate. The remaining plexes in each system volume can be on different disks from each other; for example, the remaining rootvol plex can be on dsk0 while the remaining swapvol plex can be on dsk2.
To unencapsulate the system partitions:
If the system volumes (root, swap, /usr, and /var) are mirrored, do the following. If not, continue with step 2.
Enter the following command to display volume information:
#
volprint -v
In the output, note the names of the plexes that you want to remove.
Remove all mirror plexes but the one on the disk that you want the system partitions to use after the unencapsulation process completes:
#
volplex -o rm dis plex-nn
For example, to remove secondary plexes for the volumes rootvol, swapvol, and vol-dsk0g, enter:
# volplex -o rm dis rootvol-02
# volplex -o rm dis swapvol-02
# volplex -o rm dis vol-dsk0g-02
Change the boot disk environment variable to point to the physical boot disk (the disk containing the remaining plex for rootvol) instead of the LSM volume:
#
consvar -s bootdef_dev boot_disk
Enter the following command to complete the unencapsulation. This command also removes the LSM private region from the system disks and prompts you to restart the system.
#
volunroot -a -A
Information similar to the following is displayed. Enter now at the prompt.
This operation will convert the following file systems on the
system/swap disk dsk0 from LSM volumes to regular disk partitions:

    Replace volume rootvol with dsk0a.
    Replace volume swapvol with dsk0b.
    Replace volume vol-dsk0g with dsk0g.
    Remove configuration database on dsk0h.

This operation will require a system reboot. If you choose to
continue with this operation, your system files will be updated to
discontinue the use of the above listed LSM volumes.

/sbin/volreconfig should be present in /etc/inittab to remove the
named volumes during system reboot.

Would you like to either quit and defer volunroot until later or
commence system shutdown now?

Enter either 'quit' or time to be used with the shutdown(8) command
(e.g., quit, now, 1, 5): [quit] now
5.4.6.3 Cleaning Up the LSM Disks for Reuse
The disks that were used by the system volumes remain under LSM control as members of the rootdg disk group.
To reuse these disks within LSM or for other purposes, you must remove them from the rootdg disk group and remove the LSM partitions. Then you can remove them from LSM control and either reinitialize them as LSM disks (as sliced disks) or use them for purposes other than LSM.
To clean up the system-specific LSM disks:
Display the LSM disks:
#
voldisk list
Information similar to the following is displayed:
DEVICE       TYPE      DISK         GROUP        STATUS
dsk0         sliced    -            -            unknown
dsk1a        nopriv    root02       rootdg       online
dsk1b        nopriv    swap02       rootdg       online
dsk1g        nopriv    dsk1g-AdvFS  rootdg       online
dsk1h        simple    dsk1h        rootdg       online
dsk2         sliced    dsk2         rootdg       online
dsk3         sliced    dsk3         rootdg       online
dsk4         sliced    dsk4         rootdg       online
dsk5         sliced    dsk5         rootdg       online
Remove the disks from the rootdg disk group using their disk media names (in the DISK column):
#
voldg rmdisk root02 swap02 dsk1g-AdvFS dsk1h
Remove the disks from LSM control using their disk access names (in the DEVICE column):
#
voldisk rm dsk1a dsk1b dsk1g dsk1h
See Section 4.1.2 for more information on reinitializing these disks for LSM.
5.4.7 Recovering an LSM Volume
You might need to recover an LSM volume that has become disabled. Alert icons and the Alert Monitor window might provide information when an LSM volume recovery is needed. (See the System Administration manual for more information about the Alert Monitor.) Recovering an LSM volume starts the disabled volume and, if applicable, resynchronizes mirror plexes or RAID 5 parity.
To recover an LSM volume, enter the following command, specifying either the volume or a disk, if the disk is used by several volumes:
#
volrecover [-g disk_group] -sb volume|disk
The -s option starts all disabled volumes, and the -b option runs the command in the background.
For example, to recover an LSM volume called vol01, enter:
#
volrecover -sb vol01
To recover all LSM objects (subdisks, plexes, or volumes) that use a disk called dsk5, enter:
#
volrecover -sb dsk5
If you do not specify a disk group, LSM volume name, or disk name, all volumes are recovered. If recovery of an LSM volume is not possible, restore the LSM volume from backup. See Section 5.4.3 for more information.
5.4.8 Renaming an LSM Volume
You can rename an LSM volume. The new LSM volume name must be unique within the disk group. If the LSM volume contains a file system or is part of an AdvFS domain, you must also update the /etc/fstab file or the /etc/fdmns directory, respectively.
To rename an LSM volume, enter:
#
voledit rename old_volume new_volume
Note
If you do not update the relevant files in the
/etc
directory before the system is restarted, subsequent commands using a volume's previous name will fail.
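For example, to rename a hypothetical volume called vol1 to datavol:
# voledit rename vol1 datavol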
5.4.9 Changing the Size of an LSM Volume
You can increase or decrease the size of an LSM volume; for example, you can increase the size of the primary swap space volume. In LSM, increasing the size of a volume is called growing the volume, and decreasing its size is called shrinking the volume.
Caution
You must be sure that the application using the LSM volume can respond appropriately to growing, or especially shrinking, LSM volumes. You might have to perform additional steps specific to the application using the volume, either before or after changing its size. Refer to the documentation for the application using the volume for more information.
You can increase the size of a volume by specifying either an amount to grow by or a size to grow to. The size of any log plexes remains unchanged.
Notes on File Systems
If the volume is used for an AdvFS file system, do not increase the space in the domain by growing an underlying LSM volume. Instead, create a new LSM volume and add it to the domain. See AdvFS Administration for more information on increasing the size of a domain.
If the volume is used for a file system other than AdvFS, you must perform additional steps specific to the file system type for the file system to take advantage of increased space. See System Administration for more information on increasing a file system other than AdvFS.
If an application other than a file system uses the volume, you must make any necessary application modifications after the grow operation is complete.
When growing a volume, you must use the -f option to force the change. You can use the -b option to perform the operation in the background. This is helpful if the growto or growby length specified is substantial and if the volume uses mirror plexes or RAID 5, because it will undergo resynchronization as a result of the grow operation.
To grow a volume:
By a specific amount, enter:
#
volassist [-g disk_group] -f [-b] growby volume length_change
To a new size, enter:
#
volassist [-g disk_group] -f [-b] growto volume new_length
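For example, to grow a hypothetical volume called vol1 in the dg1 disk group by 1 GB, performing the operation and any resulting resynchronization in the background (the length here is assumed to be in 512-byte sectors; 2097152 sectors equal 1 GB):
# volassist -g dg1 -f -b growby vol1 2097152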
You can decrease the size of a volume by specifying either an amount to shrink by, or a size to shrink to. The size of any log plexes remains unchanged.
Cautions
If the volume is used for an AdvFS file system, do not decrease the space in the domain by shrinking an underlying LSM volume. Instead, remove a volume from the domain (in AdvFS, a volume can be a disk, disk partition, or an LSM volume). See AdvFS Administration for more information on removing volumes from a domain.
If the volume is used for a file system other than AdvFS, you must perform additional steps specific to the file system type before shrinking the volume, so that the file system can recognize and safely adjust to the decreased space.
There is no direct way to shrink a UFS file system other than backing up the data, destroying the original file system, creating a new file system of the smaller size, and restoring the data into the new file system.
See System Administration for more information.
If an application other than a file system uses the volume, you must make any necessary application modifications before shrinking the LSM volume.
When shrinking a volume, you must use the -f option to force the change.
To shrink a volume:
By a specific amount, enter:
#
volassist [-g disk_group] -f [-b] shrinkby volume length_change
To a new size, enter:
#
volassist [-g disk_group] -f [-b] shrinkto volume new_length
5.4.10 Changing LSM Volume Permission, User, and Group Attributes
By default, the device special files for LSM volumes are created with read and write permissions granted only to the owner. Databases or other applications that perform raw I/O might require device special files to have other settings for the permission, user, and group attributes.
You must use LSM commands to change the permission, user, and group attributes for LSM volumes. The LSM commands ensure that settings for these attributes are stored in the LSM database, which keeps track of all settings for LSM objects.
Do not change the permission, user, or group attributes by using the chmod, chown, or chgrp commands directly on the device special files associated with LSM volumes. These standard UNIX commands do not store the required values in the LSM configuration database.
To change Tru64 UNIX user, group, and permission attributes, enter:
#
voledit [-g disk_group] set \
user=username group=groupname mode=permission volume
The following example changes the user, group, and permission attributes for an LSM volume called vol1:
#
voledit set user=new_user group=admin mode=0600 vol1
5.5 Managing Plexes
The following sections describe how to use LSM commands to manage plexes.
5.5.1 Displaying Plex Information
You can display information about all plexes or about one specific plex.
5.5.1.1 Displaying General Plex Information
To display general information for all plexes, enter:
#
volprint -pt
Information similar to the following is displayed:
Disk group: rootdg

PL NAME       VOLUME    KSTATE    STATE    LENGTH   LAYOUT   NCOL/WID  MODE
pl tst-01     tst       ENABLED   ACTIVE   262144   CONCAT   -         RW
pl tst-02     tst       DETACHED  STALE    262144   CONCAT   -         RW
pl vol5-01    vol5      ENABLED   ACTIVE   409696   RAID     8/32      RW
pl vol5-02    vol5      ENABLED   LOG      2560     CONCAT   -         RW
5.5.1.2 Displaying Detailed Plex Information
To display detailed information about a specific plex, enter:
#
volprint -lp plex
Information similar to the following is displayed:
Disk group: rootdg

Plex:    p1
info:    len=500
type:    layout=CONCAT
state:   state=EMPTY kernel=DISABLED io=read-write
assoc:   vol=v1 sd=dsk4-01
flags:   complete

Plex:    p2
info:    len=1000
type:    layout=CONCAT
state:   state=EMPTY kernel=DISABLED io=read-write
assoc:   vol=v2 sd=dsk4-02
flags:   complete

Plex:    vol_mir-01
info:    len=256
type:    layout=CONCAT
state:   state=ACTIVE kernel=ENABLED io=read-write
assoc:   vol=vol_mir sd=dsk2-01
flags:   complete

Plex:    vol_mir-02
info:    len=256
type:    layout=CONCAT
state:   state=ACTIVE kernel=ENABLED io=read-write
assoc:   vol=vol_mir sd=dsk3-01
flags:   complete

Plex:    vol_mir-03
info:    len=0 (sparse)
type:    layout=CONCAT
state:   state=ACTIVE kernel=ENABLED io=read-write
assoc:   vol=vol_mir sd=(none)
flags:
logging: logsd=dsk3-02 (enabled)
5.5.2 Adding a Data Plex
You can add a data plex to a volume to create a mirror data plex. You cannot create a mirror data plex on a disk that already contains a data plex for the volume.
The data from the original plex is copied to the added plex, and the plexes are synchronized. This process can take a long time depending on the size of the volume, so you should run the command in the background (using the & operator).
Note
Adding a data plex does not add a DRL plex to the volume. It is highly recommended that volumes with mirror plexes have a DRL plex. See Section 5.5.3 for more information on adding a log plex to a volume.
To add a data plex, enter:
#
volassist mirror volume [disk] &
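For example, to add a mirror plex to a hypothetical volume named vol1 and place the new plex on the LSM disk dsk8 (an assumed disk name that does not already contain a data plex for vol1), you might enter:
#
volassist mirror vol1 dsk8 &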
5.5.3 Adding a Log Plex
You can add a log plex (a DRL plex or a RAID 5 log plex) to a volume that has mirrored data plexes or a RAID 5 data plex. However, if the volume is used for secondary swap, it should not have a DRL plex. You use the same command to add both DRL plexes and RAID 5 log plexes.
To improve performance, the DRL plex should not be on the same disk as one of the volume's data plexes. To ensure that LSM does not create the DRL plex on the same disk as a data plex, use the volprint -ht command to display volume information and identify an LSM disk that is not part of the volume.
To add a log plex to a volume, enter:
#
volassist addlog volume [disk]
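For example, to add a DRL plex to a hypothetical mirrored volume named vol1 and place the log on the LSM disk dsk9 (an assumed disk that does not contain any of the volume's data plexes), you might enter:
#
volassist addlog vol1 dsk9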
5.5.4 Moving Data to a New Plex
You can move the data from a striped or concatenated plex to a new plex to:
Move the LSM volume onto disks with better performance.
Move the LSM volume onto new plexes that use a different data layout type. For example, you can move data from a concatenated plex to a striped plex to improve performance.
Move the LSM volume to different plexes temporarily so you can repair or replace disks in the original plex.
You can perform this operation only on volumes that use concatenated or striped plexes. You cannot move data from a concatenated or striped plex to a RAID 5 plex or from a RAID 5 plex to a concatenated or striped plex.
For a move operation to be successful:
The old plex must be an active part of an active volume.
The new plex cannot be associated with another LSM volume and must be at least as large as the original plex.
Note
If the new plex is larger than the original plex and the original plex contains a file system, the file system will not recognize or use the extra space after the move. You must recreate the file system on the new plex to take advantage of the extra space.
To move data from one plex to another:
Display the size of the plex you want to move:
#
volprint -ht volume
Information similar to the following is displayed:
Disk group: rootdg

V  NAME        USETYPE      KSTATE    STATE    LENGTH   READPOL   PREFPLEX
PL NAME        VOLUME       KSTATE    STATE    LENGTH   LAYOUT    NCOL/WID  MODE
SD NAME        PLEX         DISK      DISKOFFS LENGTH   [COL/]OFF DEVICE    MODE

v  DataVol     fsgen        ENABLED   ACTIVE   204800   SELECT    -
pl DataVol-01  DataVol      ENABLED   ACTIVE   204800   STRIPE    8/128     RW
sd dsk0-01     DataVol-01   dsk0      0        25600    0/0       dsk0      ENA
sd dsk1-01     DataVol-01   dsk1      0        25600    1/0       dsk1      ENA
sd dsk2-01     DataVol-01   dsk2      0        25600    2/0       dsk2      ENA
sd dsk3-01     DataVol-01   dsk3      0        25600    3/0       dsk3      ENA
sd dsk4-01     DataVol-01   dsk4      0        25600    4/0       dsk4      ENA
sd dsk6-01     DataVol-01   dsk6      0        25600    5/0       dsk6      ENA
sd dsk7-01     DataVol-01   dsk7      0        25600    6/0       dsk7      ENA
sd dsk8-01     DataVol-01   dsk8      0        25600    7/0       dsk8      ENA
pl DataVol-02  DataVol      ENABLED   ACTIVE   204800   STRIPE    8/128     RW
sd dsk10-01    DataVol-02   dsk10     0        25600    0/0       dsk10     ENA
sd dsk11-01    DataVol-02   dsk11     0        25600    1/0       dsk11     ENA
sd dsk12-01    DataVol-02   dsk12     0        25600    2/0       dsk12     ENA
sd dsk13-01    DataVol-02   dsk13     0        25600    3/0       dsk13     ENA
sd dsk14-01    DataVol-02   dsk14     0        25600    4/0       dsk14     ENA
sd dsk15-01    DataVol-02   dsk15     0        25600    5/0       dsk15     ENA
sd dsk18-01    DataVol-02   dsk18     0        25600    6/0       dsk18     ENA
sd dsk19-01    DataVol-02   dsk19     0        25600    7/0       dsk19     ENA
pl DataVol-03  DataVol      ENABLED   ACTIVE   LOGONLY  CONCAT    -         RW
sd dsk20-02    DataVol-03   dsk20     0        65       LOG       dsk20     ENA
In this example, the volume has two striped data plexes of 204800 sectors (100 MB).
Ensure there is enough space on other LSM disks to move the plex's data.
Create a new plex with the characteristics you want.
For a concatenated plex, see Section 4.2.1
For a striped plex, see Section 4.2.2
For a striped plex that uses disks on different buses, see Section 4.2.3
Enter the following command line (set to run in the background) to attach the new plex to the volume and move the data from the old plex to the new plex, optionally removing the old plex upon successful completion of the move:
#
volplex [-o rm] mv old_plex new_plex &
The volume remains active and usable during this operation.
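For example, assuming you created a new striped plex named DataVol-04 for the DataVol volume shown above, you might move the data from the DataVol-01 plex and remove the old plex when the move completes by entering the following command. The plex name DataVol-04 is hypothetical; use the name of the plex you created:
#
volplex -o rm mv DataVol-01 DataVol-04 &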
5.5.5 Reattaching a Plex
If you removed a plex from a volume but did not recursively remove the plex and its objects, you can reattach it to the volume.
To reattach a plex to a volume, enter:
#
volplex att volume plex
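For example, to reattach a previously removed plex named vol1-02 (a hypothetical name) to a volume named vol1, you might enter:
#
volplex att vol1 vol1-02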
5.5.6 Removing a Plex
You can remove a plex from an LSM volume to reduce the number of plexes in the volume.
Note
The following restrictions apply:
You cannot remove a RAID 5 data plex from a volume, because it is the volume's only data plex. However, you can remove the volume completely (Section 5.4.6).
If the volume has mirror plexes and you remove all but one, the volume's data is no longer redundant.
If you remove the DRL plex from a volume that has mirror plexes and the system fails, LSM will have to resynchronize the entire contents of the plexes when the system restarts.
If you remove the RAID 5 log plex from a volume that uses a RAID 5 plex and the system fails, LSM will have to read back all the volume data, regenerate the parity for each stripe, and rewrite each stripe in the plex.
To remove a data plex from a volume with mirror plexes:
Dissociate the plex from its volume, optionally using the -o rm option to remove the plex after the dissociation completes:
#
volplex [-o rm] dis plex
If you did not use the -o rm option in step 1, remove the plex:
#
voledit -r rm plex
Removing the plex also removes all associated subdisks in that plex. The disks remain under LSM control, and you can use them for other volumes or remove them from LSM control.
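For example, to dissociate and remove a hypothetical mirror plex named vol1-02 in one step, you might enter:
#
volplex -o rm dis vol1-02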
To remove the log plex from a RAID 5 volume:
Dissociate the log plex from the RAID 5 volume (using the -f option):
#
volplex -f dis log_plex
Remove the plex and its subdisks:
#
voledit -r rm log_plex
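For example, to dissociate and remove the RAID 5 log plex vol5-02 shown in the display in Section 5.5.1.1, you might enter:
#
volplex -f dis vol5-02
#
voledit -r rm vol5-02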
The following sections describe how to use LSM commands to manage subdisks.
5.6.1 Displaying Subdisk Information
You can display information about all subdisks or one specific subdisk.
5.6.1.1 Displaying General Subdisk Information
To display general information for all subdisks, enter:
#
volprint -st
Information similar to the following is displayed:
Disk group: rootdg

SD NAME        PLEX         DISK      DISKOFFS LENGTH   [COL/]OFF DEVICE    MODE
sd dsk2-01     vol_mir-01   dsk2      0        256      0         dsk2      ENA
sd dsk3-02     vol_mir-03   dsk3      0        65       LOG       dsk3      ENA
sd dsk3-01     vol_mir-02   dsk3      65       256      0         dsk3      ENA
sd dsk4-01     p1           dsk4      17       500      0         dsk4      ENA
sd dsk4-02     p2           dsk4      518      1000     0         dsk4      ENA
5.6.1.2 Displaying Detailed Subdisk Information
To display detailed information about a specific subdisk, enter:
#
volprint -l subdisk
The following example shows information about a subdisk called dsk12-01:
Disk group: rootdg

Subdisk:  dsk12-01
info:     disk=dsk12 offset=0 len=2560
assoc:    vol=vol5 plex=vol5-02 (offset=0)
flags:    enabled
device:   device=dsk12 path=/dev/disk/dsk12g diskdev=82/838
5.6.2 Joining Subdisks
You can join two or more subdisks to form a single, larger subdisk. Subdisks can be joined only if they belong to the same plex and occupy adjacent regions of the same disk. For a volume with striped plexes, the subdisks must be in the same column. The joined subdisk can have a new subdisk name or retain the name of one of the subdisks being joined.
To join subdisks, enter:
#
volsd join subdisk1 subdisk2 new_subdisk
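For example, assuming two hypothetical subdisks named dsk6-01 and dsk6-02 belong to the same plex and occupy adjacent regions of the same disk, you might join them into a single subdisk that retains the name dsk6-01 by entering:
#
volsd join dsk6-01 dsk6-02 dsk6-01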
5.6.3 Splitting Subdisks
You can divide a subdisk into two smaller subdisks. After the split, you can move the data in the smaller subdisks to different disks, which is useful for reorganizing volumes or improving performance. The new, smaller subdisks occupy adjacent regions within the space on the disk that the original subdisk occupied.
You must specify a size for the first subdisk; the second subdisk consists of the rest of the space in the original subdisk.
If the subdisk to be split is associated with a plex, both of the resultant subdisks are associated with the same plex. You cannot split a log subdisk.
To split a subdisk and assign each subdisk a new name, enter:
#
volsd -s size split original_subdisk new_subdisk1 new_subdisk2
To split a subdisk and retain the original name for the first subdisk and assign a new name to the second subdisk, enter:
#
volsd -s size split original_subdisk new_subdisk
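For example, assuming a hypothetical 2000-sector subdisk named dsk7-01, you might split it into a 1000-sector subdisk that keeps the original name and a second subdisk named dsk7-02 (the size here is assumed to be in sectors) by entering:
#
volsd -s 1000 split dsk7-01 dsk7-02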
5.6.4 Moving Subdisks to a Different Disk
You can move the data in subdisks to a different disk to improve performance. The disk space occupied by the data in the original subdisk is returned to the free space pool.
Ensure that the following conditions are met before you move data in a subdisk:
Both source and destination subdisks must be the same size.
The source subdisk must be part of an active plex on an active volume.
The destination subdisk must not be associated with any other plex.
To move data from one subdisk to another, enter:
#
volsd mv source_subdisk target_subdisk
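For example, to move the data from a hypothetical subdisk named dsk9-01 to an unassociated subdisk of the same size named dsk10-01, you might enter:
#
volsd mv dsk9-01 dsk10-01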
5.6.5 Removing a Subdisk
You can remove a subdisk that is not associated with or needed by an LSM volume. Removing a subdisk returns the disk space to the free space pool in the disk group. To remove a subdisk that is still associated with a plex, you must first dissociate it from the plex, then remove it.
To remove a subdisk:
Display information about the subdisk to identify any volume or plex associations:
#
volprint -l subdisk
If the subdisk is associated with a volume, information similar to the following is displayed:
Disk group: rootdg

Subdisk:  dsk9-01
info:     disk=dsk9 offset=0 len=2048
assoc:    vol=newVol plex=myplex (column=1 offset=0)
flags:    enabled
device:   device=dsk9 path=/dev/disk/dsk9g diskdev=82/646
If the subdisk has no associations to any plex or volume, information similar to the following is displayed:
Disk group: dg1

Subdisk:  dsk5-01
info:     disk=dsk5 offset=0 len=2046748
assoc:    vol=(dissoc) plex=(dissoc)
flags:    enabled
device:   device=dsk5 path=/dev/disk/dsk5g diskdev=79/390
Do one of the following to remove the subdisk:
If the subdisk is associated with a volume, enter:
#
volsd -o rm dis subdisk
If the subdisk is not part of a volume and has no associations, enter:
#
voledit [-g disk_group] rm subdisk
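For example, to remove the dissociated subdisk dsk5-01 shown in the previous display, which belongs to the dg1 disk group, you might enter:
#
voledit -g dg1 rm dsk5-01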