This chapter describes how to:
Prepare an existing LSM configuration on a system running Tru64 UNIX Version 4.0 or higher for reuse when the system is upgraded to Tru64 UNIX Version 5.1 or higher (Section 3.1), which includes:
Increasing the size of BCLs
Deporting disk groups that you do not want to upgrade
Backing up the current LSM configuration
Install or upgrade the LSM software (Section 3.2)
Install the LSM license (Section 3.3), which is necessary to:
Create volumes with striped or RAID 5 plexes
Create volumes with mirror plexes
Use the other LSM interfaces
Perform the following optional LSM postinstallation tasks:
Initialize the LSM software (Section 3.4.1)
Optimize the configuration database (Section 3.4.2)
Create an alternate boot disk (Section 3.4.3)
Configure the LSM hot-sparing feature (Section 3.4.4)
If you are currently using LSM on a system running Tru64 UNIX Version 4.0D or higher and you want to preserve your current LSM configuration for use with Tru64 UNIX Version 5.0 or higher, you must:
Increase the size of any block-change logs (BCLs) to at least two blocks
Optionally, deport any disk groups that you do not want to upgrade
Back up the current LSM configuration
3.1.1 Increasing the Size of BCLs
The dirty-region logging (DRL) feature is a replacement for the block-change logging (BCL) feature that was supported in Tru64 UNIX Version 4.0D. This section applies only if you are upgrading a system with an existing LSM configuration from Tru64 UNIX Version 4.0D to Version 5.0 or higher.
When you perform an upgrade installation, BCLs are automatically converted to DRLs if the BCL subdisk is at least two blocks. If the BCL subdisk is one block, logging is disabled after the upgrade installation.
Before you upgrade, increase the size of the BCLs to at least two blocks (for standalone systems) or 65 blocks (for a TruCluster environment). If this is not possible, then after the upgrade you can enable DRL in those volumes with the volassist addlog command (Section 5.5.3). The volassist addlog command creates a DRL of at least 65 blocks by default.
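The pre-upgrade check can be scripted by scanning saved volprint -ht output for logging subdisks smaller than two blocks. The following sketch uses hypothetical subdisk names and sample rows in a here-document; on a live system you would feed in real volprint -ht output instead.

```shell
# Find log subdisks whose LENGTH field (field 6 in `volprint -ht` sd rows)
# is smaller than 2 blocks -- these BCLs would lose logging after upgrade.
# The sample rows below are hypothetical stand-ins for real output.
small_logs=$(awk '$1 == "sd" && $2 ~ /log/ && $6 < 2 {print $2}' <<'EOF'
sd vol01-log vol01-01 disk01 0 1 0 dsk2 ENA
sd vol02-log vol02-01 disk02 0 2 0 dsk3 ENA
EOF
)
echo "BCLs too small for DRL conversion: $small_logs"
```

Any subdisk this filter reports should have its BCL grown before the upgrade, or DRL re-enabled afterward with volassist addlog.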
3.1.2 Deporting Disk Groups
LSM Version 5.1 or higher has an internal metadata format that is not compatible with the metadata format of LSM Version 4.0D. If LSM detects an older metadata format during the upgrade procedure, LSM automatically upgrades the old format to the new format. If you do not want certain disk groups to be upgraded, you must deport them before you upgrade LSM.
To deport a disk group, enter:
#voldg deport disk_group ...
If you later import a deported disk group, LSM upgrades the metadata
format.
3.1.3 Backing Up a Previous LSM Configuration
Backing up the LSM configuration creates a file that describes all the LSM objects in all disk groups. In case of a catastrophic failure, LSM can use this file to restore the LSM configuration. You might also want to back up the volume data before performing the upgrade. See Section 5.4.2 for information on backing up volumes.
To back up an LSM configuration:
Start the backup procedure:
#volsave [-d dir]
Information similar to the following is displayed:
LSM configuration being saved to /usr/var/lsm/db/LSM.date.LSM_hostidname
LSM Configuration saved successfully to /usr/var/lsm/db/LSM.date.LSM_hostidname
By default, LSM configuration information is saved to a time-stamped file called a description set in the /usr/var/lsm/db directory. In the previous example, date is the current date and LSM_hostidname is, by default, the host name.
Make a note of the location and name of the file. You will need this information to restore the LSM configuration after you upgrade the LSM software and the Tru64 UNIX operating system software.
Optionally, confirm that the LSM configuration was saved:
#ls /usr/var/lsm/db/LSM.date.LSM_hostidname
Information similar to the following is displayed:
header rootdg.d volboot voldisk.list
Save the LSM configuration to tape or other removable media.
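The save-and-archive steps can be combined in a small script that parses the description-set path out of captured volsave output and archives it with tar. The sample output line, date stamp, and tape device below are hypothetical.

```shell
# Extract the description-set directory from (sample) volsave output,
# then show how it would be archived to tape with tar.
# The path and the /dev/tape device are hypothetical examples.
SAMPLE_OUTPUT='LSM Configuration saved successfully to /usr/var/lsm/db/LSM.20001101.lsmhost'
saved_dir=$(echo "$SAMPLE_OUTPUT" | sed 's/.*saved successfully to //')
echo "tar -cvf /dev/tape $saved_dir"
```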
3.2 Installing or Upgrading LSM
The LSM software resides in three optional subsets. These are located on the CD-ROM containing the base operating system software for the Tru64 UNIX product kit. In the following list of subset names, nnn indicates the operating system version:
OSFLSMBINnnn
Provides the kernel modules to build the kernel with LSM drivers. This software subset supports uniprocessor, SMP, and realtime configurations. This subset requires Standard Kernel Modules.
OSFLSMBASEnnn
Contains the LSM administrative commands and tools required to manage LSM. This subset is mandatory if you install LSM during a Tru64 UNIX Full Installation. This subset requires LSM Kernel Build Modules.
OSFLSMX11nnn
Contains the LSM Motif-based graphical user interface (GUI) management tool and related utilities. This subset requires the Basic X Environment.
You can install the LSM subsets either at the same time or after you install the mandatory operating system software.
If you install the system's root file system and /usr, /var, and swap partitions directly into LSM volumes, the LSM subsets are installed automatically.
See the Installation Guide for more information on installing and upgrading the LSM software and the operating system software.
Note
If a file system was configured in an LSM volume, you must start LSM and its volumes after booting the system to single-user mode, before proceeding with the Tru64 UNIX upgrade installation.
3.3 Installing the LSM License
The LSM license that comes with the base operating system allows you to create LSM volumes that use a single concatenated plex. All other LSM features, such as creating LSM volumes that use striped, mirrored, and RAID 5 plexes and using the LSM GUIs, require an LSM license.
The LSM license is supplied in the form of a product authorization key (PAK) called LSM-OA. You load the LSM-OA PAK into the Tru64 UNIX License Management Facility (LMF). If you need to order an LSM license, contact your service representative.
See the lmf(8) reference page for more information on the License Management Facility.
3.4 Performing Postinstallation Tasks
After you install or upgrade LSM:
If applicable, initialize LSM. See Section 3.4.1 for more information.
Optionally, create an alternate boot disk. This converts the boot disk partitions to LSM volumes. See Section 3.4.3.2 for more information.
Optionally, enable the LSM hot-sparing feature. This directs LSM to transfer data (in a mirrored or RAID 5 plex) from a failed disk to a spare disk or to free disk space. See Section 3.4.4 for more information.
If you upgraded from Version 4.x of the operating system, you might want to modify the configuration database layout to take advantage of the automatic configuration database management in Version 5.x. See Section 3.4.2 for more information.
If you performed a full installation with the root file system and /usr, /var, and swap partitions installed directly into LSM volumes, or you performed an upgrade installation on a system that was previously running LSM, then LSM is automatically established.
If you were running LSM previously and performed a full installation but did not install the root file system and /usr, /var, and swap partitions directly into LSM volumes, then you must initialize LSM.
Initializing LSM:
Creates the rootdg disk group. You should configure at least two unused disks or partitions in the rootdg disk group to ensure there are multiple copies of the LSM configuration database. You do not have to use the rootdg disk group for your volumes, but it must exist before you can create other disk groups. See Chapter 2 if you need help choosing disks or partitions for the rootdg disk group.
If available, reestablishes an existing LSM configuration.
Adds entries to the /etc/inittab file to automatically start LSM when the system boots.
Creates the /etc/vol/volboot file, which contains LSM configuration information.
Creates LSM files and directories. (See Section 3.4.1.2 for a description of these files and directories.)
Starts the vold and voliod daemons.
To initialize LSM:
Verify that the LSM subsets are installed:
#setld -i | grep LSM
LSM subset information similar to the following should display, where nnn indicates the operating system revision:
OSFLSMBASEnnn installed Logical Storage Manager (System Administration)
OSFLSMBINnnn installed Logical Storage Manager Kernel Modules (Kernel
Build Environment)
OSFLSMX11nnn installed Logical Storage Manager GUI (System Administration)
If the LSM subsets do not display with a status of installed, use the setld command to install them. See the Installation Guide for more information on installing software subsets.
Verify LSM drivers are configured into the kernel:
#devswmgr -getnum driver=LSM
LSM driver information similar to the following is displayed:
Device switch reservation list
(*=entry in use)
driver name instance major
------------------------------- -------- -----
LSM 4 43
LSM 3 42
LSM 2 41*
LSM 1 40*
If LSM driver information is not displayed, you must rebuild the kernel using the doconfig command. See the Installation Guide for more information on rebuilding the kernel.
Initialize LSM with the volsetup command.
To reestablish an existing configuration, enter:
#volsetup
If there is no existing LSM configuration, specify at least two disks or partitions for LSM to use for the rootdg disk group:
#volsetup {disk|partition} {disk|partition ...}
For example, to initialize LSM and use disks called dsk4 and dsk5 to create the rootdg disk group, enter:
#volsetup dsk4 dsk5
If you omit a disk or partition name, the volsetup script prompts you for it.
If the volsetup command displays an error message that the initialization failed, you might need to reinitialize the disk. See the Disk Configuration GUI for more information about reinitializing a disk.
You run the volsetup script only once. To add more disks to the rootdg disk group, use the voldiskadd command. See Section 5.2.2 for information on adding disks to a disk group.
3.4.1.1 Verifying That LSM Is Initialized (Optional)
Normally, you do not need to verify that LSM was initialized. If the initialization fails, the system displays error messages indicating the problem.
If you want to verify that LSM is initialized:
Verify that the disks were added to the rootdg disk group:
#volprint
Information similar to the following is displayed that shows dsk4 and dsk5 are part of the rootdg disk group:
Disk group: rootdg

TY NAME    ASSOC   KSTATE  LENGTH   PLOFFS  STATE  TUTIL0  PUTIL0
dg rootdg  rootdg  -       -        -       -      -       -
dm dsk4    dsk4    -       1854536  -       -      -       -
dm dsk5    dsk5    -       1854536  -       -      -       -
Verify that the /etc/inittab file was modified to include LSM entries:
#grep LSM /etc/inittab
Information similar to the following is displayed:
lsmr:s:sysinit:/sbin/lsmbstartup -b /dev/console 2>&1 ##LSM
lsm:23:wait:/sbin/lsmbstartup -n /dev/console 2>&1 ##LSM
vol:23:wait:/sbin/vol-reconfig -n /dev/console 2>&1 ##LSM
Verify that the /etc/vol/volboot file was created:
#/sbin/voldctl list
Information similar to the following is displayed:
Volboot file
version: 3/1
seqno:   0.4
hostid:  test.abc.xyz.com
entries:
Verify that the vold daemon is enabled:
#voldctl mode
Information similar to the following is displayed:
mode: enabled
Verify that two or more voliod daemons are running:
#voliod
Information similar to the following is displayed:
2 volume I/O daemons are running
There should be one daemon for each CPU in the system or a minimum of two. If the output shows only one daemon running, enter the following command, where n is the number of daemons to set:
#voliod set n
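The daemon-count rule (one daemon per CPU, minimum two) can be checked mechanically. In this sketch the voliod output line and the CPU count are hypothetical sample values; on a live system you would substitute the real voliod output and processor count.

```shell
# Parse the running daemon count from sample `voliod` output and
# compare it with the recommended count: one per CPU, minimum two.
# VOLIOD_OUTPUT and ncpus are hypothetical sample values.
VOLIOD_OUTPUT='1 volume I/O daemons are running'
ncpus=4

running=$(echo "$VOLIOD_OUTPUT" | awk '{print $1}')
want=$ncpus
if [ "$want" -lt 2 ]; then want=2; fi
if [ "$running" -lt "$want" ]; then
    echo "voliod set $want"   # the command you would run to add daemons
fi
```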
3.4.1.2 LSM Files, Directories, Drivers, and Daemons
After you install and initialize LSM, several new files, directories, drivers, and daemons are present on the system. These are described in the following sections.
3.4.1.2.1 LSM Files
The /dev directory contains the device special files (Table 3-1) that LSM uses to communicate with the kernel.
Table 3-1: LSM Device Special Files
| Device Special File | Function |
| /dev/volconfig | Allows the vold daemon to make configuration requests to the kernel |
| /dev/volevent | Used by the voliotrace command to view and collect events |
| /dev/volinfo | Used by the volprint command to collect LSM object status information |
| /dev/voliod | Provides an interface between the volume extended I/O daemon (voliod) and the kernel |
The /etc/vol directory contains the volboot file and the subdirectories (Table 3-2) for LSM use.
Table 3-2: LSM /etc/vol/ Subdirectories
| Directory | Function |
| reconfig.d | Provides temporary storage during encapsulation of existing file systems. Instructions for the encapsulation process are created here and used during the reconfiguration. |
| tempdb | Used by the volume configuration daemon (vold) while creating the configuration database during startup and while updating configuration information. |
| vold_diag | Provides a socket portal for diagnostic commands to communicate with the vold daemon. |
| vold_request | Provides a socket portal for LSM commands to communicate with the vold daemon. |
The /dev directory contains the subdirectories (Table 3-3) for volume block and character devices.
Table 3-3: LSM Block and Character Device Subdirectories
| Directory | Contains |
/dev/rvol |
LSM raw volumes for the root disk group rootdg and for root and user disk groups. |
/dev/vol |
LSM block device volumes for the root disk group rootdg and directories for root and user disk groups. |
There are two LSM device drivers:
volspec--The volume special device driver. Communicates with the LSM device special files. This is not a loadable driver; it must be present at boot time.
voldev--The volume device driver. Communicates with LSM volumes. Provides an interface between LSM and the physical disks.
There are two LSM daemons:
vold--The Volume Configuration Daemon. This daemon is responsible for maintaining configurations of disks and disk groups. It also:
Takes requests from other utilities for configuration changes
Communicates change requests to the kernel
Modifies configuration information stored on disk
Initializes LSM when the system starts
voliod--The Volume Extended I/O Daemon. This daemon performs the functions of a utility and a daemon.
As a utility, voliod:
Returns the number of running volume I/O daemons
Starts more daemons when necessary
Removes some daemons from service when they are no longer needed
As a daemon, voliod:
Schedules I/O requests that must be retried
Schedules writes that require logging (for DRL and RAID 5 log plexes)
3.4.2 Optimizing the LSM Configuration Databases (Optional)
If you restored a previous (Version 4.x) LSM configuration on a system that you upgraded to Version 5.1, you can modify the configuration databases to allow LSM to automatically manage their number and placement.
Note
This procedure is an optimization and is not required.
In Version 4.x, you had to explicitly configure between four and eight disks per disk group with enabled copies of the configuration database. In Version 5.x, all disks should be configured to contain copies of the database, and LSM automatically maintains the appropriate number of enabled copies. The distinction between enabled and disabled copies is as follows:
Disabled--The disk's private region is configured to contain a copy of the configuration database, but this copy might be dormant (inactive). LSM enables disabled copies as needed; for example, when a disk with an enabled copy is removed or fails.
Enabled--The disk's private region is configured to contain a copy of the configuration database, and this copy is active. All LSM configuration changes are recorded in these copies of the configuration database as they occur.
You should configure the private regions on all your LSM disks to contain one copy of the configuration database unless you have a specific reason for not doing so, such as:
The disk is very old or slow.
The disk is on a bus that is very heavily used.
The private region is too small (less than 4096 blocks) to contain a copy of the configuration database (such as disks that have been migrated from earlier releases of LSM, with much smaller private regions).
There is some other significant reason why you determine the disk should not contain a copy.
Enabling the configuration database does not use additional space on the disk; it merely sets the number of enabled copies in the private region to 1.
To set the number of configuration database copies to 1, enter:
#voldisk moddb disk nconfig=1
Disk groups containing three or fewer disks should be configured so that each disk contains two copies of the configuration database to provide sufficient redundancy. This is especially important for systems with a small rootdg disk group and one or more larger secondary disk groups.
To set the number of configuration database copies to 2, enter:
#voldisk moddb disk nconfig=2
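The small-group rule above can be folded into one loop: choose nconfig from the number of disks in the group (two copies each for three or fewer disks, otherwise one) and emit the voldisk commands. The disk names are hypothetical, and printing the command text keeps this a dry run.

```shell
# Choose nconfig from the disk-group size: 2 copies per disk for groups
# of three or fewer disks, 1 copy otherwise, then print the commands.
# The disk names are hypothetical examples; nothing is modified.
set -- dsk10 dsk11 dsk12
if [ "$#" -le 3 ]; then nconfig=2; else nconfig=1; fi
for disk in "$@"; do
    echo "voldisk moddb $disk nconfig=$nconfig"
done
```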
3.4.3 Creating an Alternate Boot Disk
You can use LSM to create an alternate boot disk from which the system can boot if the primary boot disk fails. To do so, you must:
Use the LSM encapsulation procedure to configure each partition on the boot disk into an LSM volume that uses a concatenated plex. You must also encapsulate the swap space partition if it is not on the boot disk. Encapsulation converts each partition to an LSM volume.
Add a mirror plex to the volume to create copies of the data in the boot disk partitions
Note
To facilitate recovery of environments that use LSM, you can use the bootable tape utility. This utility enables you to build a bootable standalone system kernel on magnetic tape. The bootable tape preserves your local configuration and provides a basic set of the LSM commands you will use during restoration. Refer to the btcreate(8) reference page and the System Administration guide, or the online help for the SysMan Menu boot_tape option.
The following restrictions apply when you create LSM volumes for boot disk partitions:
The system cannot be part of a TruCluster cluster.
You must create LSM volumes for the root file system and the primary swap space partition at the same time. They do not have to be on the same disk.
The LSM volumes are created in the rootdg disk group and have the following names:
rootvol--Assigned to the volume created for the root file system. Do not change this name, move the rootvol volume out of the rootdg disk group, or change the assigned minor device number of 0.
swapvol--Assigned to the volume created for the swap space partition. Do not change this name, move the swapvol volume out of the rootdg disk group, or change the assigned minor device number of 1.
All other partitions are assigned an LSM volume name that matches the original partition name.
The partition tables for the boot disk (and swap disk, if the swap space partition is not on the boot disk) must have at least one unused partition for the LSM private region, which cannot be the a or c partition. LSM requires only the partition-table entry; it does not need the disk space associated with the partition.
You need a separate disk (preferably on a different bus) to create the mirror plexes. You should not create mirror plexes on the same disk. By definition, creating a mirror plex copies the data onto a different disk for redundancy in case of primary disk failure. The disk you use for the mirror plexes:
Must not be under LSM control.
Must have a disk label with all partitions marked unused.
See the disklabel(8) reference page for more information.
Must be as large as the partitions on the boot disk plus the size of the private region, which by default is 4096 blocks.
If the swap space partition is not on the boot disk, you need an additional disk for the swap space mirror plex that meets the first two requirements and is as large as the swap space partition plus the size of the private region, which by default is 4096 blocks.
See Section 2.3 if you need help choosing a disk to use for the mirror.
3.4.3.2 Encapsulating Boot Disk Partitions
Encapsulating the boot disk configures each partition on the boot disk in an LSM volume that uses concatenated plexes. The steps to encapsulate the boot disk partitions are the same whether you are using the UNIX File System (UFS) or the Advanced File System (AdvFS). The encapsulation process changes the following files:
If you are using UFS, the /etc/fstab file is changed to use LSM volumes instead of disk partitions.
If you are using AdvFS, the /etc/fdmns/* directory is updated to change domain directories that have disk partitions associated with the boot disk.
The /etc/sysconfigtab file is changed to update the swapdevice entry to use LSM volumes and to set the lsm_rootdev_is_volume entry to 1.
The bootdef_dev environment variable is changed to reflect the alternate boot disk (volume).
Note
The boot disk encapsulation procedure requires that you reboot the system.
To encapsulate the boot disk:
Log in as root.
Identify the name of the boot disk:
#sizer -r
Information similar to the following is displayed:
dsk0
Identify the name of the disk on which the swap space partition is located:
#swapon -s
Information similar to the following is displayed:
Swap partition /dev/disk/dsk0b (default swap):
Allocated space: 20864 pages (163MB)
In-use space: 234 pages ( 1%)
Free space: 20630 pages ( 98%)
Total swap allocation:
Allocated space: 20864 pages (163.00MB)
Reserved space: 7211 pages ( 34%)
In-use space: 234 pages ( 1%)
Available space: 13653 pages ( 65%)
In the previous example, the swap space partition is located in the b partition on disk dsk0.
Encapsulate the boot disk and swap disk if the swap space partition is not on the boot disk:
#volencap boot_disk [swap_disk]
For example, if dsk0 is the name of the boot disk and the swap space partition is located in the b partition on dsk0, enter:
#volencap dsk0
Information similar to the following is displayed:
Setting up encapsulation for dsk0.
- Creating simple disk dsk0d for config area (privlen=4096).
- Creating nopriv disk dsk0a for rootvol.
- Creating nopriv disk dsk0b for swapvol.
- Creating nopriv disk dsk0g.
The following disks are queued up for encapsulation or use by LSM:
dsk0d dsk0a dsk0b dsk0g
You must now run /sbin/volreconfig to perform actual encapsulations.
Optionally, send a warning to users alerting them of the impending system shutdown.
Perform the actual encapsulation, and enter now when prompted to shut down the system:
#volreconfig
Information similar to the following is displayed:
The system will need to be rebooted in order to continue with
LSM volume encapsulation of:
dsk0d dsk0a dsk0b dsk0g
Would you like to either quit and defer encapsulation until later
or commence system shutdown now? Enter either 'quit' or time to be
used with the shutdown(8) command (e.g., quit, now, 1, 5): [quit] now
The system shuts down, performs the encapsulation, and automatically reboots.
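The steps above can be sketched as one dry-run script that builds the encapsulation command line, adding the swap disk argument only when swap is on a different disk. The disk names here are hypothetical; on a live system they would come from sizer -r and swapon -s, and the script only prints the commands it would run.

```shell
# Build the encapsulation command line: add the swap disk only when the
# swap partition is not on the boot disk. dsk0 is a hypothetical name.
BOOT_DISK=dsk0
SWAP_DISK=dsk0

cmd="volencap $BOOT_DISK"
if [ "$SWAP_DISK" != "$BOOT_DISK" ]; then
    cmd="$cmd $SWAP_DISK"
fi
echo "$cmd"
echo "volreconfig"   # performs the encapsulation and reboots
```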
3.4.3.3 Creating Mirror Plexes for Boot Disk Volumes
After you encapsulate the boot disk, each partition is converted to an LSM volume with a single concatenated plex. There is still only one copy of the boot disk data. To complete the process of creating an alternate boot disk, you must add a mirror plex to each boot disk volume. Preferably, the disks for the mirror plexes should be on different buses than the disks that contain the original boot disk volumes.
The following procedure does not add a log plex (DRL) to the root and swap volumes, nor should you add a log plex manually. When the system reboots after a failure, the system automatically recovers the rootvol volume by doing a complete resynchronization. Attaching a log plex degrades the rootvol write performance and provides no benefit in recovery time after a system failure.
To create mirror plexes, do one of the following:
If the swap space partition is on the boot disk, enter:
#volrootmir -a boot_mirror_disk
For example, to create the mirror plex on a disk called dsk3, enter:
#volrootmir -a dsk3
If the swap space partition is not on the boot disk, enter:
#volrootmir -a swap=swap_mirror_disk boot_mirror_disk
See the volrootmir(8) reference page for more information.
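The choice between the two volrootmir forms can be expressed as a small conditional. The disk names and the SWAP_ON_BOOT flag are hypothetical example values, and the sketch only prints the command it would run.

```shell
# Pick the volrootmir invocation based on where the swap partition is.
# dsk3, dsk4, and SWAP_ON_BOOT are hypothetical example values.
BOOT_MIRROR=dsk3
SWAP_MIRROR=dsk4
SWAP_ON_BOOT=no

if [ "$SWAP_ON_BOOT" = yes ]; then
    mirror_cmd="volrootmir -a $BOOT_MIRROR"
else
    mirror_cmd="volrootmir -a swap=$SWAP_MIRROR $BOOT_MIRROR"
fi
echo "$mirror_cmd"
```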
3.4.3.4 Displaying Information for Boot Disk Volumes
To display information for the boot disk and root file system volumes, enter:
#volprint -ht
Information similar to the following is displayed:
Disk group: rootdg

DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN    STATE
V  NAME         USETYPE      KSTATE   STATE    LENGTH    READPOL   PREFPLEX
PL NAME         VOLUME       KSTATE   STATE    LENGTH    LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH    [COL/]OFF DEVICE   MODE

dg rootdg       default      default  0        942157566.1026.hostname
.
.
.
v  rootvol      root         ENABLED  ACTIVE   262144    ROUND     -
pl rootvol-01   rootvol      ENABLED  ACTIVE   262144    CONCAT    -        RW
sd root01-01p   rootvol-01   root01   0        16        0         dsk0a    ENA
sd root01-01    rootvol-01   root01   16       262128    16        dsk0a    ENA
pl rootvol-02   rootvol      ENABLED  ACTIVE   262144    CONCAT    -        RW
sd root02-02p   rootvol-02   root02   0        16        0         dsk3a    ENA
sd root02-02    rootvol-02   root02   16       262128    16        dsk3a    ENA
v  swapvol      swap         ENABLED  ACTIVE   333824    ROUND     -
pl swapvol-01   swapvol      ENABLED  ACTIVE   333824    CONCAT    -        RW
sd swap01-01    swapvol-01   swap01   0        333824    0         dsk0b    ENA
pl swapvol-02   swapvol      ENABLED  ACTIVE   333824    CONCAT    -        RW
sd swap02-02    swapvol-02   swap02   0        333824    0         dsk3b    ENA
v  vol-dsk0g    fsgen        ENABLED  ACTIVE   1450796   SELECT    -
pl vol-dsk0g-01 vol-dsk0g    ENABLED  ACTIVE   1450796   CONCAT    -        RW
sd dsk0g-01     vol-dsk0g-01 dsk0g-AdvFS 0     1450796   0         dsk0g    ENA
pl vol-dsk0g-02 vol-dsk0g    ENABLED  ACTIVE   1450796   CONCAT    -        RW
sd dsk3g-01     vol-dsk0g-02 dsk3g-AdvFS 0     1450796   0         dsk3g    ENA
The previous example shows that there are three volumes:
rootvol
swapvol
vol-dsk0g (which contains the
/usr
and
/var
partitions)
Each volume has two plexes (listed in the rows labeled pl), indicating that the plexes were successfully mirrored on a disk called dsk3.
The subdisks labeled root01-01p and root02-02p are phantom subdisks. Each is 16 sectors long, and they provide write-protection for block 0, which prevents accidental destruction of the boot block and disk label.
3.4.3.5 Displaying AdvFS File Domain Information
If the root file system is AdvFS, the encapsulation process automatically changes the file domain information to reflect volume names instead of disk partitions.
To display the changed names:
Change to the fdmns directory:
#cd /etc/fdmns
Display attributes of all AdvFS file domains:
#showfdmn *
Information similar to the following is displayed that shows the volume name for each AdvFS file domain:
Id Date Created LogPgs Version Domain Name
381dc24d.000f2b60 Mon Nov 1 11:39:41 1999 512 4 root_domain
Vol 512-Blks Free % Used Cmode Rblks Wblks Vol Name
1L 262144 80016 69% on 32768 32768 /dev/vol/rootdg/rootvol
Id Date Created LogPgs Version Domain Name
381dc266.0009fe30 Mon Nov 1 11:40:06 1999 512 4 usr_domain
Vol 512-Blks Free % Used Cmode Rblks Wblks Vol Name
1L 1450784 851008 41% on 32768 32768 /dev/vol/rootdg/vol-dsk0g
3.4.3.6 Displaying UFS File System Information
If the root file system is UFS, the encapsulation process automatically changes the mount information to reflect volume names instead of disk partitions.
To display the volume names for the root file system, enter:
#mount
Information similar to the following is displayed. File systems of the form /dev/vol/disk_group/volume indicate that the file system is encapsulated into LSM volumes.
/dev/vol/rootdg/rootvol on / type ufs (rw)
/dev/vol/rootdg/vol-dsk2g on /usr type ufs (rw)
/proc on /proc type procfs (rw)
3.4.3.7 Displaying Swap Volume Information
To display the volume information for the swap space, enter:
#swapon -s
Information similar to the following is displayed:
Swap partition /dev/vol/rootdg/swapvol (default swap):
Allocated space: 20864 pages (163MB)
In-use space: 234 pages ( 1%)
Free space: 20630 pages ( 98%)
Total swap allocation:
Allocated space: 20864 pages (163.00MB)
Reserved space: 7211 pages ( 34%)
In-use space: 234 pages ( 1%)
Available space: 13653 pages ( 65%)
3.4.3.8 Unencapsulating the Boot Disk
You can unencapsulate the boot disk to convert LSM volumes on the boot disk back to partitions. This process involves rebooting the system.
The unencapsulation process changes the following files:
If the root file system is UFS, the /etc/fstab file is changed to use disk partitions instead of LSM volumes.
If the root file system is AdvFS, the /etc/fdmns/* directory is updated to change domain directories that have disk partitions associated with the boot disk.
The /etc/sysconfigtab file is changed to update the swapdevice entry to not use LSM volumes and to set the lsm_rootdev_is_volume entry to 0.
Before You Begin
If your boot disk is mirrored, you must know which mirror you boot from. Before unencapsulating, you must reboot using this disk.
To unencapsulate the boot disk:
If the boot disk is mirrored, do the following. If not, continue with step 2.
Enter the following command to display volume information:
#volprint -ht
In the output, note the names of the secondary plexes (usually those with the -02 suffix).
Remove the secondary plexes for volumes relating to the boot disk:
#volplex -o rm dis volume-02
For example, to remove the secondary plexes for the rootvol, swapvol and vol-dsk0g volumes, enter:
#volplex -o rm dis rootvol-02
#volplex -o rm dis swapvol-02
#volplex -o rm dis vol-dsk0g-02
The disks that the secondary plexes used remain under LSM control, as members of the rootdg disk group.
Change the boot disk environment variable to point to the remaining boot disk:
#consvar -s bootdef_dev boot_disk
Convert the LSM boot disk volumes back to boot disk partitions. This command reboots the system:
#volunroot -a
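The plex-removal step can be generated mechanically: pull the -02 plex names out of volprint -ht output and emit the matching volplex commands. The sample pl rows below are hypothetical stand-ins for real volprint -ht output, and printing the commands keeps this a dry run.

```shell
# List secondary (-02) plexes from sample `volprint -ht` output and
# print the volplex command that would remove each one.
# The sample rows are hypothetical; feed in real output on a live system.
plexes=$(awk '$1 == "pl" && $2 ~ /-02$/ {print $2}' <<'EOF'
pl rootvol-01 rootvol ENABLED ACTIVE 262144 CONCAT - RW
pl rootvol-02 rootvol ENABLED ACTIVE 262144 CONCAT - RW
pl swapvol-02 swapvol ENABLED ACTIVE 333824 CONCAT - RW
EOF
)
for p in $plexes; do
    echo "volplex -o rm dis $p"
done
```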
3.4.4 Configuring the Automatic Data Relocation (Hot-Sparing) Feature
You can enable the LSM hot-sparing feature to configure LSM to automatically relocate data from a failed disk in a volume that uses either a RAID 5 plex or mirrored plexes. LSM relocates the data to either a reserved disk that you configured as a spare disk or to free disk space in the disk group. LSM does not use a spare disk for normal data storage unless you specify otherwise.
During the hot-sparing procedure, LSM:
Sends mail to the root account (and other specified accounts) with notification about the failure and identifies the affected LSM objects.
Determines which LSM objects to relocate.
Relocates the LSM objects from the failed disk to a spare disk or to free disk space in the disk group. However, LSM will not relocate data if redundancy cannot be preserved. For example, LSM will not relocate data to a disk that contains a mirror of the data.
When a partial disk failure occurs (that is, a failure affecting only some subdisks on a disk), data on the failed portion of the disk is relocated and data in the unaffected portions of the disk remain accessible.
Updates the configuration database with the relocation information.
Ensures that the failed disk space is not recycled as free disk space.
Sends mail to the root account (and other specified accounts) about the action taken.
If you choose not to use the hot-sparing feature, you must investigate and resolve disk failures manually. See Section 6.5 for more information.
3.4.4.1 Enabling the Hot-Sparing Feature
The hot-sparing feature is part of the volwatch daemon. The volwatch daemon has two modes:
Mail-only, which is the default. You can reset the daemon to this mode with the -m option.
Mail-and-spare, which you set with the -s option.
You can specify mail addresses with either option.
To enable the hot-sparing feature, enter:
#volwatch -s [mail-address...]
Note
Only one volwatch daemon can run on a system or cluster node at any time. The daemon's setting applies to the entire system or node; you cannot specify some disk groups to use hot-sparing but not others.
To return the volwatch daemon to mail-only mode, enter:
#volwatch -m [mail-address...]
3.4.4.2 Configuring and Deconfiguring a Spare Disk
You should configure at least one spare disk in each disk group that contains volumes with mirror plexes or a RAID 5 plex.
To configure a disk as a spare, enter:
#voledit [-g disk_group] set spare=on disk
For example, to configure a spare disk called dsk5 in the rootdg disk group, enter:
#voledit set spare=on dsk5
To deconfigure a spare disk, enter:
#voledit [-g disk_group] set spare=off disk
For example, to deconfigure a spare disk called dsk5 in the rootdg disk group, enter:
#voledit set spare=off dsk5
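To check whether a disk is currently configured as a spare, you can display its attributes with the voldisk command; the flags field in the output indicates spare status. For example, for the dsk5 disk used above, enter:
#voldisk list dsk5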
3.4.4.3 Setting Up Mail Notification for Exception Events
The volwatch daemon monitors LSM for exception events. If an exception event occurs, mail is sent to the root account and to any other accounts that you specify:
When you use the rcmgr command to set the VOLWATCH_USERS variable in the /etc/rc.config.common file.
See the rcmgr(8) reference page for more information on the rcmgr command.
On the command line with the volwatch command.
There is a 15-second delay before the event is analyzed and the message is sent. This delay allows a group of related events to be collected and reported in a single mail message.
Example 3-1 shows a sample mail notification sent when LSM detects an exception event.
Example 3-1: Sample Mail Notification
Failures have been detected by the Logical Storage Manager:

failed disks:
disk
.
.
.
failed plexes:
plex
.
.
.
failed log plexes:
plex
.
.
.
failing disks:
disk
.
.
.
failed subdisks:
subdisk
.
.
.

The Logical Storage Manager will attempt to find spare disks, relocate failed subdisks and then recover the data in the failed plexes.
The following describes the sections of the mail notification:
The disk under failed disks specifies disks that appear to have failed completely.
The plex under failed plexes shows plexes that were detached due to I/O failures experienced while attempting to do I/O to subdisks they contain.
The plex under failed log plexes indicates RAID 5 or dirty-region log (DRL) plexes that have experienced failures.
The disk under failing disks indicates a partial disk failure or a disk that is in the process of failing. When a disk has failed completely, the same disk appears under both failed disks and failing disks.
The subdisk under failed subdisks indicates a subdisk in a RAID 5 volume that was detached due to I/O errors.
Example 3-2 shows the mail message sent if a disk fails completely.
Example 3-2: Complete Disk Failure Mail Notification
To: root
Subject: Logical Storage Manager failures on servername.com

Failures have been detected by the Logical Storage Manager:

failed disks:
disk02
failed plexes:
home-02
src-02
mkting-01
failing disks:
disk02
This message shows that a disk called disk02 was failing and was then detached by a failure, and that plexes called home-02, src-02, and mkting-01 were also detached (probably because of the disk failure).
Example 3-3 shows the mail message sent if a disk partially fails.
Example 3-3: Partial Disk Failure Mail Notification
To: root
Subject: Logical Storage Manager failures on servername.com

Failures have been detected by the Logical Storage Manager:

failed disks:
disk02
failed plexes:
home-02
src-02
Example 3-4 shows the mail message sent if data relocation is successful and data recovery is in progress.
Example 3-4: Successful Data Relocation Mail Notification
Volume volume Subdisk subdisk relocated to new_subdisk, but not yet recovered.
If the data recovery is successful, the following message is sent:
Recovery complete for volume volume in disk group disk_group.
If the data recovery is unsuccessful, the following message is sent:
Failure recovering volume in disk group disk_group.
Example 3-5 shows the mail message sent if relocation cannot occur because there is no spare or free disk space.
Example 3-5: No Spare or Free Disk Space Mail Notification
Relocation was not successful for subdisks on disk disk in volume volume in disk group disk_group. No replacement was made and the disk is still unusable.

The following volumes have storage on disk:
volumename
.
.
.
These volumes are still usable, but the redundancy of those volumes is reduced. Any RAID5 volumes with storage on the failed disk may become unusable in the face of further failures.
Example 3-6 shows the mail message sent if data relocation fails.
Example 3-6: Data Relocation Failure Mail Notification
Relocation was not successful for subdisks on disk disk in volume volume in disk group disk_group. No replacement was made and the disk is still unusable.

error message
In this output, error message is a message indicating why the data relocation failed.
Example 3-7 shows the mail message sent if volumes that do not use RAID 5 plexes become unusable due to a disk failure.
Example 3-7: Unusable Volume Mail Notification
The following volumes:
volumename
.
.
.
have data on disk but have no other usable mirrors on other disks. These volumes are now unusable and the data on them is unavailable. These volumes must have their data restored.
Example 3-8 shows the mail message sent if volumes that use RAID 5 plexes become unusable due to a disk failure.
Example 3-8: Unusable RAID 5 Volume Mail Notification
The following RAID5 volumes:
volumename
.
.
.
have storage on disk and have experienced other failures. These RAID5 volumes are now unusable and data on them is unavailable. These RAID5 volumes must have their data restored.
3.4.4.4 Moving Relocated LSM Objects
After a hot-sparing procedure, the relocated LSM objects might not provide the same performance or data layout that existed before the failure. You might want to move the relocated LSM objects to improve performance, to keep the spare disk space free for future hot-sparing needs, or to restore the LSM configuration to its previous state.
Note
This procedure assumes you have identified and initialized a new disk to replace the hot-spare disk. See Section 6.5.5 for information on replacing a failed disk. See Section 4.1.2 for more information on adding disks for LSM use.
To move a subdisk that was relocated as the result of a hot-sparing procedure:
Note the characteristics of the LSM objects before they were relocated. This information is available from the mail notification sent to root. For example, look for a mail notification similar to the following:
To: root
Subject: Logical Storage Manager failures on host teal

Attempting to relocate subdisk disk02-03 from plex home-02.
Dev_offset 0 length 1164 dm_name disk02 da_name dsk2.
The available plex home-01 will be used to recover the data.
Note the new location for the relocated LSM object. This information is available from the mail notification sent to root. For example, look for a mail notification similar to the following:
To: root
Subject: Attempting LSM relocation on host teal

Volume home Subdisk disk02-03 relocated to disk05-01, but not yet recovered.
Move the relocated data to the desired location:
#volevac [-g disk_group] hot_spare_disk new_disk
Move the LSM volume off the hot-spare disk onto the new disk.
In this command, you must use the ! prefix to indicate the source disk:
#volassist [-g disk_group] move volume !hot_spare new_disk