3    Installing, Upgrading, or Uninstalling the LSM Software

This chapter describes how to prepare to upgrade LSM, install the LSM software subsets and license, perform postinstallation tasks, and uninstall the LSM software. It also describes the files, directories, device drivers, and daemons that the LSM software places on the system.

3.1    Preparing to Upgrade LSM

If you are currently using LSM on a system running Tru64 UNIX Version 4.0 and you want to preserve your current LSM configuration for use with Tru64 UNIX Version 5.0 or higher, you must increase the size of any block-change logs (BCLs), deport any disk groups that you do not want upgraded to the new metadata format, and back up the LSM configuration, as described in the following sections.

3.1.1    Increasing the Size of BCLs

The dirty-region logging (DRL) feature is a replacement for the block-change logging (BCL) feature that was supported in Tru64 UNIX Version 4.0. This section applies only if you are upgrading a system with an existing LSM configuration from Tru64 UNIX Version 4.0 to Version 5.0 or higher.

When you perform an upgrade installation, BCLs are automatically converted to DRLs if the BCL subdisk is at least two blocks. If the BCL subdisk is one block, logging is disabled after the upgrade installation.

Note

The conversion of BCLs to DRLs is not reversible.

Before you upgrade, increase the size of the BCLs to at least two blocks for standalone systems or 65 blocks for a TruCluster environment. If this is not possible, then after the upgrade you can enable DRL in those volumes with the volassist addlog command (Section 5.5.3). The volassist addlog command creates a DRL of 65 blocks by default.
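
For example, after the upgrade you could enable DRL on a volume named vol01 in the rootdg disk group, or on a volume named vol02 in a disk group named dg1 (all hypothetical names):

# volassist addlog vol01
# volassist -g dg1 addlog vol02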

3.1.2    Deporting Disk Groups

In Tru64 UNIX Version 5.0 and higher, LSM has an internal metadata format that is not compatible with the metadata format of LSM in Tru64 UNIX Version 4.0. If LSM detects an older metadata format during the upgrade procedure, LSM automatically upgrades the old format to the new format. If you do not want certain disk groups to be upgraded, you must deport them before you upgrade LSM.

To deport a disk group, enter:

# voldg deport disk_group ...

If you later import a deported disk group, LSM upgrades the metadata format.
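
For example, to deport a hypothetical disk group named dg1 before the upgrade:

# voldg deport dg1

After the upgrade, importing the disk group (for example, with voldg import dg1, or with the conversion option described in Section 3.4.5) upgrades its metadata format.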

3.1.3    Backing Up a Previous LSM Configuration

Backing up the LSM configuration creates a file that describes all the LSM objects in all disk groups. In case of a catastrophic failure, LSM can use this file to restore the LSM configuration.

Caution

The following procedure backs up only the configuration, not the volume data. You might also want to back up the volume data before performing the upgrade. See Section 5.4.2 for information on backing up volumes.

To back up an LSM configuration:

  1. Start the backup procedure:

    # volsave [-d dir]
    

    Information similar to the following is displayed:

    LSM configuration being saved to /usr/var/lsm/db/LSM.date.LSM_hostidname
    LSM Configuration saved successfully to /usr/var/lsm/db/LSM.date.LSM_hostidname
    

    By default, LSM configuration information is saved to a time-stamped file called a description set in the /usr/var/lsm/db directory. In the previous example, date is the current date and LSM_hostidname is, by default, the host name. Make a note of the location and name of the file. You will need this information to restore the LSM configuration after you upgrade the LSM software and the Tru64 UNIX operating system software.

  2. Optionally, confirm that the LSM configuration was saved:

    # ls /usr/var/lsm/db/LSM.date.LSM_hostidname
    

    Information similar to the following is displayed:

    header        rootdg.d      volboot       voldisk.list
    

  3. Save the LSM configuration to tape or other removable media, as in the following sketch.
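
    For example, assuming the description set shown in step 1 and a tape drive at /dev/tape/tape0_d0 (a hypothetical device name), you could archive the directory with tar:

    # cd /usr/var/lsm/db
    # tar -cvf /dev/tape/tape0_d0 LSM.date.LSM_hostidname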

3.2    LSM Software Subsets

The LSM software resides in three optional subsets, which are located on the CD-ROM containing the base operating system software for the Tru64 UNIX product kit. In the following list of subset names, nnn indicates the operating system version:

    OSFLSMBASEnnn   Logical Storage Manager (System Administration)
    OSFLSMBINnnn    Logical Storage Manager Kernel Modules (Kernel Build Environment)
    OSFLSMX11nnn    Logical Storage Manager GUI (System Administration)

You can install the LSM subsets either at the same time or after you install the mandatory operating system software.

If you install the system's root file system and /usr, /var, and swap partitions directly into LSM volumes, the LSM subsets are installed automatically.

See the Installation Guide for more information on installing and upgrading the LSM software and the operating system software.

Note

If a file system was configured in an LSM volume, you must start LSM and its volumes after booting the system to single-user mode, before proceeding with the Tru64 UNIX upgrade installation.
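
The following is a minimal sketch of starting LSM manually in single-user mode; it assumes the standard /sbin/lsmbstartup script (referenced in /etc/inittab) and that the voldctl mode command is available to confirm that the vold daemon is enabled:

# /sbin/lsmbstartup
# voldctl mode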

3.3    Installing the LSM License

The LSM license that comes with the base operating system allows you to create LSM volumes that use a single concatenated plex (simple volumes). All other LSM features, such as creating LSM volumes that use striped, mirrored, and RAID 5 plexes and using the LSM GUIs, require an LSM license.

The LSM license is supplied in the form of a product authorization key (PAK) called LSM-OA. You load the LSM-OA PAK into the Tru64 UNIX License Management Facility (LMF).

If you need to order an LSM license, contact your service representative. See the lmf(8) reference page for more information on the License Management Facility.
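
The following sketch shows one way to register and verify the PAK with LMF; the exact registration step depends on how the PAK data is supplied (interactively or from a file):

# lmf register
# lmf reset
# lmf list | grep LSM-OA

The lmf register command prompts for the PAK data, lmf reset loads the updated license database into the kernel cache, and lmf list confirms that the LSM-OA PAK is active.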

3.4    Performing Postinstallation Tasks

After you install or upgrade LSM, initialize LSM if necessary (Section 3.4.1) and, optionally, optimize the LSM configuration databases (Section 3.4.2), create an alternate boot disk (Section 3.4.3), enable the automatic data relocation (hot-sparing) feature (Section 3.4.4), and import and convert any disk groups that you deported before the upgrade (Section 3.4.5).

3.4.1    Initializing LSM

Whether you must initialize LSM depends on the type of installation you performed:

If you performed a full installation where the root file system and /usr, /var, and swap partitions were installed directly into LSM volumes, or if you performed an upgrade installation on a system that was previously running LSM, then LSM is automatically initialized.

If you were running LSM previously and performed a full installation but did not install the root file system and /usr, /var, and swap partitions directly into LSM volumes, then you must initialize LSM.

To initialize LSM:

  1. Verify that the LSM subsets are installed:

    # setld -i | grep LSM
    

    LSM subset information similar to the following should display, where nnn indicates the operating system revision:

    OSFLSMBASEnnn  installed Logical Storage Manager (System Administration)
    OSFLSMBINnnn   installed Logical Storage Manager Kernel Modules (Kernel
                   Build Environment)
    OSFLSMX11nnn   installed Logical Storage Manager GUI (System Administration)
     
    

    If the LSM subsets do not display with a status of installed, use the setld command to install them. See the Installation Guide for more information on installing software subsets.

  2. Verify LSM drivers are configured into the kernel:

    # devswmgr -getnum driver=LSM
    

    LSM driver information similar to the following is displayed:

    Device switch reservation list
     
                                              (*=entry in use)
                driver name             instance   major
      -------------------------------   --------   -----
                                  LSM          4      43 
                                  LSM          3      42 
                                  LSM          2      41*
                                  LSM          1      40*
    

    If LSM driver information is not displayed, you must rebuild the kernel using the doconfig command. See the Installation Guide for more information on rebuilding the kernel.

  3. Initialize LSM with the volsetup command, specifying one or more disks to place in the rootdg disk group (a sketch follows this list).

    Note

    To initialize LSM in a cluster, see the Cluster Administration manual.

    To add more disks to the rootdg disk group, use the voldiskadd command. See Section 5.2.3 for information on adding disks to a disk group.
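
    A minimal sketch of both commands, assuming volsetup accepts the disks to place in rootdg as arguments and using hypothetical disk names:

    # volsetup dsk4 dsk5
    # voldiskadd dsk6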

3.4.1.1    Verifying That LSM Is Initialized (Optional)

Normally, you do not need to verify that LSM was initialized. If the initialization fails, the system displays error messages indicating the problem.

If you want to verify that LSM is initialized, you can check that the LSM daemons are running and that LSM recognizes its disks and the rootdg disk group.
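
A hedged sketch follows; it assumes the voldctl mode and voliod commands report daemon status, as they do in the standard LSM command set:

# voldctl mode
# voliod
# voldisk list

The voldctl mode command should report that the vold daemon is enabled, the voliod command should report the number of volume I/O daemons running, and the voldisk list command should list the disks in the rootdg disk group.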

3.4.2    Optimizing the LSM Configuration Databases (Optional)

If you restored an LSM configuration on a system that you upgraded from Tru64 UNIX Version 4.0 to Tru64 UNIX Version 5.0 or higher, you can modify the configuration databases to allow LSM to automatically manage their number and placement.

Note

This procedure is an optimization and is not required.

On systems using LSM on Tru64 UNIX Version 4.0, you had to explicitly configure between four and eight disks per disk group with enabled copies of the configuration database. In Version 5.0 and higher, by default all LSM disks are configured to contain a copy of the database, and LSM automatically maintains the appropriate number of enabled copies. An enabled copy is one that LSM actively keeps up to date; a disabled copy reserves space in the disk's private region but is not kept up to date.

You should configure the private regions on all your LSM disks to contain one copy of the configuration database unless you have a specific reason for not doing so.

Enabling the configuration database does not use additional space on the disk; it merely sets the number of enabled copies in the private region to 1.

To set the number of configuration database copies to 1, enter:

# voldisk moddb disk nconfig=1

Disk groups containing three or fewer disks should be configured so that each disk contains two copies of the configuration database to provide sufficient redundancy. This is especially important for systems with a small rootdg disk group and one or more larger secondary disk groups.
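
For example, using hypothetical disk names:

# voldisk moddb dsk3 nconfig=1
# voldisk moddb dsk7 nconfig=2

The first command enables one copy of the configuration database on dsk3; the second configures two copies on dsk7, which is appropriate for a disk in a disk group with three or fewer disks.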

See Section 5.3.3 for more information on modifying the LSM configuration databases.

3.4.3    Creating an Alternate Boot Disk

You can use LSM to create an alternate boot disk. If the primary boot disk fails, the system can continue running on the alternate boot disk and can also boot from it.

To create an alternate boot disk, you must:

  1. Use the LSM encapsulation procedure to configure each partition on the boot disk into an LSM volume that uses a concatenated plex. You must also encapsulate the swap space partition if it is not on the boot disk. Encapsulation converts each partition to an LSM volume.

  2. Add a mirror plex to the volumes to create copies of the data in the boot disk partitions.

Note

To facilitate recovery of environments that use LSM, you can use the bootable tape utility. This utility enables you to build a bootable standalone system kernel on magnetic tape. The bootable tape preserves your local configuration and provides a basic set of the LSM commands you will use during restoration. Refer to the btcreate(8) reference page and the System Administration manual or the online help for the SysMan Menu boot_tape option.

3.4.3.1    Restrictions

The following restrictions apply when you encapsulate the system partitions:

3.4.3.2    Encapsulating System Partitions

When you encapsulate the system partitions, each partition is converted to an LSM volume with a single concatenated plex. The steps to encapsulate the system partitions are the same whether you are using the UNIX File System (UFS) or the Advanced File System (AdvFS).

The encapsulation process changes the system files that refer to the encapsulated partitions, such as the /etc/fstab entries for UFS file systems, the AdvFS domain links in the /etc/fdmns directory, and the swap device entries, so that they refer to the LSM volumes instead of the disk partitions (Sections 3.4.3.5 through 3.4.3.7 show the results).

In addition, LSM creates a private region and stores in it a copy of the configuration database. If the system partitions are on different disks (for example, the boot partitions on dsk0 and the swap partition on dsk1), LSM creates a private region on each disk. Normally, when you encapsulate a disk or partition, LSM creates only an LSM nopriv disk for the area being encapsulated. However, because of the need to be able to boot the system even if the rest of the LSM configuration is corrupted or missing, LSM creates these special-case private regions.

Note

The encapsulation procedure requires that you restart the system.

To encapsulate the system partitions:

  1. Log in as root.

  2. Identify the name of the boot disk:

    # sizer -r
    

    Information similar to the following is displayed:

    dsk0
    

  3. Identify the name of the disk on which the swap space partition is located:

    # swapon -s
    

    Information similar to the following is displayed:

    Swap partition /dev/disk/dsk0b (default swap):
        Allocated space:        20864 pages (163MB)
        In-use space:             234 pages (  1%)
        Free space:             20630 pages ( 98%)
     
    Total swap allocation:
        Allocated space:        20864 pages (163.00MB)
        Reserved space:          7211 pages ( 34%)
        In-use space:             234 pages (  1%)
        Available space:        13653 pages ( 65%)
    

    In the previous example, the swap space partition is located in the b partition on disk dsk0.

  4. Encapsulate the boot disk and, if the swap space partition is not on the boot disk, the swap disk (or disks) as well:

    # volencap boot_disk [swap_disk ...]
    

    For example, if dsk0 is the name of the boot disk and the swap space partition is located in the b partition on dsk0, enter:

    # volencap dsk0
    

    Information similar to the following is displayed:

    Setting up encapsulation for dsk0.
        - Creating simple disk dsk0d for config area (privlen=4096).
        - Creating nopriv disk dsk0a for rootvol.
        - Creating nopriv disk dsk0b for swapvol.
        - Creating nopriv disk dsk0g.
     
    The following disks are queued up for encapsulation or use by LSM:
     dsk0d dsk0a dsk0b dsk0g
     
    You must now run /sbin/volreconfig to perform actual encapsulations.
    

  5. Optionally, send a warning to users, alerting them to the impending system shutdown.

  6. Perform the actual encapsulation, and enter now when prompted to shut down the system:

    # volreconfig
    

    Information similar to the following is displayed:

    The system will need to be rebooted in order to continue with
    LSM volume encapsulation of:
     dsk0d dsk0a dsk0b dsk0g
     
    Would you like to either quit and defer encapsulation until later 
    or commence system shutdown now? Enter either 'quit' or time to be 
    used with the shutdown(8) command (e.g., quit, now, 1, 5): [quit]  now
    

The system shuts down, performs the encapsulation, and automatically restarts.

3.4.3.3    Mirroring the System Volumes

When you encapsulate the system partitions, each partition is converted to an LSM volume with a single plex. There is still only one copy of the boot disk data. To complete the process of creating an alternate boot disk, you must add a mirror plex to each system volume.

Preferably, the disks for the mirror plexes should be on different buses than the disks that contain the original system volumes. In addition, the disks you choose for the mirrors must be disks in the rootdg disk group, must be large enough to hold the data in all the system volumes, and must be devices from which the system can boot.

Note

The following procedure does not add a log plex (DRL) to the root and swap volumes, nor should you add a log plex manually. When the system restarts after a failure, it automatically recovers the rootvol volume by doing a complete resynchronization. Attaching a log plex degrades the rootvol write performance and provides no benefit in recovery time after a system failure.

To create mirror plexes, do one of the following: use the volrootmir script to mirror the system volumes onto a target disk (see the volrootmir(8) reference page), or add a mirror plex to each system volume with the volassist mirror command.
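
A minimal sketch of the volume-by-volume approach, assuming dsk3 is the target disk and using the volume names from the example in Section 3.4.3.4:

# volassist mirror rootvol dsk3
# volassist mirror swapvol dsk3
# volassist mirror vol-dsk0g dsk3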

3.4.3.4    Displaying Information for System Volumes

To display information for the system volumes, enter:

# volprint -ht

Information similar to the following is displayed:

Disk group: rootdg
 
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
V  NAME         USETYPE      KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
 
dg rootdg       default      default  0        942157566.1026.hostname
.
.
.
v  rootvol      root         ENABLED  ACTIVE   262144   ROUND     -
pl rootvol-01   rootvol      ENABLED  ACTIVE   262144   CONCAT    -        RW
sd root01-01p   rootvol-01   root01   0        16       0         dsk0a    ENA
sd root01-01    rootvol-01   root01   16       262128   16        dsk0a    ENA
pl rootvol-02   rootvol      ENABLED  ACTIVE   262144   CONCAT    -        RW
sd root02-02p   rootvol-02   root02   0        16       0         dsk3a    ENA
sd root02-02    rootvol-02   root02   16       262128   16        dsk3a    ENA
 
v  swapvol      swap         ENABLED  ACTIVE   333824   ROUND     -
pl swapvol-01   swapvol      ENABLED  ACTIVE   333824   CONCAT    -        RW
sd swap01-01    swapvol-01   swap01   0        333824   0         dsk0b    ENA
pl swapvol-02   swapvol      ENABLED  ACTIVE   333824   CONCAT    -        RW
sd swap02-02    swapvol-02   swap02   0        333824   0         dsk3b    ENA
 
v  vol-dsk0g    fsgen        ENABLED  ACTIVE   1450796  SELECT    -
pl vol-dsk0g-01 vol-dsk0g    ENABLED  ACTIVE   1450796  CONCAT    -        RW
sd dsk0g-01     vol-dsk0g-01 dsk0g-AdvFS 0     1450796  0         dsk0g    ENA
pl vol-dsk0g-02 vol-dsk0g    ENABLED  ACTIVE   1450796  CONCAT    -        RW
sd dsk3g-01     vol-dsk0g-02 dsk3g-AdvFS 0     1450796  0         dsk3g    ENA
 

The previous example shows that there are three volumes: rootvol (the root file system), swapvol (the swap space), and vol-dsk0g (the volume created for the encapsulated dsk0g partition).

Each volume has two plexes (listed in the rows labeled pl). Each plex uses a different subdisk (listed in the rows labeled sd). The first plex in each volume uses a subdisk on dsk0 (the original disk) and the second plex uses a subdisk on dsk3, indicating that the plexes were successfully mirrored onto dsk3.

The subdisks labeled root01-01p and root02-02p are phantom subdisks. Each is 16 sectors long. They provide write-protection for block 0, which prevents accidental destruction of the boot block and disk label.

3.4.3.5    Displaying Encapsulated AdvFS Domain Information

If the root file system is AdvFS, the encapsulation process automatically changes the domain information to reflect volume names instead of disk partitions.

To display the changed names:

  1. Change to the fdmns directory:

    # cd /etc/fdmns
    

  2. Display attributes of all AdvFS domains:

    # showfdmn *
    

    Information similar to the following is displayed, showing the volume name for each AdvFS domain:

                 Id              Date Created  LogPgs  Version  Domain Name
    3a5e0785.000b567c  Thu Jan 11 14:20:37 2001     512        4  root_domain
     
     Vol   512-Blks      Free  % Used  Cmode  Rblks  Wblks  Vol Name
      1L     524288    339936     35%     on    256    256  /dev/vol/rootdg/rootvol
     
                 Id              Date Created  LogPgs  Version  Domain Name
    3a5e078e.000880dd  Thu Jan 11 14:20:46 2001     512        4  usr_domain
     
     Vol   512-Blks      Free  % Used  Cmode  Rblks  Wblks  Vol Name
      1L    2879312   1703968     41%     on    256    256  /dev/vol/rootdg/vol-dsk0g
     
                   Id              Date Created  LogPgs  Version  Domain Name
    3a5e0790.0005b501  Thu Jan 11 14:20:48 2001     512        4  var_domain
     
     Vol   512-Blks      Free  % Used  Cmode  Rblks  Wblks  Vol Name
      1L    2879312   2842160      1%     on    256    256  /dev/vol/rootdg/vol-dsk0h
     
    

3.4.3.6    Displaying Encapsulated UFS File System Information

If the root file system is UFS, the encapsulation process automatically changes the mount information to reflect volume names instead of disk partitions.

To display the volume names for the root file system, enter:

# mount

Information similar to the following is displayed. Entries of the form /dev/vol/disk_group/volume indicate file systems that are encapsulated into LSM volumes.

/dev/vol/rootdg/rootvol on / type ufs (rw)
/dev/vol/rootdg/vol-dsk2g on /usr type ufs (rw)
/proc on /proc type procfs (rw)

3.4.3.7    Displaying Encapsulated Swap Volume Information

To display the volume information for the swap space, enter:

# swapon -s

Information similar to the following is displayed:

Swap partition /dev/vol/rootdg/swapvol (default swap):
    Allocated space:        20864 pages (163MB)
    In-use space:             234 pages (  1%)
    Free space:             20630 pages ( 98%)
 
Total swap allocation:
    Allocated space:        20864 pages (163.00MB)
    Reserved space:          7211 pages ( 34%)
    In-use space:             234 pages (  1%)
    Available space:        13653 pages ( 65%)

3.4.4    Automatic Data Relocation Feature (Hot-Sparing)

You can enable the LSM hot-sparing feature so that LSM automatically relocates data from a failed disk in a volume that uses either a RAID 5 plex or mirrored plexes. LSM relocates the data either to a reserved disk that you configured as a spare disk or to free disk space in the disk group. LSM does not use a spare disk for normal data storage unless you specify otherwise.

During the hot-sparing procedure, LSM detects the disk failure, sends mail to the root account (and to any other accounts you specify), attempts to relocate the subdisks from the failed disk to a spare disk or to free disk space in the disk group, and then recovers the data in the affected plexes.

If you choose not to use the hot-spare feature, you must investigate and resolve disk failures manually. See Section 6.5 for more information.

3.4.4.1    Enabling the Hot-Sparing Feature

The hot-sparing feature is part of the volwatch daemon, which has two modes: mail-only mode (volwatch -m), in which the daemon sends mail notification when it detects a failure, and hot-spare mode (volwatch -s), in which the daemon sends mail notification and also attempts to relocate data from a failed disk.

You can specify mail addresses with either option.

To enable the hot-sparing feature, enter:

# volwatch -s [mail-address...]
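
For example, to enable hot-sparing and send event mail to a hypothetical lsmadmin account in addition to root:

# volwatch -s lsmadmin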

Note

Only one volwatch daemon can run on a system or cluster node at any time. The daemon's setting applies to the entire system or node; you cannot specify some disk groups to use hot-sparing but not others.

To return the volwatch daemon to mail-only mode, enter:

# volwatch -m [mail-address...]

3.4.4.2    Configuring and Deconfiguring a Spare Disk

You should configure at least one spare disk in each disk group that contains volumes with mirror plexes or a RAID 5 plex.

To configure a disk as a spare, enter:

# voledit [-g disk_group] set spare=on disk

For example, to configure a spare disk called dsk5 in the rootdg disk group, enter:

# voledit set spare=on dsk5

To deconfigure a spare disk, enter:

# voledit [-g disk_group] set spare=off disk

For example, to deconfigure a spare disk called dsk5 in the rootdg disk group, enter:

# voledit set spare=off dsk5

3.4.4.3    Setting Up Mail Notification for Exception Events

The volwatch daemon monitors LSM for exception events. If an exception event occurs, mail is sent to the root account and to any other accounts that you specified when starting the volwatch daemon.

There is a 15-second delay before the event is analyzed and the message is sent. This delay allows a group of related events to be collected and reported in a single mail message.

Example 3-1 shows a sample mail notification sent when LSM detects an exception event.

Example 3-1:  Sample Mail Notification

Failures have been detected by the Logical Storage Manager:
 
failed disks:   
disk

.
.
.
failed plexes: plex
.
.
.
failed log plexes: plex
.
.
.
failing disks: disk
.
.
.
failed subdisks: subdisk
.
.
.
The Logical Storage Manager will attempt to find spare disks, relocate failed subdisks and then recover the data in the failed plexes.  

The following describes the sections of the mail notification:

  failed disks       Disks that failed completely and were detached.

  failed plexes      Plexes that were detached because of I/O errors.

  failed log plexes  BCL or DRL log plexes that were detached.

  failing disks      Disks that returned I/O errors but were not detached; these disks might be partially usable.

  failed subdisks    Subdisks in RAID 5 volumes that were detached because of I/O errors.

Example 3-2 shows the mail message sent if a disk completely fails.

Example 3-2:  Complete Disk Failure Mail Notification

To: root
Subject: Logical Storage Manager failures on servername.com
 
Failures have been detected by the Logical Storage Manager
 
failed disks:
 disk02
 
failed plexes:
  home-02
  src-02
  mkting-01
 
failing disks:
disk02

This message shows that a disk called disk02 was failing and was then detached by a failure, and that the plexes called home-02, src-02, and mkting-01 were also detached (probably as a result of the disk failure).

Example 3-3 shows the mail message sent if a disk partially fails.

Example 3-3:  Partial Disk Failure Mail Notification

To: root
Subject: Logical Storage Manager failures on servername.com
 
Failures have been detected by the Logical Storage Manager:
 
failed disks:
 disk02
 
failed plexes:
  home-02
  src-02

Example 3-4 shows the mail message sent if data relocation is successful and data recovery is in progress.

Example 3-4:  Successful Data Relocation Mail Notification

Volume volume Subdisk subdisk relocated to new_subdisk, 
but not yet recovered.

If the data recovery is successful, the following message is sent:

Recovery complete for volume volume in disk group disk_group.

If the data recovery is unsuccessful, the following message is sent:

Failure recovering volume in disk group disk_group.

Example 3-5 shows the mail message sent if relocation cannot occur because there is no spare or free disk space.

Example 3-5:  No Spare or Free Disk Space Mail Notification

Relocation was not successful for subdisks on disk disk
in volume volume in disk group disk_group. 
No replacement was made and the disk is still unusable. 
 
The following volumes have storage on disk:
 
volume

.
.
.
These volumes are still usable, but the redundancy of those volumes is reduced. Any RAID5 volumes with storage on the failed disk may become unusable in the face of further failures.

Example 3-6 shows the mail message that is sent if data relocation fails.

Example 3-6:  Data Relocation Failure Mail Notification

Relocation was not successful for subdisks on disk disk in
volume volume in disk group disk_group. 
No replacement was made and the disk is still unusable.
 
error message

In this output, error message is a message indicating why the data relocation failed.

Example 3-7 shows the mail message sent if volumes not using RAID 5 plexes are made unusable due to disk failure.

Example 3-7:  Unusable Volume Mail Notification

The following volumes:
 
volume

.
.
.
have data on disk but have no other usable mirrors on other disks. These volumes are now unusable and the data on them is unavailable. These volumes must have their data restored.

Example 3-8 shows the mail message sent if volumes using RAID 5 plexes are made unusable due to disk failure.

Example 3-8:  Unusable RAID 5 Volume Mail Notification

The following RAID5 volumes:
 
volume

.
.
.
have storage on disk and have experienced other failures. These RAID5 volumes are now unusable and data on them is unavailable. These RAID5 volumes must have their data restored.

3.4.4.4    Moving Relocated LSM Objects

When data is moved by the hot-sparing feature, the new locations of LSM objects might not provide the same performance or have the same data layout that existed before. After hot-sparing occurs, you might want to move the relocated LSM objects to improve performance, to keep the spare disk space free for future hot-sparing needs, or to restore the LSM configuration to its previous state.

Note

This procedure assumes you have identified and initialized a new disk to replace the hot-spare disk. See Section 4.1.2 for more information on adding disks for LSM use. See Section 6.5.5 for information on replacing a failed disk.

To move a subdisk that was relocated as the result of a hot-sparing procedure:

  1. Note the characteristics of the LSM objects before they were relocated.

    This information is available from the mail notification sent to root. For example, look for a mail notification similar to the following:

    To: root
    Subject: Logical Storage Manager failures on host teal
     
    Attempting to relocate subdisk disk02-03 from plex home-02.
    Dev_offset 0 length 1164 dm_name disk02 da_name dsk2.
    The available plex home-01 will be used to recover the data.
    

  2. Note the new location for the relocated LSM object.

    This information is available from the mail notification sent to root. For example, look for a mail notification similar to the following:

    To: root
    Subject: Attempting LSM relocation on host teal
     
    Volume home Subdisk disk02-03 relocated to disk05-01,
    but not yet recovered.
    

  3. Move the relocated data to the desired location:

    # volevac [-g disk_group] spare_disk new_disk
    

  4. Move the LSM volume from the hot-spare disk to the new disk. The ! prefix indicates the source disk. Use the appropriate shell quoting convention to correctly interpret the ! (a sketch with hypothetical names follows this list).

    # volassist [-g disk_group] move volume !hot_spare new_disk
    
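
    For example, using the hypothetical names from the notifications above, where disk05 is the spare disk that received the relocated subdisk and disk06 is a newly added replacement disk:

    # volevac -g rootdg disk05 disk06
    # volassist -g rootdg move home \!disk05 disk06

    The volevac command moves all the data on the spare disk (disk05) to the new disk (disk06); the volassist move command moves a specific volume (here, the home volume from the notification) off the spare disk. The backslash prevents shells such as csh from interpreting the ! character.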

3.4.5    Importing and Converting Tru64 UNIX Version 4.0 Disk Groups

After you upgrade a system, you can import and convert disk groups that you deported before the upgrade.

Note

This section applies only to disk groups deported from a system running Tru64 UNIX Version 4.0, to be imported to a system running Tru64 UNIX Version 5.0 or higher.

Importing the disk group upgrades its metadata format to the current format. Converting the disk group changes all volumes that use BCLs to use DRLs instead; the vollogcnvt utility performs this conversion.

The vollogcnvt utility runs automatically when the system starts, converting all eligible volumes in imported disk groups. (See the vollogcnvt(8) reference page for more information.)

Alternatively, you can manually import a disk group, determine whether its volumes need their BCLs converted to DRLs, and then run the vollogcnvt utility yourself, without shutting down and restarting the system.

To upgrade disk groups to the current metadata format and convert volumes from BCL to DRL:

  1. Import the disk group with the conversion option:

    # voldg -o convert_old import disk_group
    

    The disk group is imported. If any volumes in the disk group use BCLs, a message similar to the following is displayed:

    lsm:voldg:WARNING:Logging disabled on volume. Need to convert to DRL.
    lsm:voldg:WARNING:Run the vollogcnvt command to automatically convert logging.
     
    

    All the volumes in the disk group are usable, but logging is disabled for volumes that previously used BCL. If a disk in the volume fails or the system crashes, the entire volume will be resynchronized when the disk is replaced or the system restarts.

  2. Convert the volumes from BCL to DRL:

    # vollogcnvt [-g disk_group]
    

    All possible volumes are converted from using BCL to DRL.

    If the volume cannot be converted, logging is disabled but the volume is usable, and data continues to be written to all mirrors (plexes) in the volume.

  3. To restore logging, remove the disabled BCL subdisk and add a new DRL to the volume:

    1. Identify the disabled BCL subdisk:

      # volprint [-g disk_group] volume
      

    2. Remove the BCL subdisk:

      # volsd [-o rm] dis subdisk
      

See Section 5.5.3 for information on adding a DRL to a volume.

3.5    LSM Files, Directories, Device Drivers, and Daemons

After you install and initialize LSM, several new files, directories, device drivers, and daemons are present on the system. These are described in the following sections.

3.5.1    LSM Files

The /dev directory contains the device special files (Table 3-1) that LSM uses to communicate with the kernel.

Table 3-1:  LSM Device Special Files

Device Special File   Function
/dev/volconfig        Allows the vold daemon to make configuration requests to the kernel
/dev/volevent         Used by the voliotrace command to view and collect events
/dev/volinfo          Used by the volprint command to collect LSM object status information
/dev/voliod           Provides an interface between the volume extended I/O daemon (voliod) and the kernel

3.5.2    LSM Directories

The /etc/vol directory contains the volboot file and the subdirectories (Table 3-2) for LSM use.

Table 3-2:  LSM /etc/vol Subdirectories

Directory      Function
reconfig.d     Provides temporary storage during encapsulation of existing file systems. Instructions for the encapsulation process are created here and used during the reconfiguration.
tempdb         Used by the volume configuration daemon (vold) while creating the configuration database during startup and while updating configuration information.
vold_diag      Creates a socket portal for diagnostic commands to communicate with the vold daemon.
vold_request   Provides a socket portal for LSM commands to communicate with the vold daemon.

The /dev directory contains the subdirectories (Table 3-3) for volume block and character devices.

Table 3-3:  LSM Block and Character Device Subdirectories

Directory    Contains
/dev/rvol    Character device interfaces for LSM volumes.
/dev/vol     Block device interfaces for LSM volumes.

3.5.3    LSM Device Drivers

There are two LSM device drivers:

3.5.4    LSM Daemons

There are two LSM daemons: the volume configuration daemon (vold) and the volume extended I/O daemon (voliod).

3.6    Uninstalling the LSM Software

This section describes how to completely remove the LSM software from a system. This process involves moving the system file systems and swap space off LSM volumes, unmounting or reconfiguring any other users of LSM volumes, stopping the LSM daemons, removing the LSM directories and /etc/inittab entries, deleting the LSM software subsets, and rebuilding the kernel without LSM.

Caution

Uninstalling LSM causes any data remaining in LSM volumes to be lost. Unencapsulate and back up any needed data before proceeding.

  1. Reconfigure any system file systems and swap space so they are no longer on an LSM volume.

    1. If root and swap are configured under LSM, enter the volunroot command and restart the system.

      If additional swap space was configured using LSM volumes, remove those volumes (Section 5.4.6).

    2. Unencapsulate the /usr and /var file systems if they are configured under LSM (see Section 5.4.6.2).

  2. Unmount any other file systems that are using LSM volumes so all LSM volumes can be closed.

    1. Update the /etc/fstab file if necessary so that it no longer mounts any file systems on an LSM volume.

    2. Stop applications that are using raw LSM volumes and reconfigure them so that they no longer use LSM volumes.

  3. Identify the disks that are currently configured under LSM:

    # voldisk list
    

  4. Restart LSM in disabled mode:

    # vold -k -r reset -d 
    

    This command fails if any volumes are open.

  5. Stop all LSM volume and I/O daemons:

    # voliod -f set 0
    # voldctl stop
    

  6. Update the disk labels for the disks that were under LSM (the disks you identified in step 3) so that their partitions no longer indicate LSM use.

  7. Remove the LSM directories:

    # rm -r /etc/vol /dev/vol /dev/rvol /etc/vol/volboot
    

  8. Delete the following LSM entries in the /etc/inittab file:

    lsmr:s:sysinit:/sbin/lsmbstartup -b </dev/console >/dev/console 2>&1 ##LSM
    lsm:23:wait:/sbin/lsmbstartup </dev/console >/dev/console 2>&1 ##LSM
    vol:23:wait:/sbin/vol-reconfig -n </dev/console >/dev/console 2>&1 ##LSM
     
    

  9. Display the installed LSM subsets:

    # setld -i | grep LSM
    

  10. Delete the installed LSM subsets displayed in the previous step; for example:

    # setld -d OSFLSMBASEnnn OSFLSMBINnnn OSFLSMX11nnn
    

  11. To deconfigure LSM from the kernel, change the pseudo-device lsm 1 entry in the /sys/conf/hostname file to pseudo-device lsm 0.

    You can make this change either before you run the doconfig command or while the doconfig command is running; for example:

    # doconfig -c hostname
    

  12. Copy the new kernel to root (/) and restart the system by entering the following commands:

    # cp /sys/RIO/vmunix /
    # shutdown now
    

    When the system restarts, LSM will no longer be installed.