

4    Encapsulating Existing User Data to LSM Volumes

This chapter describes how to place existing user data under LSM control by using a process called encapsulation.




4.1    Data Encapsulation

LSM supports data encapsulation from the following formats:

  - LVM volume groups

  - UNIX style disks and disk partitions

  - AdvFS storage domains

Note

LSM does not support encapsulation of user data on ULTRIX Disk Shadowing (UDS) volumes or ULTRIX Striping Driver stripe volumes.

During the encapsulation process, LSM transforms an LVM volume group, a UNIX style disk or disk partition, or an AdvFS storage domain into an LSM logical volume. Using the physical device name that you supply in an encapsulation command, LSM identifies how the device is used (for example, for file systems) and generates LSM volumes to cover those areas on the disk.

The following commands allow you to perform a one-time conversion of existing user data into LSM volumes:

  - vollvmencap, which converts LVM volume groups (Section 4.3)

  - volencap, which converts UNIX style disks and partitions (Section 4.4) and AdvFS storage domains (Section 4.5)

See Chapter 5 for information about encapsulating the partitions used for the root file system and the swap partition into LSM volumes.

See Section 7.11, Section C.20, and Section C.19 for information on unencapsulation procedures.




4.2    Encapsulation Requirements

The following list describes requirements for performing encapsulation functions:




4.3    LVM Volume Encapsulation

The LVM encapsulation process uses the name of a volume group that you specify with the vollvmencap command, and transforms the LVM volumes into LSM volumes.

Note

The Logical Volume Manager (LVM) is no longer supported on Digital UNIX systems. Support for the LVM encapsulation tools will also be retired in a future release of Digital UNIX. At that time, any data still under LVM control will be lost.




4.3.1    Overview of LVM Support in Digital UNIX

Encapsulation of LVM volumes is based on volume groups, which are collections of physical volumes, each of which contains the following:

  - An LVM record

  - A metadata region

  - An LVM data area

In addition, the LVM data area is divided into physical extents, which are the basic building blocks of LVM volumes. The physical extents for all physical volumes in a volume group are all the same size.

Finally, a volume consists of a series of logical extents, each of which maps to one or more physical extents. Because Digital UNIX does not support mirroring for LVM, each logical extent can map to only one physical extent, except when LVM adds temporary mirrors to a volume for the duration of a command's execution. For encapsulation purposes, these transient conditions are not considered.

Note

The physical extent bad block directory is not used.

The /etc/lvmtab file defines all the volume groups and their associated physical volumes on a system. When a system reboots, LVM restarts based on the information defined in this file.

There is an LVM record at the beginning of each physical volume in a volume group. The LVM record contains an identifying number and the location and length of the metadata region. The metadata region contains entries for each logical volume defined in the volume group and the mappings of physical extents to the logical extents of those volumes.

A typical LVM configuration has a few physical device partitions in a volume group. An arbitrary number of volumes is defined by mapping logical extents to physical extents in a volume group. These volumes are used for UFS file systems. User data can also be accessed directly through the device interface.




4.3.2    Encapsulating LVM Volumes

To begin the encapsulation process, you supply the name of an LVM volume group as input to the vollvmencap command. For example:

/usr/sbin/vollvmencap /dev/vg1

The vollvmencap command generates scripts containing the LSM commands needed to create LSM volumes. You execute the command scripts created by vollvmencap by running the /sbin/vol-lvm-reconfig command, as shown here:

/sbin/vol-lvm-reconfig

Note that the LVM volumes in the volume group that was encapsulated must not be in use when /sbin/vol-lvm-reconfig is executed. For example, all file systems using LVM volumes must be unmounted.

When the encapsulation is successful, a message is displayed indicating the name of a script that you must run to remove the LVM volumes. Run this script only after verifying that the encapsulation was successful.
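
The end-to-end sequence is short. The following is a minimal sketch, assuming a volume group /dev/vg1 whose volumes back a single file system mounted on /data (a hypothetical mount point):

umount /data                      # LVM volumes must not be in use
/usr/sbin/vollvmencap /dev/vg1    # generate the encapsulation scripts
/sbin/vol-lvm-reconfig            # execute the generated scripts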

Note the following requirements for the LVM encapsulation process:

The encapsulation process creates an LSM subdisk for each set of contiguous physical extents that a physical volume maps to a logical volume. Because LVM volumes in Digital UNIX are not mirrored, the LSM volume has only one plex. The plex consists of a set of subdisks obtained by mapping the physical extents associated with each logical extent. The plex is used to create an LSM volume. The LSM volume name replaces the LVM volume name in the /etc/fstab file.




4.3.3    Preserving Block 0

Block 0 on a Digital UNIX disk device is read-only by default. UFS does not use block 0 when putting data on device partitions. To preserve the LBN mapping, the LSM nopriv disk must start at LBN block 0. As long as this disk is used for UFS volumes, this does not present a problem. However, if the disk is reused for other applications that write to block 0, a write failure will occur. To help avoid such failures, earlier releases of LSM labeled the LSM nopriv disk with the unique administration name device-name_blk0. If the volume is no longer needed, remove this nopriv disk from the LSM disk group and redefine the disk without block 0.

Starting with Version 4.0, the voldiskadd and voldisksetup utilities automatically map out block 0. Digital recommends that you use these utilities to add disks to LSM. Note that if volencap is used to add a disk to LSM, it will not preserve block 0. This can cause problems if an application writes to that part of the disk.
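
For example, to initialize a new disk for LSM with block 0 mapped out (a sketch; the -i initialization option shown for voldisksetup is an assumption):

voldisksetup -i rz3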




4.4    UNIX Style Partition Encapsulation

The encapsulation process for UNIX style disks and disk partitions uses the volencap command to change a disk or partition into an LSM disk.

The volencap command automatically encapsulates user data for common configuration layouts such as the following:




4.4.1    Overview of Digital UNIX Partitions

The partitions on a physical device are mapped by a partition table called the disk label. The disk's partitions and disk label have the following characteristics:

Each available partition has a special device file in the /dev directory. Users and applications access storage through these special device files. The voldisk, voldisksetup, and voldiskadd utilities perform partition overlap checks to ensure that partitions being initialized to LSM do not have valid UFS, AdvFS, swap, or LSM data. If the fstype field of a partition indicates that there is valid data, the utilities issue a warning.




4.4.2    Encapsulating UNIX Partitions

The volencap and vol-reconfig commands provide an easy way to encapsulate disks and partitions. However, if you need a finer degree of control, use the manual encapsulation procedure to tailor the encapsulation to the specific needs of your configuration. See Section 4.4.4 for information to help you encapsulate UNIX style partitions manually.

To begin the encapsulation process, you supply the name of a physical device (for example, rz3) or a partition name (for example, rz3g) as input to the volencap command. For example:

/usr/sbin/volencap rz3

The LSM encapsulation process uses information in the disk label and the /etc/fstab file to determine whether a partition is in use, for example, by a UFS file system or a database. If neither the disk label nor the /etc/fstab file indicates that a partition is being used by an application, you must encapsulate the partition by using its partition name.
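
For example, to encapsulate only the g partition of the rz3 disk, supply the partition name:

/usr/sbin/volencap rz3g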

The /usr/sbin/volencap command generates scripts containing the necessary LSM commands and files to create LSM volumes. You run these scripts by executing the /sbin/vol-reconfig command, as shown here:

/sbin/vol-reconfig

If any partition or disk that has been encapsulated is still in use, the encapsulation cannot take effect until the system is rebooted.

Instead of executing /sbin/vol-reconfig manually, you can add /sbin/vol-reconfig to the /etc/inittab file by running the volinstall command. Then, when the system is rebooted, the encapsulation commands generated by /usr/sbin/volencap take effect. Use this method if any disk or partition that was encapsulated was in use.
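
A minimal sketch of this reboot-based method (the path to volinstall is an assumption):

/usr/sbin/volinstall    # adds /sbin/vol-reconfig to /etc/inittab
shutdown -r now         # the encapsulation completes during the reboot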


The results of the encapsulation process are as follows: LSM converts each partition that is in use (for example, as a UFS file system) to a subdisk. LSM then uses the subdisk to create a plex and, in turn, uses the plex to generate an LSM volume. Entries in the /etc/fstab or /sbin/swapdefault file are changed to use the LSM volume name instead of the block device name of the physical disk partition.




4.4.3    Preserving Block 0

Block 0 on a Digital UNIX disk device is read-only by default. UFS does not use block 0 when putting data on device partitions. To preserve the LBN mapping, the LSM nopriv disk must start at LBN block 0. As long as this disk is used for UFS volumes, this does not present a problem. However, if the disk is reused for other applications that write to block 0, a write failure will occur. To help avoid such failures, earlier releases of LSM labeled the LSM nopriv disk with the unique administration name device-name_blk0. You should remove this nopriv disk from the LSM disk group and redefine it without block 0 if the volume is no longer needed.

Starting with Version 4.0, the voldiskadd and voldisksetup utilities automatically map out block 0. Digital recommends that you use these utilities to add disks to LSM. Note that if volencap is used to add a disk to LSM, it will not preserve block 0. This can cause problems if an application writes to that part of the disk.




4.4.4    Encapsulating UNIX Partitions Using Individual Commands

You need to perform a manual encapsulation only when the automatic volencap encapsulation process does not apply to your configuration. Before beginning the encapsulation process, do the following:

  1. Ensure that none of the disks or partitions that you intend to encapsulate is in use. If a partition is currently mounted, unmount it.

  2. Save the original /etc/fstab file. Change the file to use the LSM volume names instead of the partition names.

  3. Make sure the rootdg disk group exists and is active.

To encapsulate a partition, follow these steps. This example encapsulates the /dev/rz3h partition, which is mounted as /usr/staff.

  1. The following example shows how rz3h appears in the /etc/fstab file:

    /dev/rz3h       /usr/staff    ufs rw 1 2
    

  2. Add the rz3h partition as a nopriv LSM disk. For example:

    voldisk -f init rz3h type=nopriv

  3. Add the rz3h partition to a disk group, as shown in the following example:
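
    A minimal sketch, using the voldg adddisk syntax shown in Section 4.5.4 and assuming the default rootdg disk group:

    voldg adddisk rz3h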

  4. Using the information in the disk label, find the size of partition h. Use the following command to display the disk label information for rz3:

    disklabel -r rz3

    #   size    offset fstype [fsize bsize cpg]
    a:  131072       0 4.2BSD 1024 8192 16 # (Cyl.    0 - 164*)
    b:  262144  131072 unused 1024 8192    # (Cyl.  164*- 492*)
    c: 2050860       0 unused 1024 8192    # (Cyl.    0 - 2569)
    d:  552548  393216 unused 1024 8192    # (Cyl.  492*- 1185*)
    e:  552548  945764 unused 1024 8192    # (Cyl. 1185*- 1877*)
    f:  552548 1498312 unused 1024 8192    # (Cyl. 1877*- 2569*)
    g:  819200  393216 4.2BSD 1024 8192 16 # (Cyl.  492*- 1519*)
    h:  838444 1212416 4.2BSD 1024 8192 16 # (Cyl. 1519*- 2569*)
    

    The example output shows that the size of partition h is 838444 sectors.

  5. Choose a unique name for the LSM volume that corresponds to partition rz3h; base the name on the partition name (for example, vol-rz3h).

  6. Create an LSM volume. Provide the correct disk group name. The following example uses rootdg as the disk group name:

    /sbin/volassist -g rootdg make vol-rz3h 838444s rz3h

  7. Change the /etc/fstab file to look as follows:

    /dev/vol/rootdg/vol-rz3h       /usr/staff    ufs rw 1 2
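
    To verify the volume and return the file system to service, you might enter the following (a sketch; volprint usage follows the examples in Section 4.5.4):

    volprint -g rootdg vol-rz3h
    mount /usr/staff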
    

To encapsulate a complete disk to LSM and convert the partitions that are in use into LSM volumes, follow these steps.

In the following example for the disk rz4, two partitions store user data in UFS file systems, and one partition is accessed directly by applications through the device interface.

  1. The /etc/fstab file includes the following lines that correspond to the rz4 disk:

    /dev/rz4d       /data1    ufs rw 1 2
    /dev/rz4e       /data2    ufs rw 1 2
    

    Partition rz4f is used by applications to store user data directly using the device interface. The following example shows the command and display for the disk label information for the rz4 disk:

    disklabel -r rz4

    #     size  offset  fstype [fsize bsize cpg]
    a:  131072       0  unused  1024 8192 16 # (Cyl.    0 - 164*)
    b:  262144  131072  unused  1024 8192    # (Cyl.  164*- 492*)
    c: 2050860       0  unused  1024 8192    # (Cyl.    0 - 2569)
    d:  552548  393216  4.2BSD  1024 8192    # (Cyl.  492*- 1185*)
    e:  552548  945764  4.2BSD  1024 8192    # (Cyl. 1185*- 1877*)
    f:  552548 1498312  unused  1024 8192    # (Cyl. 1877*- 2569*)
    g:  819200  393216  unused  1024 8192 16 # (Cyl.  492*- 1519*)
    h:  838444 1212416  unused  1024 8192 16 # (Cyl. 1519*- 2569*)
    

  2. Edit the disk label before beginning the encapsulation process.

    In the example for the rz4 disk, the output shows that partitions d and e are in use for UFS file systems. The f partition is marked as unused even though applications access it directly through the device interface. For partitions that applications access directly through the device interface, use the disklabel -e command to change the disk label before encapsulation. Because no disk label tag is defined specifically for this use, the 4.1BSD tag (shown in the following example) is suggested:

    #     size  offset fstype [fsize bsize cpg]
    a:  131072       0 unused 1024 8192 16 # (Cyl.    0 - 164*)
    b:  262144  131072 unused 1024 8192    # (Cyl.  164*- 492*)
    c: 2050860       0 unused 1024 8192    # (Cyl.    0 - 2569)
    d:  552548  393216 4.2BSD 1024 8192    # (Cyl.  492*- 1185*)
    e:  552548  945764 4.2BSD 1024 8192    # (Cyl. 1185*- 1877*)
    f:  552548 1498312 4.1BSD 1024 8192    # (Cyl. 1877*- 2569*)
    g:  819200  393216 unused 1024 8192 16 # (Cyl.  492*- 1519*)
    h:  838444 1212416 unused 1024 8192 16 # (Cyl. 1519*- 2569*)
    

  3. An LSM disk can be created only when at least 512 sectors (or the length of the private region) are free either at the beginning or at the end of the disk.

    Use partition c to store the offset and size for the public and private regions of the LSM disk.

    If there is not enough space at either the beginning or the end of the disk for the private region, you must encapsulate rz4c as a nopriv disk.

    Note

    Block 0 on a disk is write-locked. Therefore, do not use block 0 for either the public or the private region of the disk.

  4. Make sure that partition c covers the entire disk.

  5. Initialize rz4c as an LSM disk, as shown in the example that follows this step. Note that in this example the private region is at the beginning of the disk, because there is no space for it at the end of the disk.
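
    A minimal sketch follows; the type=simple initialization keeps the private region on the disk itself, and the privoffset and privlen attributes (both assumptions here) skip write-locked block 0 and set a 512-sector private region, so that the public region starts at block 513:

    voldisk -f init rz4c type=simple privoffset=1 privlen=512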

  6. Add the LSM disk to rootdg, as shown in the following example:
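
    For example, using the voldg adddisk syntax shown in Section 4.5.4 (the default rootdg disk group is assumed):

    voldg adddisk rz4c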

  7. Create LSM volumes for all partitions that are in use.

    For an LSM simple disk, perform the following:

    1. To convert partition d, specify the starting offset of partition d in the public region. Because the public region starts at block 513, subtract 513 from the offset shown in the disk label. (Note that no subtraction is needed when the private region is at the end of the disk.)

      The following example shows the conversion calculation for partition d:

      [ partition d offset - 513 ] or [ 393216 - 513 ]
      

      Enter the following command for partition d:

      volassist make vol-rz4c-01 552548 rz4c,392703

    2. To convert partition e, specify the starting offset [945764 - 513] in the public region. For example:

      volassist make vol-rz4c-02 552548 rz4c,945251

    3. To convert partition f, specify the starting offset [1498312 - 513] in the public region. For example:

      volassist make vol-rz4c-03 552548 rz4c,1497799

    For an LSM nopriv disk, the partition offsets do not need to be adjusted, because nopriv disks do not contain any metadata; use the offsets exactly as they appear in the disk label.
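
    For example, on a nopriv disk the command for partition d uses the unmodified offset (a sketch, assuming the disk was initialized as nopriv):

    volassist make vol-rz4c-01 552548 rz4c,393216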

  8. Change the /etc/fstab file as shown in the following example:
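
    Following the volume names created in step 7, the entries for the rz4d and rz4e partitions become:

    /dev/vol/rootdg/vol-rz4c-01       /data1    ufs rw 1 2
    /dev/vol/rootdg/vol-rz4c-02       /data2    ufs rw 1 2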





4.5    AdvFS Domain Storage Encapsulation

Encapsulation of AdvFS user data is at the storage domain level. Each physical device in the domain is encapsulated into an LSM volume by changing the links in the domain tree to point to the LSM volumes.




4.5.1    Overview of AdvFS Support on Digital UNIX

An AdvFS domain is a single storage container consisting of one or more physical disk partitions. File systems, called filesets, are created and defined in the domain and can expand and contract within the domain if space is available. Storage devices can be added or removed from a domain even when filesets are mounted. Active filesets and domains are determined as follows:

Storage devices can be physical devices or logical volumes. Each domain has a directory tree under the /etc/fdmns directory that describes the physical disk partitions that constitute the storage container. The root of the tree is the domain, and each leaf node in the domain directory tree is a physical disk partition name, implemented as a soft link to the full access path of the physical disk partition.
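
For example, a hypothetical domain dom1 built on the rz3c partition would appear as follows:

ls -l /etc/fdmns/dom1

rz3c -> /dev/rz3c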

A typical system has a number of domains using a few physical devices as storage in each domain, and with many filesets created on these domains.




4.5.2    Encapsulating AdvFS Domains

For most configurations, you can encapsulate user data automatically using the volencap command. However, if a finer degree of control is desired, use the manual encapsulation procedure to tailor the encapsulation to the specific needs of your configuration. See Section 4.5.4 for information.

Note

In previous releases of LSM, a separate command, voladvdomencap, was used for migrating AdvFS domains to LSM. Starting with Version 4.0, the functionality of this command is provided by the volencap command. The voladvdomencap command is supported only for backward compatibility and is scheduled to be withdrawn in a future release of LSM.

The goal of encapsulating AdvFS domains is to capture the data in the physical disk partitions of a domain into LSM volumes, and present the same data access to AdvFS by changing the soft links in the domain directory tree to point to the LSM volumes.

LSM volumes encapsulated from domain physical devices must reflect the exact data at the exact logical block number (LBN) location as the physical device. The entire LBN range of the LSM nopriv disk is defined as one LSM subdisk. A plex is created with this subdisk and an LSM volume is created with the plex.

No mount point changes are necessary during encapsulation, because the filesets that are mounted are abstractions to the domain. The domain can be activated normally after the encapsulation process completes. Once the domain is activated, the filesets remain unchanged and the encapsulation is transparent to users of the AdvFS domain.

To begin the encapsulation process, you supply the name of a domain as input to the volencap command. For example:

/usr/sbin/volencap dom1

The /usr/sbin/volencap command generates scripts containing the necessary LSM commands and files to create LSM volumes. You run these scripts by executing the /sbin/vol-reconfig command, as shown here:

/sbin/vol-reconfig

The domain should not be in use when you execute the /sbin/vol-reconfig command. All filesets in the domain should be unmounted.

Instead of executing the /sbin/vol-reconfig command manually, you can add /sbin/vol-reconfig to the /etc/inittab file by running the volinstall command. Then, when the system is rebooted, the encapsulation commands generated by /usr/sbin/volencap take effect.

The /etc/fdmns directory is updated on successful creation of the LSM volumes.
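
A quick check of the domain tree shows the soft links now pointing to LSM volumes (a sketch; the domain name dom1, the rootdg disk group, and the volume name are hypothetical):

ls -l /etc/fdmns/dom1

vol-rz3c -> /dev/vol/rootdg/vol-rz3c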




4.5.3    Preserving Block 0

Block 0 on a Digital UNIX disk device is read-only by default. AdvFS does not use block 0 when putting data on device partitions. To preserve the LBN mapping, the LSM nopriv disk must start at LBN block 0. As long as this disk is used for AdvFS volumes, this does not present a problem. However, if the disk is reused for other applications that write to block 0, a write failure will occur. To help avoid such failures, earlier releases of LSM labeled the LSM nopriv disk with the unique administration name advfs_device-name. You should remove this nopriv disk from the LSM disk group and redefine it without block 0 if the volume is no longer needed.

Starting with Version 4.0, the voldiskadd and voldisksetup utilities automatically map out block 0. Digital recommends that you use these utilities to add disks to LSM. Note that if volencap is used to add a disk to LSM, it will not preserve block 0. This can cause problems if an application writes to that part of the disk.




4.5.4    Encapsulating AdvFS Domains using Individual Commands

This section describes how to manually encapsulate AdvFS storage domains to generate LSM volumes. You need to perform a manual encapsulation only when the automatic volencap encapsulation process does not work for your configuration. The following instructions describe the manual process:

  1. Check the AdvFS domain by running the AdvFS domain inquiry command, showfdmn, on the domain that is to be encapsulated. Encapsulate the domain only if showfdmn indicates that the domain is inactive.

  2. Check the LSM disk group by entering the LSM disk group inquiry command, voldg list, for the target disk group. For example, to encapsulate the AdvFS storage domain into the disk group dg1, check that dg1 is enabled, as shown in the following example:

    voldg list

    NAME         STATE    ID
    rootdg       enabled  761416202.1025.chowfun.zk3.dec.com
    dg1          enabled  761416202.1034.chowfun.zk3.dec.com
    

  3. Save a copy of the domain's directory tree under /etc/fdmns in the event that a recovery is needed (see step 8).

  4. Encapsulate the physical devices of the AdvFS domain into the target disk group. For example:

    ls -R /etc/fdmns/dom2

    rz3c    rz16g
    
    voldisk -f init rz3c type=nopriv
    voldisk -f init rz16g type=nopriv
    voldg -g dg1 adddisk advfs_rz3c=rz3c advfs_rz16g=rz16g

  5. Define volumes to represent the user data. For example:

    volprint -g dg1 -F "%len" -d advfs_rz3c

    4109967
    
    volprint -g dg1 -F "%len" -d advfs_rz16g
    301986
    
    volassist -g dg1 make vol_rz3c 4109967 advfs_rz3c
    volassist -g dg1 make vol_rz16g 301986 advfs_rz16g

  6. Change the AdvFS soft links. For example:

    rm /etc/fdmns/dom2/rz3c
    rm /etc/fdmns/dom2/rz16g
    ln -sf /dev/vol/dg1/vol_rz3c /etc/fdmns/dom2/vol_rz3c
    ln -sf /dev/vol/dg1/vol_rz16g /etc/fdmns/dom2/vol_rz16g

  7. Once the encapsulation is complete, mount filesets using their regular names, as shown:

    mount -t advfs dom2#fset2 /mnt

  8. If the encapsulation fails, try to recover the domain by restoring the soft links. For example:

    rm -rf /etc/fdmns/dom2/vol*
    cp -R /etc/vol/reconfig.d/domain.d/dom2.d/dom2 /etc/fdmns




4.6    Using voldisk for Manual Encapsulations

In some cases, you may want to encapsulate a disk that has no space available for an LSM private region partition. The voldisk utility can encapsulate such disks by using a special type of disk device, called a nopriv device, that does not have a private region.

To perform this type of encapsulation, create a partition on the disk device that maps all parts of the disk that you want to be able to access. See disklabel(8).

Then, add the partition device for that partition using the following command syntax:

voldisk define partition-device type=nopriv

Here, partition-device is the basename of the device in the /dev directory. For example, to use partition h of disk device rz3, use the command:

voldisk define rz3h type=nopriv

To create volumes for other partitions on the disk drive, add the device to a disk group, determine where those partitions reside within the encapsulation partition, and then use volassist to create a volume with that offset and length, as in the sketch that follows.
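
A minimal sketch, assuming the nopriv device rz3h maps the whole disk and that the disk label shows a partition of interest beginning 131072 sectors into the encapsulation partition with a length of 262144 sectors (hypothetical values):

voldg adddisk rz3h
volassist make vol-data 262144 rz3h,131072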

A major drawback with using these special encapsulation partition devices is that LSM cannot track changes in the address or controller of the disk. Normally, LSM uses identifying information stored on the physical disk to track changes in the location of a physical disk. Because nopriv devices do not have identifying information stored on the physical disk, this cannot occur.

The best use of special encapsulation partition devices is to encapsulate a disk so that LSM can be used to move space off of the disk. When space is made available at the beginning or end of the disk, the special partition device can be removed and the disk can then be encapsulated as a standard disk device.

A disk group cannot be formed entirely from nopriv devices, because nopriv devices do not provide space for storing disk group configuration information. Configuration information must be stored on at least one disk in the disk group.