This chapter describes how to place existing user data under LSM control by using a process called encapsulation.
LSM supports data encapsulation from the following formats:
Note
LSM does not support encapsulation of user data on ULTRIX Disk Shadowing (UDS) volumes or ULTRIX Striping Driver stripe volumes.
During the encapsulation process, LSM transforms an LVM volume group, a UNIX style disk or disk partition, or an AdvFS storage domain into LSM logical volumes. Using the physical device name that you supply in an encapsulation command, LSM identifies how the device is being used (for example, for file systems) and generates LSM volumes to cover those areas on the disk.
The following commands allow you to perform a one-time conversion of existing user data into LSM volumes:
Note
In previous releases of LSM, a separate command, voladvdomencap, was used for migrating AdvFS domains to LSM. Starting with Version 4.0, the functionality of this command is provided by the volencap command. The voladvdomencap command is supported only for backward compatibility and is scheduled to be withdrawn from a future release of LSM.
See Chapter 5 for information about encapsulating the partitions used for the root file system and swap partition to LSM volumes.
See Section 7.11, Section C.20, and Section C.19 for information on unencapsulation procedures.
The following list describes requirements for performing encapsulation functions:
Some configurations require that the encapsulation process create an LSM nopriv disk. This type of encapsulation uses rootdg to store the configuration data during encapsulation.
If a failure occurs during encapsulation, you can restore the saved configuration data to return the data to its original state.
To minimize the risk of configuration changes during encapsulation, ensure that the data remains offline throughout the encapsulation process.
The LVM encapsulation process uses the name of a volume group that you specify with the vollvmencap command, and transforms the LVM volumes into LSM volumes.
Note
The Logical Volume Manager (LVM) is no longer supported on Digital UNIX systems. Support for the LVM encapsulation tools will also be retired in a future release of Digital UNIX. At that time, any data still under LVM control will be lost.
Encapsulation of LVM volumes is based on volume groups, which are collections of physical volumes, each of which contains the following:
In addition, the LVM data area is divided into physical extents, which are the basic building blocks of LVM volumes. The physical extents for all physical volumes in a volume group are all the same size.
Finally, a volume consists of a series of logical extents, each of which maps to one or more physical extents. Because Digital UNIX does not support mirroring for LVM, each logical extent can map to only one physical extent, except in the occasional case in which LVM adds temporary mirrors to a volume for the duration of a command's execution. For encapsulation purposes, these transient conditions are not considered.
Note
The physical extent bad block directory is not used.
The /etc/lvmtab file defines all the volume groups and their associated physical volumes on a system. When a system reboots, LVM restarts based on the information defined in this file.
There is an LVM record at the beginning of each physical volume in a volume group. The LVM record contains a number, and the location and length of the metadata region. The metadata region contains entries for each logical volume defined in the volume group and mappings of the physical extents on the physical volume to the logical extents of those logical volumes.
A typical LVM configuration has a few physical device partitions in a volume group. An arbitrary number of volumes is defined by mapping logical extents to physical extents in a volume group. These volumes are used for UFS file systems. User data can also be accessed directly through the device interface.
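For example, an LVM volume used for a UFS file system might appear in the /etc/fstab file with an entry such as the following (the volume group and volume names here are hypothetical):
/dev/vg1/lvol1 /data1 ufs rw 1 2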
To begin the encapsulation process, you supply the name of an LVM volume group as input to the vollvmencap command. For example:
# /usr/sbin/vollvmencap /dev/vg1
The vollvmencap command generates scripts containing the LSM commands needed to create LSM volumes. You execute the command scripts created by vollvmencap by running the /sbin/vol-lvm-reconfig command, as shown here:
# /sbin/vol-lvm-reconfig
Note that the LVM volumes in the volume group that was encapsulated must not be in use when /sbin/vol-lvm-reconfig is executed. For example, all file systems using LVM volumes must be unmounted.
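For example, if a file system on one of the LVM volumes is mounted on the hypothetical mount point /data1, unmount it before running the command:
# umount /data1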
When the encapsulation is successful, a message displays the name of a script that you must run to remove the LVM volumes. Run this script only after verifying that the encapsulation succeeded.
Note the following requirements for the LVM encapsulation process:
The encapsulation process creates an LSM subdisk for each set of contiguous physical extents that a physical volume maps to a logical volume. Because LVM volumes in Digital UNIX are not mirrored, the LSM volume has only one plex. The plex consists of the set of subdisks obtained by mapping the physical extents associated with each logical extent, and is used to create an LSM volume. The LSM volume name replaces the LVM volume name in the /etc/fstab file.
Block 0 on a Digital UNIX disk device is read-only by default. UFS does not use block 0 when putting data on device partitions. To preserve the LBN mapping, the LSM nopriv disk must start at LBN 0. As long as this disk is used for UFS volumes, this does not present a problem. However, if the disk is reused by other applications that write to block 0, a write failure will occur. To help avoid such failures, earlier releases of LSM labeled the LSM nopriv disk with the unique administration name device-name_blk0. If the volume is no longer needed, remove this nopriv disk from the LSM disk group and redefine the disk without block 0.
Starting with Version 4.0, the voldiskadd and voldisksetup utilities automatically map out block 0. Digital recommends that you use these utilities to add disks to LSM. Note that if you use volencap to add a disk to LSM, block 0 is not mapped out, which can cause problems if an application writes to that part of the disk.
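For example, assuming a nopriv disk named rz3_blk0 in rootdg whose volume is no longer needed and has been removed (the disk and disk group names here are hypothetical), you might remove the nopriv disk and then re-add the disk with voldiskadd, which maps out block 0:
# voldg -g rootdg rmdisk rz3_blk0
# voldisk rm rz3_blk0
# voldiskadd rz3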
The encapsulation process for UNIX style disks and disk partitions uses the volencap command to change a disk or partition into an LSM disk.
The volencap command automatically encapsulates user data for common configuration layouts such as the following:
The partitions on a physical device are mapped by a partition table called the disklabel. The disk's partitions and disk label have the following characteristics:
Each available partition has a special device file in the /dev directory. Users and applications access storage through these special device files. The voldisk, voldisksetup, and voldiskadd utilities perform partition overlap checks to ensure that partitions being initialized to LSM do not have valid UFS, AdvFS, swap, or LSM data. If the fstype field of a partition indicates that there is valid data, the utilities issue a warning.
The volencap and vol-reconfig commands provide an easy way to encapsulate disks and partitions. However, if you need a finer degree of control, use the manual encapsulation procedure to tailor the encapsulation to the specific needs of your configuration. See Section 4.4.4 for information to help you encapsulate UNIX style partitions manually.
To begin the encapsulation process, you supply the name of a physical device (for example, rz3) or a partition name (for example, rz3g) as input to the volencap command. For example:
# /usr/sbin/volencap rz3
The LSM encapsulation process uses information in the disk label and the /etc/fstab file to determine whether a partition is in use, for example, by a UFS file system or a database. If neither the disk label nor the /etc/fstab file indicates that a partition is being used by an application, you must encapsulate that partition by specifying the partition name.
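For example, to encapsulate only such a partition (here, the hypothetical partition rz3g), supply the partition name:
# /usr/sbin/volencap rz3g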
The /usr/sbin/volencap command generates scripts containing the necessary LSM commands and files to create LSM volumes. You run these scripts by executing the /sbin/vol-reconfig command, as shown here:
# /sbin/vol-reconfig
If any partition or disk being encapsulated is still in use, you must reboot the system for the encapsulation to take effect.
Instead of executing /sbin/vol-reconfig manually, you can add /sbin/vol-reconfig to the /etc/inittab file by running the volinstall command. When the system is rebooted, the encapsulation commands generated by /usr/sbin/volencap take effect.
Use this method if any disk or partition that was encapsulated was in use.
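For example, run volinstall and then reboot the system; the pending encapsulation takes effect during the reboot:
# volinstall
# shutdown -r now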
The results of the encapsulation process are as follows:
During encapsulation, LSM converts each partition that is in use (for example, by a UFS file system) to a subdisk. LSM then uses the subdisk to create a plex and, in turn, uses the plex to generate an LSM volume. Entries in the /etc/fstab or /sbin/swapdefault file are changed to use the LSM volume name instead of the block device name of the physical disk partition.
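For example, if the hypothetical partition rz3g held a UFS file system mounted on /data, its /etc/fstab entry might change from:
/dev/rz3g /data ufs rw 1 2
to an entry that uses the LSM volume name, such as:
/dev/vol/rootdg/vol-rz3g /data ufs rw 1 2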
Block 0 on a Digital UNIX disk device is read-only by default. UFS does not use block 0 when putting data on device partitions. To preserve the LBN mapping, the LSM nopriv disk must start at LBN 0. As long as this disk is used for UFS volumes, this does not present a problem. However, if the disk is reused by other applications that write to block 0, a write failure will occur. To help avoid such failures, earlier releases of LSM labeled the LSM nopriv disk with the unique administration name device-name_blk0. If the volume is no longer needed, remove this nopriv disk from the LSM disk group and redefine the disk without block 0.
Starting with Version 4.0, the voldiskadd and voldisksetup utilities automatically map out block 0. Digital recommends that you use these utilities to add disks to LSM. Note that if you use volencap to add a disk to LSM, block 0 is not mapped out, which can cause problems if an application writes to that part of the disk.
You need to perform a manual encapsulation only when the automatic volencap encapsulation process does not apply to your configuration. Before beginning the encapsulation process, do the following:
To encapsulate a partition, follow these steps. This example encapsulates the /dev/rz3h partition, which is in use as /usr/staff and has the following /etc/fstab entry:
/dev/rz3h /usr/staff ufs rw 1 2
Unmount the file system, then initialize the partition as an LSM nopriv disk:
# voldisk -f init rz3h type=nopriv
Add the disk either to rootdg or to another disk group (for example, dg1):
# voldg -g rootdg adddisk rz3h
# voldg -g dg1 adddisk rz3h
Display the disk label to find the size of the partition:
# disklabel -r rz3
#        size    offset  fstype  [fsize  bsize  cpg]
  a:   131072         0  4.2BSD    1024   8192   16   # (Cyl.    0 -  164*)
  b:   262144    131072  unused    1024   8192         # (Cyl.  164*-  492*)
  c:  2050860         0  unused    1024   8192         # (Cyl.    0 - 2569)
  d:   552548    393216  unused    1024   8192         # (Cyl.  492*- 1185*)
  e:   552548    945764  unused    1024   8192         # (Cyl. 1185*- 1877*)
  f:   552548   1498312  unused    1024   8192         # (Cyl. 1877*- 2569*)
  g:   819200    393216  4.2BSD    1024   8192   16   # (Cyl.  492*- 1519*)
  h:   838444   1212416  4.2BSD    1024   8192   16   # (Cyl. 1519*- 2569*)
The example output shows that the size of partition h is 838444 sectors.
# /sbin/volassist -g rootdg make vol-rz3h 838444s rz3h
Change the /etc/fstab entry to use the LSM volume:
/dev/vol/rootdg/vol-rz3h /usr/staff ufs rw 1 2
To encapsulate a complete disk to LSM and convert partitions that are in use into LSM volumes, follow these steps.
In the following example for the disk rz4, two partitions store user data by means of a UFS file system, and one partition allows applications to directly access and store user data using the device interface.
/dev/rz4d /data1 ufs rw 1 2
/dev/rz4e /data2 ufs rw 1 2
Partition rz4f is used by applications to store user data directly using the device interface. The following example shows the command and display for the disk label information for the rz4 disk:
# disklabel -r rz4
#        size    offset  fstype  [fsize  bsize  cpg]
  a:   131072         0  unused    1024   8192   16   # (Cyl.    0 -  164*)
  b:   262144    131072  unused    1024   8192         # (Cyl.  164*-  492*)
  c:  2050860         0  unused    1024   8192         # (Cyl.    0 - 2569)
  d:   552548    393216  4.2BSD    1024   8192         # (Cyl.  492*- 1185*)
  e:   552548    945764  4.2BSD    1024   8192         # (Cyl. 1185*- 1877*)
  f:   552548   1498312  unused    1024   8192         # (Cyl. 1877*- 2569*)
  g:   819200    393216  unused    1024   8192   16   # (Cyl.  492*- 1519*)
  h:   838444   1212416  unused    1024   8192   16   # (Cyl. 1519*- 2569*)
In the example for the rz4 disk, the output shows that partitions d and e are in use for UFS file systems. The f partition is marked unused even though it is in use. For partitions that applications access directly through the device interface, you must change the disk label (with the disklabel -e command) before encapsulation. Because no disk label tag exists specifically for this purpose, using the 4.1BSD tag (shown in the following example) is suggested:
#        size    offset  fstype  [fsize  bsize  cpg]
  a:   131072         0  unused    1024   8192   16   # (Cyl.    0 -  164*)
  b:   262144    131072  unused    1024   8192         # (Cyl.  164*-  492*)
  c:  2050860         0  unused    1024   8192         # (Cyl.    0 - 2569)
  d:   552548    393216  4.2BSD    1024   8192         # (Cyl.  492*- 1185*)
  e:   552548    945764  4.2BSD    1024   8192         # (Cyl. 1185*- 1877*)
  f:   552548   1498312  4.1BSD    1024   8192         # (Cyl. 1877*- 2569*)
  g:   819200    393216  unused    1024   8192   16   # (Cyl.  492*- 1519*)
  h:   838444   1212416  unused    1024   8192   16   # (Cyl. 1519*- 2569*)
Use partition c, which covers the entire disk, as the LSM disk, specifying the offset and size of its public and private regions. If there is not enough space at either the beginning or the end of the disk for the private region, you must encapsulate rz4c as a nopriv disk.
Note
Block 0 on a disk is write-locked. Therefore, do not include block 0 in either the public or the private region of the disk.
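The values in the following simple-disk example follow from the disk label and this restriction: privoffset=1 skips write-locked block 0, privlen=512 reserves 512 sectors for the private region, puboffset = 1 + 512 = 513, and publen = 2050860 (the size of partition c) - 513 = 2050347.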
For a simple disk, initialize rz4c with explicit private and public region values:
# voldisk -f init rz4c type=simple privoffset=1 \
privlen=512 puboffset=513 publen=2050347
For a nopriv disk, initialize rz4c without a private region:
# voldisk -f init rz4c type=nopriv
Add the disk either to rootdg or to another disk group (for example, dg1):
# voldg -g rootdg adddisk rz4c
# voldg -g dg1 adddisk rz4c
For an LSM simple disk, perform the following:
The following example shows the conversion calculation for partition d:
[ partition d offset - 513 ] or [ 393216 - 513 ] = 392703
Enter the following commands for partitions d, e, and f:
# volassist make vol-rz4c-01 552548 rz4c,392703
# volassist make vol-rz4c-02 552548 rz4c,945251
# volassist make vol-rz4c-03 552548 rz4c,1497799
For an LSM nopriv disk, the partition offsets do not need to be converted, because nopriv disks do not contain any metadata. Create the volumes using the original partition offsets:
# volassist make vol-rz4c-01 552548 rz4c,393216
# volassist make vol-rz4c-02 552548 rz4c,945764
# volassist make vol-rz4c-03 552548 rz4c,1498312
Change the /etc/fstab entries to use the LSM volume names:
/dev/vol/rootdg/vol-rz4c-01 /data1 ufs rw 1 2
/dev/vol/rootdg/vol-rz4c-02 /data2 ufs rw 1 2
Applications that were using /dev/rz4f should now use /dev/rvol/rootdg/vol-rz4c-03.
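To confirm that the volumes were created as expected, you can display them with the volprint command; for example:
# volprint -g rootdg -ht vol-rz4c-03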
Encapsulation of AdvFS user data is at the storage domain level. Each physical device in the domain is encapsulated into an LSM volume by changing the links in the domain tree to point to the LSM volumes.
An AdvFS domain is a single storage container consisting of one or more physical disk partitions. File systems, called filesets, are created and defined in the domain and can expand and contract within the domain if space is available. Storage devices can be added to or removed from a domain even when filesets are mounted. Active filesets and domains are determined as follows:
Storage devices can be physical devices or logical volumes. Each domain has a directory tree in the /etc/fdmns directory that describes the physical disk partitions that constitute the storage container. The root of the tree is the domain, and each leaf node in the domain directory tree is a physical disk partition name, which is a soft link to the full access path of the physical disk partition.
A typical system has a number of domains using a few physical devices as storage in each domain, and with many filesets created on these domains.
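For example, listing a hypothetical domain dom1 under /etc/fdmns might show a soft link for each storage device in the domain:
# ls -l /etc/fdmns/dom1
lrwxrwxrwx   1 root   system    9 Jan  5 19:12 rz3c -> /dev/rz3c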
For most configurations, you can encapsulate user data automatically using the volencap command. However, if a finer degree of control is desired, use the manual encapsulation procedure to tailor the encapsulation to the specific needs of your configuration. See Section 4.5.4 for information.
Note
In previous releases of LSM, a separate command, voladvdomencap, was used for migrating AdvFS domains to LSM. Starting with Version 4.0, the functionality of this command is provided by the volencap command. The voladvdomencap command is supported only for backward compatibility and is scheduled to be withdrawn from a future release of LSM.
The goal of encapsulating AdvFS domains is to capture the data in the physical disk partitions of a domain into LSM volumes, and present the same data access to AdvFS by changing the soft links in the domain directory tree to point to the LSM volumes.
The LSM volumes encapsulated from the domain's physical devices must present the same data at the same logical block number (LBN) locations as the physical device. The entire LBN range of the LSM nopriv disk is defined as one LSM subdisk. A plex is created with this subdisk, and an LSM volume is created with the plex.
No mount point changes are necessary during encapsulation, because the filesets that are mounted are abstractions to the domain. The domain can be activated normally after the encapsulation process completes. Once the domain is activated, the filesets remain unchanged and the encapsulation is transparent to users of the AdvFS domain.
To begin the encapsulation process, you supply the name of a domain as input to the volencap command. For example:
# /usr/sbin/volencap dom1
The /usr/sbin/volencap command generates scripts containing the necessary LSM commands and files to create LSM volumes. You run these scripts by executing the /sbin/vol-reconfig command, as shown here:
# /sbin/vol-reconfig
The domain should not be in use when you execute the /sbin/vol-reconfig command. All filesets in the domain should be unmounted.
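For example, you can list the filesets in the domain with the showfsets command and then unmount any that are mounted (the domain name and mount point here are hypothetical):
# showfsets dom1
# umount /data1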
Instead of executing the /sbin/vol-reconfig command manually, you can add /sbin/vol-reconfig to the /etc/inittab file by running the volinstall command. Then, when the system is rebooted, the encapsulation commands generated by /usr/sbin/volencap take effect.
The /etc/fdmns directory is updated when the LSM volumes are successfully created.
Block 0 on a Digital UNIX disk device is read-only by default. AdvFS does not use block 0 when putting data on device partitions. To preserve the LBN mapping, the LSM nopriv disk must start at LBN 0. As long as this disk is used for AdvFS volumes, this does not present a problem. However, if the disk is reused by other applications that write to block 0, a write failure will occur. To help avoid such failures, earlier releases of LSM labeled the LSM nopriv disk with the unique administration name advfs_device-name. If the volume is no longer needed, remove this nopriv disk from the LSM disk group and redefine the disk without block 0.
Starting with Version 4.0, the voldiskadd and voldisksetup utilities automatically map out block 0. Digital recommends that you use these utilities to add disks to LSM. Note that if you use volencap to add a disk to LSM, block 0 is not mapped out, which can cause problems if an application writes to that part of the disk.
This section describes how to manually encapsulate AdvFS storage domains to generate LSM volumes. You need to perform a manual encapsulation only when the automatic volencap encapsulation process does not work for your configuration. The following instructions describe the manual process:
Verify the state of the domain with the showfdmn command. If the domain does not exist, showfdmn reports an error:
# showfdmn dom2
showfdmn: unable to get info for domain 'dom2'
showfdmn: error = No such file or directory
If the domain is active, showfdmn displays its volume information:
# showfdmn dom2
                 Id              Date Created  LogPgs  Domain Name
  2d2b5782.0009cca0  Wed Jan  5 19:12:50 1994     512  dom2

  Vol   512-Blks     Free  % Used  Cmode  Rblks  Wblks  Vol Name
   1L    1024000  1015408      1%     on    256    256  /dev/rz3c
If the domain exists but is not active, showfdmn cannot display its volume information. The domain must be inactive (all of its filesets unmounted) before you encapsulate it:
# showfdmn dom2
                 Id              Date Created  LogPgs  Domain Name
  2d2b5782.0009cca0  Wed Jan  5 19:12:50 1994     512  dom2
showfdmn: unable to display volume info; domain not active
Choose the disk group that will contain the LSM volumes. The voldg list command displays the disk groups defined on the system:
# voldg list
NAME         STATE           ID
rootdg       enabled         761416202.1025.chowfun.zk3.dec.com
dg1          enabled         761416202.1034.chowfun.zk3.dec.com
Create a directory for the domain under /etc/vol/reconfig.d/domain.d, and record the chosen disk group in a file named dg within it:
# mkdir -p /etc/vol/reconfig.d/domain.d/dom2.d
# echo "dg1" > /etc/vol/reconfig.d/domain.d/dom2.d/dg
# cat /etc/vol/reconfig.d/domain.d/dom2.d/dg
dg1
Save a copy of the domain's /etc/fdmns entry so that you can restore it if the encapsulation fails:
# cp -R /etc/fdmns/dom2 \
/etc/vol/reconfig.d/domain.d/dom2.d
List the devices in the domain, and initialize each one as an LSM nopriv disk:
# ls -R /etc/fdmns/dom2
rz3c   rz16g
# voldisk -f init rz3c type=nopriv
# voldisk -f init rz16g type=nopriv
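The nopriv disks must also be added to the disk group recorded in the dg file, using the disk media names that the following commands reference (this example assumes the advfs_ prefix naming):
# voldg -g dg1 adddisk advfs_rz3c=rz3c advfs_rz16g=rz16g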
Use the volprint command to find the length of each LSM disk, and create an LSM volume that covers each one:
# volprint -g dg1 -F "%len" -d advfs_rz3c
4109967
# volprint -g dg1 -F "%len" -d advfs_rz16g
301986
# volassist -g dg1 make vol-rz3c 4109967 advfs_rz3c
# volassist -g dg1 make vol-rz16g 301986 advfs_rz16g
Remove the device links from the domain's /etc/fdmns directory and replace them with links to the LSM volumes:
# rm /etc/fdmns/dom2/rz3c
# rm /etc/fdmns/dom2/rz16g
# ln -sf /dev/vol/dg1/vol-rz3c /etc/fdmns/dom2/vol-rz3c
# ln -sf /dev/vol/dg1/vol-rz16g /etc/fdmns/dom2/vol-rz16g
Mount a fileset to verify that the domain is accessible through the LSM volumes:
# mount -t advfs dom2#fset2 /mnt
If the encapsulation fails, remove the volume links and restore the saved copy of the domain directory:
# rm -rf /etc/fdmns/dom2/vol*
# cp -R /etc/vol/reconfig.d/domain.d/dom2.d/dom2 /etc/fdmns
In some cases, you may want to encapsulate a disk that does not have any space that can be used for an LSM private region partition. The voldisk utility can be used to encapsulate disks that do not have available space. This is done using special types of disk devices, called nopriv devices, that do not have private regions.
To perform this type of encapsulation, create a partition on the disk device that maps all parts of the disk that you want to be able to access. See disklabel(8).
Then, add the partition device for that partition using the following command syntax:
voldisk define partition-device type=nopriv
Here, partition-device is the basename of the device in the /dev directory. For example, to use partition h of disk device rz3, use the command:
# voldisk define rz3h type=nopriv
To create volumes for other partitions on the disk drive, add the device to a disk group, figure out where those partitions reside within the encapsulation partition, then use volassist to create a volume with that offset and length.
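For example, assuming the nopriv device rz3h from the previous example was added to rootdg, and a data area of 262144 sectors begins at offset 131072 within that partition (hypothetical values), you could create a volume over that area as follows:
# voldg -g rootdg adddisk rz3h
# volassist -g rootdg make vol-data 262144 rz3h,131072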
A major drawback of using these special encapsulation partition devices is that LSM cannot track changes in the address or controller of the disk. Normally, LSM uses identifying information stored on the physical disk to track changes in the location of a physical disk. Because nopriv devices do not have identifying information stored on them, LSM cannot track such changes.
The best use of special encapsulation partition devices is to encapsulate a disk so that LSM can be used to move space off of the disk. When space is made available at the beginning or end of the disk, the special partition device can be removed and the disk can then be encapsulated as a standard disk device.
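For example, once space is free at the beginning or end of the disk, you might remove the hypothetical nopriv device rz3h and re-initialize the disk as a standard LSM disk:
# voldg -g rootdg rmdisk rz3h
# voldisk rm rz3h
# voldisksetup -i rz3
# voldg -g rootdg adddisk rz3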
A disk group cannot be formed entirely from nopriv devices, because nopriv devices do not provide space for storing disk group configuration information. Configuration information must be stored on at least one disk in the disk group.