An LSM volume is an object that represents a hierarchy of LSM objects that allocates space to, and stores data for, a file system or application. You create an LSM volume differently depending on whether the volume is for a new file system or application or an existing file system or application.
This chapter describes how to:
Create a disk group and check a disk group for space (Section 4.1)
Create an LSM volume for new data (Section 4.2)
Configure UFS or AdvFS file systems to use an LSM volume (Section 4.3)
Create an LSM volume for existing data (Section 4.4)
Use the information from the worksheets you filled out in Chapter 2 to create disk groups and LSM volumes.
4.1 Creating Disk Groups
You must create an LSM volume in a disk group. By default, LSM creates volumes in the rootdg disk group, which was created when you installed LSM. You can create all LSM volumes in the rootdg disk group or you can create other disk groups. The following sections describe how to:
Display disk group information
Create a disk group
Add disks to a disk group
Create a backup copy of the disk label information
4.1.1 Displaying Disk Group Information
To display information about the rootdg disk group and other disk groups, enter:
# voldisk list
Information similar to the following is displayed:
DEVICE       TYPE      DISK         GROUP        STATUS
dsk0         sliced    -            -            unknown
dsk1         sliced    -            -            unknown
dsk2         sliced    dsk2         rootdg       online
dsk3         sliced    dsk3         rootdg       online
dsk4         sliced    dsk4         rootdg       online
dsk5         sliced    dsk5         rootdg       online
dsk6         sliced    dsk6         dg1          online
dsk7         sliced    -            -            unknown
dsk8         sliced    dsk8         dg1          online
dsk9         sliced    -            -            unknown
dsk10        sliced    -            -            unknown
dsk11        sliced    -            -            unknown
dsk12        sliced    -            -            unknown
dsk13        sliced    -            -            unknown
DEVICE
Specifies the disk access name assigned by the operating system software.
TYPE
Specifies the LSM disk type: sliced, simple, or nopriv.
DISK
Specifies the LSM disk media name. An LSM disk media name displays only if the disk is in a disk group.
GROUP
Specifies the disk group to which the device belongs. A group name displays only if the disk is in a disk group.
STATUS
Specifies the status of the LSM device, such as online or unknown.
To display the total usable space in a disk group, enter:
# volassist [-g disk_group] maxsize
The following command line displays the available space in a disk group called dg1:
# volassist -g dg1 maxsize
Information similar to the following is displayed:
Maximum volume size: 6139904 (2998Mb)
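The size that volassist reports is a count of 512-byte sectors, with the megabyte equivalent shown in parentheses. As a minimal sketch (assuming the default 512-byte sector size), the conversion in the output above can be checked like this:

```python
# Convert an LSM size in 512-byte sectors to whole megabytes.
# Assumes the default 512-byte sector size used in volassist output.
SECTOR_SIZE = 512

def sectors_to_mb(sectors):
    """Return the size in whole MB for a given sector count."""
    return sectors * SECTOR_SIZE // (1024 * 1024)

# The maxsize output above: 6139904 sectors is 2998 MB.
print(sectors_to_mb(6139904))  # → 2998
```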
4.1.2 Creating a Disk Group or Adding Disks to a Disk Group
The voldiskadd script is an interactive script that lets you:
Initialize disks or disk partitions for exclusive use by LSM
Create a disk group
Add disks to a disk group
Note
By default, LSM initializes each disk with one copy of the configuration database. If a disk group will have fewer than four disks, you should initialize each disk to have two copies of the disk group's configuration database to ensure that the disk group has multiple copies in case one or more disks fail. You must use the voldisksetup command to initialize disks with more than one copy of the configuration database; see Section 5.1.1 for more information.
If you specify an uninitialized disk, LSM initializes the disk as an LSM sliced disk. If you specify a partition name, LSM initializes the partition as an LSM simple disk. You can specify several disks and disk partitions at once, separated by a space; for example:
# voldiskadd dsk3 dsk4a dsk5 dsk6g
After you initialize a disk or disk partition, LSM writes a new disk label and the disk or disk partition becomes an LSM disk for exclusive use by LSM.
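The naming rule above (a bare disk name becomes a sliced disk, a name ending in a partition letter becomes a simple disk) can be expressed as a small helper. This is an illustrative heuristic for reasoning about voldiskadd arguments, not an LSM utility:

```python
import re

def lsm_disk_type(name):
    """Classify an argument to voldiskadd: a whole-disk name (e.g. dsk3)
    is initialized as an LSM sliced disk; a partition name (e.g. dsk6g,
    ending in a partition letter a-h) as an LSM simple disk."""
    if re.fullmatch(r"dsk\d+[a-h]", name):
        return "simple"
    if re.fullmatch(r"dsk\d+", name):
        return "sliced"
    raise ValueError(f"not a recognized device name: {name}")

# The arguments from the example command line above.
for dev in ["dsk3", "dsk4a", "dsk5", "dsk6g"]:
    print(dev, lsm_disk_type(dev))
```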
The voldiskadd script prompts you for the following information:
A disk group name
If you are creating a disk group, the disk group name must be unique and can contain up to 31 alphanumeric characters that cannot include spaces or the forward slash ( / ).
A disk media name for each disk you configure in the disk group
You can use the default disk media name or you can assign a disk media name of up to 31 alphanumeric characters that cannot include spaces or the forward slash ( / ).
Whether the disk should be a spare disk for the disk group
A spare disk is a disk initialized by LSM, but used only as a replacement disk if a disk that contains a mirror or RAID 5 plex fails. See Section 3.4.4 for more information about how LSM uses spare disks. For the best protection, configure at least one spare disk in each disk group that contains mirror or RAID 5 plexes.
The following example uses a disk called dsk9 to create a disk group called dg1:
# voldiskadd dsk9
Information similar to the following is displayed:
Add or initialize disks
Menu: VolumeManager/Disk/AddDisks
Here is the disk selected.
dsk9
Continue operation? [y,n,q,?] (default: y)
You can choose to add this disk to an existing disk group, a
new disk group, or leave the disk available for use by future
add or replacement operations. To create a new disk group,
select a disk group name that does not yet exist. To leave
the disk available for future use, specify a disk group name
of "none".
Which disk group [<group>,none,list,q,?] (default: rootdg) dg1
There is no active disk group named dg1.
Create a new group named dg1? [y,n,q,?] (default: y)
The default disk name that will be assigned is:
dg101
Use this default disk name for the disk? [y,n,q,?] (default: y)
Add disk as a spare disk for dg1? [y,n,q,?] (default: n)
A new disk group will be created named dg1 and the selected disks
will be added to the disk group with default disk names.
dsk9
Continue with operation? [y,n,q,?] (default: y)
The following disk device has a valid disk label, but does
not appear to have been initialized for the Logical Storage
Manager. If there is data on the disk that should NOT be
destroyed you should encapsulate the existing disk partitions
as volumes instead of adding the disk as a new disk.
dsk9
Initialize this device? [y,n,q,?] (default: y)
Initializing device dsk9.
Creating a new disk group named dg1 containing the disk
device dsk9 with the name dg101.
Goodbye.
4.1.3 Creating a Backup Copy of the Disk Label Information
It is highly recommended that you create a backup copy of the updated disk label information for each LSM disk.
Having this information will simplify the process of replacing a failed disk, by allowing you to copy the failed disk's attributes to a new disk. Once a disk fails, you cannot read its disk label, and you cannot copy that information to a new disk.
To create a file that contains a backup copy of the disk label information, enter:
# disklabel dskn > file
See the disklabel(8) reference page for more information on the disklabel command.
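For several LSM disks, one way to script the recommended backup is to generate one disklabel command per disk. A minimal sketch; the backup directory name here is hypothetical, and any location that survives a disk failure will do:

```python
def disklabel_backup_cmds(disks, backup_dir="/usr/var/lsm_labels"):
    """Build the shell command lines that save each disk's label to a
    file, one file per disk (backup_dir is an illustrative path)."""
    return [f"disklabel {d} > {backup_dir}/{d}.label" for d in disks]

for cmd in disklabel_backup_cmds(["dsk2", "dsk3", "dsk4"]):
    print(cmd)
```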
4.2 Creating an LSM Volume for New Data
To create an LSM volume for a new file system or application, use the volassist command. The volassist command finds the necessary space within the disk group and creates all the objects for the volume. You must specify a volume name and length (size) on the command line.
You can specify values for other LSM volume attributes on the command line or in a text file that you create. If you do not specify a value for an attribute, LSM uses a default value.
To display the default values for volume attributes, enter:
# volassist help showattrs
Information similar to the following is displayed:
#Attributes:
layout=nomirror,nostripe,span,nocontig,raid5log,noregionlog,diskalign,nostorage
mirrors=2 columns=0 nlogs=1
regionlogs=1 raid5logs=1
min_columns=2 max_columns=8
regionloglen=0 raid5loglen=0
logtype=region
stripe_stripeunitsize=64 raid5_stripeunitsize=16
usetype=fsgen diskgroup= comment="" fstype=
user=0 group=0 mode=0600
probe_granularity=2048
alloc=
wantalloc=
mirror=
Some volume attributes have several options to define them. Some options define an attribute globally, while others define an attribute for a specific plex type. For example, you can specify the size of a stripe data unit using the stripeunit option for both striped and RAID 5 plexes, the stripe_stripeunitsize option for striped plexes, or the raid5_stripeunitsize option for RAID 5 plexes.
See the volassist(8) reference page for a complete list of attributes. Table 4-1 describes some of the common attributes for which you can specify a value.
Table 4-1: Common LSM Volume Attributes
| Attribute Description | Attribute Options |
| Plex type | layout={concatenated|striped|raid5} |
| Usage type | -U {fsgen|raid5|gen} |
| Whether or not to create mirrors, and if so how many | mirror={number|yes|no} |
| Whether or not to use a Dirty Region Log (DRL) plex for mirrored plexes | logtype={drl|none} |
| Size of the stripe width for a striped or RAID 5 plex | stripeunit=data_unit_size |
| Number of columns for a striped or RAID 5 plex | nstripe=number_of_columns |
Creating a text file that specifies many of these attributes is useful if you create many LSM volumes that use the same nondefault values for attributes. Any attribute that you can specify on the command line can be specified on a separate line in the text file.
By default, LSM looks for the /etc/default/volassist file when you create an LSM volume. If you created an /etc/default/volassist file, LSM creates each volume using the attributes that you specify on the command line and in the /etc/default/volassist file.
Example 4-1 shows a text file called /etc/default/volassist that creates an LSM volume using a four-column striped plex with two mirrors, a stripe width of 32 KB, and no log.
Example 4-1: LSM Volume Attribute Defaults File
# LSM Vn.n
# volassist defaults file. Use '#' for comments
# number of stripes
nstripe=4
# layout
layout=striped
# mirroring
nmirror=2
# logging
logtype=none
# stripe size
stripeunit=32k
For example, to create an LSM volume using the attributes in the /etc/default/volassist file, enter:
# volassist make volume length
To specify a file other than the /etc/default/volassist file, you must use the volassist command with the -d option followed by the name of the file. If you use the -d option, LSM creates the volume using the attributes that you specify on the command line and in the named file.
For example, to create an LSM volume using the attributes in a file other than the /etc/default/volassist file, enter:
# volassist -d filename make volume length
LSM assigns values to attributes in the following order of precedence, highest first:
Values on the command line
Values in a file that you specify by using the volassist -d option
Values in the /etc/default/volassist file
Default values
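The precedence list above behaves like a lookup that consults each source in order and stops at the first match. A minimal sketch using Python's ChainMap; the attribute names and values are illustrative, not a complete defaults set:

```python
from collections import ChainMap

# Illustrative attribute sources, most specific first.
builtin_defaults = {"layout": "nomirror", "stripeunit": "64"}
etc_default_file = {"layout": "striped", "nstripe": "4"}   # /etc/default/volassist
d_option_file    = {"nstripe": "6"}                        # file named with -d
command_line     = {"stripeunit": "32k"}

# ChainMap returns the first value found, matching LSM's precedence:
# command line, then the -d file, then /etc/default/volassist,
# then the built-in defaults.
attrs = ChainMap(command_line, d_option_file, etc_default_file, builtin_defaults)

print(attrs["stripeunit"])  # → 32k (from the command line)
print(attrs["nstripe"])     # → 6 (from the -d file)
print(attrs["layout"])      # → striped (from /etc/default/volassist)
```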
4.2.1 Creating an LSM Volume Using a Concatenated Plex
Creating an LSM volume that uses a concatenated plex can be a three-step process. Step 1 is required, and the others are required only if you want to mirror the plex. To increase performance for mirror plexes, you can specify the disks for the data plexes and the DRL plex to ensure that LSM creates these plexes on different disks, preferably on different buses.
To create an LSM volume that uses a concatenated plex:
Create a volume with a single plex, optionally specifying disks:
# volassist [-g disk_group] make volume length [disks]
The following example creates a 3 GB volume called vol2 that uses disks dsk2, dsk3, and dsk4 in a disk group called dg1:
# volassist -g dg1 make vol2 3g dsk2 dsk3 dsk4
The volume is created and started. If you want to mirror the plex, continue with step 2.
Add a mirror plex to the volume, specifying disks not used in the first data plex and preferably on different buses:
# volassist [-g disk_group] mirror volume init=active \
layout=nolog disks
The init=active option prevents LSM from synchronizing the plexes. Because the volume is new and contains no data yet, LSM does not need to synchronize the plexes.
The following example creates a mirror plex for the same volume, using disks dsk5, dsk6, and dsk7:
# volassist -g dg1 mirror vol2 init=active \
layout=nolog dsk5 dsk6 dsk7
Note
Because two mirrors are used in the volume, 6 GB of free space is needed. Each mirror uses 3 GB of disk space.
Add a DRL plex to the volume, specifying a disk that is not used by one of the data plexes:
# volassist addlog volume disk
The following example adds a DRL plex to vol2 on a disk called dsk10:
# volassist addlog vol2 dsk10
The volume is ready for use.
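As the note in step 2 points out, each mirror consumes the full volume size. The free-space arithmetic for the procedure above can be sketched as follows (the DRL plex adds only a small, implementation-defined amount, ignored here):

```python
def mirrored_space_gb(volume_gb, nmirror):
    """Disk space consumed by a mirrored volume: every data plex
    holds a complete copy of the data."""
    return volume_gb * nmirror

# The 3 GB volume vol2 with two mirrors needs 6 GB of free space.
print(mirrored_space_gb(3, 2))  # → 6
```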
4.2.2 Creating an LSM Volume Using a Striped Plex
Creating an LSM volume that uses a striped plex can be a three-step process. Step 1 is required, and the others are required only if you want to mirror the plex. To increase performance for mirror plexes, you can specify the disks for the data plexes and the DRL plex to ensure that LSM creates these plexes on different disks, preferably on different buses.
Note
In general, you should not use LSM to stripe data if you also use a hardware controller to stripe data. In some specific cases such a configuration can improve performance but only if:
Most of the volume I/O requests are large (>= 1 MB).
The LSM volume is striped over multiple RAID sets on different controllers.
The LSM stripe size is a multiple of the full hardware RAID stripe size.
The number of LSM columns in each plex in the volume should be equal to the number of hardware RAID controllers. See your hardware RAID documentation for information about how to choose the best number of columns for the hardware RAID set.
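The size-related conditions in the note can be checked mechanically. A hedged sketch, where the hardware chunk size and member count are illustrative values (consult your controller documentation for the real ones); the multiple-controller requirement must be verified separately:

```python
ONE_MB = 1024  # all sizes below are in KB

def lsm_over_hw_raid_ok(io_size_kb, lsm_stripe_kb, hw_chunk_kb, hw_data_disks):
    """Check the two size conditions from the note for one RAID set:
    most I/Os are large (>= 1 MB), and the LSM stripe size is a
    multiple of the full hardware stripe (chunk size x data disks)."""
    full_hw_stripe_kb = hw_chunk_kb * hw_data_disks
    return io_size_kb >= ONE_MB and lsm_stripe_kb % full_hw_stripe_kb == 0

# A 512 KB LSM stripe over a 4-disk RAID set with 64 KB chunks is
# aligned (512 is a multiple of 256), but small I/Os still disqualify
# the configuration.
print(lsm_over_hw_raid_ok(2048, 512, 64, 4))  # → True
print(lsm_over_hw_raid_ok(64, 512, 64, 4))    # → False
```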
By default, the volassist command creates columns for a striped plex on disks in alphanumeric order, regardless of their order on the command line.
To improve performance, you might want to create columns using disks on different buses. See Section 4.2.3 for more information about specifying the disk order for columns in a striped plex.
To create an LSM volume that uses striped plexes:
Create a volume with a single plex, optionally specifying disks, preferably on different buses:
# volassist [-g disk_group] make volume length \
layout=stripe [nstripe=number_of_columns] \
[stripeunit=data_unit_size] [disks]
The following example creates a 4 GB volume called vol_stripe that uses disks dsk2, dsk3, and dsk4 to create a three-column striped plex in a disk group called dg1:
# volassist -g dg1 make vol_stripe 4g \
layout=stripe nstripe=3 dsk2 dsk3 dsk4
The volume is created and started. If you want to mirror the plex, continue with step 2.
Add a mirror plex to the volume, specifying disks on a different bus:
# volassist [-g disk_group] mirror volume \
init=active layout=nolog disks
The init=active option prevents LSM from synchronizing the plexes. Because the volume is new and contains no data yet, LSM does not need to synchronize the plexes.
The following example creates a mirror plex for the volume vol_stripe, using disks dsk5, dsk6, and dsk7:
# volassist -g dg1 mirror vol_stripe \
init=active layout=nolog dsk5 dsk6 dsk7
Note
Because two mirrors are used in the volume, 8 GB of free space is needed. Each mirror uses 4 GB of disk space.
Add a DRL plex to the volume, specifying a disk that is not used by one of the data plexes:
# volassist addlog volume disk
The following example adds a DRL plex to vol_stripe on a disk called dsk10:
# volassist addlog vol_stripe dsk10
The volume is ready for use.
4.2.3 Creating an LSM Volume Using a Striped Plex (on Different Buses)
By default, LSM creates columns for a striped plex on the first available disks it finds in the disk group. This might result in a volume with columns that use disks on the same bus.
You can improve performance by creating a striped plex with columns that use disks on different buses. To do so, you must create the subdisks for each column.
Each subdisk you create should be the same size, on a different disk on a different bus, and a multiple of the data unit size, so there is no wasted space on the subdisk. For example, with a data unit size of 64 KB for a striped plex, each subdisk size should be a multiple of 64 KB. In the examples that follow, the subdisk size is 16 MB.
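The sizing rule above is easy to verify: a subdisk length that is an exact multiple of the data unit size leaves no unused tail. A minimal sketch, with sizes in KB:

```python
def subdisk_waste_kb(subdisk_kb, stripe_unit_kb):
    """KB at the end of a subdisk that cannot hold a full stripe
    data unit (0 means the subdisk size is an exact multiple)."""
    return subdisk_kb % stripe_unit_kb

# A 16 MB subdisk with a 64 KB data unit wastes nothing.
print(subdisk_waste_kb(16 * 1024, 64))  # → 0
```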
Creating an LSM volume that uses a striped plex on different buses can be a six-step process. Steps 3 and 5 are required only if you want to mirror the plex. To increase performance for mirror plexes, you can specify the disks for the data plexes and the DRL plex to ensure that LSM creates these plexes on different disks, preferably on different buses.
To create an LSM volume that uses a striped plex on different buses:
Create the subdisks on disks on different buses:
# volmake [-g disk_group] sd subdisk disk len=length
The following example creates subdisks on disks dsk2, dsk3, dsk4, dsk5, dsk6, and dsk7. In this example, disks dsk2 and dsk3 are on bus 1, dsk4 and dsk5 are on bus 2, and dsk6 and dsk7 are on bus 3:
# volmake sd dsk2-01 dsk2 len=16m
# volmake sd dsk3-01 dsk3 len=16m
# volmake sd dsk4-01 dsk4 len=16m
# volmake sd dsk5-01 dsk5 len=16m
# volmake sd dsk6-01 dsk6 len=16m
# volmake sd dsk7-01 dsk7 len=16m
Create a striped plex, specifying the order of subdisks on which to create the columns:
# volmake [-g disk_group] plex plex layout=stripe \
stwidth=data_unit_size sd=subdisk,...
The following example uses the subdisks created in step 1 and lists them in alternating bus order to create a six-column striped plex called plex_01. Subdisks dsk2-01 and dsk3-01 are on bus 1, subdisks dsk4-01 and dsk5-01 are on bus 2, and subdisks dsk6-01 and dsk7-01 are on bus 3, so the command line lists the subdisks in a pattern that alternates the bus order:
# volmake plex plex_01 layout=stripe stwidth=64 \
sd=dsk2-01,dsk4-01,dsk3-01,dsk6-01,dsk5-01,dsk7-01
Optionally, create a mirror plex for the volume. If the volume will have only one data plex, go to step 4.
Repeat step 1 to create subdisks on a different group of disks on different buses for the second data plex.
The following example creates subdisks for the columns in the second data plex on disks dsk8, dsk9, dsk10, dsk11, dsk12, and dsk13. In this example, disks dsk8 and dsk9 are on bus 4, dsk10 and dsk11 are on bus 5, and dsk12 and dsk13 are on bus 6:
# volmake sd dsk8-01 dsk8 len=16m
# volmake sd dsk9-01 dsk9 len=16m
# volmake sd dsk10-01 dsk10 len=16m
# volmake sd dsk11-01 dsk11 len=16m
# volmake sd dsk12-01 dsk12 len=16m
# volmake sd dsk13-01 dsk13 len=16m
Repeat step 2 to create the second data plex specifying the order of subdisks on which to create the columns.
The following example uses the subdisks created in step 3a and lists them in alternating bus order to create a six-column striped plex called plex_02. Subdisks dsk8-01 and dsk9-01 are on bus 4, subdisks dsk10-01 and dsk11-01 are on bus 5, and subdisks dsk12-01 and dsk13-01 are on bus 6, so the command line lists the subdisks in a pattern that alternates the bus order:
# volmake plex plex_02 layout=stripe stwidth=64 \
sd=dsk8-01,dsk10-01,dsk9-01,dsk12-01,dsk11-01,dsk13-01
Create the LSM volume, specifying the name of the data plex you created in step 2, and the additional data plex (if any) you created in step 3:
# volmake [-g disk_group] -U usage_type vol volume \
plex=plex ...
The following example creates an LSM volume called vol9 with a usage type of fsgen, using a single data plex called plex_01:
# volmake -U fsgen vol vol9 plex=plex_01
The following example creates an LSM volume called vol_mirr with a usage type of fsgen, using data plexes called plex_01 and plex_02:
# volmake -U fsgen vol vol_mirr plex=plex_01,plex_02
If the volume has mirror plexes, add a DRL plex to the volume on a disk that is not used by one of the data plexes:
# volassist addlog volume disk
Start the LSM volume:
# volume start volume
The volume is ready for use.
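The alternating column order used in steps 2 and 3 can be derived programmatically: take one subdisk from each bus in turn, so that adjacent columns never share a bus. This round-robin sketch produces one valid alternating order; the examples above use a different, equally valid interleaving:

```python
def alternate_by_bus(buses):
    """Round-robin over per-bus subdisk lists (all the same length),
    so that consecutive columns land on different buses."""
    return [sd for group in zip(*buses) for sd in group]

# The first data plex's subdisks, grouped by bus as in step 1.
buses = [
    ["dsk2-01", "dsk3-01"],  # bus 1
    ["dsk4-01", "dsk5-01"],  # bus 2
    ["dsk6-01", "dsk7-01"],  # bus 3
]
print(",".join(alternate_by_bus(buses)))
# → dsk2-01,dsk4-01,dsk6-01,dsk3-01,dsk5-01,dsk7-01
```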
4.2.4 Creating an LSM Volume Using a RAID 5 Plex
By default, the volassist command creates columns for a RAID 5 plex on disks in alphanumeric order, regardless of their order on the command line.
To improve performance, you might want to create the columns on disks on different buses. See Section 4.2.5 for more information about specifying the disk order for columns in a RAID 5 plex.
The volassist command automatically creates a RAID 5 log plex for the volume.
To create an LSM volume that uses a RAID 5 plex, enter:
# volassist [-g disk_group] -U raid5 make volume \
length layout=raid5 [nstripe=number_of_columns] \
[stripeunit=data_unit_size] [disks]
The following example creates a 6 GB, six-column volume called vol6 in a disk group called dg1, using any available disks:
# volassist -g dg1 -U raid5 make vol6 6g layout=raid5 nstripe=6
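One column's worth of every stripe in a RAID 5 plex holds parity, so an n-column volume needs n/(n-1) times the volume size in raw disk space. A sketch of that arithmetic (RAID 5 log plex overhead ignored):

```python
def raid5_raw_gb(volume_gb, ncolumns):
    """Raw disk space for a RAID 5 volume: the data plus one
    column-equivalent of parity spread across the stripes."""
    return volume_gb * ncolumns / (ncolumns - 1)

# The 6 GB, six-column volume vol6 needs 7.2 GB of raw disk space.
print(round(raid5_raw_gb(6, 6), 1))  # → 7.2
```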
4.2.5 Creating an LSM Volume Using a RAID 5 Plex (on Different Buses)
By default, LSM creates columns for a RAID 5 plex on the first available disks it finds in the disk group. This might result in a volume with columns that use disks on the same bus.
You can improve performance by creating a RAID 5 plex with columns that use disks on different buses. To do so, you must create the subdisks for each column.
Each subdisk you create should be the same size, on a different disk on a different bus, and a multiple of the data unit size, so there is no wasted space on the subdisk. For example, with a stripe width of 16 KB for a RAID 5 plex, each subdisk size should be a multiple of 16 KB.
To create an LSM volume that uses a RAID 5 plex on different buses:
Create the subdisks on disks on different buses:
# volmake [-g disk_group] sd subdisk disk len=length
The following example creates 1 MB subdisks for the data plex on disks called dsk6, dsk7, dsk8, and dsk9. In this example, disks dsk6 and dsk7 are on bus 1, and dsk8 and dsk9 are on bus 2:
# volmake sd dsk6-01 dsk6 len=1m
# volmake sd dsk7-01 dsk7 len=1m
# volmake sd dsk8-01 dsk8 len=1m
# volmake sd dsk9-01 dsk9 len=1m
Create the RAID 5 data plex, specifying the order of subdisks on which to create the columns:
# volmake [-g disk_group] plex plex layout=raid5 \
stwidth=data_unit_size sd=subdisk,...
The following example uses the subdisks created in step 1 to create a four-column RAID 5 data plex called plex-01:
# volmake plex plex-01 layout=raid5 stwidth=16 \
sd=dsk6-01,dsk8-01,dsk7-01,dsk9-01
Note that in this plex, the stripe alternates between subdisks on buses 1 and 2.
Create the LSM volume, specifying the data plex:
# volmake [-g disk_group] -U raid5 vol volume plex=plex
The following example creates an LSM volume called vol5 using the plex called plex-01:
# volmake -U raid5 vol vol5 plex=plex-01
Add a RAID 5 log plex to the volume, on a disk that is not used by the data plex:
# volassist addlog volume disk
Start the LSM volume:
# volume start volume
The volume is ready for use.
4.2.6 Creating an LSM Volume for Secondary Swap Space
If disk errors occur in the swap space, a system crash is likely to occur. You can create an LSM volume using mirrored concatenated plexes to protect against disk I/O errors in the secondary swap space. Do not create a DRL plex for swap volumes, because mirror resynchronization is not necessary, and a DRL plex on swap volumes will interfere with crash dumps.
To create an LSM volume for the secondary swap space:
Create an LSM volume without a log:
# volassist [-g disk_group] -U gen make volume length \
nmirror=n layout=nolog
The following example creates a 1 GB LSM volume called vol_swap2 that uses two mirrors with no log:
# volassist -U gen make vol_swap2 1g nmirror=2 layout=nolog
Set the LSM volume with the start_opts=norecov option so LSM does not resynchronize the mirrors:
# volume set start_opts=norecov volume
Add the LSM volume as secondary swap space using the swapon command:
# swapon /dev/vol/volume
Add the LSM device special file to the swapdevice kernel attribute value within the vm: section of the /etc/sysconfigtab file. The following example shows the entry to change:
vm:
    swapdevice=/dev/disk/dsk1b, /dev/vol/volume
See the System Administration guide and the swapon(8) and sysconfig(8) reference pages for more information on adding additional swap space.
4.3 Configuring File Systems to Use LSM Volumes
After you create an LSM volume, you use it the same way you would use a disk partition. Because LSM uses the same interfaces as disk device drivers, you can specify an LSM volume in any operation where you can specify a disk or disk partition.
The following sections describe how to configure UFS and AdvFS to use
an LSM volume.
4.3.1 Creating a UFS File System on an LSM Volume
To create a UFS on an LSM volume:
Create a UFS using the LSM disk group and volume name:
# newfs [options] /dev/rvol/disk_group/volume
The following example creates a UFS on an LSM volume called vol_ufs in the dg1 disk group:
# newfs /dev/rvol/dg1/vol_ufs
It is not necessary to specify the name of the disk group for LSM volumes in the rootdg disk group.
See the newfs(8) reference page for information on newfs options.
Use the LSM block special device name to mount the file system:
# mount /dev/vol/disk_group/volume /mount_point
The following example mounts the LSM volume called vol_ufs on /mnt2:
# mount /dev/vol/dg1/vol_ufs /mnt2
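The device special file naming used in these steps, including the rootdg shortcut noted above, can be sketched as a small helper. This is illustrative only; the real special files are created by LSM under /dev/vol and /dev/rvol:

```python
def lsm_device(volume, disk_group="rootdg", raw=False):
    """Build the LSM special file path: character (raw) devices live
    under /dev/rvol, block devices under /dev/vol, and the disk group
    component is omitted for volumes in rootdg."""
    base = "/dev/rvol" if raw else "/dev/vol"
    if disk_group == "rootdg":
        return f"{base}/{volume}"
    return f"{base}/{disk_group}/{volume}"

print(lsm_device("vol_ufs", "dg1", raw=True))  # → /dev/rvol/dg1/vol_ufs
print(lsm_device("vol1"))                      # → /dev/vol/vol1
```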
4.3.2 Creating an AdvFS File Domain on an LSM Volume
To create an AdvFS file domain on an LSM volume:
Create the AdvFS file domain using the LSM disk group and volume name:
# mkfdmn [options] /dev/vol/disk_group/volume domain
The following example creates an AdvFS file domain called dom1 on an LSM volume called vol_advfs in the dg1 disk group:
# mkfdmn /dev/vol/dg1/vol_advfs dom1
It is not necessary to specify the name of the disk group for LSM volumes in the rootdg disk group.
See the mkfdmn(8) reference page for information on mkfdmn options.
Create the AdvFS file set in the AdvFS domain:
# mkfset domain file_set
The following example creates an AdvFS file set called fs1 in an AdvFS domain called dom1:
# mkfset dom1 fs1
Mount the file system:
# mount domain#file_set /mount_point
The following example mounts the AdvFS file set called fs1 in the AdvFS domain called dom1 on /mnt2:
# mount dom1#fs1 /mnt2
Note
You can add more LSM volumes to an existing AdvFS domain if the domain needs more storage by creating a new LSM volume and using the AdvFS addvol command to add the volume to the domain. See AdvFS Administration for more information.
4.4 Creating an LSM Volume for Existing Data
When you create an LSM volume for existing data on a disk or partition, LSM:
Converts the disk or partition to an LSM nopriv disk
Encapsulates the data in the disk or partition
Configures the disk or partition into an LSM volume in the rootdg disk group
You can encapsulate data in:
Disks and disk partitions (Section 4.4.1)
AdvFS storage domains (Section 4.4.2)
The boot disk (Section 4.4.3)
4.4.1 Encapsulating Disks and Disk Partitions
The encapsulation procedure configures disks and disk partitions into LSM nopriv disks using information in the disk label and in the /etc/fstab file. After the encapsulation, entries in the /etc/fstab file or in the /etc/sysconfigtab file are changed to use the LSM volume name instead of the block device name of the disk or disk partition.
If you encapsulate an entire disk (by not specifying a partition letter), such as dsk3, all of the in-use partitions are encapsulated as one LSM nopriv disk.
To encapsulate a disk or disk partition:
Back up the data on the disk or disk partition to be encapsulated.
Unmount the disk or partition or take the data off line. If you cannot unmount the disk or partition or take the data off line, you must reboot the system to complete the encapsulation procedure.
Create the LSM encapsulation script:
# volencap [-g disk_group] {disk|partition}
The following example creates an encapsulation script for a disk called dsk3:
# volencap dsk3
Note
Although you can encapsulate several disks or disk partitions at the same time, it is recommended that you encapsulate each disk or disk partition separately.
Complete the encapsulation process:
# volreconfig
If the encapsulated disk or disk partition is in use, the volreconfig command prompts you to reboot the system.
4.4.2 Encapsulating AdvFS File Domains
If an AdvFS file domain consists of one disk partition, you can encapsulate it for use with the LSM software using the procedure described in Section 4.4.1. If the AdvFS domain consists of multiple disk partitions, you can encapsulate the AdvFS file domain instead of the individual disk partitions. When you encapsulate an AdvFS file domain, LSM changes the links in the domain tree to point to the LSM volumes. LSM creates a volume for each AdvFS partition in the domain.
No mount point changes are necessary during encapsulation, because the mounted file sets are abstractions to the domain. The domain can be activated normally after the encapsulation process completes. Once the domain is activated, the file sets remain unchanged and the encapsulation is transparent to AdvFS domain users.
To encapsulate an AdvFS file domain:
Back up the data in the AdvFS file domain to be encapsulated.
Make sure that the AdvFS file domain is not in use and unmount all file sets. If you cannot unmount the file sets, you must reboot the system to complete the encapsulation procedure.
Create the LSM encapsulation script:
# volencap domain
The following example creates an encapsulation script for an AdvFS file domain called dom1:
# volencap dom1
Complete the encapsulation procedure:
# volreconfig
If the AdvFS file domain is mounted, the volreconfig command prompts you to reboot the system. The /etc/fdmns directory is updated on successful creation of the LSM volumes.