An LSM volume is an object, backed by a hierarchy of LSM objects, that allocates space for, and stores the data of, a file system or application. How you create an LSM volume depends on whether the volume is for a new file system or application or for an existing one.
This chapter describes how to:
Create a disk group and check a disk group for space (Section 4.1)
Create an LSM volume for new data (Section 4.2)
Configure UFS or AdvFS file systems to use an LSM volume (Section 4.3)
Create an LSM volume for existing data (Section 4.4)
Use the information from the worksheets you filled out in Chapter 2 to create disk groups and LSM volumes.
4.1 Creating Disk Groups
You must create an LSM volume in a disk group. By default, LSM creates volumes in the rootdg disk group, which was created when you installed LSM. You can create all LSM volumes in the rootdg disk group or you can create other disk groups. The following sections describe how to:
Display disk group information
Create a disk group
Add disks to a disk group
Create a backup copy of the disk label information
4.1.1 Displaying Disk Group Information
To display information about the rootdg disk group and other disk groups, enter:
# voldisk list
Information similar to the following is displayed:
DEVICE       TYPE      DISK         GROUP        STATUS
dsk0         sliced    -            -            unknown
dsk1         sliced    -            -            unknown
dsk2         sliced    dsk2         rootdg       online
dsk3         sliced    dsk3         rootdg       online
dsk4         sliced    dsk4         rootdg       online
dsk5         sliced    dsk5         rootdg       online
dsk6         sliced    dsk6         dg1          online
dsk7         sliced    dsk7         dg1          online
dsk8         sliced    -            -            unknown
DEVICE
    Specifies the disk access name assigned by the operating system software.
TYPE
    Specifies the LSM disk type: sliced, simple, or nopriv.
DISK
    Specifies the LSM disk media name. An LSM disk media name displays only if the disk is in a disk group.
GROUP
    Specifies the disk group to which the device belongs. A group name displays only if the disk is in a disk group.
STATUS
    Specifies the status of the LSM device, such as online for a disk that is in a disk group or unknown for a disk that has not been initialized for LSM use, as shown in the previous output.
To display the total usable space in a disk group, enter:
# volassist [-g disk_group] maxsize
The following command line displays the available space in a disk group called dg1:
# volassist -g dg1 maxsize
Information similar to the following is displayed:
Maximum volume size: 6139904 (2998Mb)
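The reported size is in 512-byte sectors; here, 6139904 sectors × 512 bytes is approximately 2998 MB.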
4.1.2 Creating a Disk Group or Adding Disks to a Disk Group
The voldiskadd script is an interactive script that lets you:
Initialize disks or disk partitions for exclusive use by LSM
Create a disk group
Add disks to a disk group
Note
By default, LSM initializes each disk with one copy of the configuration database. If a disk group will have fewer than four disks, you should initialize each disk to have two copies of the disk group's configuration database to ensure that the disk group has multiple copies in case one or more disks fail. You must use the voldisksetup command to initialize disks with more than one copy of the configuration database. See Section 5.1.1 for more information.
If you specify an uninitialized disk, LSM initializes the disk as an LSM sliced disk. If you specify a partition name, LSM initializes the partition as an LSM simple disk. You can specify several disks and disk partitions at once, separated by a space; for example:
# voldiskadd dsk3 dsk4a dsk5 dsk6g
After you initialize a disk or disk partition, LSM writes a new disk label and the disk or disk partition becomes an LSM disk for exclusive use by LSM.
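To confirm the result, you can redisplay the disk information as described in Section 4.1.1; a disk that has been initialized and added to a disk group shows a status of online:
# voldisk list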
The voldiskadd script prompts you for the following information:
A disk group name
If you are creating a disk group, the disk group name must be unique and can contain up to 31 alphanumeric characters; it cannot include spaces or the forward slash ( / ).
A disk media name for each disk you configure in the disk group
You can use the default disk media name or assign a disk media name of up to 31 alphanumeric characters; it cannot include spaces or the forward slash ( / ).
Whether the disk should be a spare disk for the disk group
A spare disk is a disk initialized by LSM, but used only as a replacement disk if a disk that contains a mirror or RAID 5 plex fails. See Section 3.4.4 for more information about how LSM uses spare disks. For the best protection, configure at least one spare disk in each disk group that contains mirror or RAID 5 plexes.
The following example uses a disk called dsk9 to create a disk group called dg1:
# voldiskadd dsk9
Information similar to the following is displayed:
Add or initialize disks
Menu: VolumeManager/Disk/AddDisks

Here is the disk selected.

  dsk9

Continue operation? [y,n,q,?] (default: y) y

You can choose to add this disk to an existing disk group, a
new disk group, or leave the disk available for use by future
add or replacement operations. To create a new disk group,
select a disk group name that does not yet exist. To leave the
disk available for future use, specify a disk group name of
"none".

Which disk group [<group>,none,list,q,?] (default: rootdg) dg1

There is no active disk group named dg1.

Create a new group named dg1? [y,n,q,?] (default: y) y

The default disk name that will be assigned is: dg101

Use this default disk name for the disk? [y,n,q,?] (default: y) y

Add disk as a spare disk for dg1? [y,n,q,?] (default: n) n

A new disk group will be created named dg1 and the selected
disks will be added to the disk group with default disk names.

  dsk9

Continue with operation? [y,n,q,?] (default: y) y

The following disk device has a valid disk label, but does not
appear to have been initialized for the Logical Storage Manager.
If there is data on the disk that should NOT be destroyed you
should encapsulate the existing disk partitions as volumes
instead of adding the disk as a new disk.

  dsk9

Initialize this device? [y,n,q,?] (default: y) y

Initializing device dsk9.

Creating a new disk group named dg1 containing the disk device
dsk9 with the name dg101.

Goodbye.
4.1.3 Creating a Backup Copy of the Disk Label Information
It is highly recommended that you create a backup copy of the updated disk label information for each LSM disk.
Having this information will simplify the process of replacing a failed disk, by allowing you to copy the failed disk's attributes to a new disk. Once a disk fails, you cannot read its disk label, and you cannot copy that information to a new disk.
To create a file that contains a backup copy of the disk label information, enter:
# disklabel dskn > file
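For example, to save the disk label for a disk called dsk9 (the file name and location here are arbitrary):
# disklabel dsk9 > /usr/local/lsm/dsk9.label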
See the disklabel(8) reference page for more information on the disklabel command.
4.2 Creating an LSM Volume for New Data
To create an LSM volume for a new file system or application, use the volassist command. The volassist command finds the necessary space within the disk group and creates all the objects for the volume. You must specify a volume name and length (size) on the command line.
You can specify values for other LSM volume attributes on the command line or in a text file that you create. If you do not specify a value for an attribute, LSM uses a default value.
To display the default values for volume attributes, enter:
# volassist help showattrs
Information similar to the following is displayed:
#Attributes:
  layout=nomirror,nostripe,span,nocontig,raid5log,noregionlog,diskalign,nostorage
  mirrors=2 columns=0 nlogs=1 regionlogs=1 raid5logs=1
  min_columns=2 max_columns=8
  regionloglen=0 raid5loglen=0 logtype=region
  stripe_stripeunitsize=64 raid5_stripeunitsize=16
  usetype=fsgen diskgroup= comment="" fstype=
  user=0 group=0 mode=0600 probe_granularity=2048
  alloc= wantalloc= mirror=
Some volume attributes have several options to define them. Some options define an attribute globally, while others define an attribute for a specific plex type. For example, you can specify the size of a stripe data unit using the stripeunit option for both striped and RAID 5 plexes, the stripe_stripeunitsize option for striped plexes only, or the raid5_stripeunitsize option for RAID 5 plexes only.
See the volassist(8) reference page for a complete list of attributes. Table 4-1 describes some of the common attributes for which you can specify a value.
Table 4-1: Common LSM Volume Attributes
Attribute Description                                      Attribute Options
Plex type                                                  layout={concatenated|striped|raid5}
Usage type                                                 -U {fsgen|raid5|gen}
Whether or not to create mirrors, and if so how many       mirror={number|yes|no}
Whether or not to use a Dirty Region Log (DRL)
plex for mirrored plexes                                   logtype={drl|none}
Size of the stripe width for a striped or RAID 5 plex      stripeunit=data_unit_size
Number of columns for a striped or RAID 5 plex             nstripe=number_of_columns
Creating a text file that specifies many of these attributes is useful if you create many LSM volumes that use the same nondefault values for attributes. Any attribute that you can specify on the command line can be specified on a separate line in the text file.
By default, LSM looks for the /etc/default/volassist file when you create an LSM volume. If you created an /etc/default/volassist file, LSM creates each volume using the attributes that you specify on the command line and in the /etc/default/volassist file.
Example 4-1 shows a text file called /etc/default/volassist that creates an LSM volume using a four-column striped plex with two mirrors, a stripe width of 32 KB, and no log.
Example 4-1: LSM Volume Attribute Defaults File
# LSM Vn.n
# volassist defaults file. Use '#' for comments

# number of stripes
nstripe=4

# layout
layout=striped

# mirroring
nmirror=2

# logging
logtype=none

# stripe size
stripeunit=32k
For example, to create an LSM volume using the attributes in the /etc/default/volassist file, enter:
# volassist make volume length
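The following hypothetical command creates a 2 GB volume named vol_data (the name and size are arbitrary) in the rootdg disk group using those attributes:
# volassist make vol_data 2g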
To specify a file other than the /etc/default/volassist file, you must use the volassist command with the -d option followed by the name of the file. If you use the -d option, LSM creates the volume using the attributes that you specify on the command line and in the named file.
For example, to create an LSM volume using the attributes in a file other than the /etc/default/volassist file, enter:
# volassist make volume length -d filename
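For example, assuming a hypothetical attributes file named /usr/local/lsm/stripe_attrs:
# volassist make vol_data 2g -d /usr/local/lsm/stripe_attrs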
The following lists the order in which LSM assigns values to attributes:
Values on the command line
Values in a file that you specify by using the volassist -d option
Values in the /etc/default/volassist file
Default values
4.2.1 Creating an LSM Volume Using a Concatenated Plex
Creating an LSM volume that uses a concatenated plex can be a multiple-step process. Step 1 is required, and the others are required only if you want to mirror the plex. To increase performance for mirror plexes, you can specify the disks for the data plexes and the DRL plex to ensure that LSM creates these plexes on different disks, preferably on different buses.
To create an LSM volume that uses a concatenated plex:
Create a volume with a single plex, optionally specifying disks:
# volassist [-g disk_group] make volume length [disks]
The following example creates a 3 GB volume called vol2 that uses disks dsk2, dsk3, and dsk4 in a disk group called dg1:
# volassist -g dg1 make vol2 3g dsk2 dsk3 dsk4
The volume is created and started. If you want to mirror the plex, continue with step 2.
Add a mirror plex to the volume, specifying disks not used in the first data plex and preferably on different buses.
You can use the init=active option to prevent LSM from synchronizing the plexes. Plex synchronization is not necessary because the volume does not yet contain any data. Use this option only if the application that will use the volume always writes a block before it reads from that block.
# volassist [-g disk_group] mirror volume [init=active] \
  layout=nolog disks
The following example creates a mirror plex for the same volume, using disks dsk5, dsk6, and dsk7:
# volassist -g dg1 mirror vol2 init=active \
  layout=nolog dsk5 dsk6 dsk7
Note
Because two plexes are used in the volume, 6 GB of free space is needed. Each plex uses 3 GB of disk space.
Add a DRL plex to the volume, specifying a disk that is not used by one of the data plexes:
# volassist addlog volume disk
The following example adds a DRL plex to vol2 on a disk called dsk10:
# volassist addlog vol2 dsk10
The volume is ready for use.
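To review the objects that LSM created for the volume, you can display its configuration records with the volprint command; for example (see the volprint(8) reference page for the exact output format):
# volprint -g dg1 -ht vol2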
4.2.2 Creating an LSM Volume Using a Striped Plex
Creating an LSM volume that uses a striped plex can be a multiple-step process, depending on whether you want mirrors and a DRL. Step 1 creates a volume with one plex, and the next steps add a mirror and a DRL to the volume. To increase performance for the volume, you can specify the disks for each plex to ensure that LSM creates these plexes on different disks, preferably on different buses.
Note
In general, you should not use LSM to stripe data if you also use a hardware controller to stripe data. In some specific cases such a configuration can improve performance but only if:
Most of the volume I/O requests are large (>= 1 MB).
The LSM volume is striped over multiple RAID sets on different controllers.
The LSM stripe size is a multiple of the full hardware RAID stripe size.
The number of LSM columns in each plex in the volume should be equal to the number of hardware RAID controllers. See your hardware RAID documentation for information about how to choose the best number of columns for the hardware RAID set.
By default, the volassist command creates columns for a striped plex on disks in alphanumeric order, regardless of their order on the command line. To improve performance, you might want to create columns using disks on different buses. See Section 4.2.3 for more information about specifying the disk order for columns in a striped plex.
To create an LSM volume that uses striped plexes:
Create a volume with a single plex, optionally specifying disks, preferably on different buses:
# volassist [-g disk_group] make volume length \
  layout=stripe [nstripe=number_of_columns] \
  [stripeunit=data_unit_size] [disks]
The following example creates a 4 GB volume called vol_stripe that uses disks dsk2, dsk3, and dsk4 to create a three-column striped plex in a disk group called dg1:
# volassist -g dg1 make vol_stripe 4g \
  layout=stripe nstripe=3 dsk2 dsk3 dsk4
The volume is created and started. If you want to add a mirror plex, continue with step 2.
Add a mirror plex to the volume, specifying disks not used in the first data plex and preferably on a different bus.
You can use the init=active option to prevent LSM from synchronizing the plexes. Plex synchronization is not necessary because the volume does not yet contain any data. Use this option only if the application that will use the volume always writes a block before it reads from that block.
# volassist [-g disk_group] mirror volume \
  [init=active] layout=nolog disks
The following example creates a mirror plex for the volume vol_stripe, using disks dsk5, dsk6, and dsk7:
# volassist -g dg1 mirror vol_stripe \
  init=active layout=nolog dsk5 dsk6 dsk7
Note
Because two plexes are used in the volume, 8 GB of free space is needed. Each plex uses 4 GB of disk space.
Add a DRL plex to the volume, specifying a disk that is not used by one of the data plexes:
# volassist addlog volume disk
The following example adds a DRL plex to vol_stripe on a disk called dsk10:
# volassist addlog vol_stripe dsk10
The volume is ready for use.
4.2.3 Creating an LSM Volume Using a Striped Plex (on Different Buses)
By default, LSM creates columns for a striped plex on the first available disks it finds in the disk group. This might result in a volume with columns that use disks on the same bus.
You can improve performance by creating a striped plex with columns that use disks on different buses. To do so, you must create the subdisks for each column.
Note
Each column of subdisks should be the same size and be a multiple of the data unit size so there is no wasted space. For example, a data unit size (stripe width) of 64 KB for a striped plex corresponds to 128 blocks (sectors), so the total of the subdisks in each column should be a multiple of 128.
If each column comprises one subdisk (the typical configuration), then the subdisk size should be a multiple of 128. If a column comprises two subdisks, one subdisk can be one sector and the other can be 127 sectors, both could be 64 sectors, or any other combination as long as the total is a multiple of 128. In the following example, there is one subdisk per column.
To create an LSM volume that uses a striped plex on different buses:
Create the subdisks on disks on different buses:
# volmake [-g disk_group] sd subdisk disk len=length
The following example creates subdisks on disks dsk2 and dsk3 (on bus 1), dsk4 and dsk5 (on bus 2), and dsk6 and dsk7 (on bus 3):
# volmake sd dsk2-01 dsk2 len=16m
# volmake sd dsk3-01 dsk3 len=16m
# volmake sd dsk4-01 dsk4 len=16m
# volmake sd dsk5-01 dsk5 len=16m
# volmake sd dsk6-01 dsk6 len=16m
# volmake sd dsk7-01 dsk7 len=16m
Create a striped plex, specifying the order of subdisks on which to create the columns:
# volmake [-g disk_group] plex plex layout=stripe \
  stwidth=data_unit_size sd=subdisk,...
The following example uses the subdisks created in step 1 and lists them in alternating bus order to create a six-column striped plex called plex_01. The command line lists the subdisks in a pattern that alternates the bus order from bus 1 to 2, then bus 1 to 3, then bus 2 to 3:
# volmake plex plex_01 layout=stripe stwidth=64 \
  sd=dsk2-01,dsk4-01,dsk3-01,dsk6-01,dsk5-01,dsk7-01
If you want to create a mirror plex and a DRL for the volume, complete this step. If the volume will have only one data plex, go to step 4.
Repeat step 1 to create subdisks on a different group of disks on different buses for the second data plex.
The following example creates subdisks for the columns in the second data plex on disks dsk8 and dsk9 (on bus 4), dsk10 and dsk11 (on bus 5), and dsk12 and dsk13 (on bus 6):
# volmake sd dsk8-01 dsk8 len=16m
# volmake sd dsk9-01 dsk9 len=16m
# volmake sd dsk10-01 dsk10 len=16m
# volmake sd dsk11-01 dsk11 len=16m
# volmake sd dsk12-01 dsk12 len=16m
# volmake sd dsk13-01 dsk13 len=16m
Repeat step 2 to create the second data plex, specifying the order of subdisks on which to create the columns.
The following example uses the subdisks created in step 3a and lists them in alternating bus order to create a six-column striped plex called plex_02. The command line lists the subdisks in a pattern that alternates the bus order from bus 4 to 5, then bus 4 to 6, then bus 5 to 6:
# volmake plex plex_02 layout=stripe stwidth=64 \
  sd=dsk8-01,dsk10-01,dsk9-01,dsk12-01,dsk11-01,dsk13-01
Create the LSM volume, specifying the name of the data plex you created in step 2, and the additional data plex (if any) you created in step 3:
# volmake [-g disk_group] -U usage_type vol volume \
  plex=plex ...
The following example creates an LSM volume called vol9 with a usage type of fsgen, using a plex called plex_stripe:
# volmake -U fsgen vol vol9 plex=plex_stripe
The following example creates an LSM volume called vol_mirr with a usage type of fsgen, using data plexes called plex_01 and plex_02:
# volmake -U fsgen vol vol_mirr plex=plex_01,plex_02
If the volume has mirror plexes, add a DRL plex to the volume on a disk that is not used by one of the data plexes:
# volassist addlog volume disk
Start the LSM volume:
# volume start volume
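For example, to start the vol_mirr volume created in the previous steps:
# volume start vol_mirr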
The volume is ready for use.
4.2.4 Creating an LSM Volume Using a RAID 5 Plex
By default, the volassist command creates columns for a RAID 5 plex on disks in alphanumeric order, regardless of their order on the command line. To improve performance, you might want to create the columns on disks on different buses. See Section 4.2.5 for more information about specifying the disk order for columns in a RAID 5 plex.
The volassist command automatically creates a RAID 5 log plex for the volume.
To create an LSM volume that uses a RAID 5 plex, enter:
# volassist [-g disk_group] make volume length layout=raid5 \
  [nstripe=number_of_columns] [stripeunit=data_unit_size] [disks]
The following example creates a 6 GB, six-column volume called vol6 in a disk group called dg1, using any available disks:
# volassist -g dg1 make vol6 6g layout=raid5 nstripe=6
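Note that in a six-column RAID 5 plex, the equivalent of one column stores parity, so a 6 GB volume consumes about 6 GB × 6/5 = 7.2 GB of disk space, plus the space for the RAID 5 log plex.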
4.2.5 Creating an LSM Volume Using a RAID 5 Plex (on Different Buses)
By default, LSM creates columns for a RAID 5 plex on the first available disks it finds in the disk group. This might result in a volume with columns that use disks on the same bus.
You can improve performance by creating a RAID 5 plex with columns that use disks on different buses. To do so, you must create the subdisks for each column.
Note
Each column of subdisks should be the same size and be a multiple of the data unit size so there is no wasted space. For example, a data unit size (stripe width) of 16 KB for a RAID 5 plex corresponds to 32 blocks (sectors), so the total of the subdisks in each column should be a multiple of 32.
If each column comprises one subdisk (the typical configuration), then the subdisk size should be a multiple of 32. If a column comprises two subdisks, one subdisk can be one sector and the other can be 31 sectors, both could be 16 sectors, or any other combination as long as the total is a multiple of 32. In the following example, there is one subdisk per column.
To create an LSM volume that uses a RAID 5 plex on different buses:
Create the subdisks on disks on different buses:
# volmake [-g disk_group] sd subdisk disk,offset,length
The following example creates 1 MB subdisks for the data plex on disks called dsk6, dsk7, dsk8, and dsk9. In this example, disks dsk6 and dsk7 are on bus 1, and dsk8 and dsk9 are on bus 2:
# volmake sd dsk6-01 dsk6 len=1m
# volmake sd dsk7-01 dsk7 len=1m
# volmake sd dsk8-01 dsk8 len=1m
# volmake sd dsk9-01 dsk9 len=1m
Create the RAID 5 data plex, specifying the order of subdisks on which to create the columns:
# volmake [-g disk_group] plex plex layout=raid5 \
  stwidth=data_unit_size sd=subdisk,...
The following example uses the subdisks created in step 1 to create a four-column RAID 5 data plex called plex-01:
# volmake plex plex-01 layout=raid5 stwidth=16 \
  sd=dsk6-01,dsk8-01,dsk7-01,dsk9-01
Note that in this plex, the stripe alternates between subdisks on buses 1 and 2.
Create the LSM volume, specifying the data plex:
# volmake [-g disk_group] -U raid5 vol volume plex=plex
The following example creates an LSM volume called vol5 using the plex called plex-01:
# volmake -U raid5 vol vol5 plex=plex-01
Add a RAID 5 log plex to the volume, on a disk that is not used by the data plex:
# volassist addlog volume disk
Start the LSM volume:
# volume start volume
The volume is ready for use.
4.2.6 Creating an LSM Volume for Secondary Swap Space
If disk errors occur in the swap space, a system crash is likely to occur. You can create an LSM volume using mirrored concatenated plexes to protect against disk I/O errors in the secondary swap space. Do not create a DRL plex for swap volumes, because mirror resynchronization is not necessary, and a DRL plex on swap volumes will interfere with crash dumps.
To create an LSM volume for the secondary swap space:
Create an LSM volume without a log:
# volassist [-g disk_group] -U gen make volume length \
  nmirror=n layout=nolog
The following example creates an LSM volume called vol_swap2 that uses two mirrors with no log:
# volassist -U gen make vol_swap2 128m nmirror=2 layout=nolog
Set the LSM volume with the start_opts=norecov option so LSM does not resynchronize the mirrors:
# volume set start_opts=norecov volume
Add the LSM volume as secondary swap space using the swapon command:
# swapon /dev/vol/volume
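For example, to add the vol_swap2 volume created in step 1 (a volume in the rootdg disk group does not need the disk group name in its path):
# swapon /dev/vol/vol_swap2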
Add the LSM device special file to the swapdevice kernel attribute value within the vm: section of the /etc/sysconfigtab file. The following example shows the entry to change:
vm:
    swapdevice=/dev/disk/dsk1b, /dev/vol/volume
See the System Administration manual and the swapon(8) and sysconfig(8) reference pages for more information on adding additional swap space.
4.3 Configuring File Systems to Use LSM Volumes
After you create an LSM volume, you use it the same way you would use a disk partition. Because LSM uses the same interfaces as disk device drivers, you can specify an LSM volume in any operation where you can specify a disk or disk partition.
The following sections describe how to configure AdvFS and UFS to use
an LSM volume.
4.3.1 Creating an AdvFS Domain on an LSM Volume
AdvFS treats LSM volumes as it does any other storage device. See AdvFS Administration for information on creating an AdvFS domain.
Note
If an existing AdvFS domain needs more storage, you can create a new LSM volume and add it to the domain with the AdvFS addvol command. See AdvFS Administration for more information.
4.3.2 Creating a UFS File System on an LSM Volume
To create a UFS on an LSM volume:
Create a UFS using the LSM disk group and volume name:
# newfs [options] /dev/rvol/disk_group/volume
The following example creates a UFS on an LSM volume called vol_ufs in the dg1 disk group:
# newfs /dev/rvol/dg1/vol_ufs
It is not necessary to specify the name of the disk group for LSM volumes in the rootdg disk group.
See the newfs(8) reference page for information on newfs options.
Use the LSM block special device name to mount the file system:
# mount /dev/vol/disk_group/volume /mount_point
The following example mounts the LSM volume called vol_ufs on /mnt2:
# mount /dev/vol/dg1/vol_ufs /mnt2
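To mount the file system automatically at system startup, you can add an entry for the volume to the /etc/fstab file. A sketch, assuming the volume and mount point from the previous example and default UFS options:
/dev/vol/dg1/vol_ufs    /mnt2    ufs    rw    1    2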
4.4 Creating an LSM Volume for Existing Data
When you create an LSM volume for existing data on a disk or partition, LSM:
Converts the disk or partition to an LSM nopriv disk
Encapsulates the data in the disk or disk partition
Configures the disk or disk partition into an LSM volume in the rootdg disk group
You can encapsulate data in:
Disks or disk partitions, including UFS file systems (Section 4.4.1)
AdvFS domains (Section 4.4.2)
The boot disk (Section 3.4.3)
4.4.1 Creating a Volume from Disks or Disk Partitions
The encapsulation procedure configures disks and disk partitions, which can contain any kind of data including a UFS file system, into LSM nopriv disks using information in the disk label and in the /etc/fstab file. After the encapsulation, entries in the /etc/fstab file or in the /etc/sysconfigtab file are changed to use the LSM volume name instead of the block device name of the disk or disk partition.
If you encapsulate an entire disk (by not specifying a partition letter), such as dsk3, all the in-use partitions are encapsulated as one LSM nopriv disk.
To encapsulate a disk or disk partition:
Back up the data on the disk or disk partition to be encapsulated.
Unmount the disk or partition or take the data off line. If you cannot unmount the disk or partition or take the data off line, you must restart the system to complete the encapsulation procedure.
Create the LSM encapsulation script:
# volencap [-g disk_group] {disk|partition}
The following example creates an encapsulation script for a disk called dsk3:
# volencap dsk3
Note
Although you can encapsulate several disks or disk partitions at the same time, it is recommended that you encapsulate each disk or disk partition separately.
Complete the encapsulation process:
# volreconfig
If the encapsulated disk or disk partition is in use, the volreconfig command prompts you to restart the system.
Optionally (recommended), move the volume off the nopriv disks to simple or sliced disks in the same disk group. See Section 5.1.6 for more information.
4.4.2 Creating a Volume from an AdvFS Domain
You can place an existing AdvFS domain into an LSM volume by either encapsulating the domain or migrating the domain to an LSM volume:
Encapsulating the domain creates an LSM volume on the same disk or disks that the domain already uses.
If an AdvFS domain consists of one disk or partition, you can encapsulate the disk or partition using the procedure described in Section 4.4.1.
If the AdvFS domain consists of multiple disks or partitions (requires the AdvFS Utilities License), you can encapsulate the AdvFS domain instead of the individual disk or partitions.
LSM creates an LSM volume for each AdvFS disk or partition in the domain.
Encapsulating a domain might require restarting the system if you cannot unmount the filesets before performing the encapsulation.
Migrating the domain creates an LSM volume on disks that you specify, moves the domain data to the new volume, and removes the domain from its original disks. The disks are no longer in use by the domain after the migration completes.
This operation does not require you to unmount filesets or restart the system, but temporarily uses additional disk space until the migration is complete.
When you encapsulate an AdvFS domain, LSM changes the links in the /etc/fdmns directory to point to the LSM volumes. No mount point changes are necessary during encapsulation or migration, because the mounted filesets are abstractions of the domain. The domain can be activated normally after the encapsulation or migration process completes. Once the domain is activated, the filesets remain unchanged and the encapsulation or migration is transparent to AdvFS domain users.
4.4.2.1 Encapsulating an AdvFS Domain
To encapsulate an AdvFS domain:
Back up the data in the AdvFS domain with the vdump utility.
Unmount all filesets. If the domain is in use (you cannot unmount the filesets), you can create the encapsulation script and run volreconfig when convenient to complete the encapsulation procedure.
Create the LSM encapsulation script:
# volencap domain
The following example creates an encapsulation script for an AdvFS domain called dom1:
# volencap dom1
Complete the encapsulation procedure:
# volreconfig
If the AdvFS domain is mounted, the volreconfig command prompts you to restart the system.
The /etc/fdmns directory is updated on successful creation of the LSM volumes.
4.4.2.2 Migrating an AdvFS Domain
You can place any AdvFS domain (except for the root_domain) into an LSM volume.
Note on TruClusters
See the Cluster Administration manual for information on migrating an AdvFS domain in a cluster.
This operation uses a different disk than the disk on which the domain originally resides and therefore does not require a restart. You can specify:
The name of the volume (the default is the name of the domain with the suffix vol)
The number of stripes and mirrors that you want the volumes to use
Striping improves read performance, and mirroring ensures data availability in the event of a disk failure.
You must specify LSM disks by their disk media names to create the volume for the domain. The disks that you specify must belong to the same disk group, because LSM volumes can use disks from only one disk group. There must be sufficient LSM disks, and the disks must be large enough to contain the domain.
See the volmigrate(8) reference page for more information on disk requirements and the options for striping and mirroring.
To migrate a domain into an LSM volume, enter:
# volmigrate [-g diskgroup] [-m num_mirrors] [-s num_stripes] domain disk_media_name ...
The volmigrate command creates a volume with the specified characteristics, moves the data from the domain into the volume, removes the original disk or disks from the domain, and leaves those disks unused. The volume is started and ready for use, and no restart is required.
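For example, a hypothetical command that migrates a domain called dom1 to a two-mirror, three-column striped volume on six disks (with disk media names dg101 through dg106) in the dg1 disk group:
# volmigrate -g dg1 -m 2 -s 3 dom1 dg101 dg102 dg103 dg104 dg105 dg106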