This chapter describes how to use LSM commands to create LSM volumes. Creating LSM volumes typically involves:
Checking that there is free disk space for the LSM volume and adding more disks or disk groups if necessary
Creating LSM volumes
Configuring LSM volumes for use
The tasks described in this chapter can also be accomplished by using:
The Storage Administrator GUI. See Chapter 9 for more information on the Storage Administrator.
The voldiskadm menu interface. See Appendix D for more information on the voldiskadm menu interface.
The Visual Administrator GUI. See Appendix B for more information on the Visual Administrator.
For more information on an LSM command, see the reference page that
corresponds to its name.
For example, for more information on the volassist command, enter:
# man volassist
5.1 Checking for Free Disk Space
Before you create a volume, check to see if any of the system's disks
were initialized for use with the LSM software, and verify that there is enough
free disk space within a disk group to create the volume.
5.1.1 Checking for Initialized Disks
To display a list of initialized disks, enter:
# voldisk list
Output similar to the following is displayed:
DEVICE       TYPE      DISK         GROUP        STATUS
dsk0         sliced    -            -            unknown
dsk1         sliced    -            -            unknown
dsk2         sliced    dsk2         rootdg       online
dsk3         sliced    dsk3         rootdg       online
dsk4         sliced    dsk4         rootdg       online
dsk5         sliced    dsk5         rootdg       online
dsk6         sliced    dsk6         rootdg       online
dsk7         sliced    -            -            online
dsk8         sliced    dsk8         dg1          online
dsk9         sliced    dsk9         dg1          online
dsk10        sliced    -            -            unknown
dsk11        sliced    -            -            unknown
A value of online in the STATUS column indicates that a disk was initialized for use with the LSM software.
In the previous example:
The disks called dsk2 through dsk6 were initialized for use with the LSM software and were added to the rootdg disk group.
The disk called dsk7 was initialized for use with the LSM software because its status is online, but it is not currently configured within any disk group.
The disks called dsk8 and dsk9 were initialized for use with the LSM software and were added to the dg1 disk group.
The disks called dsk0, dsk1, dsk10, and dsk11 were not initialized for use with the LSM software because their status is unknown.
5.1.2 Checking for Space in a Disk Group
To display how much free disk space is available in disk groups, enter:
# voldg free
Output similar to the following is displayed:
GROUP        DISK         DEVICE       TAG          OFFSET    LENGTH    FLAGS
rootdg       dsk2         dsk2         dsk2         2097217   2009151   -
rootdg       dsk3         dsk3         dsk3         2097152   2009216   -
rootdg       dsk4         dsk4         dsk4         0         4106368   -
rootdg       dsk5         dsk5         dsk5         0         4106368   -
rootdg       dsk6         dsk6         dsk6         0         4106368   -
dg1          dsk8         dsk8         dsk8         0         4106368   -
dg1          dsk9         dsk9         dsk9         0         4106368   -
The value in the LENGTH column displays the amount of free space on a disk, in 512-byte sectors. For example, a LENGTH of 2009151 sectors is roughly 1 GB of free space.
To display detailed disk space information in a specific disk group, enter:
# volassist [-g disk_group] help space
For example, to display detailed disk space information about the rootdg disk group, enter:
# volassist help space
Output similar to the following is displayed:
Disk: dsk2 len=4106368 used=2097217 free=2009151 (48.93%)
Attributes: dm:dsk2 device:dsk2 da:dsk2
Free regions: 2097233,2009151
Disk: dsk3 len=4106368 used=2097152 free=2009216 (48.93%)
Attributes: dm:dsk3 device:dsk3 da:dsk3
Free regions: 2097168,2009216
Disk: dsk4 len=4106368 used=0 free=4106368 (100.00%)
Attributes: dm:dsk4 device:dsk4 da:dsk4
Free regions: 16,4106368
Disk: dsk5 len=4106368 used=0 free=4106368 (100.00%)
Attributes: dm:dsk5 device:dsk5 da:dsk5
Free regions: 16,4106368
Disk: dsk6 len=4106368 used=0 free=4106368 (100.00%)
Attributes: dm:dsk6 device:dsk6 da:dsk6
Free regions: 16,4106368
Disk sets:
da:dsk2 space=4106368 used=2097217 free=2009151 (48.93%)
da:dsk3 space=4106368 used=2097152 free=2009216 (48.93%)
da:dsk4 space=4106368 used=0 free=4106368 (100.00%)
da:dsk5 space=4106368 used=0 free=4106368 (100.00%)
da:dsk6 space=4106368 used=0 free=4106368 (100.00%)
device:dsk2 space=4106368 used=2097217 free=2009151 (48.93%)
device:dsk3 space=4106368 used=2097152 free=2009216 (48.93%)
device:dsk4 space=4106368 used=0 free=4106368 (100.00%)
device:dsk5 space=4106368 used=0 free=4106368 (100.00%)
device:dsk6 space=4106368 used=0 free=4106368 (100.00%)
dm:dsk2 space=4106368 used=2097217 free=2009151 (48.93%)
dm:dsk3 space=4106368 used=2097152 free=2009216 (48.93%)
dm:dsk4 space=4106368 used=0 free=4106368 (100.00%)
dm:dsk5 space=4106368 used=0 free=4106368 (100.00%)
dm:dsk6 space=4106368 used=0 free=4106368 (100.00%)
To display detailed information about the disks in the dg1 disk group, enter:
# volassist -g dg1 help space
Output similar to the following is displayed:
Disk: dsk8 len=4106368 used=0 free=4106368 (100.00%)
Attributes: dm:dsk8 device:dsk8 da:dsk8
Free regions: 16,4106368
Disk: dsk9 len=4106368 used=0 free=4106368 (100.00%)
Attributes: dm:dsk9 device:dsk9 da:dsk9
Free regions: 16,4106368
Disk sets:
da:dsk8 space=4106368 used=0 free=4106368 (100.00%)
da:dsk9 space=4106368 used=0 free=4106368 (100.00%)
device:dsk8 space=4106368 used=0 free=4106368 (100.00%)
device:dsk9 space=4106368 used=0 free=4106368 (100.00%)
dm:dsk8 space=4106368 used=0 free=4106368 (100.00%)
dm:dsk9 space=4106368 used=0 free=4106368 (100.00%)
After you determine whether there is enough available disk space, you might need to configure more disks for use with the LSM software before you can create a volume. See Section 5.2 if you need to configure more disks for use with the LSM software. See Section 5.3 if there is sufficient disk space to create a volume.
5.2 Configuring a Disk for LSM Use
If there is not enough disk space to create a volume, you must configure more disks for use with the LSM software, which involves:
Initializing a disk for use with the LSM software, which does the following:
Destroys existing data on a disk.
Updates the disk label.
Uses the default values described in Table 5-1 to configure a disk for use with the LSM software.
Adding the initialized disk to a disk group.
You must place an initialized disk into a disk group for the LSM software to use it. You can add disks to the default disk group (rootdg), which is created during the LSM installation and always exists on a system running the LSM software, or you can create additional disk groups to organize your disks into logical sets.
Each disk group that you create must:
Contain at least one disk that is online and does not belong to another disk group
Be assigned a unique name
Table 5-1 shows the default values for the options that are used when you initialize disks for use with the LSM software. These options specify the size and layout of the disk's private region, which contains the disk's identification information, an area for the disk group's configuration database, and other information used internally by the LSM software. The default values for these options are sufficient for most environments, and changing them is usually not necessary.
Table 5-1: Disk Options Default Values
Option | Specifies | Default Value |
privlen=length | The length of the private area (used for LSM private data) to create on the disk. | 4096 sectors |
publen=length | The length of the public area to create on the disk. | The size of the disk minus the private area on the disk |
noconfig | Whether or not to disable the setup of kernel logs and configuration databases on the disk. The size of the private area is not changed, but it will not contain the normal private data. | Disabled |
config | Whether or not to enable the setup of kernel logs and configuration databases on the disk. | Enabled |
nconfig=number | The number of configuration copies and log copies to be initialized on the disk. | 1 |
configlen=length | The length in sectors of each configuration copy. | Calculated based on the value of the nconfig attribute |
loglen=length | The length of each log copy. | Calculated based on the values of the nconfig and nlog attributes |
The following sections describe how to configure new disks for use with the LSM software by using either the voldiskadd interactive utility or the individual LSM commands. See Chapter 9 for information on how to configure new disks using the Storage Administrator. See Appendix C for information on how to configure new disks using the voldiskadm menu interface.
5.2.1 Configuring a Disk Using the voldiskadd Command
You use the voldiskadd command to initialize an entire disk for use with the LSM software. The voldiskadd command prompts you for information about the disk, uses the default information described in Table 5-1 to initialize the disk, and places the disk in a disk group that you specify. If the disk group does not exist, it is created.
If you do not want to use the default information described in Table 5-1 to initialize the disk, initialize the disk using the individual LSM commands described in Section 5.2.2.
To configure a disk for use with the LSM software using the voldiskadd command, enter:
# voldiskadd disk_name
For example, to configure a disk called dsk9 as an LSM sliced disk, enter:
# voldiskadd dsk9
If you omit the device name on the command line, voldiskadd prompts you for it. Output similar to the following is displayed. Notice in this output that the disk will be a member of the dg1 disk group, which is created as a result of this procedure.
Add or initialize disks
Menu: VolumeManager/Disk/AddDisks

Here is the disk selected.

  dsk9

Continue operation? [y,n,q,?] (default: y)

You can choose to add this disk to an existing disk group, a
new disk group, or leave the disk available for use by future
add or replacement operations. To create a new disk group,
select a disk group name that does not yet exist. To leave
the disk available for future use, specify a disk group name
of "none".

Which disk group [<group>,none,list,q,?] (default: rootdg) dg1

There is no active disk group named dg1.

Create a new group named dg1? [y,n,q,?] (default: y)

The default disk name that will be assigned is: dg101

Use this default disk name for the disk? [y,n,q,?] (default: y)

Add disk as a spare disk for dg1? [y,n,q,?] (default: n)

A new disk group will be created named dg1 and the selected
disks will be added to the disk group with default disk names.

  dsk9

Continue with operation? [y,n,q,?] (default: y)

The following disk device has a valid disk label, but does not
appear to have been initialized for the Logical Storage
Manager. If there is data on the disk that should NOT be
destroyed you should encapsulate the existing disk partitions
as volumes instead of adding the disk as a new disk.

  dsk9

Initialize this device? [y,n,q,?] (default: y)

Initializing device dsk9.

Creating a new disk group named dg1 containing the disk
device dsk9 with the name dg101.

Goodbye.
Once a disk is configured for use with the LSM software, you can create
volumes as described in
Section 5.3.
5.2.2 Configuring a Disk for LSM Using Individual Commands
To configure a disk for use with the LSM software by using individual commands, you enter:
The voldisksetup command to initialize disks
The voldg command to either add the initialized disk to an existing disk group or to create a new disk group
5.2.2.1 Initializing a Disk Using the voldisksetup Command
The voldisksetup command performs two functions:
Updates the partition table in the disk's disk label. The disk must already have a disk label before you use the voldisksetup command.
Initializes the disk's LSM private region, which contains the disk's identification information, an area for the disk group's configuration database, and other important information used by the LSM software.
To initialize a disk, enter:
# voldisksetup -i {diskname | partition} [options]
By specifying a disk name with the voldisksetup command, the entire disk is initialized for use with the LSM software as a sliced disk. Alternatively, specifying a disk partition initializes that partition for use with the LSM software as a simple disk. For ease of management and greater flexibility, configure the entire disk for use with the LSM software as a sliced disk whenever possible.
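For example, the following commands contrast the two forms; the partition name dsk10g is hypothetical:
# voldisksetup -i dsk10
# voldisksetup -i dsk10g
The first command initializes the entire disk dsk10 as a sliced disk; the second initializes only the g partition of that disk as a simple disk.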
Table 5-1 lists the options for which you can change values when using the voldisksetup command; however, it is usually not necessary to change these values.
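If you do need to override a default, you can pass the Table 5-1 options on the voldisksetup command line. A minimal sketch, assuming the attribute=value syntax shown in Table 5-1 (the values are illustrative only):
# voldisksetup -i dsk10 privlen=8192 nconfig=2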
Follow these steps to configure an entire disk for use with the LSM software as a sliced disk:
Identify that the disk is not initialized for use with the LSM software by entering the following command:
# voldisk list
Output similar to the following is displayed:
DEVICE       TYPE      DISK         GROUP        STATUS
dsk0         sliced    -            -            unknown
dsk1         sliced    -            -            unknown
dsk2         sliced    dsk2         rootdg       online
dsk3         sliced    dsk3         rootdg       online
dsk4         sliced    dsk4         rootdg       online
dsk5         sliced    dsk5         rootdg       online
dsk6         sliced    dsk6         rootdg       online
dsk7         sliced    -            -            online
dsk8         sliced    dsk8         dg1          online
dsk9         sliced    dsk9         dg1          online
dsk10        sliced    -            -            unknown
dsk11        sliced    -            -            unknown
dsk12        sliced    -            -            unknown
dsk13        sliced    -            -            unknown
Disks not initialized for use with the LSM software display unknown in the STATUS column.
Once an uninitialized disk is identified, enter the disklabel command to verify that the disk is not being used. For example, to verify that a disk called dsk10 is not being used, enter:
# disklabel dsk10
Output similar to the following is displayed:
# /dev/rdisk/dsk10c:
type: SCSI
disk: RZ1BB-CS
label:
flags: dynamic_geometry
bytes/sector: 512
sectors/track: 86
tracks/cylinder: 16
sectors/cylinder: 1376
cylinders: 3045
sectors/unit: 4110480
rpm: 7228
interleave: 1
trackskew: 40
cylinderskew: 80
headswitch: 0           # milliseconds
track-to-track seek: 0  # milliseconds
drivedata: 0

8 partitions:
#    size     offset   fstype  [fsize bsize cpg]  # NOTE: values not exact
a:   131072        0   unused  0 0   # (Cyl. 0 - 95*)
b:   262144   131072   unused  0 0   # (Cyl. 95*- 285*)
c:  4110480        0   unused  0 0   # (Cyl. 0 - 2987*)
d:        0        0   unused  0 0   # (Cyl. 0 - -1)
e:        0        0   unused  0 0   # (Cyl. 0 - -1)
f:        0        0   unused  0 0   # (Cyl. 0 - -1)
g:  1858632   393216   unused  0 0   # (Cyl. 285*- 1636*)
h:  1858632  2251848   unused  0 0   # (Cyl. 1636*- 2987*)
The fstype field for all of the disk's partitions should be listed as unused.
Note
Not all software that uses a disk partition updates the fstype field to something other than unused. Be sure to verify that the disk is really unused.
Initialize the disk for use with the LSM software by entering the following command:
# voldisksetup -i disk_name
For example, to initialize a disk called dsk10 for use with the LSM software, enter:
# voldisksetup -i dsk10
Display the results. Use the disklabel command to display how the disk label was updated. For example, to display the disk label for a disk called dsk10, enter:
# disklabel dsk10
Output similar to the following is displayed:
# /dev/rdisk/dsk10c:
type: SCSI
disk: RZ1BB-CS
label:
flags: dynamic_geometry
bytes/sector: 512
sectors/track: 86
tracks/cylinder: 16
sectors/cylinder: 1376
cylinders: 3045
sectors/unit: 4110480
rpm: 7228
interleave: 1
trackskew: 40
cylinderskew: 80
headswitch: 0           # milliseconds
track-to-track seek: 0  # milliseconds
drivedata: 0

8 partitions:
#    size     offset   fstype  [fsize bsize cpg]  # NOTE: values not exact
a:   131072        0   unused  0 0   # (Cyl. 0 - 95*)
b:   262144   131072   unused  0 0   # (Cyl. 95*- 285*)
c:  4110480        0   unused  0 0   # (Cyl. 0 - 2987*)
d:        0        0   unused  0 0   # (Cyl. 0 - -1)
e:        0        0   unused  0 0   # (Cyl. 0 - -1)
f:        0        0   unused  0 0   # (Cyl. 0 - -1)
g:  4106384        0   LSMpubl       # (Cyl. 0 - 2984*)
h:     4096  4106384   LSMpriv       # (Cyl. 2984*- 2987*)
Use the voldisk list disk_name command to display the disk values used within the disk's private region. For example, to display the disk values for a disk called dsk10, enter:
# voldisk list dsk10
Device:    dsk10
devicetag: dsk10
type:      sliced
hostid:
disk:      name= id=929462025.1171.wdt2
group:     name= id=
flags:     online ready autoimport
pubpaths:  block=/dev/disk/dsk10g char=/dev/rdisk/dsk10g
privpaths: block=/dev/disk/dsk10h char=/dev/rdisk/dsk10h
version:   2.1
iosize:    min=512 (bytes) max=32768 (blocks)
public:    slice=6 offset=16 len=4106368
private:   slice=7 offset=0 len=4096
update:    time=929462026 seqno=0.1
headers:   0 248
configs:   count=1 len=2993
logs:      count=1 len=453
Defined regions:
config   priv    17-  247[  231]: copy=01 offset=000000 disabled
config   priv   249- 3010[ 2762]: copy=01 offset=000231 disabled
log      priv  3011- 3463[  453]: copy=01 offset=000000 disabled
Use the voldisk list command to verify that the status of the disk is online but that the disk is not part of a disk group. For example:
# voldisk list
DEVICE       TYPE      DISK         GROUP        STATUS
dsk0         sliced    -            -            unknown
dsk1         sliced    -            -            unknown
dsk2         sliced    dsk2         rootdg       online
dsk3         sliced    dsk3         rootdg       online
dsk4         sliced    dsk4         rootdg       online
dsk5         sliced    dsk5         rootdg       online
dsk6         sliced    dsk6         rootdg       online
dsk7         sliced    -            -            online
dsk8         sliced    dsk8         dg1          online
dsk9         sliced    dsk9         dg1          online
dsk10        sliced    -            -            online
dsk11        sliced    -            -            unknown
dsk12        sliced    -            -            unknown
dsk13        sliced    -            -            unknown
After the disk is initialized, you can add it to an existing disk group
or you can create a new disk group.
The following sections describe how to
add an initialized disk to an existing disk group or how to create a new
disk group.
5.2.2.2 Adding a Disk To a Disk Group
After a disk is initialized for use with the LSM software, you can add it into an existing disk group.
Follow these steps to add a disk to an existing disk group:
Identify initialized disks that do not belong to a disk group by entering the following command:
# voldisk list
Output similar to the following is displayed:
DEVICE       TYPE      DISK         GROUP        STATUS
dsk0         sliced    -            -            unknown
dsk1         sliced    -            -            unknown
dsk2         sliced    dsk2         rootdg       online
dsk3         sliced    dsk3         rootdg       online
dsk4         sliced    dsk4         rootdg       online
dsk5         sliced    dsk5         rootdg       online
dsk6         sliced    dsk6         rootdg       online
dsk7         sliced    -            -            online
dsk8         sliced    dsk8         dg1          online
dsk9         sliced    dsk9         dg1          online
dsk10        sliced    -            -            online
dsk11        sliced    -            -            unknown
dsk12        sliced    -            -            unknown
dsk13        sliced    -            -            unknown
Initialized disks that do not belong to a disk group have a STATUS of online and a blank GROUP entry, which is represented by a dash. In the previous output, the disks called dsk7 and dsk10 are initialized and not part of a disk group because their status is online and the GROUP column is blank.
Display the disk groups by entering the following command:
# voldg list
Output similar to the following is displayed:
NAME         STATE        ID
rootdg       enabled      927328730.1026.wdt2
dg1          enabled      929455995.1168.wdt2
This output shows that the system has two disk groups: rootdg and dg1.
Add an initialized disk to a disk group by entering the following command:
# voldg [-g disk_group] adddisk disk_name
For example, to add the LSM sliced disk called dsk7 to the rootdg disk group, enter:
# voldg adddisk dsk7
To add the LSM sliced disk called dsk10 to a disk group called dg1, enter:
# voldg -g dg1 adddisk dsk10
After disks are added to a disk group, you can create volumes as described
in
Section 5.3.
5.2.2.3 Creating A Disk Group
While placing all the disks into the default disk group, rootdg, provides the greatest flexibility for creating and reconfiguring volumes, you may want to group disks together to create other disk groups.
You can use an initialized disk that is not in a disk group to create a disk group. The disks configured into a disk group provide the disk space that is used for creating volumes. LSM volumes can only use disks that are in the same disk group. Therefore, carefully decide how to group disks into disk groups.
Follow these steps to create a disk group:
Identify initialized disks that are not in a disk group by entering the following command:
# voldisk list
Output similar to the following is displayed:
DEVICE       TYPE      DISK         GROUP        STATUS
dsk0         sliced    -            -            unknown
dsk1         sliced    -            -            unknown
dsk2         sliced    dsk2         rootdg       online
dsk3         sliced    dsk3         rootdg       online
dsk4         sliced    dsk4         rootdg       online
dsk5         sliced    dsk5         rootdg       online
dsk6         sliced    dsk6         rootdg       online
dsk7         sliced    -            -            online
dsk8         sliced    dsk8         dg1          online
dsk9         sliced    dsk9         dg1          online
dsk10        sliced    -            -            online
dsk11        sliced    -            -            unknown
dsk12        sliced    -            -            unknown
dsk13        sliced    -            -            unknown
Initialized disks that are not in a disk group have a STATUS of online and a blank GROUP entry, which is represented by a dash.
Create a disk group by entering the following command:
# voldg init disk_group disk_name
For example, to create a disk group called dg2 using a disk called dsk10, enter:
# voldg init dg2 dsk10
By default, the LSM software maintains up to four copies of the LSM configuration database on different disks within each disk group. When a disk is added, removed, or fails, the LSM software automatically evaluates and, if necessary, changes the number of copies and the locations of the configuration databases for that disk group.
To display the current number and locations of a disk group's configuration databases, enter the following command:
# voldg list disk_group
For example, to display the current number and locations of configuration databases for a disk group called dg2, enter:
# voldg list dg2
Output similar to the following is displayed:
Group:     dg2
dgid:      929473041.1178.wdt2
import-id: 0.1177
flags:
copies:    nconfig=default nlog=default
config:    seqno=0.1027 permlen=2993 free=2991 templen=2 loglen=453
config disk dsk10 copy 1 len=2993 state=clean online
log disk dsk10 copy 1 len=453
Display the current LSM configuration for a disk group by entering the following command:
# volprint [-g disk_group] -ht
For example, to display the current LSM configuration for a disk group called dg2, enter:
# volprint -g dg2 -ht
Output similar to the following is displayed:
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
V  NAME         USETYPE      KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID  MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE    MODE

dg dg2          default      default  5000     929473041.1178.wdt2

dm dsk10        dsk10        sliced   4096     4106368  -
Once a disk is initialized for use with the LSM software and in a disk
group, you can create volumes as described in
Section 5.3.
5.3 Creating A Volume
After disks are initialized and added into disk groups, you can create LSM volumes. Creating an LSM volume includes:
Selecting the type of data layout for the volume. Data layout types are simple, concatenated, striped, mirrored, mirrored/striped, or RAID5.
Deciding which volume usage type to use, for example the fsgen or gen type. The LSM volume should use the fsgen usage type if the volume will contain a file system; otherwise, the volume should use the gen usage type. (See the example after this list.) If you encapsulated and mirrored the boot disk as described in Chapter 4, the root volume has a usage type of root and the swap volume has a usage type of swap. See Chapter 4 for more information on encapsulating and mirroring the root and swap volumes.
Locating storage space to create the volume.
Creating and associating a volume object with one or more plex objects.
Associating subdisks to each of the volume's plexes.
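For example, to create a 2 GB volume with the gen usage type for raw (non file system) use, you might enter a command like the following; the volume name rawvol is hypothetical:
# volassist -U gen make rawvol 2g
If you omit the -U option, the volassist command uses the default usage type, fsgen.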
You can create an LSM volume by using:
The volassist command. The volassist command provides an easy method for creating and changing volume configurations. The volassist command:
Finds space for and creates volumes
Uses a set of default values for options, which you can change, to create a volume. To view the default values for the options, enter:
# volassist help showattrs
Output similar to the following is displayed:
#Attributes:
layout=nomirror,nostripe,span,nocontig,raid5log,noregionlog,diskalign,nostorage
mirrors=2 columns=0 nlogs=1 regionlogs=1 raid5logs=1
min_columns=2 max_columns=8
regionloglen=0 raid5loglen=0 logtype=region
stripe_stripeunitsize=128 raid5_stripeunitsize=32
usetype=fsgen diskgroup= comment="" fstype=
user=0 group=0 mode=0600 probe_granularity=2048
alloc= wantalloc= mirror=
Adds mirrors and logs to existing volumes
Provides for the migration of data from specified disks
Provides facilities for the online backup of existing volumes
A series of individual LSM commands. Using individual commands to create volumes is for system administrators who require greater flexibility in defining an LSM volume configuration.
The Storage Administrator
The voldiskadm menu interface
The following sections describe how to create LSM volumes by using either the volassist command or the individual LSM commands. See Chapter 9 for information on how to create volumes using the Storage Administrator.
5.3.1 Creating Simple and Concatenated Volumes
An LSM volume that maps the volume blocks directly to the disk blocks without mirroring, striping, or disk concatenation is often referred to as a simple volume.
Using a simple volume has minimal I/O performance impact and allows greater flexibility compared to using a disk partition without LSM because you can easily change the configuration online, such as moving the data to a less busy disk, adding a mirror, and so on.
A concatenated LSM volume is a volume that combines one or more sections of disk space. Usually these disk sections, or subdisks, reside on several different disks, but this is not required.
A concatenated volume can be used to combine several smaller disks
to form a single, larger LSM volume.
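As an example of the online flexibility mentioned above, data can be moved off a busy disk while the volume remains in use. A sketch, assuming the volassist move operation with its disk-exclusion (!) syntax; the names are illustrative only:
# volassist move v1 !dsk2 dsk4
This would relocate the data of volume v1 off dsk2 and onto dsk4.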
5.3.1.1 Using the volassist Command
You can use the volassist command to create a simple volume on a disk by specifying the disk to be used and a volume size that is less than or equal to the available storage space on that disk. Specifying a volume size that exceeds an individual disk size will create a concatenated volume.
If you do not specify a disk name or multiple disks, the volassist command selects the disk location of the volume. However, a concatenated volume may be created that spans multiple disks if the volume will not fit on one disk.
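For example, to let the volassist command choose the storage for a 2 GB volume (creating a concatenated volume if no single disk has enough space), omit the disk names; the volume name v_auto is hypothetical:
# volassist make v_auto 2g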
Follow these steps to create a simple volume called v1 in the rootdg disk group on a disk called dsk2:
Display the disk space on a disk by entering the following command:
# volassist help space | grep disk_name
For example, to check the space on a disk called dsk2, enter:
# volassist help space | grep dsk2
Output similar to the following is displayed:
Disk: dsk2 len=4106368 used=0 free=4106368 (100.00%)
dm:dsk2 device:dsk2 da:dsk2
da:dsk2 space=4106368 used=0 free=4106368 (100.00%)
device:dsk2 space=4106368 used=0 free=4106368 (100.00%)
dm:dsk2 space=4106368 used=0 free=4106368 (100.00%)
Create the volume by entering the following command:
# volassist [-g group_name] make volume_name\
length [disk_name]
For example, to create a simple volume called v1 in the rootdg disk group on a disk called dsk2, enter:
# volassist make v1 4106368s dsk2
Display the results by entering the following command:
# volprint -ht volume_name
For example, to display information about a volume called v1, enter:
# volprint -ht v1
Output similar to the following is displayed:
Disk group: rootdg

V  NAME         USETYPE      KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID  MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE    MODE

v  v1           fsgen        ENABLED  ACTIVE   4106368  SELECT    -
pl v1-01        v1           ENABLED  ACTIVE   4106368  CONCAT    -         RW
sd dsk2-01      v1-01        dsk2     0        4106368  0         dsk2      ENA
Follow these steps to create a concatenated volume:
Display the disk space in a disk group by entering the following command:
# volassist [-g disk_group] help space
For example, to display the disk space in a disk group called dg1, enter:
# volassist -g dg1 help space
Output similar to the following is displayed:
Disk: dsk8 len=4106368 used=0 free=4106368 (100.00%)
Attributes: dm:dsk8 device:dsk8 da:dsk8
Free regions: 16,4106368
Disk: dsk9 len=4106368 used=0 free=4106368 (100.00%)
Attributes: dm:dsk9 device:dsk9 da:dsk9
Free regions: 16,4106368
Disk sets:
da:dsk8 space=4106368 used=0 free=4106368 (100.00%)
da:dsk9 space=4106368 used=0 free=4106368 (100.00%)
device:dsk8 space=4106368 used=0 free=4106368 (100.00%)
device:dsk9 space=4106368 used=0 free=4106368 (100.00%)
dm:dsk8 space=4106368 used=0 free=4106368 (100.00%)
dm:dsk9 space=4106368 used=0 free=4106368 (100.00%)
Create the concatenated volume by entering the following command:
# volassist [-g group_name] -U usage_type make \
volume_name length [disk_names]
For example, to create a 3 GB concatenated volume called v2 in the disk group called dg1 on disks called dsk8 and dsk9, enter:
# volassist -g dg1 -U gen make v2 3g dsk8 dsk9
Display the results by entering the following command:
# volprint -ht volume_name
For example, to display information about a volume called v2, enter:
# volprint -ht v2
Output similar to the following is displayed:
Disk group: dg1

V  NAME         USETYPE      KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID  MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE    MODE

v  v2           gen          ENABLED  ACTIVE   6291456  SELECT    -
pl v2-01        v2           ENABLED  ACTIVE   6291456  CONCAT    -         RW
sd dsk8-01      v2-01        dsk8     0        2185088  0         dsk8      ENA
sd dsk9-01      v2-01        dsk9     0        4106368  2185088   dsk9      ENA
5.3.1.2 Using Individual Commands
To create a simple volume called v1 in the rootdg disk group on a disk called dsk2 using the volmake and volume commands, enter:
# volmake sd dsk2-01 dsk2,0,4106368
# volmake plex v1-01 sd=dsk2-01
# volmake -U fsgen vol v1 plex=v1-01
# volume start v1
Display the results by entering the following command:
# volprint -ht volume_name
For example, to display information about a volume called v1, enter:
# volprint -ht v1
Output similar to the following is displayed:
Disk group: rootdg

V  NAME         USETYPE      KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID  MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE    MODE

v  v1           fsgen        ENABLED  ACTIVE   4106368  SELECT    -
pl v1-01        v1           ENABLED  ACTIVE   4106368  CONCAT    -         RW
sd dsk2-01      v1-01        dsk2     0        4106368  0         dsk2      ENA
5.3.2 Creating A Striped Volume
Using LSM striped volumes (RAID0) is a common and effective way to dramatically increase I/O performance. The actual performance gain from striping depends on numerous factors, such as:
The number of disks within the stripe set
The location of the disks
How users and applications perform I/O
The stripe width
I/O performance can improve and scale nearly linearly with the number of disks used within the stripe set. For example, striping a volume's data across two disks can potentially improve both read and write performance for that volume by a factor of two, and striping data across four disks can potentially improve performance by up to a factor of four.
LSM striped volumes can also improve performance by preventing a single bus or controller from becoming the bottleneck for the volume's I/O. By using multiple disks that reside on multiple buses to form the stripe set, a greater I/O throughput can be achieved for a single volume than would otherwise be possible if all the volume's data resided on the same I/O bus. Therefore, understanding the system's hardware I/O topology when selecting the disks for a striped volume helps to significantly improve I/O performance and avoid bottlenecks.
The default stripe width of 64KB usually works best for most I/O workloads, such as file systems and databases that generate multiple I/Os to the same volume. For highly specialized environments where very large, raw I/Os are always performed to a volume one at a time (that is, multiple I/Os are never issued to the same volume at the same time), a different stripe width may provide better performance by enabling the large data transfer to be split up and performed in parallel. The best stripe width to use for single, large I/O environments depends on:
Whether the I/O size varies
The number of disks within the stripe set
The hardware configuration, such as whether multiple buses are used
The hardware performance, such as average disk seek and transfer times
It is best to experiment with different stripe width sizes to determine the size that works best for these specialized I/O environments.
Because the LSM software lets you configure and deconfigure plexes online, you can compare plexes with different stripe width sizes against your actual I/O workload, as the sketch after this paragraph shows.
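For example, one way to run such an experiment is to build a second plex with a different stripe width and attach it to the volume online, using the volmake and volplex commands shown later in this chapter. The subdisk, plex, and disk names here are hypothetical:
# volmake sd dsk8-01 dsk8,0,2097152
# volmake sd dsk9-01 dsk9,0,2097152
# volmake plex v1-02 layout=stripe st_width=256k sd=dsk8-01,dsk9-01
# volplex att v1 v1-02
You can then measure the workload with each configuration and deconfigure the layout that performs worse.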
5.3.2.1 Using the volassist Command
Follow these steps to create a striped volume using the volassist command:
Determine which disks are configured for use with the LSM software by entering the following command:
# volprint -g disk_group -dt
For example, to display the LSM disks in the rootdg disk group, enter:
# volprint -g rootdg -dt
Output similar to the following is displayed:
DM NAME      DEVICE    TYPE     PRIVLEN  PUBLEN   STATE

dm dsk2      dsk2      sliced   4096     4106368  -
dm dsk3      dsk3      sliced   4096     4106368  -
dm dsk4      dsk4      sliced   4096     4106368  -
dm dsk5      dsk5      sliced   4096     4106368  -
dm dsk6      dsk6      sliced   4096     4106368  -
dm dsk7      dsk7      sliced   4096     4106368  -
Create the striped volume by entering the following command:
# volassist [-g disk_group] make volume_name length \
nstripe=n [options]
Where n is the number of columns to be configured in the stripe set.
For example, to create the striped volume called v_stripe with the default stripe width on disks dsk2 through dsk7, enter:
# volassist make v_stripe 6g nstripe=6 dsk2 dsk4 dsk6 \
dsk3 dsk5 dsk7
Display the results by entering the following command:
# volprint -ht volume_name
For example, to display information about a volume called v_stripe, enter:
# volprint -ht v_stripe
Output similar to the following is displayed:
Disk group: rootdg

V  NAME         USETYPE      KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID  MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE    MODE

v  v_stripe     fsgen        ENABLED  ACTIVE   12582912 SELECT    v_stripe-01
pl v_stripe-01  v_stripe     ENABLED  ACTIVE   12582912 STRIPE    6/128     RW
sd dsk2-01      v_stripe-01  dsk2     0        2097152  0/0       dsk2      ENA
sd dsk3-01      v_stripe-01  dsk3     0        2097152  1/0       dsk3      ENA
sd dsk4-01      v_stripe-01  dsk4     0        2097152  2/0       dsk4      ENA
sd dsk5-01      v_stripe-01  dsk5     0        2097152  3/0       dsk5      ENA
sd dsk6-01      v_stripe-01  dsk6     0        2097152  4/0       dsk6      ENA
sd dsk7-01      v_stripe-01  dsk7     0        2097152  5/0       dsk7      ENA
5.3.2.2 Using Individual Commands
For greater control over how the volume is configured, use the volmake and volume commands. Using these commands to create a striped volume gives you greater control in specifying which disks are used for which stripe column. In this way, you can obtain the best performance by configuring the striped plex so the stripe columns alternate, or rotate, across different hardware buses.
Follow these steps to use the volmake and volume commands to create an LSM striped volume:
Determine the hardware bus on which each LSM disk resides by entering the following command:
# file /dev/rdisk/disk_name
For example, to determine the hardware bus for disks called dsk2c, dsk3c, dsk4c, dsk5c, dsk6c, and dsk7c, enter:
# file /dev/rdisk/dsk2c /dev/rdisk/dsk3c \
/dev/rdisk/dsk4c /dev/rdisk/dsk5c \
/dev/rdisk/dsk6c /dev/rdisk/dsk7c
Output similar to the following is displayed:
/dev/rdisk/dsk2c: character special (19/70) SCSI #1 RZ1BB-CS disk #3 (SCSI ID #4) (SCSI LUN #0)
/dev/rdisk/dsk3c: character special (19/86) SCSI #1 RZ1BB-CS disk #4 (SCSI ID #6) (SCSI LUN #0)
/dev/rdisk/dsk4c: character special (19/102) SCSI #2 RZ1BB-CS disk #5 (SCSI ID #1) (SCSI LUN #0)
/dev/rdisk/dsk5c: character special (19/118) SCSI #2 RZ1BB-CS disk #6 (SCSI ID #5) (SCSI LUN #0)
/dev/rdisk/dsk6c: character special (19/134) SCSI #3 RZ1BB-CS disk #7 (SCSI ID #0) (SCSI LUN #0)
/dev/rdisk/dsk7c: character special (19/150) SCSI #3 RZ1BB-CS disk #0 (SCSI ID #2) (SCSI LUN #0)
Create the subdisks by entering the following command for each subdisk:
# volmake sd sub_disk_name disk_name,offset,length
For example, to create subdisks called dsk2-01, dsk3-01, dsk4-01, dsk5-01, dsk6-01, and dsk7-01, enter:
# volmake sd dsk2-01 dsk2,0,2097152
# volmake sd dsk3-01 dsk3,0,2097152
# volmake sd dsk4-01 dsk4,0,2097152
# volmake sd dsk5-01 dsk5,0,2097152
# volmake sd dsk6-01 dsk6,0,2097152
# volmake sd dsk7-01 dsk7,0,2097152
Create a striped plex by entering the following command:
# volmake plex plex_name layout=stripe st_width=64k \
sd=sub_disk_names
For example, to create a plex called v_stripe-01 using subdisks called dsk2-01, dsk3-01, dsk4-01, dsk5-01, dsk6-01, and dsk7-01, enter:
# volmake plex v_stripe-01 layout=stripe st_width=64k \
sd=dsk2-01,dsk4-01,dsk6-01,dsk3-01,dsk5-01,dsk7-01
Notice that the order in which the subdisks are specified when creating the plex rotates the stripe columns across the different hardware buses.
Create the volume using the striped plex by entering the following command:
# volmake -U usage_type vol volume_name plex=plex_name
For example, to use a plex called v_stripe-01 to create a volume called v_stripe, enter:
# volmake -U gen vol v_stripe plex=v_stripe-01
Start the volume by entering the following command:
# volume start volume_name
For example, to start a volume called v_stripe, enter:
# volume start v_stripe
Display the results by entering the following command:
# volprint -ht volume_name
For example, to display information about a volume called v_stripe, enter:
# volprint -ht v_stripe
Output similar to the following is displayed:
Disk group: rootdg

V  NAME         USETYPE      KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID  MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE    MODE

v  v_stripe     gen          ENABLED  ACTIVE   12582912 ROUND     -
pl v_stripe-01  v_stripe     ENABLED  ACTIVE   12582912 STRIPE    6/128     RW
sd dsk2-01      v_stripe-01  dsk2     0        2097152  0/0       dsk2      ENA
sd dsk4-01      v_stripe-01  dsk4     0        2097152  1/0       dsk4      ENA
sd dsk6-01      v_stripe-01  dsk6     0        2097152  2/0       dsk6      ENA
sd dsk3-01      v_stripe-01  dsk3     0        2097152  3/0       dsk3      ENA
sd dsk5-01      v_stripe-01  dsk5     0        2097152  4/0       dsk5      ENA
sd dsk7-01      v_stripe-01  dsk7     0        2097152  5/0       dsk7      ENA
5.3.3 Creating a Mirrored Volume
Using LSM mirrored volumes (RAID1) is a common and effective way to improve data availability. If one disk fails on a mirrored volume, the data can still be accessed from the other copy, or plex. By mirroring data using disks connected to different controllers or buses, you can improve data availability even further because the data is still accessible if a controller, cable, or storage cabinet fails. Therefore, it is helpful to understand a system's I/O hardware topology; that is, to know which disks reside on which I/O bus.
Besides improving data availability, mirroring can significantly improve read performance because multiple reads to the same volume can be serviced simultaneously by the multiple copies of the data. For example, read performance can potentially improve by a factor of two on a mirrored volume with two plexes because twice as many reads can be performed at the same time.
Writes to the volume result in multiple, simultaneous write requests to each plex, so the time it takes to write to a volume may be slightly longer because of slight performance deviations between individual disks. For example, an individual write might take an additional 5 percent on average to complete because the volume write must wait for both writes to complete on both plexes (disks).
You can improve overall I/O performance with mirroring because the larger performance gains for reads often more than offset the slight degradation for writes. Comparing the number of read operations to the number of write operations on a volume using the volstat command can give you better insight into whether mirroring will improve overall performance as well as provide higher data availability.
Because the LSM software allows you to change a volume (add or remove a mirror) on line, you can measure the overall performance implications on the actual I/O workload without stopping or disrupting service to a volume.
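For example, a minimal sketch of such a comparison; see the volstat(8) reference page for the exact options available on your system:
# volstat -g dg1 v_mirr01
If the read count greatly exceeds the write count, the volume is a good candidate for mirroring.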
Mirrored volumes created with the volassist command have dirty region logging (DRL) enabled by default. A DRL is used with mirrored volumes to track written (or dirty) regions within the mirrored volume. While a DRL may add slight overhead to writes to the mirrored volume, it significantly reduces the amount of time that it takes to resynchronize a mirrored volume when the system boots after a failure, because only the dirty regions within the volume are resynchronized rather than the entire volume.
While using a DRL with a mirrored volume is not required and has no effect on data integrity, a DRL dramatically reduces the amount of time it takes to resynchronize a mirrored volume. It is recommended that you configure a mirrored volume with a DRL, which is the default.
Note
In a TruCluster environment, the resynchronization overhead and time are significantly higher. Always configure a mirrored volume with a DRL in a TruCluster environment.
5.3.3.1 Using the volassist Command
Follow these steps to create a mirrored volume:
Determine which disks are configured for use with the LSM software by entering the following command:
# volprint -g disk_group -dt
For example, to display the LSM disks in the dg1 disk group, enter:
# volprint -g dg1 -dt
Output similar to the following is displayed:
TY NAME      ASSOC     KSTATE   LENGTH   PLOFFS   STATE   TUTIL0  PUTIL0
dm dsk8      dsk8      -        4106368  -        -       -       -
dm dsk9      dsk9      -        4106368  -        -       -       -
dm dsk10     dsk10     -        4106368  -        -       -       -
By default, volassist creates a DRL, so additional storage space is needed when creating the volume. Also, the plex layout is concatenated by default.
Create a mirrored volume by entering the following command:
# volassist [-g disk_group] make volume_name \
length nmirror=2 [disk_names]
For example, to create a mirrored volume in the dg1 disk group using disks called dsk8, dsk9, and dsk10, enter:
# volassist -g dg1 make v_mirr01 4106368s nmirror=2 \
dsk8 dsk9 dsk10
If you do not specify a disk name, or if you specify more than three disks, the volassist command selects the disk location of the volume.
Display the results by entering the following command:
# volprint -ht volume_name
For example, to display information about a volume called v_mirr01, enter:
# volprint -ht v_mirr01
Output similar to the following is displayed:
Disk group: dg1

V  NAME         USETYPE      KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID  MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE    MODE

v  v_mirr01     fsgen        ENABLED  ACTIVE   4106368  SELECT    -
pl v_mirr01-01  v_mirr01     ENABLED  ACTIVE   4106368  CONCAT    -         RW
sd dsk8-01      v_mirr01-01  dsk8     0        4106368  0         dsk8      ENA
pl v_mirr01-02  v_mirr01     ENABLED  ACTIVE   4106368  CONCAT    -         RW
sd dsk9-01      v_mirr01-02  dsk9     0        4106368  0         dsk9      ENA
pl v_mirr01-03  v_mirr01     ENABLED  ACTIVE   LOGONLY  CONCAT    -         RW
sd dsk10-01     v_mirr01-03  dsk10    0        130      LOG       dsk10     ENA
Notice in this output that the total volume size is 4106368 sectors and that two data plexes of the same size, called v_mirr01-01 and v_mirr01-02, were created using the disks called dsk8 and dsk9, respectively. Also notice that the DRL plex is called v_mirr01-03 and uses a disk called dsk10.
To maintain greater control over which disk will contain the volume's data and which disk is used for the volume's DRL, you may want to first create the volume without a DRL, then add the log separately. For example, to create the same volume as the previous example, but explicitly specifying the disks for data and the disk for a DRL, enter the following commands:
# volassist -g dg1 make v_mirr01 4106368s nmirror=2 \
layout=nolog dsk8 dsk9
# volassist -g dg1 addlog v_mirr01 dsk10
5.3.3.2 Using Individual Commands
For complete control over how the volume is configured, use the volmake and volume commands. For example, to create a volume with two plexes called v_mirr01-01 and v_mirr01-02 that use disks called dsk8 and dsk9 respectively, and a DRL plex called v_mirr01-03 that uses a disk called dsk10, enter:
# volmake -g dg1 sd dsk8-01 dsk8,0,4106368
# volmake -g dg1 sd dsk9-01 dsk9,0,4106368
# volmake -g dg1 sd dsk10-01 dsk10,0,130
# volmake -g dg1 plex v_mirr01-01 sd=dsk8-01
# volmake -g dg1 plex v_mirr01-02 sd=dsk9-01
# volmake -g dg1 plex v_mirr01-03 logsd=dsk10-01
# volmake -g dg1 -Ufsgen vol v_mirr01 \
plex=v_mirr01-01,v_mirr01-02,v_mirr01-03
# volume start v_mirr01
In this example, notice the mirrored volume's DRL size is 130 blocks and was placed on a different disk than the volume's data.
The following section provides more information on sizing and placing
a mirrored volume's DRL.
5.3.3.3 Creating a DRL for a Mirrored Volume
When creating a mirrored volume using the volassist command, a DRL is created by default. This section provides additional information on sizing and placing a mirrored volume's DRL for best results.
Follow these guidelines to create a DRL:
The volume must be mirrored.
Avoid placing the log on a heavily-used disk.
Avoid using the same disk for both the volume's data and log.
Use disks within a storage subsystem configured with a nonvolatile write-back cache, if available.
At least one log subdisk must exist on the volume. However, only one log subdisk can exist per plex.
Although you can associate a logging subdisk to a plex that also contains data, it is best to configure a logging subdisk to plexes that do not contain data, for example a separate or log only plex.
It is possible to mirror log subdisks by having more than one log subdisk (but only one per plex) in the volume. This ensures that logging can continue, even if a disk failure causes one log subdisk to become inaccessible.
The minimum DRL size for a TruCluster environment is 65 blocks. The volassist command creates a DRL sized for a TruCluster environment even on non-TruCluster systems to ensure a smooth migration to a TruCluster environment in the future.
Table 5-2 shows example optimum DRL sizes for TruCluster configurations.
Table 5-2: DRL Sizes for TruCluster Configurations
Volume Size in GB | DRL Size in Blocks |
1 or smaller | 65 |
2 | 132 |
3 | 132 |
4 | 198 |
5 | 198 |
60 | 2046 |
61 | 2046 |
62 or larger | 2122 |
See Cluster Administration for information about configuring LSM in a TruCluster environment.
The minimum DRL size for a non-TruCluster environment is 2 blocks.
For systems not configured as part of a TruCluster environment, you must configure a log subdisk with 2 or more blocks, preferably an even number, because the last block in a log subdisk with an odd number of blocks is not used. The log subdisk size is normally proportional to the volume size. If a volume is less than 2 GB, a log subdisk of 2 blocks is sufficient. Increase the log subdisk by 2 blocks for each additional 2 GB of volume size; for example, a 4 GB or 5 GB volume uses a 6-block log, as shown in Table 5-3. To facilitate later migration to a TruCluster environment, you should use the TruCluster DRL sizes in Table 5-2.
By default, the volassist command configures a larger log subdisk so the mirrored volume with the log can be used within a TruCluster environment. Table 5-3 shows example optimum DRL sizes for non-TruCluster systems.
Table 5-3: DRL Sizes for Non-TruCluster Configurations
Volume Size in GB | DRL Size in Blocks |
1 or smaller | 2 |
2 | 4 |
3 | 4 |
4 | 6 |
5 | 6 |
60 | 62 |
61 | 62 |
62 or larger | 64 |
By default, a log plex is created to contain the log subdisk. Once created, the plex containing a log subdisk is treated as a regular plex. You can remove the log plex and subdisk using the same procedures to remove regular plexes and subdisks.
To use the volassist command to create a DRL for a mirrored volume, enter:
# volassist [-g disk_group] addlog volume_name \
[disk_name]
For example, to create a DRL for a volume called volmir, enter:
# volassist addlog volmir
To use the volmake and volplex commands to create a DRL for a volume called volmir, enter:
# volmake sd dsk10-01 dsk10,0,130
# volmake plex volmir-03 logsd=dsk10-01
# volplex att volmir volmir-03
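To mirror the log itself, as the guidelines above recommend, you can repeat the procedure with a second log plex on another disk; the subdisk, plex, and disk names here are hypothetical:
# volmake sd dsk11-01 dsk11,0,130
# volmake plex volmir-04 logsd=dsk11-01
# volplex att volmir volmir-04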
Note
Do not configure a DRL for mirrored volumes that are used for swap.
5.3.4 Creating A Mirrored and Striped Volume
Configuring an LSM volume to be both mirrored and striped is a common and effective way to improve both performance and availability for a volume. The LSM software accomplishes this by configuring each of the volume's data plexes, or mirrors, with a striped layout. Just as when creating either a striped or a mirrored volume, understanding the system's I/O hardware topology (for example, which disks are on which buses) is useful for maximizing performance and availability by using disks that reside on different buses.
It may not always be practical to both mirror and stripe across buses
(for example, have each disk on its own I/O bus).
Mirroring across I/O buses
is preferred over striping because this provides both the highest level of
availability and ensures all the volume's reads and writes are evenly distributed
across the buses for the best performance.
5.3.4.1 Using the volassist Command
Follow these steps to create a mirrored and striped volume:
Create a mirrored and striped volume by entering the following command:
# volassist [-g disk_group] make volume_name nstripe=n \
nmirror=m [options]
In this command, n is the number of columns to be used and m is the number of plexes.
For example, to create a 3 GB mirrored and striped volume called vol4 with the default stripe width using any of the disks in the rootdg disk group, enter:
# volassist make vol4 3g nmirror=2 nstripe=3
Display the results by entering the following command:
# volprint -ht volume_name
For example, to display information about a volume called vol4, enter:
# volprint -ht vol4
Output similar to the following is displayed:
Disk group: rootdg

V  NAME         USETYPE      KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID  MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE    MODE

v  vol4         fsgen        ENABLED  ACTIVE   6291456  SELECT    -
pl vol4-01      vol4         ENABLED  ACTIVE   6291456  STRIPE    3/128     RW
sd dsk2-01      vol4-01      dsk2     130      2097152  0/0       dsk2      ENA
sd dsk3-01      vol4-01      dsk3     0        2097152  1/0       dsk3      ENA
sd dsk4-01      vol4-01      dsk4     0        2097152  2/0       dsk4      ENA
pl vol4-02      vol4         ENABLED  ACTIVE   6291456  STRIPE    3/128     RW
sd dsk5-01      vol4-02      dsk5     0        2097152  0/0       dsk5      ENA
sd dsk6-01      vol4-02      dsk6     0        2097152  1/0       dsk6      ENA
sd dsk7-01      vol4-02      dsk7     0        2097152  2/0       dsk7      ENA
pl vol4-03      vol4         ENABLED  ACTIVE   LOGONLY  CONCAT    -         RW
sd dsk2-02      vol4-03      dsk2     0        130      LOG       dsk2      ENA
The volassist command selects which disks to use, which may not be the optimum configuration. Follow these steps to select the disks that are used:
Check the I/O hardware topology for the disks to be used. For example:
# file /dev/rdisk/dsk6c /dev/rdisk/dsk7c \
/dev/rdisk/dsk8c /dev/rdisk/dsk9c \
/dev/rdisk/dsk10c /dev/rdisk/dsk11c /dev/rdisk/dsk12c
Output similar to the following is displayed:
/dev/rdisk/dsk6c: character special (19/134) SCSI #3 RZ1BB-CS disk #7 (SCSI ID #0) (SCSI LUN #0)
/dev/rdisk/dsk7c: character special (19/150) SCSI #3 RZ1BB-CS disk #0 (SCSI ID #2) (SCSI LUN #0)
/dev/rdisk/dsk8c: character special (19/166) SCSI #3 RZ1BB-CS disk #1 (SCSI ID #4) (SCSI LUN #0)
/dev/rdisk/dsk9c: character special (19/182) SCSI #3 RZ1BB-CS disk #2 (SCSI ID #6) (SCSI LUN #0)
/dev/rdisk/dsk10c: character special (19/198) SCSI #4 RZ1BB-CS disk #3 (SCSI ID #1) (SCSI LUN #0)
/dev/rdisk/dsk11c: character special (19/214) SCSI #4 RZ1BB-CS disk #4 (SCSI ID #3) (SCSI LUN #0)
/dev/rdisk/dsk12c: character special (19/230) SCSI #4 RZ1BB-CS disk #5 (SCSI ID #5) (SCSI LUN #0)
Create a striped volume using disks on the same I/O bus (as shown in the previous output) by entering the following command:
# volassist make volume_name length \
nstripe=number disks
For example, to create a 3 GB 3-way striped volume called vol4 using disks called dsk10, dsk11, and dsk12, enter:
# volassist make vol4 3g nstripe=3 dsk10 dsk11 dsk12
Add a 3-way striped plex (mirror) so that the volume's data is mirrored across SCSI buses by entering the following command:
# volassist mirror volume_name nstripe=number disks
For example, to mirror a volume called vol4 using disks called dsk6, dsk7, and dsk8, enter:
# volassist mirror vol4 nstripe=3 dsk6 dsk7 dsk8
Add a DRL on a separate disk by entering the following command:
# volassist addlog volume_name disk_name
For example, to create a DRL on a disk called dsk9 for a volume called vol4, enter:
# volassist addlog vol4 dsk9
Display the results by entering the following command:
# volprint -ht volume_name
For example, to display information for a volume called vol4, enter:
# volprint -ht vol4
Output similar to the following is displayed:
Disk group: rootdg

V  NAME         USETYPE      KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID  MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE    MODE

v  vol4         fsgen        ENABLED  ACTIVE   6291456  SELECT    -
pl vol4-01      vol4         ENABLED  ACTIVE   6291456  STRIPE    3/128     RW
sd dsk10-01     vol4-01      dsk10    0        2097152  0/0       dsk10     ENA
sd dsk11-01     vol4-01      dsk11    0        2097152  1/0       dsk11     ENA
sd dsk12-01     vol4-01      dsk12    0        2097152  2/0       dsk12     ENA
pl vol4-02      vol4         ENABLED  ACTIVE   6291456  STRIPE    3/128     RW
sd dsk6-01      vol4-02      dsk6     0        2097152  0/0       dsk6      ENA
sd dsk7-01      vol4-02      dsk7     0        2097152  1/0       dsk7      ENA
sd dsk8-01      vol4-02      dsk8     0        2097152  2/0       dsk8      ENA
pl vol4-03      vol4         ENABLED  ACTIVE   LOGONLY  CONCAT    -         RW
sd dsk9-01      vol4-03      dsk9     0        130      LOG       dsk9      ENA
5.3.4.2 Using Individual Commands
For complete control over how the mirrored and striped volume is configured, use the volmake and volume commands. Using these commands allows you to specify how the disks are used. For example, the following commands create a 3-way striped plex, add a second 3-way striped plex (mirror) so that the volume's data is mirrored across SCSI buses, and add a DRL on a separate disk:
# volmake sd dsk6-01 dsk6,0,2097152
# volmake sd dsk7-01 dsk7,0,2097152
# volmake sd dsk8-01 dsk8,0,2097152
# volmake plex vol4-01 layout=stripe st_width=64k \
sd=dsk6-01,dsk7-01,dsk8-01
# volmake sd dsk10-01 dsk10,0,2097152
# volmake sd dsk11-01 dsk11,0,2097152
# volmake sd dsk12-01 dsk12,0,2097152
# volmake plex vol4-02 layout=stripe st_width=64k \
sd=dsk10-01,dsk11-01,dsk12-01
# volmake sd dsk9-01 dsk9,0,130
# volmake plex vol4-03 logsd=dsk9-01
# volmake -U fsgen vol vol4 plex=vol4-01,vol4-02,vol4-03
# volume start vol4
5.3.5 Creating a RAID5 Volume
A RAID5 volume provides an alternative method to mirroring (RAID1) for improving data availability. A RAID5 volume contains a single plex, consisting of multiple subdisks derived from three or more disks. Data is striped across the subdisks, along with parity information that provides data redundancy.
Compared to a mirrored volume, a RAID5 volume requires fewer disks to improve data availability. For example, a 5-way stripe set requires six disks if configured as RAID5, compared to ten disks if it were mirrored and striped. However, there are disadvantages to using RAID5 volumes that might make using mirrored or mirrored and striped volumes more desirable:
RAID5 write-performance is often slower because both the data and new parity information are written. A single write to a volume often translates into two reads followed by two writes in order to read, modify, and write the volume's new data and new parity.
If a disk fails, a write to any one of the volume's disks translates to first reading all the disks before the data and parity are written. A read to a RAID5 volume with a failed disk may require reading all the other disks instead.
Data availability is not as high as with mirroring. For example, if a second disk fails in a RAID5 volume, all the volume's data on those disks is lost because the parity information can only be used to recover data when one disk fails.
Despite the disadvantages, using a RAID5 volume might make sense either for read intensive environments or to improve availability on rarely accessed data.
You must configure a RAID5 log with a RAID5 volume. A RAID5 log is required to recover the volume's data when the system boots after a system failure. A RAID5 log differs from a mirrored volume's DRL in that a RAID5 log contains the data that was being written to the volume when the failure occurred. The data in the RAID5 log is required to recover a RAID5 volume running in degraded mode due to a failed disk. Therefore, a RAID5 log is necessary to maintain the volume's data integrity after a failure. A mirrored volume's DRL is not needed for data integrity; rather, it is used only to accelerate the recovery process.
When creating a RAID5 volume with the volassist command, a log is created by default.
The stripe width used for a RAID5 volume is typically smaller than the stripe width used for striping (RAID0), to lessen the performance impact of RAID5 writes. Unlike striping (RAID0), splitting up a write across all of the disks within the stripe set can improve RAID5 performance: when a write covers a full RAID5 row of data, the new parity is computed from the data being written, so reading the existing data and parity is not necessary. For example, writing 64KB of data to a five-column RAID5 stripe with a stripe width of 64KB may involve two parallel reads followed by two parallel writes (reading both the existing data and parity, then writing the new data and new parity).
However, writing the same 64KB of data to a five-column RAID5 stripe
with a stripe width of only 16KB could instead allow the 64KB of data to be
written to the disks immediately (that is, five parallel writes to the four
data disks and the one parity disk), because the new parity for the RAID5
row is determined from the 64KB of data itself, making any reads of the old
data or parity unnecessary.
The default RAID5 stripe width of 16KB usually works best
for most environments.
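To make the full-row write arithmetic concrete, the following worked example restates the numbers used above:
stripe width = 16KB
data columns per row = 5 columns - 1 parity column = 4
full-row write size = 4 x 16KB = 64KB
A 64KB write therefore fills a complete RAID5 row, so the new parity is computed from the new data alone and no reads of the old data or parity are needed.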
5.3.5.1 Using the
volassist
Command
Follow these steps to create a RAID5 volume:
Create a RAID5 volume by entering the following command:
#
volassist [-g disk_group] make volume_name length \
layout=raid5 [options]
For example, to create a RAID5 volume called
volraid
that is 100 MB, enter:
#
volassist make volraid 100m layout=raid5 nstripe=4 \
dsk6 dsk7 dsk8 dsk9 dsk10
Display the results by entering the following command:
#
volprint -ht volume_name
For example, to display information for a volume called
volraid
, enter:
#
volprint -ht volraid
Output similar to the following is displayed:
Disk group: rootdg

V  NAME          USETYPE      KSTATE   STATE    LENGTH  READPOL   PREFPLEX
PL NAME          VOLUME       KSTATE   STATE    LENGTH  LAYOUT    NCOL/WID  MODE
SD NAME          PLEX         DISK     DISKOFFS LENGTH  [COL/]OFF DEVICE    MODE

v  volraid       raid5        ENABLED  ACTIVE   204864  RAID      -
pl volraid-01    volraid      ENABLED  ACTIVE   204864  RAID      4/32      RW
sd dsk6-01       volraid-01   dsk6     0        68288   0/0       dsk6      ENA
sd dsk7-01       volraid-01   dsk7     0        68288   1/0       dsk7      ENA
sd dsk8-01       volraid-01   dsk8     0        68288   2/0       dsk8      ENA
sd dsk9-01       volraid-01   dsk9     0        68288   3/0       dsk9      ENA
pl volraid-02    volraid      ENABLED  LOG      1280    CONCAT    -         RW
sd dsk10-01      volraid-02   dsk10    0        1280    0         dsk10     ENA
5.3.5.2 Using Individual Commands
For complete control over how the volume is configured, use the
volmake
and
volume
commands.
Using these commands
allows you to specify exactly how the disks are used.
For example, to create a RAID5 volume
called
volraid
that is 100 MB, enter:
#
volmake sd dsk6-01 dsk6,0,68288
#
volmake sd dsk7-01 dsk7,0,68288
#
volmake sd dsk8-01 dsk8,0,68288
#
volmake sd dsk9-01 dsk9,0,68288
#
volmake plex volraid-01 layout=raid5 st_width=16k \
sd=dsk6-01,dsk7-01,dsk8-01,dsk9-01
#
volmake sd dsk10-01 dsk10,0,1280
#
volmake plex volraid-02 logsd=dsk10-01
#
volmake -U raid5 vol volraid plex=volraid-01,volraid-02
#
volume start volraid
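As with a RAID5 volume created by the volassist command, you can verify the results by entering:
#
volprint -ht volraid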
Always configure a log with a RAID5 volume. Add a log if a RAID5 volume does not have one, or if the current log fails due to a disk failure.
To add a RAID5 log to a RAID5 volume using the
volassist
command, enter:
#
volassist [-g disk_group] addlog volume_name
[disk_name]
For example, to create a log for the RAID5 volume called
volraid
, enter:
#
volassist addlog volraid
Alternatively, you can use the
volmake
and
volplex
commands to add a RAID5 log to a RAID5 volume.
For example,
to create a log for the RAID5 volume called
volraid
, enter:
#
volmake sd dsk10-01 dsk10,0,1280
#
volmake plex volraid-02 logsd=dsk10-01
#
volplex att volraid volraid-02
5.4 Configuring LSM Volumes For Use
Once you create an LSM volume, you can use it in the same way that you would use a disk partition. For example, you can put a file system on it, configure a database to use it as a raw device, use it for additional system swap space, and so on. Because LSM adheres to the same interfaces as any other UNIX disk driver, anything that can be configured to use a disk or disk partition can use an LSM volume instead.
The following sections provide examples of how to use an LSM volume
for file systems, secondary swap space, and as a raw device for software such as third-party databases.
5.4.1 Using LSM Volumes with UFS
Follow these steps to create a UFS file system on an LSM volume:
Create a volume with a volume usage type of
fsgen
as described in the previous sections.
Specify the LSM volume's character (raw) device name to the
newfs
command.
LSM character special devices are in the
/dev/rvol
directory, so to create a UFS file system on an LSM volume,
enter:
#
newfs [specific_options] /dev/rvol/disk_group/volume_name
In this command, disk_group is the volume's disk group and volume_name is the volume's name.
Volume special device files for volumes in the
rootdg
disk group are in the
/dev/rvol
and
/dev/rvol/rootdg
directories, so it is not necessary to specify the name of the
disk group for volumes in the
rootdg
disk group.
See the
newfs
(8)
reference page for more information on the
newfs
options
and creating UFS file systems.
For example, to create a UFS file system on the LSM volume called
vol_mirr
in the
rootdg
disk group, enter:
#
newfs /dev/rvol/vol_mirr
Use the LSM block special device name to mount the file system.
For example, to mount the LSM volume called
vol_mirr
on
/mnt2
, enter:
#
mount /dev/vol/vol_mirr /mnt2
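To mount the file system automatically at boot time, add an entry for the volume's block special device to the /etc/fstab file. For example (a sketch; the mount options and the backup and pass fields are assumptions to adjust for your site):
/dev/vol/vol_mirr /mnt2 ufs rw 0 2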
Once a UFS file system is placed on an LSM volume, the volume's configuration
can be changed while the volume remains online.
For example, the volume can
be mirrored or moved to occupy a different disk using the
volassist
mirror
or
volassist move
commands.
Also, a
volume snapshot can be taken for quick data backup using the
volassist
snapshot
command as described in
Chapter 6.
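For instance, to add another mirror to the volume while the file system remains mounted, you might enter the following (a sketch; the target disk dsk7 is hypothetical, and the snapshot procedure itself is described in Chapter 6):
#
volassist mirror vol_mirr dsk7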
Note, however, that a UFS file system cannot be dynamically resized.
Therefore,
do not resize the volume using the
volassist grow
or
volassist shrink
commands.
Resizing an LSM volume containing a
UFS file system may lead to data loss.
5.4.2 Using LSM Volumes with AdvFS
Using LSM with AdvFS is a common and effective way to manage the system's storage and file systems. You use LSM to manage and provide the storage, and AdvFS to store, manage, and provide the files and file systems. Using LSM to manage all the system's storage provides the greatest flexibility for spreading the performance and space needs across different hardware, regardless of how that storage is used; for example, whether the volumes are used for AdvFS, UFS, databases, or swap.
Common LSM volume configurations used for AdvFS domains are:
Mirrored volumes, which maintain data and system availability in the event of a disk failure that would otherwise cause an AdvFS domain panic or a system panic (crash).
Striped volumes, which spread the file system's I/O, including AdvFS transaction log I/O, across multiple disks for better performance.
Mirrored and striped volumes, which maximize both availability and performance.
Use the following guidelines when creating an LSM volume for an AdvFS domain:
When resizing an AdvFS domain's storage, use AdvFS's
addvol
and
rmvol
commands rather than resizing
the LSM volume itself.
See the AdvFS documentation for more information on
these commands.
The
volassist
command's default stripe-width
of 64KB usually works best with AdvFS.
When using multiple striped LSM volumes within the same AdvFS multi-volume domain, configure the same number of disks within each striped volume. For example, if six disks are used to create two striped LSM volumes in an AdvFS multi-volume domain, configure both volumes as 3-way stripe sets rather than one 2-way stripe set and one 4-way stripe set, as in the sketch that follows this list.
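The following commands sketch that guideline by creating two 3-way striped volumes, each with the default 64KB stripe width, for use in the same AdvFS multi-volume domain (the volume names, sizes, and disk names are hypothetical):
#
volassist make vol_doma 2g layout=stripe nstripe=3 dsk1 dsk2 dsk3
#
volassist make vol_domb 2g layout=stripe nstripe=3 dsk4 dsk5 dsk6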
5.4.2.1 Using an LSM Volume Within an AdvFS Domain
To use an LSM volume within an AdvFS file domain, create a volume with
a volume usage type of
fsgen
as described in the previous
sections.
Once the LSM volume is created, specify the LSM volume's block
device name using either the
mkfdmn
or
addvol
command.
LSM block special devices reside in the
/dev/vol
directory.
For example, to use the
mkfdmn
command
to create an AdvFS file domain using an LSM volume, enter:
#
mkfdmn [options] /dev/vol/disk_group/volume_name \
domain_name
In this command, disk_group
is the name of the volume's disk group and volume_name
is the name of the volume.
Volume special device files for volumes in the
rootdg
disk group are in the
/dev/vol
and
/dev/vol/rootdg
directories, so it is not necessary to specify the disk_group name for volumes in the
rootdg
disk group.
For example, to create an AdvFS domain called
dom1
on the LSM volume called
vol_mirr1
in the
rootdg
disk group, enter:
#
mkfdmn /dev/vol/vol_mirr1 dom1
See the
mkfdmn
(8)
reference page for more information on using
the
mkfdmn
options and creating AdvFS domains.
Once the file domain is created, create an AdvFS fileset and mount it in the usual manner. For example:
#
mkfset dom1 fs1
#
mount dom1#fs1 /mnt2
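To confirm that the fileset was created, you can list the domain's filesets (a sketch using the AdvFS showfsets command, assuming the AdvFS utilities are installed):
#
showfsets dom1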
5.4.2.2 Adding an LSM Volume into an Existing AdvFS Domain
To add an LSM volume into an existing AdvFS domain, enter:
#
addvol /dev/vol/disk_group/volume_name domain_name
In this command, disk_group is the name
of the volume's disk group and volume_name
is the name of the volume.
Volume special device files for volumes in the
rootdg
disk group are in the
/dev/vol
and
/dev/vol/rootdg
directories, so it is not necessary to specify the disk_group name for volumes in the
rootdg
disk group.
For example, to add the LSM volume called
vol_mirr2
in the
rootdg
disk group to the AdvFS domain called
dom1
, enter:
#
addvol /dev/vol/vol_mirr2 dom1
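To confirm that the domain now includes the new volume, you can display the domain's volumes (a sketch using the AdvFS showfdmn command, assuming the AdvFS utilities are installed):
#
showfdmn dom1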
5.4.2.3 Removing an LSM Volume from an AdvFS Domain
To remove an LSM volume from an AdvFS domain, enter:
#
rmvol /dev/vol/disk_group/volume_name domain_name
In this command, disk_group
is the name of the volume's disk group and volume_name
is the name of the volume.
Volume special device files for volumes in the
rootdg
disk group are in the
/dev/vol
and
/dev/vol/rootdg
directories, so it is not necessary to specify the disk_group name for volumes in the
rootdg
disk group.
For example, to remove an LSM volume called
vol_mirr1
in the
rootdg
disk group from an AdvFS domain called
dom1
, enter:
#
rmvol /dev/vol/vol_mirr1 dom1
Output similar to the following is displayed:
rmvol: Removing volume '/dev/vol/vol_mirr1' from domain 'dom1'
rmvol: Removed volume '/dev/vol/vol_mirr1' from domain 'dom1'
5.4.3 Using LSM Volumes for Secondary Swap Space
The system swap space is a vital system resource. If disk errors occur in the swap space, a system crash is likely. You can use an LSM mirrored volume for the secondary swap space to guard against disk I/O errors there.
Follow these steps to create an LSM mirrored volume for the secondary swap space:
Create an LSM volume in the
rootdg
disk
group with a usage type of
gen
and set the volume's start
options to
norecov
Add the volume as secondary swap space using the
swapon
command
If you are adding multiple disks as LSM volumes to secondary swap space, add the disks as several individual LSM volumes rather than striping or concatenating them into a single, larger LSM volume. Adding multiple, individual LSM volumes is preferable because the swapping algorithm automatically distributes its data across multiple disks to improve performance.
Note
Do not configure DRL on swap volumes. Mirror resynchronization is not necessary after a crash for volumes used for swap, and configuring DRL on swap volumes interferes with crash dumps.
The following commands create and add a mirrored volume called
vol_swap2
with a size of 102400 sectors to the secondary swap space:
#
volmake sd dsk8-01 dsk8,0,102400
#
volmake sd dsk9-01 dsk9,0,102400
#
volmake plex vol_swap2-01 sd=dsk8-01
#
volmake plex vol_swap2-02 sd=dsk9-01
#
volmake -U gen vol vol_swap2 \
plex=vol_swap2-01,vol_swap2-02 \
start_opts=norecov
#
volume start vol_swap2
To display the results, enter:
#
volprint -ht vol_swap2
Output similar to the following is displayed:
Disk group: rootdg

V  NAME          USETYPE      KSTATE   STATE    LENGTH  READPOL   PREFPLEX
PL NAME          VOLUME       KSTATE   STATE    LENGTH  LAYOUT    NCOL/WID  MODE
SD NAME          PLEX         DISK     DISKOFFS LENGTH  [COL/]OFF DEVICE    MODE

v  vol_swap2     gen          ENABLED  ACTIVE   102400  ROUND     -
pl vol_swap2-01  vol_swap2    ENABLED  ACTIVE   102400  CONCAT    -         RW
sd dsk8-01       vol_swap2-01 dsk8     0        102400  0         dsk8      ENA
pl vol_swap2-02  vol_swap2    ENABLED  ACTIVE   102400  CONCAT    -         RW
sd dsk9-01       vol_swap2-02 dsk9     0        102400  0         dsk9      ENA
Once the LSM volume is created, you can configure it for use as a swap
device like any other disk device.
For example, to configure the LSM volume
for swap using the
swapon
command, enter:
#
swapon /dev/vol/vol_swap2
Then add the LSM special device file to the
swapdevice
kernel attribute value within the
vm:
section of the
/etc/sysconfigtab
file.
For example:
vm:
    swapdevice=/dev/disk/dsk1b, /dev/vol/vol_swap2
See the
System Administration
manual and the
swapon
(8)
and the
sysconfig
(8)
reference pages for more information on adding additional swap space.
5.4.4 Using LSM Volumes with Databases and Other Software
Databases and other software that directly use disk partitions to perform raw I/O can also be configured to use LSM volumes.
To do so, create a volume with a usage type of
gen
,
then configure it to be used with the database or other software by using
the volume's character special device file located in the
/dev/rvol/disk_group
directory where disk_group is the volume's disk group name.
Note that volume special device files for volumes in the
rootdg
disk group are in the
/dev/rvol
,
/dev/rvol/rootdg
,
/dev/vol
and in the
/dev/vol/rootdg
directories, so it is not necessary to specify the disk group name
for volumes in the
rootdg
disk group.
Often, databases or other software that perform raw I/O require the special device file to have specific settings for the owner, group, and access mode. The special device file settings for LSM volumes can be specified when the volume is created. For example:
#
volassist -U gen make vol_db1 32g user=dba group=dba \
mode=0600
To display the access permissions of the LSM volume's special device files created by the previous example, enter:
#
ls -l /dev/*vol/vol_db1
Output similar to the following is displayed:
crw-------   1 dba      dba       40,  8 Jun 28 16:33 /dev/rvol/vol_db1
brw-------   1 dba      dba       40,  8 Jun 28 16:33 /dev/vol/vol_db1
Once the volume is created, do not change these attributes using standard
UNIX commands such as the
chown
,
chgrp
,
or
chmod
commands.
To change the owner, group, or mode
of LSM volume special device files, use the LSM
voledit
command.
For example, to change user and group to
dba
and the mode to
0600
for a volume called
vol_db1
in the
rootdg
disk group, enter:
#
voledit set user=dba group=dba mode=0600 vol_db1
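To confirm the new settings, list the special device files again:
#
ls -l /dev/*vol/vol_db1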