You must plan your LSM configuration before you can use LSM volumes for applications or file systems. Planning your LSM configuration includes deciding:
How many volumes you want, and the number and type of data plexes the volumes will use
How many disk groups you need, and which disks you will configure in a disk group
This chapter provides information and worksheets to assist you in planning
LSM volumes and disk groups.
You might want to make copies of the blank worksheets
for future use.
2.1 Planning LSM Volumes
Planning LSM volumes includes deciding what attributes you want the LSM volumes to have. An LSM volume has two types of attributes:
Attributes for which you must provide a value, as described in Table 2-1.
Attributes that are assigned a default value, which you can change, as described in Table 2-2.
Table 2-1: LSM Volume Attributes with No Default Values
Attribute | Notes
Volume name | Can be up to 31 alphanumeric characters but cannot include a space or slash (/). Must be unique within the disk group where you create the volume.
Volume size or length | The amount of space needed for the data in the LSM volume. You can specify volume size in sectors (the default), kilobytes, megabytes, gigabytes, or terabytes.
Table 2-2: LSM Volume Attributes with Default Values
Attribute | Notes and Default Value
Number of plexes | LSM volumes can have up to 32 plexes. (A volume that uses a RAID 5 plex can have only one data plex.) Default: One concatenated data plex, no log plex.
Log plex size | For volumes of 1 GB or less that use mirror plexes, the default DRL size is 65 blocks, to allow for migration to a TruCluster environment. The minimum DRL size is approximately 2 blocks per GB of volume size. (You can use the minimum if you know the LSM configuration will not be used in a cluster.) For volumes that use a RAID 5 plex, the log plex size is 10 * (number of columns * data unit size).
Plex type | A plex is either concatenated, striped, or RAID 5. You can mirror concatenated or striped plexes. Default: Concatenated, no mirror. See Table 2-3 for information on choosing a plex type.
Name of the disk group where you will create the volume | A volume can be in only one disk group. Default: rootdg disk group.
LSM disks that the volume will use | If the volume has a striped or RAID 5 plex, each column must be of equal size and on different disks, preferably on different buses. If the volume has mirror plexes, each data plex should use disks on different buses, and the DRL plex should be on a disk that is not used for a data plex. Default: LSM chooses the disks.
Usage type of the volume | Use fsgen for volumes that will contain a file system. Use gen for volumes that will be used as raw devices. Use raid5 for volumes that use a RAID 5 plex. Default: fsgen.
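As a rough check of the log sizes in the table above, both formulas can be computed directly with shell arithmetic. This is a minimal sketch; the 65-block default, the 2-blocks-per-GB minimum, and the 10 * (columns * data unit size) formula come from the table, while the volume size and column count used here are hypothetical:

```shell
# Minimum DRL for a mirrored volume: about 2 blocks per GB of volume size
# (the default is 65 blocks for volumes of 1 GB or less).
vol_gb=500                        # hypothetical 500 GB mirrored volume
echo "minimum DRL: $(( vol_gb * 2 )) blocks"

# RAID 5 log plex size: 10 * (number of columns * data unit size).
ncol=5                            # hypothetical five-column RAID 5 plex
dusize=32                         # default 16 KB data unit = 32 512-byte blocks
echo "RAID 5 log plex: $(( 10 * ncol * dusize )) blocks"
```

Sizes here are in 512-byte blocks, the same units the worksheets later in this chapter use.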
Table 2-3
describes the benefits and trade-offs
of various plex layouts and lists scenarios where one plex type might provide
better performance, or be more cost effective, than a different plex type.
For optimal performance you might need to tune your system to the work load.
The layout you choose depends on your specific system configuration, data
availability and reliability needs, and application requirements.
Table 2-3: Choosing a Plex Type
The following sections provide worksheets to assist you in planning LSM volumes depending on the type of plex you want to use. Using the information in these worksheets will help you when you create volumes as described in Chapter 4.
Note
When you create an LSM volume with the volassist command (the recommended and simplest method), LSM performs all the necessary calculations and creates a volume and log plexes of the appropriate sizes. The following worksheets are provided to help you approximate the space needed and ensure the disk group has enough space for the volumes you want.
2.1.1 Planning an LSM Volume That Uses a Concatenated Plex
Use the following worksheet to plan an LSM volume that uses a concatenated
plex.
Figure 2-1: Worksheet for Planning a Volume with Concatenated Plexes
Attribute | Default Values | Chosen Values
Volume name | No default |
Volume size | No default |
Number of data plexes | 1 |
If more than one plex, DRL plex size | 65 blocks for volumes less than or equal to 1 GB [Footnote 1] |
Disk group name | rootdg |
Usage type | fsgen |
Total space required | (Volume size * number of plexes) + DRL size |
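The total-space row of the worksheet can be checked with shell arithmetic. A minimal sketch using hypothetical values (a 1 GB volume with two mirrored concatenated plexes and the default 65-block DRL), with all sizes in 512-byte blocks (2048 blocks = 1 MB):

```shell
vol_size=$(( 1024 * 2048 ))       # 1 GB volume, in 512-byte blocks
nplex=2                           # two mirrored concatenated data plexes
drl=65                            # default DRL for volumes of 1 GB or less
total=$(( vol_size * nplex + drl ))
echo "total space required: ${total} blocks"
```

The result (4,194,369 blocks, just over 2 GB) is the space the disk group must have free before you create the volume.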
2.1.2 Planning an LSM Volume That Uses a Striped Plex
Use the following worksheet to plan an LSM volume that uses a striped
plex.
Figure 2-2: Worksheet for Planning a Volume with Striped Plexes
Attribute | Default Values | Chosen Values
Volume name | No default |
Volume size | No default |
Data unit size | 64 KB |
Number of columns | Minimum of two, based on number of disks in disk group and the volume size |
Number of data plexes | 1 |
If more than one plex, DRL plex size | 65 blocks for volumes less than or equal to 1 GB [Footnote 2] |
Disk group name | rootdg |
Usage type | fsgen |
Total space required | (Volume size * number of plexes) + DRL size |
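For a striped plex it is also worth computing the per-column space, because each column must fit on a separate disk. A minimal sketch with hypothetical values (a 1 GB volume, four columns, two plexes, default 65-block DRL), in 512-byte blocks:

```shell
vol_size=$(( 1024 * 2048 ))       # 1 GB volume, in 512-byte blocks
ncol=4                            # hypothetical four-column stripe
nplex=2                           # two mirrored striped plexes
drl=65                            # default DRL for volumes of 1 GB or less
per_col=$(( vol_size / ncol ))    # space needed on each column's disk, per plex
total=$(( vol_size * nplex + drl ))
echo "per column: ${per_col} blocks"
echo "total space required: ${total} blocks"
```

Each of the eight disks (four columns times two plexes) needs at least the per-column amount free.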
2.1.3 Planning an LSM Volume That Uses a RAID 5 Plex
Use the following worksheet to plan an LSM volume that uses a RAID 5
plex.
Figure 2-3: Worksheet for Planning a Volume with a RAID 5 Plex
Attribute | Default Values | Chosen Values
Volume name | No default |
Volume size | No default |
Data unit size | 16 KB |
Number of columns (NCOL) | Between 3 and 8, based on number of disks in disk group and the volume size | (Minimum of three)
Log plex size | 10 * (data unit size * number of columns) |
Disk group name | rootdg |
Usage type | Must be raid5 |
Total space required | (Volume size * NCOL / (NCOL - 1)) + log plex size |
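The RAID 5 rows combine as follows; a minimal sketch with hypothetical values (1 GB of usable data, five columns, the default 16 KB data unit, which is 32 512-byte blocks):

```shell
vol_size=$(( 1024 * 2048 ))           # 1 GB of usable data, in 512-byte blocks
ncol=5                                # hypothetical five-column RAID 5 plex
dusize=32                             # default 16 KB data unit = 32 blocks
log=$(( 10 * ncol * dusize ))         # log plex size formula from the worksheet
total=$(( vol_size * ncol / (ncol - 1) + log ))
echo "log plex: ${log} blocks"
echo "total space required: ${total} blocks"
```

The NCOL / (NCOL - 1) factor accounts for the one column's worth of parity spread across the plex: more columns means proportionally less parity overhead.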
2.2 Planning LSM Disk Groups
At a minimum, you must plan the rootdg disk group, which is created when you install LSM. Planning a disk group requires that you identify:
The space requirements for the disk group by identifying the size of volumes that you will create in the disk group, as described in Section 2.1.
Unused storage devices to meet the space requirement of the disk group, as described in Section 2.3.
When you plan a disk group, consider the following:
You must identify at least one unused storage device for the rootdg disk group when you install LSM.
A disk group should have more than one storage device to ensure that there are multiple copies of the disk group's configuration database.
To improve performance, keep the number of disks in rootdg to ten or fewer. Create additional disk groups if you:
Have more than ten disks to place under LSM control.
Might move the disk group to a different system.
A disk group should have storage devices on different buses to improve performance and availability.
Choose a name for the disk group carefully. There is no direct method for renaming disk groups, as there is for renaming volumes or LSM disks. To rename a disk group, you must deport it and import it with a new name. This requires stopping all activity on volumes in that disk group for the time necessary to deport and import it, which might not be acceptable or feasible in your environment.
The following considerations apply specifically to clusters:
To improve performance, use the rootdg disk group only for system-related volumes such as the cluster_root domain and the swap devices for members. Try to keep the number of disks in rootdg to ten or fewer.
The disks in each disk group should be accessible by all cluster members (on a shared bus) so that all members have access to the volumes, even if one or more members are not running. If some disks are accessible only to some members, try to use those disks for data that is needed only by the member directly attached; for example, that member's swap space.
Use the worksheets in Figure 2-4 and Figure 2-5 to plan disk groups. You can make copies and fill in the information on the copies rather than in the manual. This lets you keep the disk group information with each system running LSM, for your reference. Also, because you can change your LSM configuration at any time, you can make a new copy of the blank worksheets to record your changes.
In the appropriate worksheet, enter the following:
Under Disk Group Information, include any information that will help you keep track of the purpose of that disk group. For example, you might create a disk group called finance whose purpose is to contain one or more volumes that will be used by a financial application. You might create another disk group called db1, which will contain a volume used by a database.
Under Volume, Plex and Spare Disk Information, include the names of all volumes in that disk group and their plex types, note which disks belong to which plex, and identify any spare disks that will be used to replace failed disks. See Section 3.4.4 and Section 3.4.4.1 for more information on spare disks.
Figure 2-6 is an example of a completed worksheet.
Figure 2-4: Worksheet for Planning the rootdg Disk Group
Disk Group Information | Disks in Group | Bus/LUN Number | Disk Size | Volume, Plex, and Spare Disk Information |
Name: rootdg Purpose: |
||||
Figure 2-5: Worksheet for Planning Additional Disk Groups
Disk Group Information | Disks in Group | Bus/LUN Number | Disk Size | Volume, Plex, and Spare Disk Information |
Name: Purpose: |
||||
Figure 2-6
shows a consolidated example
of what your disk group planning worksheets might look like when complete.
Note that this example applies only to a standalone system, not a cluster.
Figure 2-6: Worksheet for Planning Disk Groups for a Standalone System (Consolidated Example)
Disk Group Information | Disks in Group | Bus/LUN Number | Disk Size | Volume, Plex, and Spare Disk Information |
Name: rootdg Purpose: root file system and system disks. |
dsk0 | 0 | 4 GB | root disk (encapsulated: rootvol plex-01) |
dsk1 | 0 | 4 GB | rootvol plex-02 | |
dsk4 | 2 | 4 GB | swapvol plex-01 | |
dsk5 | 2 | 4 GB | swapvol plex-02 | |
dsk16 | 6 | 4 GB | hot-spare disk | |
Name: data_dg Purpose: Database, must be redundant. Contains volume with mirrored striped plexes and DRL. |
dsk6 | 3 | 18 GB | volume: db_vol plex: db_vol-01 |
dsk7 | 3 | 18 GB | plex: db_vol-01 | |
dsk8 | 4 | 18 GB | plex: db_vol-02 | |
dsk9 | 4 | 18 GB | plex: db_vol-02 | |
dsk10 | 5 | 18 GB | plex: db_vol-03 (DRL plex) | |
dsk11 | 5 | 18 GB | hot-spare disk | |
dsk15 | 6 | 18 GB | hot-spare disk | |
Name: finance_dg Purpose: Financial application, must be highly available. Contains volume with RAID 5 plex (read-only application). |
dsk20 | 7 | 9 GB | volume: fin_vol column: 1 |
dsk25 | 8 | 9 GB | column 2 | |
dsk30 | 9 | 9 GB | column 3 | |
dsk35 | 10 | 9 GB | column 4 | |
dsk40 | 11 | 9 GB | column 5 | |
dsk45 | 16 | 9 GB | log plex | |
dsk16 | 6 | 18 GB | hot-spare disk |
2.3 Identifying Unused Storage Devices
Unused storage devices are unused disks, partitions, and RAID disks that LSM can initialize to become LSM disks for use in the rootdg disk group or in the other disk groups that you create.
You can also identify unused LSM disks for use in a disk group. An unused LSM disk is a storage device that you initialized for use by LSM but did not assign to a disk group.
The following sections describe how to identify unused disks, partitions, and LSM disks. See your hardware RAID documentation for information on identifying unused hardware RAID disks.
To identify unused storage devices, you can use:
The Disk Configuration Graphical User Interface (GUI) (Section 2.3.1).
Operating system commands (Section 2.3.2).
The voldisk list command on a system where LSM is running (Section 2.3.3).
2.3.1 Using the Disk Configuration GUI to Identify Unused Disks
To identify unused disks using the Disk Configuration GUI, start the Disk Configuration interface using either of the following methods:
From the system prompt, enter:
# /usr/sbin/diskconfig
From the SysMan Applications pop-up menu on the CDE Front Panel:
Choose Configuration.
Double-click the Disk icon in the SysMan Configuration folder.
A window titled Disk Configuration on hostname is displayed. This is the main window for the Disk Configuration GUI, and lists the following information for each disk:
The disk name, such as dsk10
The device model, such as RZ1CB-CA
The bus number for the device
For more information about a disk, double-click the list item (or click Configure when a disk is highlighted). The Disk Configuration: Configure Partitions window is displayed.
This window contains:
A graphical representation of the disk partitions in a horizontal bar-chart format and disk information such as the disk name, the total size of the disk, and usage information.
A Partition Table button that you can click to display a bar chart of the current partitions in use, their sizes, and the file system in use.
A Disk Attributes button that you can click to display values for disk attributes.
For more information about the Disk Configuration GUI, see its online
help.
2.3.2 Using Operating System Commands to Identify Unused Disks
You can use the following operating system commands to identify unused disks:
List all the disks on the system:
# file /dev/rdisk/dsk*c
Information similar to the following is displayed:
/dev/rdisk/dsk0c: character special (19/38) SCSI #1 "RZ1CD-CS" disk #1 (SCSI ID #0) (SCSI LUN #0)
/dev/rdisk/dsk10c: character special (19/198) SCSI #3 "RZ1CD-CS" disk #3 (SCSI ID #5) (SCSI LUN #0)
/dev/rdisk/dsk11c: character special (19/214) SCSI #4 "RZ1BB-CS" disk #4 (SCSI ID #0) (SCSI LUN #0)
/dev/rdisk/dsk12c: character special (19/230) SCSI #4 "RZ1BB-CS" disk #5 (SCSI ID #1) (SCSI LUN #0)
/dev/rdisk/dsk13c: character special (19/246) SCSI #4 "RZ1BB-CS" disk #6 (SCSI ID #2) (SCSI LUN #0)
/dev/rdisk/dsk14c: character special (19/262) SCSI #4 "RZ1BB-CS" disk #7 (SCSI ID #3) (SCSI LUN #0)
/dev/rdisk/dsk15c: character special (19/278) SCSI #4 "RZ1CD-CS" disk #8 (SCSI ID #4) (SCSI LUN #0)
/dev/rdisk/dsk16c: character special (19/294) SCSI #4 "BD009635C3" disk #9 (SCSI ID #5) (SCSI LUN #0)
/dev/rdisk/dsk17c: character special (19/310) SCSI #4 "BD009635C3" disk #10 (SCSI ID #6) (SCSI LUN #0)
/dev/rdisk/dsk18c: character special (19/326) SCSI #5 "RZ1CD-CS" disk #11 (SCSI ID #0) (SCSI LUN #0)
/dev/rdisk/dsk19c: character special (19/342) SCSI #5 "RZ1BB-CS" disk #12 (SCSI ID #1) (SCSI LUN #0)
/dev/rdisk/dsk1c: character special (19/54) SCSI #1 "RZ1BB-CA" disk #2 (SCSI ID #2) (SCSI LUN #0)
/dev/rdisk/dsk20c: character special (19/358) SCSI #5 "RZ1CB-CA" disk #13 (SCSI ID #2) (SCSI LUN #0)
/dev/rdisk/dsk21c: character special (19/374) SCSI #5 "RZ1CB-CA" disk #14 (SCSI ID #3) (SCSI LUN #0)
/dev/rdisk/dsk22c: character special (19/390) SCSI #5 "RZ1CF-CF" disk #15 (SCSI ID #4) (SCSI LUN #0)
/dev/rdisk/dsk23c: character special (19/406) SCSI #5 "RZ1CF-CF" disk #8 (SCSI ID #5) (SCSI LUN #0)
/dev/rdisk/dsk24c: character special (19/422) SCSI #5 "BD009635C3" disk #9 (SCSI ID #6) (SCSI LUN #0)
/dev/rdisk/dsk25c: character special (19/438) SCSI #6 "RZ1BB-CS" disk #10 (SCSI ID #1) (SCSI LUN #0)
/dev/rdisk/dsk26c: character special (19/454) SCSI #6 "RZ1CD-CS" disk #11 (SCSI ID #3) (SCSI LUN #0)
/dev/rdisk/dsk27c: character special (19/470) SCSI #6 "RZ1CD-CS" disk #12 (SCSI ID #5) (SCSI LUN #0)
/dev/rdisk/dsk2c: character special (19/70) SCSI #1 "RZ1CD-CS" disk #3 (SCSI ID #4) (SCSI LUN #0)
/dev/rdisk/dsk3c: character special (19/86) SCSI #1 "RZ1CD-CS" disk #4 (SCSI ID #6) (SCSI LUN #0)
/dev/rdisk/dsk4c: character special (19/102) SCSI #2 "RZ1BB-CS" disk #5 (SCSI ID #0) (SCSI LUN #0)
/dev/rdisk/dsk5c: character special (19/118) SCSI #2 "RZ1CD-CS" disk #6 (SCSI ID #2) (SCSI LUN #0)
/dev/rdisk/dsk6c: character special (19/134) SCSI #2 "RZ1CD-CS" disk #7 (SCSI ID #4) (SCSI LUN #0)
/dev/rdisk/dsk7c: character special (19/150) SCSI #2 "RZ1CD-CS" disk #0 (SCSI ID #6) (SCSI LUN #0)
/dev/rdisk/dsk8c: character special (19/166) SCSI #3 "RZ1BB-CA" disk #1 (SCSI ID #1) (SCSI LUN #0)
/dev/rdisk/dsk9c: character special (19/182) SCSI #3 "RZ1CD-CS" disk #2 (SCSI ID #3) (SCSI LUN #0)
To verify whether a disk or partition is unused, choose a disk from the output of the file /dev/rdisk/dsk*c command and enter the disklabel command with the name of the disk; for example:
# disklabel dsk20c
Disk partition information similar to the following is displayed:
type: SCSI
disk: RZ1CB-CA
label:
flags: dynamic_geometry
bytes/sector: 512
sectors/track: 113
tracks/cylinder: 20
sectors/cylinder: 2260
cylinders: 3708
sectors/unit: 8380080
rpm: 7200
interleave: 1
trackskew: 9
cylinderskew: 9
headswitch: 0           # milliseconds
track-to-track seek: 0  # milliseconds
drivedata: 0

8 partitions:
#          size     offset    fstype   fsize   bsize   cpg   # ~Cyl values
  a:     131072          0    unused       0       0         # 0 - 57*
  b:     262144     131072    unused       0       0         # 57*- 173*
  c:    8380080          0    unused       0       0         # 0 - 3707
  d:          0          0    unused       0       0         # 0 - 0
  e:          0          0    unused       0       0         # 0 - 0
  f:          0          0    unused       0       0         # 0 - 0
  g:    3993432     393216    unused       0       0         # 173*- 1940*
  h:    3993432    4386648    unused       0       0         # 1940*- 3707
See the disklabel(8) reference page for more information on the disklabel command.
If you are using AdvFS, display the disks in use by all domains:
# ls /etc/fdmns/*/*
/etc/fdmns/cluster_root/dsk7b /etc/fdmns/root2_domain/dsk11a
/etc/fdmns/cluster_usr/dsk7g /etc/fdmns/root_domain/dsk1a
/etc/fdmns/cluster_var/dsk7h /etc/fdmns/usr_domain/dsk1g
/etc/fdmns/root1_domain/dsk10a
If you are using UFS, display all mounted file sets:
# mount
2.3.3 Using the LSM voldisk Command to Identify Unused Disks
When LSM starts, it obtains a list of disk device addresses from the operating system software and checks the disk labels to determine which devices are initialized for LSM use and which are not.
If LSM is running on the system, you can use the voldisk command to display a list of all known disks and to display detailed information about a particular disk:
To view a list of disks, enter:
# voldisk list
Information similar to the following is displayed.
DEVICE       TYPE      DISK         GROUP        STATUS
dsk0         sliced    -            -            unknown
dsk1         sliced    -            -            unknown
dsk2         sliced    dsk2         rootdg       online
dsk3         sliced    dsk3         rootdg       online
dsk4         sliced    dsk4         rootdg       online
dsk5         sliced    dsk5         rootdg       online
dsk6         sliced    dsk6         dg1          online
dsk7         sliced    -            -            online
dsk8         sliced    dsk8         dg1          online
dsk9         sliced    -            -            online
dsk10        sliced    -            -            online
dsk11        sliced    -            -            online
dsk12        sliced    -            -            online
dsk13        sliced    -            -            unknown
dsk14        sliced    -            -            unknown
The following list describes the information in the output:
DEVICE | Specifies the disk access name assigned by the operating system.
TYPE | Specifies the LSM disk type (sliced, simple, or nopriv).
DISK | Specifies the LSM disk media name. A dash (-) means the device is not assigned to a disk group and therefore does not have an LSM disk media name.
GROUP | Specifies the disk group to which the device belongs. A dash (-) means the device is not assigned to a disk group.
STATUS | Specifies the device status. An unused storage device is one that does not have a DISK name or GROUP name and has a status of unknown. An unused LSM disk is one that has a DISK name but no GROUP name and a status of online or offline.
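These two "unused" cases can be picked out of voldisk list output mechanically. A minimal sketch that classifies a saved copy of a listing like the one above (the /tmp/voldisk.txt file and the rows in it are illustrative, taken from the sample output; the column order DEVICE TYPE DISK GROUP STATUS is assumed):

```shell
# Classify devices from saved `voldisk list` output.
cat > /tmp/voldisk.txt <<'EOF'
dsk0  sliced -    -      unknown
dsk2  sliced dsk2 rootdg online
dsk7  sliced -    -      online
dsk9  sliced -    -      online
dsk13 sliced -    -      unknown
EOF

# Unused storage devices: no group and status "unknown".
echo "unused storage devices:"
awk '$4 == "-" && $5 == "unknown" { print "  " $1 }' /tmp/voldisk.txt

# Initialized but unassigned LSM disks: no group, status "online".
echo "unused LSM disks:"
awk '$4 == "-" && $5 == "online" { print "  " $1 }' /tmp/voldisk.txt
```

On a live system you would pipe voldisk list directly into the awk filters instead of saving it to a file.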
To display detailed information about an LSM disk, enter:
# voldisk list disk
The following example displays information for an LSM disk called dsk5:
Device:    dsk5
devicetag: dsk5
type:      sliced
hostid:    servername
disk:      name=dsk5 id=942260116.1188.servername
group:     name=dg1 id=951155418.1233.servername
flags:     online ready autoimport imported
pubpaths:  block=/dev/disk/dsk5g char=/dev/rdisk/dsk5g
privpaths: block=/dev/disk/dsk5h char=/dev/rdisk/dsk5h
version:   n.n
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=6 offset=16 len=2046748
private:   slice=7 offset=0 len=4096
update:    time=952956192 seqno=0.11
headers:   0 248
configs:   count=1 len=2993
logs:      count=1 len=453
Defined regions:
 config   priv    17-  247[   231]: copy=01 offset=000000 enabled
 config   priv   249- 3010[  2762]: copy=01 offset=000231 enabled
 log      priv  3011- 3463[   453]: copy=01 offset=000000 enabled
The size of an LSM disk is displayed in blocks as the
len=
value in the
public:
row.
2048 blocks equal
1 MB.
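For example, converting the len= value from the sample output above (2,046,748 blocks is the public-region length shown for dsk5):

```shell
len=2046748                       # len= value from the public: row, in blocks
echo "$(( len / 2048 )) MB"       # 2048 blocks = 1 MB
```

Integer division gives 999 MB, so this LSM disk provides just under 1 GB of public (data) space.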
See the voldisk(8) reference page for more information on the voldisk command.