2    Planning LSM Volumes and Disk Groups

You must plan your LSM configuration before you can use LSM volumes for applications or file systems. Planning your LSM configuration includes deciding:

  • Which attributes you want the LSM volumes to have (Section 2.1)

  • Which disk groups you need and which disks belong in each group (Section 2.2)

  • Which storage devices are available for LSM to use (Section 2.3)

This chapter provides information and worksheets to assist you in planning LSM volumes and disk groups. You might want to make copies of the blank worksheets for future use.

2.1    Planning LSM Volumes

Planning LSM volumes includes deciding what attributes you want the LSM volumes to have. An LSM volume has two types of attributes:

  • Attributes with no default values, which you must specify when you create the volume (Table 2-1)

  • Attributes with default values, which LSM uses unless you specify a different value (Table 2-2)

Table 2-1:  LSM Volume Attributes with No Default Values

Attribute Notes

Volume name

Can be up to 31 alphanumeric characters but cannot include a space or slash (/).

Must be unique in the disk group where you create the volume.

Volume size or length

The amount of space needed for the data in the LSM volume.

You can specify volume size in sectors (the default), kilobytes, megabytes, gigabytes, or terabytes.
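
For example, assuming the standard 512-byte sector, a 1 GB volume can be specified as 2097152 sectors, 1048576 kilobytes, 1024 megabytes, or 1 gigabyte.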

Table 2-2:  LSM Volume Attributes with Default Values

Attribute Notes and Default Value
Number of plexes

LSM volumes can have up to 32 plexes, with the following restrictions or recommendations:

  • Volumes that use a RAID 5 plex have only one data plex (RAID 5 plexes cannot be mirrored) and can have up to 31 log plexes.

  • Volumes that use concatenated or striped plexes can have any combination of data and log plexes for a total of 32. At least one plex should be a log plex.

Default: One concatenated data plex, no log plex.

Log plex size

For volumes of 1 GB or less that use mirrored plexes, the default DRL size is 65 blocks, which allows for migration to a TruCluster environment. The minimum DRL size is approximately 2 blocks per GB of volume size. (You can use the minimum if you know the LSM configuration will not be used in a cluster.)

For volumes that use a RAID 5 plex, the log plex size is 10 * (number of columns * data unit size). (A worked example of both log sizes follows this table.)

Plex type

The plex type is concatenated, striped, or RAID 5. You can mirror concatenated and striped plexes.

Default: Concatenated, no mirror.

See Table 2-3 for information on choosing a plex type.

Name of the disk group where you will create the volume

A volume can be in only one disk group.

Default: rootdg disk group.

LSM disks that the volume will use

If the volume has a striped or RAID 5 plex, each column must be the same size, and each column must be on a different disk, preferably on a different bus.

If the volume has mirror plexes, each data plex should use disks on different buses, and the DRL plex should be on a disk that is not used for a data plex.

Default: LSM chooses the disks.

Usage type of the volume

Use fsgen for volumes that use concatenated or striped plexes and contain a file system.

Use gen for volumes that use concatenated or striped plexes and contain data other than a file system.

Use raid5 for volumes that use a RAID 5 plex, regardless of the contents of the volume.

Default: fsgen.
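
For example, a mirrored 4 GB volume that will never be used in a cluster needs a DRL of only about 2 blocks per GB * 4 GB = 8 blocks, whereas the cluster-ready default for a volume of 1 GB or less is 65 blocks. A RAID 5 volume with five columns and the default 16 KB data unit size needs a log plex of 10 * (5 * 16 KB) = 800 KB, or 1600 blocks.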

Table 2-3 describes the benefits and trade-offs of the plex layouts and lists scenarios where one plex type might provide better performance, or be more cost-effective, than another. For optimal performance you might need to tune your system to the workload. The layout you choose depends on your specific system configuration, your data availability and reliability needs, and your application requirements.

Table 2-3:  Choosing a Plex Type

Plex Type: Concatenated

Benefits and possible uses:

Allows you to use space on multiple disks that might otherwise be wasted.

A concatenated plex can be mirrored for data redundancy.

Good for volumes containing infrequently used data, data that does not change often, or small volumes that can be confined to a single disk.

Trade-offs:

Possible uneven performance (hot spots, where one disk is in use by multiple applications).

When mirrored, requires at least twice as much disk space (up to 32 times, depending on the number of plexes).

Plex Type: Striped

Benefits and possible uses:

Allows you to distribute data, and therefore the I/O load, evenly across many disks.

Good for:

  • Large volumes that cannot be confined to a single disk.

  • Applications with a large read-request size.

  • Volumes that contain data that changes often (many writes).

    Striping is preferred over RAID 5 in this case, because RAID 5 imposes the overhead of calculating and writing parity data along with volume data.

Striped plexes can be mirrored for data redundancy and high availability.

Trade-offs:

When mirrored, requires at least twice as much disk space (up to 32 times, depending on the number of plexes).

Plex Type: RAID 5

Benefits and possible uses:

Provides redundancy through parity, using fewer disks than a volume with striped mirror plexes.

Provides the I/O distribution benefit of striping.

Good for volumes with a high read-to-write ratio.

Trade-offs:

Depending on the I/O size and stripe size, performance might be slower than a volume with striped plexes due to the parity calculation.

The RAID 5 plex type is not supported in a cluster.

The following sections provide worksheets to assist you in planning LSM volumes depending on the type of plex you want to use. Using the information in these worksheets will help you when you create volumes as described in Chapter 4.

Note

When you create an LSM volume with the volassist command (the recommended and simplest method), LSM performs all the necessary calculations and creates a volume and log plexes of the appropriate sizes. The following worksheets are provided to help you approximate the space needed and ensure the disk group has enough space for the volumes you want.
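
For example, a command similar to the following creates a 2 GB volume with two mirrored plexes and one DRL log plex in a hypothetical disk group named dg1 (the disk group and volume names are placeholders, and the attributes shown follow the vxassist-style conventions that volassist inherits; see the volassist(8) reference page for the exact syntax on your system):

    # volassist -g dg1 make vol01 2g nmirror=2 nlog=1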

2.1.1    Planning an LSM Volume That Uses a Concatenated Plex

Use the following worksheet to plan an LSM volume that uses a concatenated plex.

Figure 2-1:  Worksheet for Planning a Volume with Concatenated Plexes

Attribute                              Default Value                                   Chosen Value
Volume name                            No default                                      ____________
Volume size                            No default                                      ____________
Number of data plexes                  1                                               ____________
If more than one plex, DRL plex size   65 blocks for volumes less than or equal        ____________
                                       to 1 GB [Footnote 1]
Disk group name                        rootdg                                          ____________
Usage type                             fsgen                                           ____________
Total space required                   (Volume size * number of plexes) + DRL size     ____________
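
For example, a 1 GB volume with two concatenated data plexes and the default 65-block DRL requires (1 GB * 2) + 65 blocks, or slightly more than 2 GB of space in the disk group.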

2.1.2    Planning an LSM Volume That Uses a Striped Plex

Use the following worksheet to plan an LSM volume that uses a striped plex.

Figure 2-2:  Worksheet for Planning a Volume with Striped Plexes

Attribute                              Default Value                                   Chosen Value
Volume name                            No default                                      ____________
Volume size                            No default                                      ____________
Data unit size                         64 KB                                           ____________
Number of columns                      Minimum of two, based on the number of          ____________
                                       disks in the disk group and the volume size
Number of data plexes                  1                                               ____________
If more than one plex, DRL plex size   65 blocks for volumes less than or equal        ____________
                                       to 1 GB [Footnote 2]
Disk group name                        rootdg                                          ____________
Usage type                             fsgen                                           ____________
Total space required                   (Volume size * number of plexes) + DRL size     ____________
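
For example, a 1 GB volume with two striped plexes of three columns each requires (1 GB * 2) + 65 blocks, or slightly more than 2 GB; each of the three columns in a plex occupies about 341 MB on a separate disk.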

2.1.3    Planning an LSM Volume That Uses a RAID 5 Plex

Use the following worksheet to plan an LSM volume that uses a RAID 5 plex.

Figure 2-3:  Worksheet for Planning a Volume with a RAID 5 Plex

Attribute                  Default Value                                          Chosen Value
Volume name                No default                                             ____________
Volume size                No default                                             ____________
Data unit size             16 KB                                                  ____________
Number of columns (NCOL)   Between 3 and 8 (minimum of three), based on the       ____________
                           number of disks in the disk group and the volume size
Log plex size              10 * (data unit size * number of columns)              ____________
Disk group name            rootdg                                                 ____________
Usage type                 raid5 (required)                                       raid5
Total space required       (Volume size * NCOL / (NCOL-1)) + log plex size        ____________
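
For example, an 8 GB volume with five columns (NCOL=5) requires 8 GB * 5/4 = 10 GB for data and parity, plus a log plex of 10 * (16 KB * 5) = 800 KB.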

2.2    Planning Disk Groups

At a minimum, you must plan the rootdg disk group, which is created when you install LSM. Planning a disk group requires that you identify:

  • The name and purpose of the disk group

  • The disks in the group, including their bus/LUN numbers and sizes

  • The volumes, plexes, and spare disks that will use those disks

When you plan a disk group, consider the following:

  • A disk can belong to only one disk group, and a volume cannot span disk groups, so place all the disks a volume will use, including spares, in the same disk group.

  • Disk group names must be unique on the system.

  • The rootdg disk group always exists on a system running LSM.

  • For volumes with mirrored or striped plexes, choose disks on different buses where possible.

Use the worksheets in Figure 2-4 and Figure 2-5 to plan disk groups. You can make copies and fill in the information on the copies rather than in the manual. This lets you keep the disk group information with each system running LSM, for your reference. Also, because you can change your LSM configuration at any time, you can make a new copy of the blank worksheets to record your changes.

In the appropriate worksheet, enter the following:

  • The disk group name and its purpose

  • Each disk in the group, with its bus/LUN number and size

  • The volume, plex, or spare-disk role planned for each disk

Figure 2-4:  Worksheet for Planning the rootdg Disk Group

Disk Group Information   Disks in Group   Bus/LUN Number   Disk Size   Volume, Plex, and Spare Disk Information

Name: rootdg             ______________   ______________   _________   ________________________________________
Purpose:                 ______________   ______________   _________   ________________________________________
                         ______________   ______________   _________   ________________________________________
                         ______________   ______________   _________   ________________________________________
                         ______________   ______________   _________   ________________________________________

Figure 2-5:  Worksheet for Planning Additional Disk Groups

Disk Group Information   Disks in Group   Bus/LUN Number   Disk Size   Volume, Plex, and Spare Disk Information

Name:                    ______________   ______________   _________   ________________________________________
Purpose:                 ______________   ______________   _________   ________________________________________
                         ______________   ______________   _________   ________________________________________
                         ______________   ______________   _________   ________________________________________
                         ______________   ______________   _________   ________________________________________

Figure 2-6 shows a consolidated example of what your disk group planning worksheets might look like when complete. Note that this example applies only to a standalone system, not a cluster.

Figure 2-6:  Worksheet for Planning Disk Groups for a Standalone System (Consolidated Example)

Name: rootdg
Purpose: Root file system and system disks.

  Disk    Bus/LUN Number   Disk Size   Volume, Plex, and Spare Disk Information
  dsk0    0                4 GB        root disk (encapsulated: rootvol plex-01)
  dsk1    0                4 GB        rootvol plex-02
  dsk4    2                4 GB        swapvol plex-01
  dsk5    2                4 GB        swapvol plex-02
  dsk16   6                4 GB        hot-spare disk

Name: data_dg
Purpose: Database, must be redundant. Contains a volume with mirrored striped plexes and a DRL.

  Disk    Bus/LUN Number   Disk Size   Volume, Plex, and Spare Disk Information
  dsk6    3                18 GB       volume db_vol, plex db_vol-01
  dsk7    3                18 GB       plex db_vol-01
  dsk8    4                18 GB       plex db_vol-02
  dsk9    4                18 GB       plex db_vol-02
  dsk10   5                18 GB       plex db_vol-03 (DRL plex)
  dsk11   5                18 GB       hot-spare disk
  dsk15   6                18 GB       hot-spare disk

Name: finance_dg
Purpose: Financial application, must be highly available. Contains a volume with a RAID 5 plex (read-only application).

  Disk    Bus/LUN Number   Disk Size   Volume, Plex, and Spare Disk Information
  dsk20   7                9 GB        volume fin_vol, column 1
  dsk25   8                9 GB        column 2
  dsk30   9                9 GB        column 3
  dsk35   10               9 GB        column 4
  dsk40   11               9 GB        column 5
  dsk45   16               9 GB        log plex
  dsk16   6                18 GB       hot-spare disk

2.3    Identifying Unused Storage Devices

Unused storage devices are unused disks, partitions, and hardware RAID disks that LSM can initialize to become LSM disks for use in the rootdg disk group or in other disk groups that you create.

You can also identify unused LSM disks for use in a disk group. An unused LSM disk is a storage device that you initialized for use by LSM but did not assign to a disk group.

The following sections describe how to identify unused disks, partitions, and LSM disks. See your hardware RAID documentation for information on identifying unused hardware RAID disks.

To identify unused storage devices, you can use:

  • The Disk Configuration GUI (Section 2.3.1)

  • Operating system commands such as file, disklabel, ls, and mount (Section 2.3.2)

  • The LSM voldisk command (Section 2.3.3)

2.3.1    Using the Disk Configuration GUI to Identify Unused Disks

To identify unused disks using the Disk Configuration GUI, start the Disk Configuration interface using either of the following methods:

  • From the SysMan Menu, select the Hardware option, then the disk configuration task.

  • Enter the diskconfig command:

    # /usr/sbin/diskconfig

For more information about the Disk Configuration GUI, see its online help.

2.3.2    Using Operating System Commands to Identify Unused Disks

You can use the following operating system commands to identify unused disks:

  1. List all the disks on the system:

    # file /dev/rdisk/dsk*c
    

    Information similar to the following is displayed:

    /dev/rdisk/dsk0c:       character special (19/38) SCSI #1 "RZ1CD-CS" disk #1 (SCSI ID #0) (SCSI LUN #0)
    /dev/rdisk/dsk10c:      character special (19/198) SCSI #3 "RZ1CD-CS" disk #3 (SCSI ID #5) (SCSI LUN #0)
    /dev/rdisk/dsk11c:      character special (19/214) SCSI #4 "RZ1BB-CS" disk #4 (SCSI ID #0) (SCSI LUN #0)
    /dev/rdisk/dsk12c:      character special (19/230) SCSI #4 "RZ1BB-CS" disk #5 (SCSI ID #1) (SCSI LUN #0)
    /dev/rdisk/dsk13c:      character special (19/246) SCSI #4 "RZ1BB-CS" disk #6 (SCSI ID #2) (SCSI LUN #0)
    /dev/rdisk/dsk14c:      character special (19/262) SCSI #4 "RZ1BB-CS" disk #7 (SCSI ID #3) (SCSI LUN #0)
    /dev/rdisk/dsk15c:      character special (19/278) SCSI #4 "RZ1CD-CS" disk #8 (SCSI ID #4) (SCSI LUN #0)
    /dev/rdisk/dsk16c:      character special (19/294) SCSI #4 "BD009635C3" disk #9 (SCSI ID #5) (SCSI LUN #0)
    /dev/rdisk/dsk17c:      character special (19/310) SCSI #4 "BD009635C3" disk #10 (SCSI ID #6) (SCSI LUN #0)
    /dev/rdisk/dsk18c:      character special (19/326) SCSI #5 "RZ1CD-CS" disk #11 (SCSI ID #0) (SCSI LUN #0)
    /dev/rdisk/dsk19c:      character special (19/342) SCSI #5 "RZ1BB-CS" disk #12 (SCSI ID #1) (SCSI LUN #0)
    /dev/rdisk/dsk1c:       character special (19/54) SCSI #1 "RZ1BB-CA" disk #2 (SCSI ID #2) (SCSI LUN #0)
    /dev/rdisk/dsk20c:      character special (19/358) SCSI #5 "RZ1CB-CA" disk #13 (SCSI ID #2) (SCSI LUN #0)
    /dev/rdisk/dsk21c:      character special (19/374) SCSI #5 "RZ1CB-CA" disk #14 (SCSI ID #3) (SCSI LUN #0)
    /dev/rdisk/dsk22c:      character special (19/390) SCSI #5 "RZ1CF-CF" disk #15 (SCSI ID #4) (SCSI LUN #0)
    /dev/rdisk/dsk23c:      character special (19/406) SCSI #5 "RZ1CF-CF" disk #8 (SCSI ID #5) (SCSI LUN #0)
    /dev/rdisk/dsk24c:      character special (19/422) SCSI #5 "BD009635C3" disk #9 (SCSI ID #6) (SCSI LUN #0)
    /dev/rdisk/dsk25c:      character special (19/438) SCSI #6 "RZ1BB-CS" disk #10 (SCSI ID #1) (SCSI LUN #0)
    /dev/rdisk/dsk26c:      character special (19/454) SCSI #6 "RZ1CD-CS" disk #11 (SCSI ID #3) (SCSI LUN #0)
    /dev/rdisk/dsk27c:      character special (19/470) SCSI #6 "RZ1CD-CS" disk #12 (SCSI ID #5) (SCSI LUN #0)
    /dev/rdisk/dsk2c:       character special (19/70) SCSI #1 "RZ1CD-CS" disk #3 (SCSI ID #4) (SCSI LUN #0)
    /dev/rdisk/dsk3c:       character special (19/86) SCSI #1 "RZ1CD-CS" disk #4 (SCSI ID #6) (SCSI LUN #0)
    /dev/rdisk/dsk4c:       character special (19/102) SCSI #2 "RZ1BB-CS" disk #5 (SCSI ID #0) (SCSI LUN #0)
    /dev/rdisk/dsk5c:       character special (19/118) SCSI #2 "RZ1CD-CS" disk #6 (SCSI ID #2) (SCSI LUN #0)
    /dev/rdisk/dsk6c:       character special (19/134) SCSI #2 "RZ1CD-CS" disk #7 (SCSI ID #4) (SCSI LUN #0)
    /dev/rdisk/dsk7c:       character special (19/150) SCSI #2 "RZ1CD-CS" disk #0 (SCSI ID #6) (SCSI LUN #0)
    /dev/rdisk/dsk8c:       character special (19/166) SCSI #3 "RZ1BB-CA" disk #1 (SCSI ID #1) (SCSI LUN #0)
    /dev/rdisk/dsk9c:       character special (19/182) SCSI #3 "RZ1CD-CS" disk #2 (SCSI ID #3) (SCSI LUN #0)
     
    

  2. To determine whether a disk or partition is unused, choose a disk from the output of the file /dev/rdisk/dsk*c command and enter the disklabel command with the name of the disk; for example:

    # disklabel dsk20c
    

    Disk partition information similar to the following is displayed:

    type: SCSI
    disk: RZ1CB-CA
    label:
    flags: dynamic_geometry
    bytes/sector: 512
    sectors/track: 113
    tracks/cylinder: 20
    sectors/cylinder: 2260
    cylinders: 3708
    sectors/unit: 8380080
    rpm: 7200
    interleave: 1
    trackskew: 9
    cylinderskew: 9
    headswitch: 0           # milliseconds
    track-to-track seek: 0  # milliseconds
    drivedata: 0
     
    8 partitions:
    #            size       offset    fstype  fsize  bsize   cpg  # ~Cyl values
      a:       131072            0    unused      0      0        #      0 - 57*
      b:       262144       131072    unused      0      0        #     57*- 173*
      c:      8380080            0    unused      0      0        #      0 - 3707
      d:            0            0    unused      0      0        #      0 - 0
      e:            0            0    unused      0      0        #      0 - 0
      f:            0            0    unused      0      0        #      0 - 0
      g:      3993432       393216    unused      0      0        #    173*- 1940*
      h:      3993432      4386648    unused      0      0        #   1940*- 3707
     
    

    In this example, the fstype column shows unused for every partition, which indicates that no file system currently occupies the disk. See the disklabel(8) reference page for more information on the disklabel command.

  3. If you are using AdvFS, display the disks in use by all file domains; any disk partition listed is not available for LSM:

    # ls /etc/fdmns/*/*
    /etc/fdmns/cluster_root/dsk7b   /etc/fdmns/root2_domain/dsk11a
    /etc/fdmns/cluster_usr/dsk7g    /etc/fdmns/root_domain/dsk1a
    /etc/fdmns/cluster_var/dsk7h    /etc/fdmns/usr_domain/dsk1g
    /etc/fdmns/root1_domain/dsk10a
    

  4. If you are using UFS, display all mounted file systems and note the disk partitions in use:

    # mount
    
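
    To narrow the output to file systems mounted from disk device special files, you can pipe the output through grep; the following is a minimal sketch that assumes UFS file systems are mounted from /dev/disk block devices:

    # mount | grep /dev/disk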

2.3.3    Using the LSM voldisk Command to Identify Unused Disks

When LSM starts, it obtains a list of disk device addresses from the operating system software and checks the disk labels to determine which devices are initialized for LSM use and which are not.

If LSM is running on the system, you can use the voldisk command to display a list of all known disks and to display detailed information about a particular disk:

  1. To view a list of disks, enter:

    # voldisk list
    

    Information similar to the following is displayed.

    DEVICE       TYPE      DISK         GROUP        STATUS  
    dsk0         sliced    -            -            unknown 
    dsk1         sliced    -            -            unknown 
    dsk2         sliced    dsk2         rootdg       online  
    dsk3         sliced    dsk3         rootdg       online  
    dsk4         sliced    dsk4         rootdg       online  
    dsk5         sliced    dsk5         rootdg       online  
    dsk6         sliced    dsk6         dg1          online  
    dsk7         sliced    -            -            online  
    dsk8         sliced    dsk8         dg1          online  
    dsk9         sliced    -            -            online  
    dsk10        sliced    -            -            online  
    dsk11        sliced    -            -            online  
    dsk12        sliced    -            -            online  
    dsk13        sliced    -            -            unknown 
    dsk14        sliced    -            -            unknown
    

    The following list describes the information in the output:

    DEVICE

    Specifies the disk access name assigned by the operating system.

    TYPE

    Specifies the LSM disk type (sliced, simple, or nopriv).

    DISK

    Specifies the LSM disk media name. A dash (-) means the device is not assigned to a disk group and therefore does not have an LSM disk media name.

    GROUP

    Specifies the disk group to which the device belongs. A dash (-) means the device is not assigned to a disk group.

    STATUS

    Specifies the device status; for example, online, offline, or unknown. A status of unknown means the device is not initialized for LSM use.

    An unused storage device is one that does not have a DISK name or GROUP name and has a status of unknown.

    An unused LSM disk is one that has a status of online or offline but is not assigned to a disk group (no GROUP name).
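
    For example, to list only the unused storage devices, you can filter the output with standard tools; the following is a minimal sketch that assumes the five-column format shown above:

    # voldisk list | awk '$3 == "-" && $4 == "-" && $5 == "unknown" {print $1}'
    dsk0
    dsk1
    dsk13
    dsk14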

  2. To display detailed information about an LSM disk, enter:

    # voldisk list disk
    

    The following example displays information for an LSM disk called dsk5:

    Device:    dsk5
    devicetag: dsk5
    type:      sliced
    hostid:    servername
    disk:      name=dsk5 id=942260116.1188.servername
    group:     name=dg1 id=951155418.1233.servername
    flags:     online ready autoimport imported
    pubpaths:  block=/dev/disk/dsk5g char=/dev/rdisk/dsk5g
    privpaths: block=/dev/disk/dsk5h char=/dev/rdisk/dsk5h
    version:   n.n
    iosize:    min=512 (bytes) max=2048 (blocks)
    public:    slice=6 offset=16 len=2046748
    private:   slice=7 offset=0 len=4096
    update:    time=952956192 seqno=0.11
    headers:   0 248
    configs:   count=1 len=2993
    logs:      count=1 len=453
    Defined regions:
     config   priv     17-   247[   231]: copy=01 offset=000000 enabled
     config   priv    249-  3010[  2762]: copy=01 offset=000231 enabled
     log      priv   3011-  3463[   453]: copy=01 offset=000000 enabled
    

    The size of an LSM disk is displayed in blocks as the len= value in the public: row; 2048 blocks equal 1 MB. In this example, the public region is 2046748 blocks, or approximately 999 MB.

See the voldisk(8) reference page for more information on the voldisk command.