This chapter describes how to set up the LSM software, which includes:
Installing or upgrading the LSM software subsets.
Initializing the LSM software, which prepares the system and disks for use with the LSM software.
2.1 Installing or Upgrading the LSM Software
The way you install the LSM software depends on whether you are:
Installing the LSM software for the first time during a full installation. See the Installation Guide for more information.
Installing the LSM software for the first time after a full installation. See the Installation Guide for more information.
Performing a full installation on a system that was previously using the LSM software. See Section 2.1.1 for more information.
Performing an upgrade installation on a system with the LSM software. See Section 2.1.2 for more information.
Note
The LSM Version 5.0 and higher software has an on-disk LSM internal metadata format that is not compatible with previous versions of the LSM software. That is, LSM Version 5.0 or higher cannot use the metadata format from previous LSM versions, nor can a previous LSM version use the metadata format from LSM Version 5.0 or higher.
If the LSM software detects an older metadata format within a disk's private region during a disk group import, LSM automatically converts the old format to the new format. Once converted to the new format, you can no longer use a disk group with previous versions of LSM.
2.1.1 Performing a Full Installation on a System With LSM
If the system is running the LSM software, follow these steps before performing a full installation:
Check for a previous version of the LSM software by entering the following command:
# voldisk list rz2 | grep version
Output similar to the following is displayed:
version: 1.1
Optionally, prevent non-rootdg disk groups from automatically being converted to the new LSM internal metadata format during the full installation process.
For example, to prevent disk groups called dg1 and dg2 from being converted and used with the new version of the LSM software, enter:
# voldg deport dg1 dg2
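After the full installation is complete and the new version of the LSM software is running, you can import the deported disk groups; the import converts their metadata to the new format, as described in the Note above. For example, using the disk group names from the previous command, you might enter:
# voldg import dg1
# voldg import dg2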
Determine the previous hostid configured with the LSM software by entering the following command:
# /sbin/voldctl list
Output similar to the following is displayed:
Volboot file
version: 3/1
seqno: 0.2
hostid: rio.dec.com
entries:
disk rz2 type=sliced
disk rz3 type=sliced
disk rz8 type=sliced
disk rz9 type=sliced
In this output, the hostid is rio.dec.com.
Save the current LSM configuration by entering the following command:
# volsave
Output similar to the following is displayed:
LSM configuration being saved to /usr/var/lsm/db/LSM.date.rio
LSM Configuration saved successfully to /usr/var/lsm/db/LSM.date.rio
Confirm that the LSM configuration was saved by entering the following command:
# ls /usr/var/lsm/db/LSM.date.rio
Output similar to the following is displayed:
header rootdg.d volboot voldisk.list
Save the LSM configuration to tape or other removable media.
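For example, one way to copy the saved configuration directory to tape is with the tar command. The tape device name shown here is hypothetical; substitute the device name for your tape drive:
# tar -cvf /dev/tape/tape0 /usr/var/lsm/db/LSM.date.rio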
The LSM software is reinitialized if, during the full installation, you selected to install the system's root, usr, var, and swap partitions directly onto LSM volumes.
If the system's root, usr, var, and swap partitions are not installed directly onto LSM volumes during the full installation, select and configure the LSM subsets during the operating system installation process. See the Installation Guide for more information on installing the operating system software.
After installing the operating system, follow these steps to reinitialize the LSM software to use previous disk group configurations:
Either:
Restore the /etc/vol/volboot file by entering the following command:
# cp /backup/usr/var/lsm/db/LSM.date.hostname/volboot /etc/vol/volboot
Or, create a new /etc/vol/volboot file using the hostid obtained in Step 3.
For example, to create a new /etc/vol/volboot file for a system with a hostid of rio.dec.com, enter:
# voldctl init rio.dec.com
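You can confirm that the new /etc/vol/volboot file contains the expected hostid by entering the following command and checking the hostid field in the output:
# /sbin/voldctl list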
Reinitialize the LSM special device files and start the LSM daemons and volumes by entering the following command:
# volsetup
Warning
Do not use the force option with the volsetup command. Doing so destroys the previous LSM configuration for the rootdg disk group.
2.1.2 Performing an Upgrade Installation on a System with LSM
If the LSM software was initialized on the system before an upgrade installation, be sure to select the LSM subsets during the upgrade installation process. If any of the system's file systems are configured on LSM volumes, you must start the LSM software and its volumes after booting the system to single-user mode and before proceeding with the upgrade installation.
See the Installation Guide for more information on installing the LSM software.
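For example, one possible sequence after booting to single-user mode is to run the LSM startup script and then start the LSM volumes before mounting the file systems. The exact commands can vary by version, so treat the following as a sketch and verify it against the reference pages for your release:
# /sbin/lsmbstartup
# volrecover -sb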
2.2 Initializing the LSM Software For the First Time
You must initialize the LSM software if you did not install the system's file systems into LSM volumes during a full installation and you did not perform an upgrade installation on a system that was previously running the LSM software.
Use one of the following methods to initialize the LSM software for the first time:
While installing the operating system software, select the option to install file systems directly to LSM volumes. See the Installation Guide for more information on initializing the LSM software while installing the operating system software.
Use the volsetup command. This is the simplest method to initialize the LSM software. The volsetup command automatically provides a default configuration that is suitable for most environments. See Section 2.2.2 for more information on initializing the LSM software using the volsetup command.
Use a series of LSM commands. Although this method is more complicated, it allows you to have more control over your LSM configuration. See Section 2.2.3 for more information on initializing the LSM software using commands.
Initializing the LSM software:
Modifies the /etc/inittab file to include LSM entries that automatically start the LSM software when the system boots.
Allows you to create, initialize, and add disks to the rootdg disk group. You must configure at least one disk or partition in the rootdg disk group. You do not have to use the rootdg disk group; however, it must exist before you can create other disk groups.
Verifies the disk labels for the disks in the rootdg disk group. If any disks were previously used and the fstype field for any partition is anything other than unused, you must reinitialize the disk label.
Creates the /etc/vol/volboot file, which contains:
The host ID that the LSM software uses to establish ownership of physical disks. The host ID ensures that two or more nonclustered systems can access disks on a shared SCSI bus without interfering with each other.
An optional list of the disks in the rootdg disk group that contain the LSM configuration database.
Do not manually edit or delete the /etc/vol/volboot file. Use the voldctl command to update the /etc/vol/volboot file.
Starts the vold daemon. The vold daemon receives requests from other utilities for configuration changes, modifies configuration information stored on disk, and communicates the changes to the kernel.
Sets the number of configuration databases. The size of the configuration database depends on the number of LSM objects (volumes, plexes, subdisks, and disks). Each LSM object created in the disk group requires one record in the configuration database, and approximately two records fit in one sector (512 bytes). Disks that are added to the rootdg disk group need one record for each disk in all other disk groups. Certain LSM configuration changes (for example, moving a subdisk) involve creating and later deleting two to four temporary records in the configuration database to perform the operation. (See the sizing example after this list.)
Sets the number of log regions and the size of the private region. The default private region size is 4096 blocks (512 bytes per block).
Starts the LSM software.
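As a rough sizing example (an illustration, not an exact rule), a rootdg disk group that contains 10 volumes, each with 2 plexes and 4 subdisks, on 8 disks uses approximately 10 + 20 + 40 + 8 = 78 records. At about two records per 512-byte sector, that is roughly 39 sectors (about 20 KB) of configuration database space, plus room for the temporary records that configuration changes create and later delete.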
2.2.1 Before You Initialize the LSM Software
Before you initialize the LSM software, you should:
Verify that the LSM subsets are installed by entering the following command:
# setld -i | grep LSM
Output similar to the following is displayed:
OSFLSMBASE500 installed Logical Storage Manager (System Administration)
OSFLSMBIN500 installed Logical Storage Manager Kernel Modules (Kernel Build Environment)
OSFLSMX11500 installed Logical Storage Manager GUI (System Administration)
If the LSM subsets do not display with a status of installed, use the setld command to install them. See the Installation Guide for more information on installing the LSM subsets.
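For example, if the operating system distribution is mounted at /mnt (a hypothetical mount point; the kit location and the subset names depend on your operating system version), a command similar to the following installs the LSM subsets:
# setld -l /mnt/ALPHA/BASE OSFLSMBASE500 OSFLSMBIN500 OSFLSMX11500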
Verify that the LSM drivers are configured into the kernel by entering the following command:
# devswmgr -getnum driver=LSM
Output similar to the following is displayed:
Device switch reservation list
(*=entry in use)
driver name instance major
------------------------------- -------- -----
LSM 4 43
LSM 3 42
LSM 2 41*
LSM 1 40*
If the LSM drivers do not display, you must rebuild the kernel using the doconfig command. See the Installation Guide for more information on rebuilding the kernel.
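For example, to rebuild the kernel from an existing kernel configuration file and boot the new kernel, you might enter commands similar to the following. The configuration file name RIO is hypothetical (it is typically the system name in uppercase), and the doconfig command reports the actual location of the kernel it builds:
# doconfig -c RIO
# cp /usr/sys/RIO/vmunix /vmunix
# shutdown -r now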
Identify the disks that you want to use with the LSM software by entering the following command:
# file /dev/rdisk/dsk*c
Note the device names in the output.
Choose a disk to use for the rootdg disk group.
Enter the disklabel, swapon, mount, or showfdmn command (as shown in the output that follows) to identify the disk partitions already in use. Also, check the configuration of any other third-party software that uses raw partitions:
# disklabel dsk0
Output similar to the following is displayed:
# /dev/rdisk/dsk0a:
8 partitions:
# size offset fstype [fsize bsize cpg]
a: 262144 0 AdvFS # (Cyl. 0 - 328*)
b: 262144 262144 swap # (Cyl. 328*- 657*)
c: 2050860 0 unused 0 0 # (Cyl. 0 - 2569)
d: 508857 524288 unused 0 0 # (Cyl. 657*- 1294*)
e: 508857 1033145 unused 0 0 # (Cyl. 1294*- 1932*)
f: 508858 1542002 unused 0 0 # (Cyl. 1932*- 2569)
g: 1526572 524288 AdvFS # (Cyl. 657*- 2569)
h: 0 0 unused 0 0 # (Cyl. 0 - -1)
# swapon -s
Output similar to the following is displayed:
Swap partition /dev/disk/dsk6b (default swap):
...
# mount
Output similar to the following is displayed:
root_domain#root on / type advfs (rw)
usr_domain#usr on /usr type advfs (rw)
var_domain#var on /var type advfs (rw)
staff1#staff1 on /share/demo2/usr/staff1 type advfs (rw)
...
# showfdmn root_domain usr_domain var_domain staff1 | grep dev
Output similar to the following is displayed:
1L 262144 39712 85% on 256 256 /dev/disk/dsk1a
1L 2050848 582240 72% on 256 256 /dev/disk/dsk2c
2 1191936 444624 63% on 256 256 /dev/disk/dsk23d
1L 2050848 883136 57% on 256 256 /dev/disk/dsk5c
1L 4110480 1800880 56% on 256 256 /dev/disk/dsk7c
2 4110480 1804384 56% on 256 256 /dev/disk/dsk20c
3 4110480 1794688 56% on 256 256 /dev/disk/dsk14c
If you do not know the domain names, enter:
# ls -lagR /etc/fdmns
If you are in single-user mode, set the host name for your system before you initialize the LSM software by entering the following command:
# /sbin/hostname -s name
Verify that the PATH environment variable includes the /usr/sbin and /sbin directories. This simplifies the use of LSM commands, which are located in both of these directories.
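For example, in the Bourne or POSIX shell you can add these directories to the PATH variable for the current session as follows; use the equivalent syntax for your login shell:
# PATH=/sbin:/usr/sbin:$PATH
# export PATH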
2.2.2 Initializing the LSM Software Using the volsetup Command
The volsetup command automatically initializes the LSM software by:
Modifying disk labels
Initializing disks for use with the LSM software
Creating the rootdg disk group
Configuring disks into the rootdg disk group
You enter the volsetup command only once. To add more disks later, use the voldiskadd command, as described in Section 6.2.1.
Initialize the LSM software by entering the following command:
# volsetup disk_name
If you omit the name of a disk, the volsetup command prompts you for it.
For example, to initialize the LSM software using a disk called dsk4 to create the rootdg disk group, enter:
# volsetup dsk4
Note
When you initialize the LSM software, do not specify the boot disk with the volsetup command. After you initialize the LSM software, you can encapsulate the boot disk to add partitions on the boot disk to the rootdg disk group. See Chapter 4 for more information on encapsulating the boot disk.
If the volsetup command displays an error message or if the initialization fails, you may need to modify the disk label and reinitialize the disk, as shown in the example that follows. See the volsetup(8) reference page for more information.
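For example, if the volsetup command fails because of a stale disk label on a disk called dsk4 (a hypothetical name), and the disk contains no data you want to keep, you might write a new default disk label and then run the volsetup command again:
# disklabel -z dsk4
# disklabel -wr dsk4
# volsetup dsk4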
2.2.3 Initializing the LSM Software Using Commands
Using the volsetup command to initialize the LSM software for the first time is the most common and easiest way to set up the LSM software, as described in Section 2.2.2. However, if you require more control over how the LSM software is set up, you can use a series of commands instead of the volsetup command.
Follow these steps to use a series of LSM commands to initialize the LSM software:
Add entries to the /etc/inittab file that automatically start LSM when the system boots by entering the following command:
# volinstall
If the volinstall command fails:
Verify that the /etc/inittab file was modified to include LSM entries by entering the following command:
# grep LSM /etc/inittab
Output similar to the following is displayed:
lsmr:s:sysinit:/sbin/lsmbstartup -b /dev/console 2>&1 ##LSM
lsm:23:wait:/sbin/lsmbstartup -n /dev/console 2>&1 ##LSM
vol:23:wait:/sbin/vol-reconfig -n /dev/console 2>&1 ##LSM
Verify that the LSM special files were created by entering the following command:
# ls -l /dev/vol*
Output similar to the following is displayed:
crw-r--r-- 1 root system 41, 0 Mar 4 09:35 /dev/volconfig
crw-r--r-- 1 root system 41, 3 Mar 4 09:35 /dev/volinfo
crw-r--r-- 1 root system 41, 2 Mar 4 09:35 /dev/voliod
crw-r--r-- 1 root system 41, 1 Mar 4 09:35 /dev/voltrace
Start the vold daemon in the disabled mode by entering the following command:
# vold -m disable
Create and initialize the /etc/vol/volboot file by entering the following command:
# voldctl init
Create the rootdg disk group by entering the following command:
# voldg init rootdg
Warning
Enter the voldg init command only once to create a disk group. If you use the voldg init command a second time, you will destroy configuration information.
Verify that the rootdg disk group was created by entering the following command:
# volprint
Output similar to the following is displayed:
Disk group: rootdg

TY NAME    ASSOC    KSTATE  LENGTH   PLOFFS  STATE  TUTIL0  PUTIL0
dg rootdg  rootdg   -       -        -       -      -       -
dm dsk4    dsk4     -       1854536  -       -      -       -
Verify that the disk labels for the disks to be used with the LSM software have an fstype status of unused by entering the following command:
# disklabel disk_name
For example, to display the disk label for a disk called dsk10, enter:
# disklabel dsk10
Output similar to the following is displayed:
# /dev/rdisk/dsk10c:
type: SCSI
disk: RZ1BB-CS
label:
flags: dynamic_geometry
bytes/sector: 512
sectors/track: 86
tracks/cylinder: 16
sectors/cylinder: 1376
cylinders: 3045
sectors/unit: 4110480
rpm: 7228
interleave: 1
trackskew: 40
cylinderskew: 80
headswitch: 0 # milliseconds
track-to-track seek: 0 # milliseconds
drivedata: 0

8 partitions:
# size offset fstype [fsize bsize cpg] # NOTE: values not exact
a: 131072 0 unused 0 0 # (Cyl. 0 - 95*)
b: 262144 131072 unused 0 0 # (Cyl. 95*- 285*)
c: 4110480 0 unused 0 0 # (Cyl. 0 - 2987*)
d: 0 0 unused 0 0 # (Cyl. 0 - -1)
e: 0 0 unused 0 0 # (Cyl. 0 - -1)
f: 0 0 unused 0 0 # (Cyl. 0 - -1)
g: 1858632 393216 unused 0 0 # (Cyl. 285*- 1636*)
h: 1858632 2251848 unused 0 0 # (Cyl. 1636*- 2987*)
If the disk is no longer in use but the fstype field for any partition is anything other than unused, you must initialize the disk label. For example, to initialize the disk label for a disk called dsk2, enter:
# disklabel -wr dsk2
If you receive an error message that the disk does not start at block zero, enter the following commands:
# disklabel -z disk_name
# disklabel -wr disk_name
Repartition and initialize the LSM private region on the disk by entering the following command:
# voldisksetup -i disk_name
For example, to repartition and initialize the LSM private region on a disk called dsk9, enter:
# voldisksetup -i dsk9
Display the results by entering the following command:
# disklabel dsk9 | grep LSM
Output similar to the following is displayed:
g: 4106384 0 LSMpubl # (Cyl. 0 - 2984*)
h: 4096 4106384 LSMpriv # (Cyl. 2984*- 2987*)
LSM automatically maintains the number of active configuration databases and the location of the databases for a disk group. LSM dynamically evaluates and, if needed, activates or deactivates a configuration database within a disk's private region when a disk is added, removed, or fails. Therefore, it is not necessary to explicitly specify the location and number of configurations on a disk.
See the voldisksetup(8) and voldisk(8) reference pages for more information on disk initialization options.
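If you want to see which disks in a disk group currently hold active copies of the configuration database, you can display the disk group details. For example (the output varies with your configuration):
# voldg list rootdg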
Add a disk to the rootdg disk group by entering the following command:
# voldg adddisk disk_name
For example, to add a disk called dsk9 to the rootdg disk group, enter:
# voldg adddisk dsk9
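If you initialized several disks with the voldisksetup command, you can add them in a single command. For example, using hypothetical disk names:
# voldg adddisk dsk10 dsk11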
Enable the vold daemon by entering the following command:
# voldctl enable
Set the number of LSM I/O daemons, which is either two or the number of central processing units (CPUs) on the system, whichever is greater. For example, on a single CPU system, enter:
# voliod set 2
On a four CPU system, enter:
# voliod set 4
You need to set the number of LSM I/O daemons only the first time you initialize the LSM software. The correct number of I/O daemons is set automatically when the system boots.
2.2.4 Verifying that the LSM Software was Initialized
Follow these steps to verify that the LSM software is initialized:
Verify that the disk was added to the rootdg disk group by entering the following command:
# volprint
Output similar to the following is displayed:
Disk group: rootdg

TY NAME    ASSOC    KSTATE  LENGTH   PLOFFS  STATE  TUTIL0  PUTIL0
dg rootdg  rootdg   -       -        -       -      -       -
dm dsk4    dsk4     -       1854536  -       -      -       -
Verify that the vold daemon is enabled by entering the following command:
# voldctl mode
Output similar to the following is displayed:
mode: enabled
Verify that two or more voliod daemons are running by entering the following command:
# voliod
Output similar to the following is displayed:
2 volume I/O daemons are running
Verify that the /etc/inittab file was modified to include LSM entries by entering the following command:
# grep LSM /etc/inittab
Output similar to the following is displayed:
lsmr:s:sysinit:/sbin/lsmbstartup -b /dev/console 2>&1 ##LSM
lsm:23:wait:/sbin/lsmbstartup -n /dev/console 2>&1 ##LSM
vol:23:wait:/sbin/vol-reconfig -n /dev/console 2>&1 ##LSM
Verify that the /etc/vol/volboot file was created by entering the following command:
# /sbin/voldctl list
Output similar to the following is displayed:
Volboot file
version: 3/1
seqno: 0.4
hostid: test.abc.xyz.com
entries:
After the LSM software is set up, you can:
Migrate, or encapsulate, existing file systems or data into LSM volumes. Encapsulating file systems or data allows you to configure existing file systems or data into LSM volumes, without physically moving the data. See Chapter 3 for more information.
Migrate, or encapsulate, the partitions on the boot disk into LSM volumes. Encapsulating the boot disk allows you to configure the partitions on the boot disk into LSM volumes, without physically moving the data. See Chapter 4 for more information.
Configure new disks for use with the LSM software and create LSM volumes (see the example at the end of this section). See Chapter 5 for more information.
Note
LSM does not support encapsulation of data on ULTRIX Disk Shadowing (UDS) volumes or ULTRIX Striping Driver stripe volumes.
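For example, after a disk group contains disks with free space, a command similar to the following creates a simple 100 MB LSM volume in the rootdg disk group. The volume name vol01 is hypothetical, and the length here is given in 512-byte sectors; see Chapter 5 and the volassist(8) reference page for the supported options:
# volassist make vol01 204800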