This appendix contains sample LSM commands and system output that show how to perform common LSM administrative tasks.
The following examples show the commands you use to set up LSM for the
first time. Refer to
Chapter 3
and the
volsetup(8)
and
volinstall(8)
reference pages for further information about setting up LSM.
The following command lists the disks configured on the system:
#
file /dev/rr*c
/dev/rrz10c: character special (8/18434) SCSI #1 RZ26 ...
/dev/rrz12c: character special (8/20482) SCSI #1 RZ26 ...
The following command identifies disk partitions already in use:
#
mount; swapon -s
/dev/rz3a on / type ufs (rw)
/proc on /proc type procfs (rw)
/dev/rz3g on /usr type ufs (rw)
/dev/rz3h on /usr/users type ufs (rw)
Swap partition /dev/rz3b (default swap):
The following example confirms that the file system type (fstype) fields on the disks are unused:
#
disklabel rz10; disklabel rz12
size offset fstype [fsize bsize cpg]
a: 131072 0 unused 1024 8192 # (Cyl. 0 - 164*)
b: 262144 131072 unused 1024 8192 # (Cyl. 164*- 492*)
c: 2050860 0 unused 1024 8192 # (Cyl. 0 - 2569)
d: 552548 393216 unused 1024 8192 # (Cyl. 492*- 1185*)
e: 552548 945764 unused 1024 8192 # (Cyl. 1185*- 1877*)
f: 552548 1498312 unused 1024 8192 # (Cyl. 1877*- 2569*)
g: 819200 393216 unused 1024 8192 # (Cyl. 492*- 1519*)
h: 838444 1212416 unused 1024 8192 # (Cyl. 1519*- 2569*)
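As a quick sanity check on a disk label like the one above, the partition offsets and sizes should tile the whole disk: partitions a and b together end where d and g begin, g ends where h begins, and h ends at the total size reported for the c partition. The arithmetic below uses only the sector counts from the listing (all values are 512-byte sectors):

```shell
# Values copied from the disklabel output above (512-byte sectors).
a_size=131072; b_size=262144
g_off=393216;  g_size=819200
h_off=1212416; h_size=838444

echo $((a_size + b_size))   # offset where partitions d and g begin (393216)
echo $((g_off + g_size))    # offset where partition h begins (1212416)
echo $((h_off + h_size))    # total size, matching partition c (2050860)
```

The same check can be applied to any label before handing the disk to LSM.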
The following commands initialize the disklabel tag for any new disks:
#
disklabel -rw /dev/rrz10c rz26
#
disklabel -rw rz12 rz26
The following commands set up LSM for the first time:
#
volsetup rz10 rz12 rz32 rz34
Approximate maximum number of physical disks .. by LSM ? [10] 50
Initialization of vold and the rootdg disk group was successful.
At this point, the disks rz10, rz12, rz32, and rz34 have been added to LSM, and LSM has been initialized and is ready to use.
The following commands display various configuration information. The first command checks the status of all LSM disks:
#
voldisk list
DEVICE TYPE DISK GROUP STATUS
rz10 sliced rz10 rootdg online
rz12 sliced rz12 rootdg online
rz32 sliced rz32 rootdg online
rz34 sliced rz34 rootdg online
The following command checks the status of the rootdg disk group:
#
volprint -ht
DG NAME GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN PUBPATH
V NAME USETYPE KSTATE STATE LENGTH READPOL ...
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT ...
SD NAME PLEX PLOFFS DISKOFFS LENGTH DISK-NAME...

dg rootdg 784429068.1025.rio.zk3.dec.com

dm rz10 rz10 sliced 1024 2049820 /dev/rrz10g
dm rz12 rz12 sliced 1024 2049820 /dev/rrz12g
dm rz32 rz32 sliced 1024 2049820 /dev/rrz32g
dm rz34 rz34 sliced 1024 2049820 /dev/rrz34g
The following command checks the status of
/etc/vol/volboot:
#
voldctl list
Volboot file
version: 3/1 seqno: 0.6
hostid: rio.zk3.dec.com
entries:
disk rz10 type=sliced
disk rz12 type=sliced
disk rz32 type=sliced
disk rz34 type=sliced
The following command checks the status of
vold:
#
voldctl mode
mode: enabled
The following examples show how to add additional disks to LSM.
Refer to
Chapter 3,
voldiskadd(8),
voldisksetup(8),
voldisk(8),
voldg(8),
and
voldctl(8)
for further information on adding disks under LSM.
Most of the following examples assume that no data already exists on the disk. See Section C.4 for encapsulation examples that add disks already containing data. If only certain partitions on the disk are free, those partitions can be added to LSM, as shown in the following examples.
The following example initializes and adds an entire disk to the
rootdg
disk group:
#
voldiskadd rz17
Which disk group [<group>,none,list,q,?] (default: rootdg) <return>
Enter disk name [<name>,q,?] (default: disk01) <return>
Continue with operation? [y,n,q,?] (default: y) <return>
Add rz17 to /etc/vol/volboot file ? [y,n,q,?] (default: y) <return>
If the entire disk is not free, but certain partitions are free, then
you can add the free partitions to LSM. The following example shows the
commands to initialize and add free partitions to
rootdg.
#
voldiskadd rz18h
Which disk group [<group>,none,list,q,?] (default: rootdg) <return>
Enter disk name [<name>,q,?] (default: disk02) rz18h
Continue with operation? [y,n,q,?] (default: y) <return>
Add rz18h to /etc/vol/volboot file ? [y,n,q,?] (default: y) n
The following example initializes the disk with the voldisksetup command:
#
voldisksetup -i rz19 privlen=1024 nconfig=1 nlogs=1
The following command adds rz19 as disk02 to
rootdg:
#
voldg adddisk disk02=rz19
The following command initializes
rz18g:
#
voldisksetup -i rz18g privlen=1024 nconfig=1 nlogs=1
The following command adds
rz18g
to the
rootdg
disk group with disk media name
rz18g:
#
voldg adddisk rz18g=rz18g
Note that four to eight copies of a disk group's configuration on different disks or controllers are adequate. If you want to add more disks to a disk group, add them without a configuration database and kernel log.
The following example sequence adds more disks:
#
voldisksetup -i rz21 privlen=1024 nconfig=0 nlogs=0
#
voldisksetup -i rz36 privlen=1024 nconfig=0 nlogs=0
#
voldg adddisk rz21
#
voldg adddisk rz36
Note
To add a disk with no configuration database to a disk group, set the
nconfig attribute to 0 when initializing the disk with voldisksetup, as shown in the previous example. Do not initialize a new disk as a nopriv disk; this disk type is appropriate only for encapsulation of existing data.
The following example shows how to add a partition that contains existing
data (for example, the
/usr
file system) under LSM using the encapsulation utility. Refer to
Chapter 4
and the
volencap(8)
reference page for further information.
The following command determines which partition has
/usr:
#
mount
/dev/rz3a on / type ufs (rw)
/proc on /proc type procfs (rw)
/dev/rz3g on /usr type ufs (rw)
The following command moves
/usr
under LSM:
#
volencap rz3g
The rz3g disk has been configured for encapsulation.
The following example shuts down the system in order for the change to
take effect. After the system is rebooted,
/usr
will be under LSM.
#
shutdown -r now
The following commands show how the changes were made:
#
cat /etc/fstab | grep vol
/dev/vol/rootdg/vol-rz3g /usr ufs rw 1 2
#
mount
/dev/rz3a on / type ufs (rw)
/proc on /proc type procfs (rw)
/dev/vol/rootdg/vol-rz3g on /usr type ufs (rw)
#
volprint -htv
V NAME USETYPE KSTATE STATE LENGTH READPOL ...
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT ...
SD NAME PLEX PLOFFS DISKOFFS LENGTH DISK-NAME...

v vol-rz3g fsgen ENABLED ACTIVE 819200 SELECT ...
pl vol-rz3g-01 vol-rz3g ENABLED ACTIVE 819200 CONCAT ...
sd rz3g-01 vol-rz3g-01 0 0 819200 rz3g ...
#
The following example shows an alternative way to add a partition
containing data (for example, a UFS file system) under LSM by using the LSM
commands. Refer to
Chapter 3
and the
voldisk(8),
voldg(8),
volmake(8),
and
volume(8)
reference pages for more information.
The following command checks the disk label for UFS:
#
disklabel rz8
size offset fstype [fsize bsize cpg]
a: 131072 0 unused 1024 8192 # (Cyl. 0 - 164*)
b: 262144 131072 unused 1024 8192 # (Cyl. 164*- 492*)
c: 2050860 0 unused 1024 8192 # (Cyl. 0 - 2569)
d: 552548 393216 4.2BSD 1024 8192 16 # (Cyl. 492*- 1185*)
e: 552548 945764 unused 1024 8192 # (Cyl. 1185*- 1877*)
f: 552548 1498312 unused 1024 8192 # (Cyl. 1877*- 2569*)
g: 819200 393216 unused 1024 8192 # (Cyl. 492*- 1519*)
h: 838444 1212416 unused 1024 8192 # (Cyl. 1519*- 2569*)
The following command indicates that rz8d is not mounted:
#
mount
/dev/rz3a on / type ufs (rw)
/proc on /proc type procfs (rw)
/dev/vol/rootdg/vol-rz3g on /usr type ufs (rw)
The following command adds rz8d with no private region:
#
voldisk init rz8d type=nopriv
The following command adds rz8d to the
rootdg
disk group:
#
voldg adddisk rz8d
You can use the
volmake help
command whenever you need help remembering the options:
#
volmake help
The following command creates a subdisk:
#
volmake sd ov-sd rz8d,0,552548s
The following command creates the plex and associates it to the subdisk:
#
volmake plex ov-pl sd=ov-sd
The following command creates the volume and attaches the plex:
#
volmake vol ov-vol plex=ov-pl usetype=fsgen
The following command starts the volume:
#
volume start ov-vol
The following command checks the results:
#
volprint -ht ov-vol
DG NAME GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN PUBPATH
V NAME USETYPE KSTATE STATE LENGTH READPOL ...
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT ...
SD NAME PLEX PLOFFS DISKOFFS LENGTH DISK-NAME...

v ov-vol fsgen ENABLED ACTIVE 552548 ROUND ...
pl ov-pl ov-vol ENABLED ACTIVE 552548 CONCAT ...
sd ov-sd ov-pl 0 0 552548 rz8d ...
The following command mounts the volume:
#
mount /dev/vol/ov-vol /usr/OV
The following command can be used
to edit the
/etc/fstab
file and add an entry to automatically mount the volume:
#
vi /etc/fstab
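The entry added to /etc/fstab would look something like the following line. This is a sketch based on the field layout of the vol-rz3g entry shown earlier; the fsck pass number is an assumption, so adjust the fields for your system:

```
/dev/vol/rootdg/ov-vol  /usr/OV  ufs  rw  1  2
```

As with any fstab change, verify the entry by unmounting and remounting the file system before relying on it at boot time.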
The following examples show some different ways to create
LSM volumes. Refer to
Chapter 6
and the
voldisk(8),
voldg(8),
volmake(8),
and
volume(8)
reference pages for more information.
The following examples show how to use the LSM commands for a top-down approach to creating volumes. The first command shows how to create a 100 MB volume on disk01:
#
volassist make myvol1 100m disk01
The following command creates a volume anywhere but on disk01:
#
volassist make myvol2 100m !disk01
The following command creates a volume anywhere:
#
volassist make myvol3 1g
The following command displays the results of the configuration changes:
#
volprint -htv myvol1 myvol2 myvol3
V NAME USETYPE KSTATE STATE LENGTH READPOL ...
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT ...
SD NAME PLEX PLOFFS DISKOFFS LENGTH DISK-NAME...

v myvol1 fsgen ENABLED ACTIVE 204800 SELECT -
pl myvol1-01 myvol1 ENABLED ACTIVE 204800 CONCAT -
sd disk01-01 myvol1-01 0 0 204800 disk01

v myvol2 fsgen ENABLED ACTIVE 204800 SELECT -
pl myvol2-01 myvol2 ENABLED ACTIVE 204800 CONCAT -
sd disk02-01 myvol2-01 0 0 204800 disk02

v myvol3 fsgen ENABLED ACTIVE 2097152 SELECT -
pl myvol3-01 myvol3 ENABLED ACTIVE 2097152 CONCAT -
sd disk01-02 myvol3-01 0 204800 1845020 disk01
sd disk02-02 myvol3-01 1845020 204800 252132 disk02
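The LENGTH column in the volprint output is in 512-byte sectors (the disk label earlier shows bytes/sector: 512), so the sizes passed to volassist convert at 2048 sectors per megabyte. The following arithmetic reproduces the lengths shown above:

```shell
# 1 MB = 1048576 / 512 = 2048 sectors
echo $((100 * 2048))    # 100m -> 204800, the length shown for myvol1 and myvol2
echo $((1024 * 2048))   # 1g   -> 2097152, the length shown for myvol3
```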
The following command creates a UFS file system on the volume called myvol1:
#
newfs /dev/rvol/rootdg/myvol1 rz26l
The following command mounts the volume:
#
mount /dev/vol/rootdg/myvol1 /mnt8
The following examples show how to use LSM commands for a bottom-up approach to creating volumes. This method provides more control when setting up an LSM environment.
The following command looks for free space in the
rootdg
disk group:
#
voldg -g rootdg free
GROUP DISK DEVICE TAG OFFSET LENGTH FLAGS
rootdg rz10 rz10 rz10 0 2049820 -
rootdg rz12 rz12 rz12 0 2049820 -
rootdg disk02 rz19 rz19 456932 1592888 -
rootdg rz21 rz21 rz21 0 2049820 -
rootdg rz32 rz32 rz32 0 2049820 -
rootdg rz34 rz34 rz34 0 2049820 -
rootdg rz36 rz36 rz36 0 2049820 -
The following command creates a subdisk:
#
volmake sd v4-sd disk02,456932,100m
The following command creates a plex:
#
volmake plex v4-pl sd=v4-sd
The following command creates the volume and attaches the plex:
#
volmake vol v4 plex=v4-pl usetype=fsgen
The following command starts the volume:
#
volume start v4
The following command checks the results:
#
volprint -ht v4
DG NAME GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN PUBPATH
V NAME USETYPE KSTATE STATE LENGTH READPOL ...
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT ...
SD NAME PLEX PLOFFS DISKOFFS LENGTH DISK-NAME...

v v4 fsgen ENABLED ACTIVE 204800 ROUND -
pl v4-pl v4 ENABLED ACTIVE 204800 CONCAT -
sd v4-sd v4-pl 0 456932 204800 disk02
#
The following examples show some different ways to mirror LSM volumes.
Refer to
Chapter 7,
Chapter 15,
and the
volassist(8),
volmake(8),
volsd(8),
volplex(8),
and
volume(8)
reference pages for more information.
The commands in this section show how to use LSM commands for a top-down approach to mirroring LSM volumes. Note that this can be done while the volume is in use:
#
mount | grep mnt8
/dev/vol/myvol1 on /mnt8 type ufs (rw)
The following command mirrors myvol1 using any available disk:
#
volassist mirror myvol1
The following command mirrors the myvol2 volume on the rz12 disk:
#
volassist mirror myvol2 rz12
The following command creates a 50 MB mirrored volume:
#
volassist -U fsgen make v2_mirr 50m nmirror=2
#
volprint -h myvol1 myvol2 v2_mirr
TYPE NAME ASSOC KSTATE LENGTH COMMENT
vol myvol1 fsgen ENABLED 204800
plex myvol1-01 myvol1 ENABLED 204800
sd disk01-01 myvol1-01 - 204800
plex myvol1-02 myvol1 ENABLED 204800
sd disk02-04 myvol1-02 - 204800

vol myvol2 fsgen ENABLED 204800
plex myvol2-01 myvol2 ENABLED 204800
sd disk02-01 myvol2-01 - 204800
plex myvol2-02 myvol2 ENABLED 204800
sd rz12-02 myvol2-02 - 204800

vol v2_mirr fsgen ENABLED 102400
plex v2_mirr-01 v2_mirr ENABLED 102400
sd disk02-05 v2_mirr-01 - 102400
plex v2_mirr-02 v2_mirr ENABLED 102400
sd rz10-02 v2_mirr-02 - 102400
#
The following command creates a new AdvFS domain:
#
mkfdmn /dev/vol/rootdg/myvol1 dom1
The following command creates a new fileset:
#
mkfset dom1 fset1
The following command mounts the fileset:
#
mount -t advfs dom1#fset1 /mnt9
On systems that have the AdvFS Advanced Utilities package installed, the following command adds a second volume to the AdvFS domain:
#
addvol /dev/vol/rootdg/myvol2 dom1
The following series of commands demonstrates how to use LSM commands for a bottom-up approach to creating a new, mirrored volume.
The following command creates a subdisk:
#
volmake sd sd1 rz32,0,30m
The following command creates a plex and associates a subdisk with the plex:
#
volmake plex pl1 sd=sd1
The following command creates a volume:
#
volmake -U fsgen vol v_mir2 plex=pl1
The following command starts the volume:
#
volume start v_mir2
The following command creates the second subdisk (sd2):
#
volmake sd sd2 rz34,0,30m
The following command creates the second plex (pl2):
#
volmake plex pl2
The following command associates pl2 with sd2:
#
volsd assoc pl2 sd2
The following command attaches the plex pl2 to the volume v_mir2:
#
volplex att v_mir2 pl2
The following command displays the results of these commands:
#
volprint -ht v_mir2
DG NAME GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN PUBPATH
V NAME USETYPE KSTATE STATE LENGTH READPOL ...
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT ...
SD NAME PLEX PLOFFS DISKOFFS LENGTH DISK-NAME...

v v_mir2 fsgen ENABLED ACTIVE 61440 ROUND -
pl pl1 v_mir2 ENABLED ACTIVE 61440 CONCAT -
sd sd1 pl1 0 0 61440 rz32
pl pl2 v_mir2 ENABLED ACTIVE 61440 CONCAT -
sd sd2 pl2 0 0 61440 rz34
#
The following examples show different ways to stripe data with an
LSM volume. Refer to
Chapter 7
and the
volassist(8),
volmake(8),
volsd(8),
volplex(8),
and
volume(8)
reference pages for more information.
The following LSM commands demonstrate a top-down approach to striping data across LSM disks.
#
volassist make v1_stripe 64m usetype=fsgen layout=stripe \
nstripe=4 stwidth=8k
#
volprint -ht v1_stripe
DG NAME GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN PUBPATH
V NAME USETYPE KSTATE STATE LENGTH READPOL ...
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT ...
SD NAME PLEX PLOFFS DISKOFFS LENGTH DISK-NAME...

v v1_stripe fsgen ENABLED ACTIVE 131072 SELECT...
pl v1_stripe-01 v1_stripe ENABLED ACTIVE 131072 STRIPE 16
sd disk02-03 v1_stripe-01 0 661732 32768 disk02
sd rz10-01 v1_stripe-01 32768 0 32768 rz10
sd rz12-01 v1_stripe-01 65536 0 32768 rz12
sd rz32-01 v1_stripe-01 98304 61440 32768 rz32
#
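The numbers in the striped output above follow directly from the volassist arguments, with all lengths in 512-byte sectors: the 64m volume is 131072 sectors, the 8k stripe width is the 16 sectors shown in the STRIPE column, and each of the four stripe columns gets a 32768-sector subdisk. The arithmetic is:

```shell
vol=$((64 * 2048))       # 64m volume length in sectors -> 131072
echo $vol
echo $((8192 / 512))     # 8k stripe width in sectors -> 16 (the STRIPE 16 field)
echo $((vol / 4))        # per-column subdisk length with nstripe=4 -> 32768
```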
The following LSM commands demonstrate a bottom-up approach to creating a new striped volume. The following command creates a subdisk:
#
volmake sd s1-sd rz21,0,500m
The following command creates the second subdisk:
#
volmake sd s2-sd rz36,0,500m
The following command creates a plex:
#
volmake plex s-pl sd=s1-sd,s2-sd layout=stripe stwidth=16k
The following command creates the volume:
#
volmake -U gen vol my_fast_one plex=s-pl
The following command starts the volume:
#
volume start my_fast_one
The following command displays the results of the configuration changes:
#
volprint -ht my_fast_one
DG NAME GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN PUBPATH
V NAME... KSTATE STATE LENGTH READPOL PREFPLEX
PL NAME ... KSTATE STATE LENGTH LAYOUT ST-WIDTH MODE
SD NAME... PLOFFS DISKOFFS LENGTH DISK-NAME DEVICE

v my_fast_one fsgen ENABLED ACTIVE 2048000 ROUND...
pl s-pl my_fast_one ENABLED ACTIVE 2048000 STRIPE ...
sd s1-sd s-pl 0 0 1024000 rz21 ...
sd s2-sd s-pl 1024000 0 1024000 rz36 ...
The following commands create a UFS file system on the volume and mount it:
#
newfs /dev/vol/my_fast_one rz26l
#
mount /dev/vol/my_fast_one /fast1
#
df /fast1
Filesystem 512-blocks Used Avail Capacity...
/dev/vol/my_fast_one 1980986 2 1782884 0% ...
#
Use the LSM
voledit
command to set or change the owner, group, or mode of the
special device file for a volume.
Do not
use standard UNIX commands such as
chown,
chgrp,
or
chmod
to set or change LSM special device file attributes.
For example, the following
voledit
command changes
the user and group to
dba
and the mode to 0600 for the volume
vol_db
in disk group
dbgrp:
#
voledit -g dbgrp set user=dba group=dba mode=0600 vol_db
Refer to the
voledit(8)
reference page for further information.
When a disk is getting a large number of soft errors, you should move all subdisks on that disk to other disks. The free disk space in the disk group must be larger than the space in use on the disk being evacuated.
Enter the following command to determine the free space in the disk group:
#
voldg free
GROUP DISK DEVICE TAG OFFSET LENGTH FLAGS
rootdg rz10 rz10 rz10 0 32768 -
rootdg rz10 rz10 rz10 135168 1914652 -
rootdg rz12 rz12 rz12 0 32768 -
rootdg rz12 rz12 rz12 270336 1779484 -
rootdg disk02 rz19 rz19 661732 32768 -
rootdg disk02 rz19 rz19 1034468 1015352 -
rootdg rz21 rz21 rz21 1179648 870172 -
rootdg rz32 rz32 rz32 61440 1988380 -
rootdg rz34 rz34 rz34 61440 1988380 -
rootdg rz36 rz36 rz36 1159168 890652 -
Use the
volevac
command to move all volumes from a particular disk to another available
disk. The following example shows that the LSM disk
rz32
is used by volume
v_mir2:
#
volprint -htv v_mir2
V NAME USETYPE KSTATE STATE LENGTH READPOL ...
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT ...
SD NAME PLEX PLOFFS DISKOFFS LENGTH DISK-NAME...

v v_mir2 fsgen ENABLED ACTIVE 61440 ROUND -
pl pl1 v_mir2 ENABLED ACTIVE 61440 CONCAT -
sd sd1 pl1 0 0 61440 rz32
pl pl2 v_mir2 ENABLED ACTIVE 61440 CONCAT -
sd sd2 pl2 0 0 61440 rz34
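Before running volevac, you can confirm that some other disk in the group has a free extent large enough to hold the subdisk being moved. This sketch uses the numbers from the listings above: sd1 on rz32 is 61440 sectors, and the voldg free output earlier shows an 870172-sector free extent on rz21:

```shell
needed=61440       # length of subdisk sd1 on rz32 (from the volprint output)
free_rz21=870172   # free extent on rz21 (from the voldg free output)

if [ "$free_rz21" -ge "$needed" ]; then
    echo "rz21 has room for the evacuated subdisk"
fi
```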
To move data from
rz32,
enter the following command:
#
volevac rz32
This command can take a long time to finish.
The following command displays the results of the changes:
#
volprint -htv v_mir2
V NAME USETYPE KSTATE STATE LENGTH READPOL ...
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT ...
SD NAME PLEX PLOFFS DISKOFFS LENGTH DISK-NAME...

v v_mir2 fsgen ENABLED ACTIVE 61440 ROUND -
pl pl1 v_mir2 ENABLED ACTIVE 61440 CONCAT -
sd rz21-02 pl1 0 1056768 61440 rz21
pl pl2 v_mir2 ENABLED ACTIVE 61440 CONCAT -
sd sd2 pl2 0 0 61440 rz34
If a disk that was in use by LSM fails to restart or has other hardware problems, you can replace the disk with a new disk. The following sections describe the procedure you use to replace a disk. The procedure varies depending on whether the replacement disk has the same or a different physical unit number than the failed disk.
The examples in this section describe how to replace a disk that has a different unit number from that of the failed disk.
The example that follows is for disk
rz19,
which has hardware problems
and needs to be replaced. Disk
rz19
has been added to the
rootdg
disk group as
disk02.
Follow these steps:
1. Remove the failed disk, disk02, from the rootdg disk group. To do this, use the -k flag with the voldg command to keep the disk media records associated with disk02. The subdisk records associated with disk02 will continue to point to the disk media record. For example:
#
voldg -g rootdg -k rmdisk disk02
2. If a spare disk with a different unit number than the rz19 disk is available (for example, rz20), you can use the spare disk to replace the failed disk. Initialize the new rz20 disk using similar parameters to those used for rz19. For example:
#
voldisksetup -i rz20 privlen=1024 nconfig=1 nlogs=1
3. Add the new rz20 disk to the rootdg disk group. Use the -k flag to associate the new disk with the existing disk media records in the disk group:
#
voldg -g rootdg -k adddisk disk02=rz20
4. If the failed rz19 disk has any volumes with only one plex, you must restore data to the volumes from backup media. If the volumes using rz19 are mirrored, resynchronize the volumes by issuing the following command:
#
volrecover -sb disk02
If a spare disk is available but must replace the failed disk at the same unit number, follow these steps:
1. Remove the failed rz19 disk from LSM and initialize its replacement:
#
voldg -g rootdg -k rmdisk disk02
#
voldisk rm rz19
#
voldisksetup -i rz19 privlen=1024 nconfig=1 nlogs=1
2. Add the new rz19 disk to the rootdg disk group. Use the -k flag to associate the new disk with the existing disk media records in the disk group:
#
voldg -g rootdg -k adddisk disk02=rz19
3. Recover the volumes that use disk02:
#
volrecover -sb disk02
4. If rz19 was used by an unmirrored volume, restore the data from backup.
To remove LSM volumes, first ensure that they are not in use. If the LSM
volume is in use, the
voledit
command will fail. Refer to
voledit(8)
for more details.
To remove the volume called
v_mir2
from the
rootdg
disk group, enter the following command:
#
voledit -g rootdg -rf rm v_mir2
To remove a disk from LSM, first ensure that the disk is not in use by any subdisk. If the disk is in use, refer to Section C.9 for information about how to evacuate a disk.
Follow these steps to remove a disk from LSM:
1. Display the status of the LSM disks:
#
voldisk list
DEVICE TYPE DISK GROUP STATUS
rz10 sliced rz10 rootdg online
rz12 sliced rz12 rootdg online
rz17 sliced disk01 rootdg online
rz18g simple rz18g rootdg online
rz18h simple rz18h rootdg online
rz19 sliced disk02 rootdg online
rz21 sliced rz21 rootdg online
rz32 sliced disk03 rootdg online
rz34 sliced rz34 rootdg online
rz36 sliced rz36 rootdg online
rz8d nopriv rz8d rootdg online
2. Remove disk03 from the rootdg disk group. For example:
#
voldg -g rootdg rmdisk disk03
3. Use the voldisk command to note the change in status for rz32. For example:
#
voldisk list
DEVICE TYPE DISK GROUP STATUS
rz10 sliced rz10 rootdg online
rz12 sliced rz12 rootdg online
rz17 sliced disk01 rootdg online
rz18g simple rz18g rootdg online
rz18h simple rz18h rootdg online
rz19 sliced disk02 rootdg online
rz21 sliced rz21 rootdg online
rz32 sliced - - online
rz34 sliced rz34 rootdg online
rz36 sliced rz36 rootdg online
rz8d nopriv rz8d rootdg online
4. Remove rz32 from LSM:
#
voldisk rm rz32
5. Verify that rz32 is not among the disks listed:
#
voldisk list
DEVICE TYPE DISK GROUP STATUS
rz10 sliced rz10 rootdg online
rz12 sliced rz12 rootdg online
rz17 sliced disk01 rootdg online
rz18g simple rz18g rootdg online
rz18h simple rz18h rootdg online
rz19 sliced disk02 rootdg online
rz21 sliced rz21 rootdg online
rz34 sliced rz34 rootdg online
rz36 sliced rz36 rootdg online
rz8d nopriv rz8d rootdg online
To remove the last disk of a disk group other than
rootdg,
you must deport the disk group. For example:
#
voldg deport dg1
After deporting the disk group, you can use the voldisk rm command to remove the last disk in that disk group from LSM.
To remove the last disk in the
rootdg
disk group, you must shut down the LSM configuration
daemon,
vold.
To stop
vold,
enter the following command:
#
voldctl stop
After shutting down
vold,
the last disk is no longer in use by LSM.
You can use the volmake description file to save and re-create the volume configuration when disks are moved from one disk group to another.
For example, the following steps show how to move disks
rz16
and
rz17
(with disk names
disk01
and
disk02,
respectively) from disk group
dg1
to disk group
dg2.
The disks must have the same disk names in disk group
dg2
as
they had in disk group
dg1.
1. Find the volumes that use disks rz16 and rz17 in disk group dg1:
#
volprint -g dg1 -vn -e "aslist.aslist.sd_da_name==\"rz16\""
vol1
vol2
#
volprint -g dg1 -vn -e "aslist.aslist.sd_da_name==\"rz17\""
vol3
vol4
2. Create a volmake description file for the affected volumes:
#
volprint -g dg1 -mh vol1 vol2 vol3 vol4 > tmp1.df
This command creates a description file for volumes
vol1,
vol2,
vol3,
and
vol4
in disk group
dg1.
3. Before moving the disks to disk group dg2, deport the disk group dg1:
#
voldg deport dg1
If only some of the disks are being moved, remove the volumes from
disk group
dg1,
and then remove the disks from disk group
dg1.
#
voledit -g dg1 -rf rm vol1 vol2 vol3 vol4
#
voldg -g dg1 rmdisk disk01 disk02
4. Add the disks to disk group dg2. For example, the following command initializes a new disk group dg2 with disk rz16. The disk is named disk01, since this was its name in disk group dg1:
#
voldg init dg2 disk01=rz16
5. Add the disk rz17 to the disk group dg2:
#
voldg -g dg2 adddisk disk02=rz17
6. Re-create the volumes vol1, vol2, vol3, and vol4:
#
volmake -g dg2 -d tmp1.df
7. If necessary, adjust plex states with the volmend command. Then start the volumes in disk group dg2:
#
volume -g dg2 start vol1 vol2 vol3 vol4
Note
If a plex was in the STALE state, make sure that the plex state is set to STALE before starting the volume.
8. To use the volumes in disk group dg2, modify the /etc/fstab file or the /etc/fdmns directory to use the appropriate special device file.
To initialize LSM and encapsulate all of the partitions on
the boot disk,
run the
volencap
utility, using
the name of the
system boot disk
as an argument.
In the following example,
rz3
is the boot disk and has the following label:
# /dev/rrz3a:
type: SCSI
disk: RZ26
label:
flags:
bytes/sector: 512
sectors/track: 57
tracks/cylinder: 14
sectors/cylinder: 798
cylinders: 2570
sectors/unit: 2050860
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0 # milliseconds
track-to-track seek: 0 # milliseconds
drivedata: 0
8 partitions:
# size offset fstype [fsize bsize cpg]
a: 131072 0 AdvFS # (Cyl. 0 - 164*)
b: 262144 131072 swap # (Cyl. 164*- 492*)
c: 2050860 0 unused 0 0 # (Cyl. 0 - 2569)
d: 552548 393216 unused 0 0 # (Cyl. 492*- 1185*)
e: 552548 945764 unused 0 0 # (Cyl. 1185*- 1877*)
f: 552548 1498312 unused 0 0 # (Cyl. 1877*- 2569)
g: 819200 393216 AdvFS # (Cyl. 492*- 1519*)
h: 838444 1212416 unused 0 0 # (Cyl. 1519*- 2569)
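The geometry fields in this label are internally consistent, which is worth checking on any disk before encapsulating it: sectors/cylinder is sectors/track times tracks/cylinder, and sectors/unit is sectors/cylinder times the cylinder count. Using the values from the label above:

```shell
echo $((57 * 14))      # sectors/track * tracks/cylinder -> 798 sectors/cylinder
echo $((798 * 2570))   # sectors/cylinder * cylinders -> 2050860 sectors/unit
```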
Follow these steps:
1. Run the volencap utility with the name of the boot disk as an argument:
#
volencap rz3
Setting up encapsulation for rz3.
- Disk rz3 is the system boot disk and LSM is not initialized. Creating simple disk rz3d to initialize LSM and rootdg.
- Partition rz3a is the root partition which requires 2 passes to encapsulate and the temporary use of a free partition. Using partition rz3e for temporary root encapsulation.
- Creating nopriv disk for primary swap device rz3b.
- Creating nopriv disk for rz3g.

The following disks are queued up for encapsulation or use by LSM. You must reboot the system to perform the actual encapsulations.
rz3d rz3a rz3e rz3b rz3g
2. Reboot the system:
#
shutdown -r now
...
ADVFS: using 566 buffers containing 4.42 megabytes of memory
starting LSM
LSM: /sbin/swapdefault has been moved to /sbin/swapdefault.encap.
LSM: Rebooting system to initialize LSM.
syncing disks... 3 done
rebooting.... (transferring to monitor)
...

...
ADVFS: using 566 buffers containing 4.42 megabytes of memory
vm_swap_init: warning /sbin/swapdefault swap device not found
vm_swap_init: swap is set to lazy (over commitment) mode
starting LSM
LSM: Initializing rz3d.
LSM: Encapsulating first pass for root using rz3e.
LSM: Encapsulating primary swap device rz3b.
LSM: Encapsulating rz3g.
LSM:
LSM: The following disks were encapsulated successfully:
LSM: rz3b rz3g
LSM:
LSM: The following disks are queued for encapsulation at reboot:
LSM: rz3a

The system is rebooting for the following reason(s):
Second pass root encapsulation to move rootvol from rz3e to rz3a.
To enable swapvol.

syncing disks... 2 done
rebooting.... (transferring to monitor)
...

...
ADVFS: using 566 buffers containing 4.42 megabytes of memory
starting LSM
LSM: Moving root volume from rz3e to rz3a.
LSM: This may take a few minutes.
LSM:
LSM: The following disks were encapsulated successfully:
LSM: rz3a
Checking local filesystems
/sbin/ufs_fsck -p
...
The following listing shows the configuration that results from this procedure:
#
volprint -ht
DG NAME GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN PUBPATH
V NAME USETYPE KSTATE STATE LENGTH READPOL PREFPLEX
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT ST-WIDTH MODE
SD NAME PLEX PLOFFS DISKOFFS LENGTH DISK-NAME DEVICE

dg rootdg 821915478.1025.lsmtest

dm rz3a rz3a nopriv 0 131072 /dev/rrz3a
dm rz3b rz3b nopriv 0 261120 /dev/rrz3b
dm rz3d rz3d simple 1024 0 /dev/rrz3d
dm rz3g rz3g nopriv 0 819200 /dev/rrz3g

v rootvol root ENABLED ACTIVE 131072 ROUND -
pl rootvol-01 rootvol ENABLED ACTIVE 131072 CONCAT - RW
sd rz3a-01 rootvol-01 0 0 131072 rz3a rz3a

v swapvol swap ENABLED ACTIVE 261120 ROUND -
pl swapvol-01 swapvol ENABLED ACTIVE 261120 CONCAT - RW
sd rz3b-01 swapvol-01 0 0 261120 rz3b rz3b

v vol-rz3g fsgen ENABLED ACTIVE 819200 SELECT -
pl vol-rz3g-01 vol-rz3g ENABLED ACTIVE 819200 CONCAT - RW
sd rz3g-01 vol-rz3g-01 0 0 819200 rz3g rz3g
This example shows how to encapsulate the root and swap partitions when LSM is already initialized. The example is based on the LSM configuration shown in the following listing:
#
volprint -ht
DG NAME GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN PUBPATH
V NAME USETYPE KSTATE STATE LENGTH READPOL PREFPLEX
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT ST-WIDTH MODE
SD NAME PLEX PLOFFS DISKOFFS LENGTH DISK-NAME DEVICE
dg rootdg 821984014.1025.lsmtest
dm rz3d rz3d simple 1024 0 /dev/rrz3d
To encapsulate the root and swap partitions, run the
volencap
utility with the partition names as arguments. For example:
#
volencap rz3a rz3b
Setting up encapsulation for rz3a.
- Partition rz3a is the root partition which requires 2 passes to encapsulate and the temporary use of a free partition. Using partition rz3e for temporary root encapsulation.

Setting up encapsulation for rz3b.
- Creating nopriv disk for primary swap device rz3b.

The following disks are queued up for encapsulation or use by LSM. You must reboot the system to perform the actual encapsulations.
rz3a rz3e rz3b
#
shutdown -r now
...
ADVFS: using 566 buffers containing 4.42 megabytes of memory
vm_swap_init: warning /sbin/swapdefault swap device not found
vm_swap_init: swap is set to lazy (over commitment) mode
starting LSM
LSM: Encapsulating first pass for root using rz3e.
LSM: Encapsulating primary swap device rz3b.
LSM:
LSM: The following disks were encapsulated successfully:
LSM: rz3b
LSM:
LSM: The following disks are queued for encapsulation at reboot:
LSM: rz3a

The system is rebooting for the following reason(s):
Second pass root encapsulation to move rootvol from rz3e to rz3a.
To enable swapvol.

syncing disks... 2 done
rebooting.... (transferring to monitor)
...

...
ADVFS: using 566 buffers containing 4.42 megabytes of memory
starting LSM
LSM: Moving root volume from rz3e to rz3a.
LSM: This may take a few minutes.
LSM:
LSM: The following disks were encapsulated successfully:
LSM: rz3a
Checking local filesystems
/sbin/ufs_fsck -p
...
#
volprint -ht
DG NAME GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN PUBPATH
V NAME USETYPE KSTATE STATE LENGTH READPOL PREFPLEX
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT ST-WIDTH MODE
SD NAME PLEX PLOFFS DISKOFFS LENGTH DISK-NAME DEVICE

dg rootdg 821984014.1025.lsmtest

dm rz3a rz3a nopriv 0 131072 /dev/rrz3a
dm rz3b rz3b nopriv 0 261120 /dev/rrz3b
dm rz3d rz3d simple 1024 0 /dev/rrz3d

v rootvol root ENABLED ACTIVE 131072 ROUND -
pl rootvol-01 rootvol ENABLED ACTIVE 131072 CONCAT - RW
sd rz3a-01 rootvol-01 0 0 131072 rz3a rz3a

v swapvol swap ENABLED ACTIVE 261120 ROUND -
pl swapvol-01 swapvol ENABLED ACTIVE 261120 CONCAT - RW
sd rz3b-01 swapvol-01 0 0 261120 rz3b rz3b
Note
Although it is possible to encapsulate the root and swap partitions individually, encapsulating only root or swap is not a supported configuration.
This example shows how you can use the
volencap
utility to set up encapsulation for a complex configuration containing
a mix of disk, domain, and partition types.
The example uses the LSM setup shown in the following listing:
#
volprint -ht
DG NAME GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN PUBPATH
V NAME USETYPE KSTATE STATE LENGTH READPOL PREFPLEX
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT ST-WIDTH MODE
SD NAME PLEX PLOFFS DISKOFFS LENGTH DISK-NAME DEVICE
dg rootdg 821915478.1025.lsmtest
dm rz3d rz3d simple 1024 0 /dev/rrz3d
In this example,
/usr
is an AdvFS file system on
rz3g.
#
ls /etc/fdmns/usr_domain
rz3g@
The following LSM command will attempt to set up encapsulation for numerous disks and partitions as well as an AdvFS domain.
#
volencap rz3d usr_domain rz3h rz3g rz9a rz9b rz10 rz9 rz3
Setting up encapsulation for rz3d.
Cannot encapsulate rz3d since the following disks/partitions are already in use by LSM:
  rz3d
Setting up encapsulation for usr_domain.
- Creating nopriv disk for rz3g.
Setting up encapsulation for rz3h.
- Creating nopriv disk for rz3h.
Setting up encapsulation for rz3g.
Cannot encapsulate rz3g since the following disks/partitions are already setup for encapsulation:
  rz3g
Setting up encapsulation for rz9a.
- Creating nopriv disk for rz9a.
Setting up encapsulation for rz9b.
- Creating nopriv disk for rz9b.
Setting up encapsulation for rz10.
- Creating simple disk for rz10c.
Setting up encapsulation for rz9.
Cannot encapsulate rz9c since the following disks/partitions are already setup for encapsulation:
  rz9a rz9b
Setting up encapsulation for rz3.
Cannot encapsulate rz3 since the following disks/partitions are already in use by LSM:
  rz3d
The following disks are queued up for encapsulation or use by LSM.
You must reboot the system to perform the actual encapsulations.
  rz3g rz3h rz9a rz9b rz10c
When the system reboots, the
init
process executes
/sbin/vol-reconfig,
which performs the encapsulations. All the necessary system files
are updated (for example,
/etc/fstab
and
/etc/fdmns).
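Because the encapsulations happen only if /sbin/vol-reconfig is run at boot, it can be worth confirming that the hook is registered in /etc/inittab. The following sketch greps a sample inittab; the entry formats shown are hypothetical illustrations, not the exact Tru64 entries.

```shell
# Sketch: confirm that /sbin/vol-reconfig is registered in /etc/inittab.
# A sample file is used here so the example is self-contained; the
# entry formats below are illustrative assumptions.
cat > /tmp/inittab.sample <<'EOF'
lsm:23:wait:/sbin/lsmbstartup < /dev/console > /dev/console 2>&1
vol:23:wait:/sbin/vol-reconfig < /dev/console > /dev/console 2>&1
EOF
grep -c vol-reconfig /tmp/inittab.sample
```

On a real system you would grep /etc/inittab itself; a count of 0 would mean the encapsulation hook is missing.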
Note the following results of the encapsulation process:
Nopriv disks were created for the individual partitions
rz3g,
rz3h,
rz9a,
and
rz9b.
A simple disk was created for the entire disk
(rz10).
The command
volencap usr_domain
resulted in encapsulation of the partition,
rz3g.
This section shows how to mirror the root disk using the
volmirror
utility. The examples
assume
that the root disk has already been encapsulated
as described in
Section C.15.
The disk used as the mirror must not have any partitions in use.
The
volrootmir
utility will only mirror the root disk to an unused disk. For example:
#
volrootmir rz10
Some partitions on rz10 seem to be in use.
Reinitialize the disklabel before using rz10 for mirroring the root disk.
The following example shows how to initialize the disk label to use the disk as a root mirror.
#
disklabel -z rz10
#
disklabel -wr rz10 rz26
#
volrootmir rz10
Mirroring rootvol to rz10a.
Mirroring swapvol to rz10b.
In the following example, the
-a
option is used with
volrootmir
to mirror the entire root
disk to the target disk. This includes copying the disk partition map
and mirroring all volumes on the disk.
#
volrootmir -a rz10
Mirroring system disk rz3 to disk rz10.
This operation will destroy all contents on disk rz10.
The disk label from rz3 will be copied to rz10 and all volumes associated with rz3 will be mirrored.
Do you want to continue with this operation? (y or n) y
Initializing rz10.
Mirroring rootvol to rz10a.
Mirroring swapvol to rz10b.
Mirroring vol-rz3g to rz10g.
When mirroring the entire root disk, the target disk must be of the
same type as the root disk. If the disks differ,
volrootmir
prints an error and exits. For example:
#
volrootmir -a rz13
ERROR: disk rz3 is an RZ26 type device while disk rz13 is an RZ73 type device.
Both disks must be of the same type.
If you want to mirror only the root and swap partitions, the target disk can be a different type. For example:
#
volrootmir rz13
Mirroring rootvol to rz13a.
Mirroring swapvol to rz13b.
This example shows how to delete queued encapsulation requests.
First, display the queued requests:
#
volencap -s
The following disks are queued up for encapsulation or use by LSM.
You must reboot the system to perform the actual encapsulations.
  rz3g rz3h rz9a rz9b
To remove the encapsulation requests for specific partitions, use the following commands:
#
volencap -k rz3g rz9a
#
volencap -s
The following disks are queued up for encapsulation or use by LSM.
You must reboot the system to perform the actual encapsulations.
  rz3h rz9b
To remove the encapsulation requests for an entire disk, use the following commands (this example starts again from the original queue of rz3g, rz3h, rz9a, and rz9b):
#
volencap -k rz3
#
volencap -s
The following disks are queued up for encapsulation or use by LSM.
You must reboot the system to perform the actual encapsulations.
  rz9a rz9b
This example shows how to unencapsulate the system boot disk using the
volunroot
utility.
The following listing shows the configuration used in the example.
#
volprint -ht
DG NAME GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN PUBPATH
V NAME USETYPE KSTATE STATE LENGTH READPOL PREFPLEX
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT ST-WIDTH MODE
SD NAME PLEX PLOFFS DISKOFFS LENGTH DISK-NAME DEVICE

dg rootdg 821915478.1025.lsmtest

dm rz10a rz10a nopriv 0 131072 /dev/rrz10a
dm rz10b rz10b nopriv 0 261120 /dev/rrz10b
dm rz10d rz10d simple 1024 0 /dev/rrz10d
dm rz10g rz10g nopriv 0 819200 /dev/rrz10g
dm rz3a rz3a nopriv 0 131072 /dev/rrz3a
dm rz3b rz3b nopriv 0 261120 /dev/rrz3b
dm rz3d rz3d simple 1024 0 /dev/rrz3d
dm rz3g rz3g nopriv 0 819200 /dev/rrz3g

v rootvol root ENABLED ACTIVE 131072 ROUND -
pl rootvol-01 rootvol ENABLED ACTIVE 131072 CONCAT - RW
sd rz3a-01p rootvol-01 0 0 16 rz3a rz3a
sd rz3a-01 rootvol-01 16 16 131056 rz3a rz3a
pl rootvol-02 rootvol ENABLED ACTIVE 131072 CONCAT - RW
sd rz10a-01p rootvol-02 0 0 16 rz10a rz10a
sd rz10a-01 rootvol-02 16 16 131056 rz10a rz10a

v swapvol swap ENABLED ACTIVE 261120 ROUND -
pl swapvol-01 swapvol ENABLED ACTIVE 261120 CONCAT - RW
sd rz3b-01 swapvol-01 0 0 261120 rz3b rz3b
pl swapvol-02 swapvol ENABLED ACTIVE 261120 CONCAT - RW
sd rz10b-01 swapvol-02 0 0 261120 rz10b rz10b

v vol-rz3g fsgen ENABLED ACTIVE 819200 SELECT -
pl vol-rz3g-01 vol-rz3g ENABLED ACTIVE 819200 CONCAT - RW
sd rz3g-01 vol-rz3g-01 0 0 819200 rz3g rz3g
pl vol-rz3g-02 vol-rz3g ENABLED ACTIVE 819200 CONCAT - RW
sd rz10g-01 vol-rz3g-02 0 0 819200 rz10g rz10g
The
volunroot
utility will not unencapsulate volumes that do not map directly to a
partition or that are mirrored. For example:
#
volunroot
There are 2 plexes associated with volume rootvol.
rootvol should have only 1 plex to use volunroot.
The volunroot operation cannot proceed.
Please refer to volunroot(8).
To unencapsulate the boot disk, the mirrors must first be removed from the volumes. For example:
#
volplex dis rootvol-02 swapvol-02 vol-rz3g-02
#
voledit -rf rm rootvol-02 swapvol-02 vol-rz3g-02
#
voldg rmdisk rz10a rz10b rz10g rz10d
#
voldisk rm rz10a rz10b rz10g rz10d
#
volunroot
This operation will convert the following file systems on the system disk rz3 from LSM volumes to regular disk partitions:
  Replace volume rootvol with rz3a.
  Replace volume swapvol with rz3b.
This operation will require a system reboot.
If you choose to continue with this operation, your system files will be updated to discontinue the use of the above listed LSM volumes.
You must then reboot the system.
/sbin/vol-reconfig should be present in /etc/inittab to remove the named volumes during system reboot.
Do you wish to do this now ? (y or n) n
If the
volunroot
command is used with no arguments,
volunroot
unencapsulates only the
rootvol
and
swapvol
volumes.
If the
-a
argument is supplied,
volunroot
attempts to unencapsulate all
previously encapsulated volumes on the boot disk.
The
volunroot
utility modifies the necessary system files to remove the selected
volumes. Then
volunroot
creates a script that is run by
/sbin/vol-reconfig
during
init
processing. The script then deletes the selected volumes and
associated disks from LSM.
Note that
volunroot
shuts down to the boot prompt and does not automatically reboot the
system. This allows you to update the default boot device
if necessary.
#
volunroot -a
This operation will convert the following file systems on the system disk rz3 from LSM volumes to regular disk partitions:
  Replace volume rootvol with rz3a.
  Replace volume swapvol with rz3b.
  Replace volume vol-rz3g with rz3g.
This operation will require a system reboot.
If you choose to continue with this operation, your system files will be updated to discontinue the use of the above listed LSM volumes.
You must then reboot the system.
/sbin/vol-reconfig should be present in /etc/inittab to remove the named volumes during system reboot.
Do you wish to do this now ? (y or n) y
Changing rootvol in /etc/fdmns/root_domain to /dev/rz3a.
Removing 'lsm_rootdev_is_volume=' entry in /etc/sysconfigtab.
Changing /dev/vol/rootdg/swapvol in /etc/fstab to /dev/rz3b.
Removing 'lsm_swapdev_is_volume=' entry in /etc/sysconfigtab.
Changing vol-rz3g in /etc/fdmns/usr_domain to /dev/rz3g.
A shutdown is now required to complete the unencapsulation process.
Please shutdown before performing any additional LSM or disk reconfiguration.
When would you like to shutdown (in minutes) e.g. now, q, 1,2,3,4 (default: 2) ? now
Shutdown at 19:34 (in 0 minutes) [pid 874]
*** FINAL System shutdown message from root@lsmtest ***
System going down IMMEDIATELY ...
System shutdown time has arrived ...
System shutdown time has arrived
#
syncing disks... 2 done
CPU 0: Halting... (transferring to monitor)
?05 HLT INSTR PC= FFFFFC00.004401C0 PSL= 00000000.00000005
>>>b ...
...
ADVFS: using 566 buffers containing 4.42 megabytes of memory
starting LSM
LSM: Can't open device rz3a, device busy or inaccessible.
Checking local filesystems
/sbin/ufs_fsck -p
...
Notice the error message LSM prints when attempting to access device
rz3a.
This is expected behavior. The system has
mounted
rz3a
for root and LSM still recognizes
rz3a
as an LSM disk. When LSM starts up, it will
attempt to open
rz3a,
resulting in the error message. Once LSM has
started, it will remove
rz3a
from its configuration to complete the root
disk unencapsulation process. After this, the error message will not be
seen again.
#
volprint -ht
DG NAME GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN PUBPATH
V NAME USETYPE KSTATE STATE LENGTH READPOL PREFPLEX
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT ST-WIDTH MODE
SD NAME PLEX PLOFFS DISKOFFS LENGTH DISK-NAME DEVICE
dg rootdg 821915478.1025.lsmtest
dm rz3d rz3d simple 1024 0 /dev/rrz3d
To use disk partitions instead of LSM volumes for the
/usr
or
/var
file systems, follow these steps:
If the file system is AdvFS, change the link in the
/etc/fdmns/domain directory to use a disk partition instead
of an LSM volume. For example:
#
cd /etc/fdmns/usr_domain
#
ls -l
total 0
lrwxrwxrwx 1 root system 24 Dec 12 14:37 vol-rz3h -> /dev/vol/rootdg/vol-rz3h
#
rm -f vol-rz3h
#
ln -s /dev/rz10g rz10g
If the file system is UFS, change the entry
for the
/usr
file system in
/etc/fstab
from an LSM volume to a disk partition.
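As a sketch, the /etc/fstab change for this example might look like the following. The field layout is the standard fstab format and the device names come from the surrounding example; the mount options and pass numbers are illustrative assumptions.

```shell
# Hypothetical /etc/fstab entry for /usr, before and after the change.
#
# Before (LSM volume):
#   /dev/vol/rootdg/vol-rz3h  /usr  ufs  rw 1 2
#
# After (disk partition):
#   /dev/rz10g                /usr  ufs  rw 1 2
```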
#
/sbin/lsmbstartup
#
volume stop vol-rz3h
#
voledit -r rm vol-rz3h
#
voldg rmdisk rz10g rz11g
#
voldisk rm rz10g rz11g
Digital recommends that you back up
the current LSM configuration on a regular basis, using
the
volsave
utility. You can use the default backup directory
(/usr/var/lsm/db),
or specify a location of your choice.
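Because regular backups are recommended, the volsave command could be scheduled from cron. The following crontab entry is a hypothetical sketch; the volsave path, the schedule, and the log file location are all assumptions for illustration.

```shell
# Run volsave every Sunday at 02:00 and append its output to a log.
# The /usr/sbin path, schedule, and log file are illustrative assumptions.
0 2 * * 0 /usr/sbin/volsave >> /var/adm/volsave.log 2>&1
```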
To create a backup copy of the current
LSM configuration using the default backup directory, enter
the
volsave
command with no options. Note that the backslash in the following example is for line continuation and is not in the actual display.
#
volsave
LSM configuration being saved to \
/usr/var/lsm/db/LSM.19951226203620.skylark
volsave does not save configuration for volumes used for root, swap, /usr or /var.
LSM configuration for following system disks not saved:
  rz8a rz8b
LSM Configuration saved successfully.
#
cd /usr/var/lsm/db/LSM.19951226203620.skylark
#
ls
dg1.d     header    volboot
dg2.d     rootdg.d  voldisk.list
In this example, the
volsave
utility created the following files and directories:
A directory,
LSM.19951226203620.skylark,
containing the
header,
volboot,
and
voldisk.list
description files.
A diskgroup.d subdirectory for each of the system's three disk groups,
dg1,
dg2,
and
rootdg.
An
allvol.DF
file in each of the
diskgroup.d subdirectories. This file is a
volmake
description file for all volumes, plexes, and subdisks in that disk group.
Note that
volsave
does not save volumes associated with the root,
swap,
/usr
and
/var
file systems.
After the
rootdg
disk group
is restored, the partitions that are in use on the system disk
will have to be reencapsulated using the procedure described in
Chapter 4.
To save the LSM configuration in a timestamped subdirectory in a
directory other than
/usr/var/lsm/db,
use the
following command syntax:
#
volsave -d /usr/var/
dirname
/LSM.%date
For example, the following command saves the LSM configuration in the
/usr/var/config
subdirectory. Note that the backslash in the following example is for line continuation and is not in the actual display.
#
volsave -d /usr/var/config/LSM.%date
LSM configuration being saved to \
/usr/var/config/LSM.19951226203658
.
.
.
LSM Configuration saved successfully.
To save an LSM configuration to a specific directory, use the following command syntax:
#
volsave -d
dirname
For example, the following command saves the LSM configuration in the
/usr/var/LSM.config1
subdirectory:
#
volsave -d /usr/var/LSM.config1
LSM configuration being saved to /usr/var/LSM.config1
.
.
.
LSM Configuration saved successfully.
To list the LSM configuration from the last timestamped subdirectory in
/usr/var/lsm/db,
use the following command:
#
volrestore -l
To list the LSM configuration in any other directory use the
-l
and
-d
options
with
volrestore
as shown below:
#
volrestore -l -d /usr/var/config/LSM.19951226203658.skylark
or
#
volrestore -l -d /usr/var/LSM.config1
The
volrestore
utility lists the LSM
configuration in a format similar to the output of
volprint -htA.
For example:
#
volrestore -l
LSM Configuration Save Utility Version 1.
Configuration Information stored in directory /usr/var/lsm/db/LSM.19951226203620.skylark
Created at Tue Dec 26 20:36:30 EST 1995 on HOST skylark
LSM Configuration for Diskgroup dg1.
Working .
dm rz10 rz10 sliced 512 2050332 /dev/rz10g
dm rz11g rz11g simple 512 818688 /dev/rz11g
dm rz11h rz11h nopriv 0 838444 /dev/rz11h

v vol1 fsgen ENABLED ACTIVE 20480 SELECT ...
pl vol1-01 vol1 ENABLED ACTIVE 20480 CONCAT ...
sd rz11g-02 vol1-01 0 20480 20480 rz11g ...
pl vol1-02 vol1 ENABLED ACTIVE 20480 CONCAT ...
sd rz10-02 vol1-02 0 20480 20480 rz10 ...

v vol2 fsgen ENABLED ACTIVE 20480 SELECT ...
pl vol2-01 vol2 ENABLED ACTIVE 20480 CONCAT ...
sd rz11g-01 vol2-01 0 0 20480 rz11g ...
pl vol2-02 vol2 ENABLED ACTIVE 20480 CONCAT ...
sd rz10-01 vol2-02 0 0 20480 rz10 ...

v vol3 fsgen ENABLED ACTIVE 20480 SELECT ...
pl vol3-01 vol3 ENABLED ACTIVE 20480 CONCAT ...
sd rz11g-03 vol3-01 0 40960 20480 rz11g ...
LSM Configuration for Diskgroup dg2.
Working .
dm rz11b rz11b simple 128 262016 /dev/rz11b
dm rz9 rz9 sliced 512 2050332 /dev/rz9g

v vol1 fsgen ENABLED ACTIVE 100 SELECT ...
pl vol1-01 vol1 ENABLED ACTIVE 100 CONCAT ...
sd rz11b-01 vol1-01 0 0 100 rz11b ...

v vol6 fsgen ENABLED ACTIVE 100 SELECT ...
pl vol6-01 vol6 ENABLED ACTIVE 100 CONCAT ...
sd rz11b-06 vol6-01 0 500 100 rz11b ...
pl vol6-02 vol6 ENABLED ACTIVE 100 CONCAT ...
sd rz9-02 vol6-02 0 100 100 rz9 ...

v vol9 fsgen ENABLED ACTIVE 100 SELECT ...
pl vol9-01 vol9 ENABLED ACTIVE 100 CONCAT ...
sd rz11b-09 vol9-01 0 800 100 rz11b ...
pl vol9-02 vol9 ENABLED ACTIVE 100 CONCAT ...
sd rz9-01 vol9-02 0 0 100 rz9 ...
LSM Configuration for Diskgroup rootdg.
Working .
dm rz12 rz12 sliced 512 2050332 /dev/rz12g
dm rz8a rz8a nopriv 0 131072 /dev/rz8a
dm rz8b rz8b nopriv 0 262144 /dev/rz8b
dm disk01 rz8h simple 512 837932 /dev/rz8h

v rootvol root ENABLED ACTIVE 131072 ROUND ...
pl rootvol-01 rootvol ENABLED ACTIVE 131072 CONCAT ...
sd rz8a-01 rootvol-01 0 0 131072 rz8a ...

v swapvol swap ENABLED ACTIVE 262144 ROUND ...
pl swapvol-01 swapvol ENABLED ACTIVE 262144 CONCAT ...
sd rz8b-01 swapvol-01 0 0 262144 rz8b ...

v vol1 fsgen ENABLED ACTIVE 20480 SELECT ...
pl vol1-01 vol1 ENABLED ACTIVE 20480 CONCAT ...
sd disk01-01 vol1-01 0 0 20480 disk01 ...
pl vol1-02 vol1 ENABLED ACTIVE 20480 CONCAT ...
sd rz12-01 vol1-02 0 0 20480 rz12 ...

v vol2 fsgen ENABLED ACTIVE 20480 SELECT ...
pl vol2-01 vol2 ENABLED ACTIVE 20480 CONCAT ...
sd disk01-02 vol2-01 0 20480 20480 disk01 ...
pl vol2-02 vol2 ENABLED ACTIVE 20480 CONCAT ...
sd rz12-02 vol2-02 0 20480 20480 rz12 ...
To restore an LSM configuration, you first need to determine the
location of the most recent copy of the LSM configuration that was
saved using
volsave.
If the configuration was saved in the default directory
(/usr/var/lsm/db),
volrestore
automatically retrieves the last timestamped subdirectory in that
directory.
If you used the
-d
option to specify another directory during the
volsave
operation,
you must use the
-d
option to specify that same directory to
volrestore.
To restore a specific disk group, enter the
volrestore
command with the
-g
option. The
volrestore
utility will attempt
to reimport the disk group. If the import operation succeeds,
any volumes that do not exist will be re-created.
If the import operation fails,
volrestore
re-creates the disk group.
The following example shows a successful import operation of disk group
dg1.
#
volrestore -g dg1
Using LSM configuration from /usr/var/lsm/db/LSM.19951226203620.skylark
Created at Tue Dec 26 20:36:30 EST 1995 on HOST skylark
Would you like to continue ? [y,n,q,?] (default: n)
y
Working .
Restoring dg1
vol1 in diskgroup dg1 already exists. (Skipping ..)
vol2 in diskgroup dg1 already exists. (Skipping ..)
vol3 in diskgroup dg1 already exists. (Skipping ..)
The following example shows what happens when the disk group import operation fails:
#
volrestore -g dg1
Using LSM configuration from /usr/var/lsm/db/LSM.19951226203620.skylark
Created at Tue Dec 26 20:36:30 EST 1995 on HOST skylark
Would you like to continue ? [y,n,q,?] (default: n)
y
+ voldg init dg1 rz10=rz10
Working .
Restoring dg1
Checking vol1
Checking vol2
Checking vol3
The following volumes in diskgroup dg1 cannot be started. Refer to LSM documentation on how to set the plex states before starting the volumes.
vol1 vol2 vol3
The
volrestore
utility creates all volumes and plexes in the disk group in
the DISABLED and EMPTY state.
If a volume is mirrored, set any plex that contains outdated data to the
STALE state before starting the volume.
Take care to set the plex state appropriately, since the plex
state can change between the time the LSM configuration was
saved using
volsave
and the time when the configuration is restored.
For example, in the configuration listed below, if disk
rz10
had a failure just prior to the restoration of the LSM
configuration using
volrestore,
you would set the plexes
vol1-02
and
vol2-02
to the STALE state before starting the volumes
vol1
and
vol2.
#
volprint -ht -g dg1
DG NAME GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN PUBPATH
V NAME USETYPE KSTATE STATE LENGTH READPOL ...
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT ...
SD NAME PLEX PLOFFS DISKOFFS LENGTH DISK-NAME...

dg dg1 820028958.80337.lsm

dm rz10 rz10 sliced 512 2050332 /dev/rrz10g
dm rz11g rz11g simple 512 818688 /dev/rrz11g
dm rz11h rz11h nopriv 0 838444 /dev/rrz11h

v vol1 fsgen DISABLED EMPTY 20480 SELECT ...
pl vol1-01 vol1 DISABLED EMPTY 20480 CONCAT ...
sd rz11g-02 vol1-01 0 20480 20480 rz11g ...
pl vol1-02 vol1 DISABLED EMPTY 20480 CONCAT ...
sd rz10-02 vol1-02 0 20480 20480 rz10 ...

v vol2 fsgen DISABLED EMPTY 20480 SELECT ...
pl vol2-01 vol2 DISABLED EMPTY 20480 CONCAT ...
sd rz11g-01 vol2-01 0 0 20480 rz11g ...
pl vol2-02 vol2 DISABLED EMPTY 20480 CONCAT ...
sd rz10-01 vol2-02 0 0 20480 rz10 ...

v vol3 fsgen DISABLED EMPTY 20480 SELECT ...
pl vol3-01 vol3 DISABLED EMPTY 20480 CONCAT ...
sd rz11g-03 vol3-01 0 40960 20480 rz11g ...
#
volume -g dg1 init clean vol1 vol1-01
#
volume -g dg1 init clean vol2 vol2-01
#
volprint -ht -g dg1
DG NAME GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN PUBPATH
V NAME USETYPE KSTATE STATE LENGTH READPOL ...
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT ...
SD NAME PLEX PLOFFS DISKOFFS LENGTH DISK-NAME...

dg dg1 820028958.80337.lsm

dm rz10 rz10 sliced 512 2050332 /dev/rrz10g
dm rz11g rz11g simple 512 818688 /dev/rrz11g
dm rz11h rz11h nopriv 0 838444 /dev/rrz11h

v vol1 fsgen DISABLED CLEAN 20480 SELECT ...
pl vol1-01 vol1 DISABLED CLEAN 20480 CONCAT ...
sd rz11g-02 vol1-01 0 20480 20480 rz11g ...
pl vol1-02 vol1 DISABLED STALE 20480 CONCAT ...
sd rz10-02 vol1-02 0 20480 20480 rz10 ...

v vol2 fsgen DISABLED CLEAN 20480 SELECT ...
pl vol2-01 vol2 DISABLED CLEAN 20480 CONCAT ...
sd rz11g-01 vol2-01 0 0 20480 rz11g ...
pl vol2-02 vol2 DISABLED STALE 20480 CONCAT ...
sd rz10-01 vol2-02 0 0 20480 rz10 ...

v vol3 fsgen DISABLED EMPTY 20480 SELECT ...
pl vol3-01 vol3 DISABLED EMPTY 20480 CONCAT ...
sd rz11g-03 vol3-01 0 40960 20480 rz11g ...
Once the plex states for mirrored volumes have been set appropriately, the volumes can be started. Starting the volumes will resynchronize the stale plexes in the volume. For example:
#
volume -g dg1 start vol1
#
volume -g dg1 start vol2
#
volume -g dg1 start vol3
#
volprint -ht
DG NAME GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN PUBPATH
V NAME USETYPE KSTATE STATE LENGTH READPOL ...
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT ...
SD NAME PLEX PLOFFS DISKOFFS LENGTH DISK-NAME...

dg rootdg 818201657.1025.lsm

dm disk01 rz8h simple 512 837932 /dev/rrz8h
dm rz12 rz12 sliced 512 2050332 /dev/rrz12g
dm rz8a rz8a nopriv 0 131072 /dev/rrz8a
dm rz8b rz8b nopriv 0 262144 /dev/rrz8b

v rootvol root ENABLED ACTIVE 131072 ROUND ...
pl rootvol-01 rootvol ENABLED ACTIVE 131072 CONCAT ...
sd rz8a-01 rootvol-01 0 0 131072 rz8a ...

v swapvol swap ENABLED ACTIVE 262144 ROUND ...
pl swapvol-01 swapvol ENABLED ACTIVE 262144 CONCAT ...
sd rz8b-01 swapvol-01 0 0 262144 rz8b ...

v vol1 fsgen ENABLED ACTIVE 20480 SELECT ...
pl vol1-01 vol1 ENABLED ACTIVE 20480 CONCAT ...
sd disk01-01 vol1-01 0 0 20480 disk01 ...
pl vol1-02 vol1 ENABLED ACTIVE 20480 CONCAT ...
sd rz12-01 vol1-02 0 0 20480 rz12 ...

v vol2 fsgen ENABLED ACTIVE 20480 SELECT ...
pl vol2-01 vol2 ENABLED ACTIVE 20480 CONCAT ...
sd disk01-02 vol2-01 0 20480 20480 disk01 ...
pl vol2-02 vol2 ENABLED ACTIVE 20480 CONCAT ...
sd rz12-02 vol2-02 0 20480 20480 rz12 ...
If a volume is deleted by mistake and needs to be re-created,
you can use the
volrestore
utility to re-create the volume configuration.
To re-create a specific volume, use the
-g
and
-v
options with
volrestore.
For example, to re-create volume
vol2
you would use the following command:
#
volrestore -g dg1 -v vol2
Using LSM configuration from /usr/var/lsm/db/LSM.19951226203620.skylark
Created at Tue Dec 26 20:36:30 EST 1995 on HOST skylark
Would you like to continue ? [y,n,q,?] (default: n)
y
Working .
Restoring dg1
Checking vol2
The following volumes in diskgroup dg1 cannot be started. Refer to LSM documentation on how to set the plex states before starting the volumes.
vol2
If both plexes of the volume contain valid data, they can be set to the ACTIVE state and no plex recovery will be carried out.
For example:
#
volume -g dg1 init active vol2
#
volprint -ht -g dg1
DG NAME GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN PUBPATH
V NAME USETYPE KSTATE STATE LENGTH READPOL ...
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT ...
SD NAME PLEX PLOFFS DISKOFFS LENGTH DISK-NAME...

dg dg1 820028958.80337.lsm

dm rz10 rz10 sliced 512 2050332 /dev/rrz10g
dm rz11g rz11g simple 512 818688 /dev/rrz11g
dm rz11h rz11h nopriv 0 838444 /dev/rrz11h

v vol1 fsgen ENABLED ACTIVE 20480 SELECT ...
pl vol1-01 vol1 ENABLED ACTIVE 20480 CONCAT ...
sd rz11g-02 vol1-01 0 20480 20480 rz11g ...
pl vol1-02 vol1 ENABLED ACTIVE 20480 CONCAT ...
sd rz10-02 vol1-02 0 20480 20480 rz10 ...

v vol2 fsgen ENABLED ACTIVE 20480 SELECT ...
pl vol2-01 vol2 ENABLED ACTIVE 20480 CONCAT ...
sd rz11g-01 vol2-01 0 0 20480 rz11g ...
pl vol2-02 vol2 ENABLED ACTIVE 20480 CONCAT ...
sd rz10-01 vol2-02 0 0 20480 rz10 ...

v vol3 fsgen ENABLED ACTIVE 20480 SELECT ...
pl vol3-01 vol3 ENABLED ACTIVE 20480 CONCAT ...
sd rz11g-03 vol3-01 0 40960 20480 rz11g ...
#
volinfo -g dg1
vol1 fsgen Started
vol3 fsgen Started
vol2 fsgen Started
Volumes can also be re-created by specifying just the
-g
option
with
volrestore.
For example:
#
volrestore -g dg1
Using LSM configuration from /usr/var/lsm/db/LSM.19951226203620.skylark
Created at Tue Dec 26 20:36:30 EST 1995 on HOST skylark
Would you like to continue ? [y,n,q,?] (default: n)
y
Working .
Restoring dg1
vol1 in diskgroup dg1 already exists. (Skipping ..)
Checking vol2
vol3 in diskgroup dg1 already exists. (Skipping ..)
The following volumes in diskgroup dg1 cannot be started. Refer to LSM documentation on how to set the plex states before starting the volumes.
vol2
After setting the plex states appropriately,
restart the re-created volume
vol2.
To restore the
rootdg
disk group configuration, use the following
command:
#
volrestore -g rootdg
Using LSM configuration from /usr/var/lsm/db/LSM.19951226203620.skylark
Created at Tue Dec 26 20:36:30 EST 1995 on HOST skylark
Would you like to continue ? [y,n,q,?] (default: n)
y
Disk rz8a is in use for a system (root, /usr, /var) volume.
volrestore will not restore disks and volumes used for root, swap, /usr, or /var.
Refer to volrestore(8).
Disk rz8b is in use for a system (root, /usr, /var) volume.
volrestore will not restore disks and volumes used for root, swap, /usr, or /var.
Refer to volrestore(8).
Working .
Restoring rootdg
Checking vol1
Checking vol2
The following volumes in diskgroup rootdg cannot be started. Refer to LSM documentation on how to set the plex states before starting the volumes.
vol1 vol2
In this example, you would need to reencapsulate the system disk
rz8
to use LSM volumes.
Also, you would need to
set the plex states appropriately for
volumes
vol1
and
vol2
in the
rootdg
disk group and then restart the volumes.
The
volrestore
utility fails if a disk is unavailable or if an LSM object name is
already in use, causing a conflict between
the current configuration and the saved configuration.
When
volrestore
encounters a failure in restoring a disk group, it
backs out the changes made for that disk group and proceeds with the
next disk group that needs to be restored.
In the following example,
volrestore
fails because disk
rz10
in disk group
dg1
cannot be initialized.
#
volrestore -g dg1
Using LSM configuration from /usr/var/lsm/db/LSM.19951226203620.skylark
Created at Tue Dec 26 20:36:30 EST 1995 on HOST skylark
Would you like to continue ? [y,n,q,?] (default: n)
y
Initializing disk rz10 failed.
voldisk: Device rz10: define failed: Device path not valid
Quitting ....
To override the failure, use the
-b
option with
volrestore.
This option specifies the "best" possible configuration
despite the failures. All failures
are reported, as shown below:
#
volrestore -b -g dg1
Using LSM configuration from /usr/var/lsm/db/LSM.19951226203620.skylark
Created at Tue Dec 26 20:36:30 EST 1995 on HOST skylark
Would you like to continue ? [y,n,q,?] (default: n)
y
Initializing disk rz10 failed.
voldisk: Device rz10: define failed: Device path not valid
Working .
Restoring dg1
vol1 in diskgroup dg1 could not be restored because of the following errors:
volmake: Failed to obtain locks: rz10: no such object in the configuration

The configuration of vol1 in /usr/var/lsm/db/LSM.19951226203620 is

v vol1 fsgen ENABLED ACTIVE 20480 SELECT ...
pl vol1-01 vol1 ENABLED ACTIVE 20480 CONCAT ...
sd rz11g-02 vol1-01 0 20480 20480 rz11g ...
pl vol1-02 vol1 ENABLED ACTIVE 20480 CONCAT ...
sd rz10-02 vol1-02 0 20480 20480 rz10 ...

vol2 in diskgroup dg1 could not be restored because of the following errors:
volmake: Failed to obtain locks: rz10: no such object in the configuration

The configuration of vol2 in /usr/var/lsm/db/LSM.19951226203620 is

v vol2 fsgen ENABLED ACTIVE 20480 SELECT ...
pl vol2-01 vol2 ENABLED ACTIVE 20480 CONCAT ...
sd rz11g-01 vol2-01 0 0 20480 rz11g ...
pl vol2-02 vol2 ENABLED ACTIVE 20480 CONCAT ...
sd rz10-01 vol2-02 0 0 20480 rz10 ...
vol3 in diskgroup dg1 already exists. (Skipping ..)
The following volumes in diskgroup dg1 cannot be started. Refer to LSM documentation on how to set the plex states before starting the volumes.
vol3
In this example, volumes
vol1
and
vol2
could not be restored because the
disk
that these volumes use
(rz10)
was unavailable.
To restore these volumes, the LSM configuration that was saved
using
volsave
needs to be edited.
See
Section C.26.3
for further information.
In the following example,
volrestore
fails because
plex
vol3-01
is associated both with volume
vol3
in the
saved LSM configuration
and with volume
vol2
in the current configuration.
#
volrestore -g dg1 -v vol3
Using LSM configuration from /usr/var/lsm/db/LSM.19951226203620.skylark
Created at Tue Dec 26 20:36:30 EST 1995 on HOST skylark
Would you like to continue ? [y,n,q,?] (default: n)
y
Working .
Restoring dg1
vol3 in diskgroup dg1 could not be restored because of the following errors:
volmake: Plex vol3-01 already exists
volmake: Error associating plex vol3-01 with vol3: Record is associated
The configuration of vol3 in /usr/var/lsm/db/LSM.19951226203620.skylark is
v vol3 fsgen ENABLED ACTIVE 20480 SELECT ...
pl vol3-01 vol3 ENABLED ACTIVE 20480 CONCAT ...
sd rz11g-03 vol3-01 0 40960 20480 rz11g ...
Conflicts such as the one shown in this example
occur when LSM configuration changes are made after saving the LSM
configuration
with
volsave.
To resolve such conflicts, you can either change the existing LSM configuration or edit the saved LSM configuration.
In the example in
Section C.26.1,
volumes
vol1
and
vol2
were not restored even
though disk
rz11g
was still available. To restore these volumes with
the one plex on
rz11g,
the LSM configuration in
/usr/var/lsm/db/LSM.19951226203620.skylark
must be edited, as follows:
Using a text editor such as
vi,
edit the
allvol.DF
file in
/usr/var/lsm/db/LSM.19951226203620.skylark/dg1.d
to remove the plex
vol1-02
from the description of volume
vol1,
remove the plex
vol2-02
from the description of volume
vol2,
and remove the definitions of the plexes
vol1-02
and
vol2-02
and the subdisks associated with them.
Then rerun the
volrestore
command with the
-f
option. For example:
#
volrestore -b -f -g dg1 -v vol1 vol2
You must use the
-f
option with
volrestore
to override checksum validation
because editing the
allvol.DF
file causes checksum validation to fail.
An LSM configuration created on one system can be replicated on other systems.
Any volume can be replicated except
volumes used for the root,
/usr,
and
/var
file systems and for the primary swap volume.
The disk partitions
used for root,
/usr,
/var,
and primary swap must be encapsulated to LSM
volumes as described in
Chapter 4.
Follow these steps to replicate an LSM configuration:
Save the LSM configuration on the first system using
volsave.
For example:
#
volsave -d /usr/var/LSMCONFIG1
LSM configuration being saved to /usr/var/LSMCONFIG1
volsave does not save configuration for volumes used for root, swap, /usr or /var.
LSM configuration for following system disks not saved:
  rz8a rz8b
LSM Configuration saved successfully.
Copy the directory
/usr/var/LSMCONFIG1
to the second system. The
voldisk.list
file in this directory has a description of all disks
used by LSM. If the physical disk names are different,
the
voldisk.list
file will be different.
The
voldisk.list
file looks as follows:
#
cd /usr/var/LSMCONFIG1
#
cat voldisk.list
DEVICE DISK TYPE GROUP PRIVLEN NCONFIG CONFIGLEN NLOG LOGLEN PUBLEN PUBPATH SYSTEMDISK
rz11b rz11b simple dg2 128 2 31 2 4 262016 /dev/rz11b NO
rz12 rz12 sliced rootdg 512 0 0 0 0 2050332 /dev/rz12g NO
rz8a rz8a nopriv rootdg 131072 /dev/rz8a YES
rz8b rz8b nopriv rootdg 262144 /dev/rz8b YES
rz8h disk01 simple rootdg 512 2 173 2 26 837932 /dev/rz8h...
rz9 rz9 sliced dg2 512 2 173 2 26 2050332 /dev/rz9g NO
In this example, the disks on the second system are
rz11,
rz12,
rz24,
and
rz25.
Edit the
voldisk.list
file on the second system
to change the physical device name.
Do not change the disk media name.
After editing, the
voldisk.list
file looks like this:
#
cat voldisk.list
DEVICE DISK TYPE GROUP PRIVLEN NCONFIG CONFIGLEN NLOG LOGLEN PUBLEN PUBPATH SYSTEMDISK
rz11b rz11b simple dg2 128 2 31 2 4 262016 /dev/rz11b NO
rz12 rz12 sliced rootdg 512 0 0 0 0 2050332 /dev/rz12g NO
rz24a rz8a nopriv rootdg 131072 /dev/rz8a YES
rz24b rz8b nopriv rootdg 262144 /dev/rz8b YES
rz24h disk01 simple rootdg 512 2 173 2 26 837932 /dev/rz8h...
rz25 rz9 sliced dg2 512 2 173 2 26 2050332 /dev/rz9g NO
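Device-name edits like these can also be scripted. The following sketch applies the substitutions from this example with sed, rewriting only the leading DEVICE field so the DISK (media name) column stays unchanged, as required. The temporary file and the exact rename pairs are illustrative.

```shell
# Sample voldisk.list lines (taken from this example) to be edited.
cat > /tmp/voldisk.list <<'EOF'
rz8a rz8a nopriv rootdg 131072 /dev/rz8a YES
rz9 rz9 sliced dg2 512 2 173 2 26 2050332 /dev/rz9g NO
EOF
# Anchor each substitution at the start of the line so only the
# DEVICE column is renamed; the second (media name) column is kept.
sed -e 's/^rz8a /rz24a /' -e 's/^rz8b /rz24b /' \
    -e 's/^rz8h /rz24h /' -e 's/^rz9 /rz25 /' /tmp/voldisk.list
```

This prints the rz8a line with its DEVICE field changed to rz24a and the rz9 line changed to rz25, while both media names remain rz8a and rz9.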
Run the
volrestore
command with the
-f
option to override checksum validation, as follows:
#
volrestore -f -d /usr/var/LSMCONFIG1
Using LSM configuration from /usr/var/LSMCONFIG1
Created at Wed Dec 27 20:23:29 EST 1995 on HOST skylark
Would you like to continue ? [y,n,q,?] (default: n)
y
/etc/vol/volboot does not exist. To restart LSM this file is required.
Restore saved copy of /etc/vol/volboot? [y,n,q,?] (default: y)
y
System does not have a valid rootdg configuration.
Would you like to re-create rootdg from LSM description set in /usr/var/LSMCONFIG1 ??
Would you like to continue ? [y,n,q,?] (default: n)
y
Disk rz8a is in use for a system (root, /usr, /var) volume. volrestore will not restore disks and volumes used for root, swap, /usr, or /var. Refer to volrestore(8).
Disk rz8b is in use for a system (root, /usr, /var) volume. volrestore will not restore disks and volumes used for root, swap, /usr, or /var. Refer to volrestore(8).
+ voldg adddisk disk01=rz24h
+ voldctl enable
Working .
Restoring dg2
vol1 in diskgroup dg2 already exists. (Skipping ..)
vol6 in diskgroup dg2 already exists. (Skipping ..)
vol9 in diskgroup dg2 already exists. (Skipping ..)
The following volumes in diskgroup dg2 cannot be started. Refer to LSM documentation on how to set the plex states before starting the volumes.
vol1 vol6 vol9
Working .
Restoring rootdg
Checking vol1
Checking vol2
The following volumes in diskgroup rootdg cannot be started. Refer to LSM documentation on how to set the plex states before starting the volumes.
Follow these steps to deinstall LSM:
Deport all disk groups other than the
rootdg
disk group and remove the disks used
in those disk groups from LSM. For example:
#
voldg deport dg1 dg2
#
voldisk rm rz16 rz17 rz18 rz19 rz20 rz21 rz22 rz23
Remove the volumes in the
rootdg
disk group, then remove all but the last
disk from the
rootdg
disk group. For example:
#
voledit -g rootdg -rf rm vol1 vol2 vol3
#
voldg rmdisk rz1h rz2 rz3 rz4
#
voldisk rm rz1h rz2 rz3 rz4
Remove the
/etc/vol/volboot
file:
#
rm /etc/vol/volboot
Edit the
/etc/inittab
file and remove the LSM entries, then reboot the system:
#
shutdown -r now
The system now reboots without starting LSM, and the disks that LSM previously used are no longer in use. If required, you can remove the LSM subsets at this point.
LSM supports the
ioctl
requests
DEVIOCGET
and
DEVGETGEOM for
use by applications that need to determine the volume size.
The DEVIOCGET request can be used to determine if a special device file is associated with an LSM volume.
The DEVGETGEOM request can be used to determine the size of the LSM volume.
For example, the following code segment shows the use of DEVGETGEOM to determine the volume size.
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <sys/types.h>
#include <sys/file.h>

long
getsize(char *name)
{
    int lsm_fd;
    DEVGEOMST volgeom;
    struct stat st;
    long volsize = 0;

    /* Check that the device exists */
    if (stat(name, &st) < 0) {
        perror("stat");
        return -1;
    }

    /* Open the volume */
    if ((lsm_fd = open(name, O_RDONLY)) == -1) {
        perror("open");
        return -1;
    }

    /* Query the volume geometry */
    if (ioctl(lsm_fd, DEVGETGEOM, (char *)(&volgeom)) == 0) {
        volsize = volgeom.geom_info.dev_size;
    } else {
        /* ioctl failed */
        perror("ioctl");
        close(lsm_fd);
        return -1;
    }

    close(lsm_fd);
    return volsize;    /* volsize in sectors */
}